"RAID is not a backup" ..ok then, what is?

I agree with you 100%
But now we can use cheap disks thanks to ZFS, ya?
Assuming drives WILL fail, doesn't it make sense to budget per array instead of per drive, and then make the most of the inexpensive part of RAID?
I already pay too much for SSDs; I can't justify the cost of enterprise drives. What else am I not understanding?
I look forward to your reply.
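To put rough numbers on what I mean by budgeting per array rather than per drive, here's a quick sketch; the prices and drive counts below are made-up placeholders, not real quotes:

```python
# Back-of-the-envelope cost per usable TB for a parity-protected array.
# All prices and sizes below are hypothetical placeholders for illustration.

def cost_per_usable_tb(drive_price, drive_tb, n_drives, n_parity):
    """Total drive cost divided by the capacity left after parity."""
    usable_tb = drive_tb * (n_drives - n_parity)
    return (drive_price * n_drives) / usable_tb

# 8 cheap 2 TB desktop drives in a raidz2-style layout (2 drives of parity)
cheap = cost_per_usable_tb(drive_price=80, drive_tb=2, n_drives=8, n_parity=2)

# The same layout built from pricier enterprise drives
enterprise = cost_per_usable_tb(drive_price=250, drive_tb=2, n_drives=8, n_parity=2)

print(f"desktop drives:    ${cheap:.2f} per usable TB")
print(f"enterprise drives: ${enterprise:.2f} per usable TB")
```

Even if the cheaper drives fail more often, the parity in the array is what's supposed to absorb that, which is the trade-off I'm asking about.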

For home use, and in a few multi-user environments (like mine), SATA desktop HDDs and even PATA HDDs can easily handle the load.

SAS HDDs are more robust, however, and can handle much higher load and stress with far less likelihood of the drive or the controller failing.

SAS enterprise 15K rpm drives were never intended for home use; that's why a 300GB HDD is over $400, but you get what you pay for. 7200rpm drives are good, but they can't handle an intense mix of small and large files simultaneously the way 15K rpm drives can.

This is why 7200rpm drives are generally used for SANs and NASs, while 15K rpm drives are used in the servers themselves.
 
I was just ROTFL at the obvious contradiction in your post... it's good enough for Google but not good enough for "real" data storage problems... right!

As for the backup drives - they are cheap and worth it to keep my data safe. At the rate I add video, it's cheaper than all but the most fly-by-night network backup services. Besides, I've been rotating a dozen or so of them for a while now, not buying new, so it's even cheaper: $0 monthly marginal cost. No worries here.

Go ahead and use SATA desktop drives in a heavy server environment, see how long they last before the controller fails (like yours did) and before the drives buckle under the load. :rolleyes:

You know, if you invested in some tape drives, you wouldn't have to buy a new HDD every month. You say 7200rpm desktop drives are good enough, yet your array failed. But you said it was the *gasp* controller. Oh, you mean the SATA controller, not a SAS one. :rolleyes:


I'm not trying to pick on you, but your logic doesn't work.

In server environments, 15K SAS drives are completely necessary.

Yes, I get that Google doesn't do it, but their network and operation aren't exactly like every other company's on the planet.
 
Backup for important media and documents:

- online. There are so many free online backup services, such as Dropbox, Mozy, Live Mesh, SkyDrive, Google Docs, Carbonite. Most also do sync. My essential docs and some photos easily fit within the free allowances; if not, I can always pay a little for more.
- external USB drives, always carried with me.

All other data (movies, music, etc.):
- external hard drives, sometimes backed up twice (a rough sketch of the copy step is below).

The likelihood of an external drive failing while it's just sitting in a closet, not powered on, is minuscule. It's certainly not going to degrade the way optical media can, and it's much faster and easier to use.
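For what it's worth, here's roughly what my copy-to-external step looks like as a script. It's a minimal sketch: the paths are placeholders for my own layout, and it only adds or updates files, never deletes anything on the external drive.

```python
# One-way mirror of a folder onto an external drive.
# SRC and DST are hypothetical paths; adjust for your own machine.
import os
import shutil

SRC = "/home/me/media"          # data to protect
DST = "/mnt/external/media"     # mount point of the external USB drive

for root, dirs, files in os.walk(SRC):
    rel = os.path.relpath(root, SRC)
    target_dir = DST if rel == "." else os.path.join(DST, rel)
    os.makedirs(target_dir, exist_ok=True)
    for name in files:
        src_file = os.path.join(root, name)
        dst_file = os.path.join(target_dir, name)
        # Copy only if the backup copy is missing, older, or a different size.
        if (not os.path.exists(dst_file)
                or os.path.getmtime(src_file) > os.path.getmtime(dst_file)
                or os.path.getsize(src_file) != os.path.getsize(dst_file)):
            shutil.copy2(src_file, dst_file)
```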
 
SSDs obliterate any HDDs, yet they are SATA?

SAS is a much better interface, mainly because it is full duplex, as mentioned. I think we will see a lot more SSDs with SAS once the technology has been proven enough to be adopted by enterprises, where reliability is a must.
 
SAS is a much better interface, mainly because it is full duplex, as mentioned. I think we will see a lot more SSDs with SAS once the technology has been proven enough to be adopted by enterprises, where reliability is a must.

Wow, I didn't actually know that. I looked into it further, and there's a great comparison of SAS vs SATA on Wikipedia: http://en.wikipedia.org/wiki/Serial_attached_SCSI#SAS_vs_SATA

Interesting stuff.
 
SSDs obliterate any HDDs, yet they are SATA?

HDDs may last for years, whereas SSDs are not yet a proven medium and can die within a month. Also, as said above, SAS is much more robust and is normally used in server environments, whereas SATA is a lesser (and more cost-effective) controller protocol that is often used in home/desktop/SOHO computers.

Remember, a normal HDD/SSD has a cheap SATA controller, but the more expensive enterprise-class drives have SAS, which increases the cost substantially due to the SAS controller being on board.
 
This is why 7200rpm drives are generally used for SANs and NASs, while 15K rpm drives are used in the servers themselves.

Generally, if you're not booting from the SAN, you put "just enough disk" on the machine to get the OS up. Since your SAN fabric can hold whatever you need, you just provision from there: SATA, SAS, SSD, etc. When you're not buying hardware RAID controllers for each system, you get to put your money into the SAN and just buy HBAs to get to your fast disks.

I run 5 Oracle VMs on SATA LUNs and they've never had any issues. The IOPS demand on them isn't extreme, but it keeps up very well.

The servers do tend to have 15K SAS. An M4000 only has 2 internal drives, oddly enough. We generally only run one domain on an M4000. The M5000s have four drives, I guess they plan on you running two domains.
 
RAID is a backup; it just protects against a limited set of failure conditions. The more failure conditions you want to guard against, the more expensive and effort-intensive (potentially) the response is.

I see some people on this forum with mega-redundant plans, and yet only one person mentions testing those backups. :) md5deep is good for ensuring your backup is exact, although most good backup programs have some sort of verification feature.
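For anyone who doesn't want to install another tool, here's a minimal sketch of the kind of check md5deep automates: hash every file on both sides and flag anything missing or mismatched. The paths are just placeholders.

```python
# Compare every file under a source tree against its copy in a backup tree
# by MD5 checksum. Paths below are illustrative placeholders.
import hashlib
import os

def file_md5(path, chunk_size=1 << 20):
    """MD5 of one file, read in chunks so large files don't exhaust memory."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_backup(source_root, backup_root):
    """Yield a message for every file that is missing or differs in the backup."""
    for root, _dirs, files in os.walk(source_root):
        for name in files:
            src = os.path.join(root, name)
            dst = os.path.join(backup_root, os.path.relpath(src, source_root))
            if not os.path.exists(dst):
                yield f"missing in backup: {dst}"
            elif file_md5(src) != file_md5(dst):
                yield f"checksum mismatch: {dst}"

for problem in verify_backup("/data/photos", "/mnt/backup/photos"):
    print(problem)
```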
 
Generally, if you're not booting from the SAN, you put "just enough disk" on the machine to get the OS up. Since your SAN fabric can hold whatever you need, you just provision from there: SATA, SAS, SSD, etc. When you're not buying hardware RAID controllers for each system, you get to put your money into the SAN and just buy HBAs to get to your fast disks.

I run 5 Oracle VMs on SATA LUNs and they've never had any issues. The IOPS demand on them isn't extreme, but it keeps up very well.

The servers do tend to have 15K SAS. An M4000 only has 2 internal drives, oddly enough. We generally only run one domain on an M4000. The M5000s have four drives, I guess they plan on you running two domains.

Same here. At my job, our servers are using SAS 15K rpm drives, normally 2 in RAID1 or 4-5 in RAID5 with hotspares for each. Our SAN is using 7200rpm SATA drives in a RAID5 array, but the drives are enterprise-class drives and the workload is relatively small (some video streaming, small file transfers, nothing ever exceeding 10-15MB/s).

Our IT dept is actually using the SAN far more than everyone else is, so the bulk of the load comes from us.

Yeah, fibre channel HBAs are great and run strong.
 
@SeanG: sorry to hear about your company having people at 7WTC. Crazy that next year will be 10 years.

Thanks.... but fortunately, that building was evacuated well before the south tower collapse. There were no casualties from 7WTC, although it was a disturbing sight to see since my office faced the WTC plaza. Here's what it looked like in August from my office window, taken by a coworker who just bought a camera on his way into work and was trying it out:

[attached image: p8290001s.jpg]


[On topic content] He took a bunch of shots around the WTC plaza and posted them up on a server to share with us. If it wasn't for those offsite backups, those photos would have been lost when the building went down. ;)
 
I finally got my backups going.

I have 4 internal drives in my desktop, and then 9 TB across 5 drives in external cases. I copy all of the data from the 4 internal drives to the external ones.

I feel it would be a rare feat for the 2 drives holding identical data to fail at the same time.
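Just to sanity-check that gut feeling with some toy numbers; the failure rate and the replacement window below are made-up assumptions, not vendor statistics.

```python
# Rough, independence-assuming estimate of losing both copies of the same data.
# Both inputs below are illustrative guesses, not measured failure rates.

annual_failure_rate = 0.05      # assume ~5% chance a given drive dies in a year
replace_window_days = 14        # assume 2 weeks to notice a failure and re-copy

# Chance the second drive also dies inside that window, scaled from the annual rate.
p_second_in_window = annual_failure_rate * (replace_window_days / 365)

# Data is lost only if one copy dies AND the other dies before it is replaced.
p_lose_both = annual_failure_rate * p_second_in_window

print(f"one drive failing this year:     {annual_failure_rate:.1%}")
print(f"losing both copies of that data: {p_lose_both:.4%}")
```

Under those assumptions the chance of losing both copies works out to a small fraction of a percent, which is why I'm comfortable with plain duplication.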
 
Our SAN is using 7200rpm SATA drives in a RAID5 array, but the drives are enterprise-class drives

Here's the thing. The "Enterprise Class" drives are otherwise identical to the desktop parts but cost more as a hedge against the 24x7 duty cycle.
 
Here's the thing. The "Enterprise Class" drives are otherwise identical to the desktop parts but cost more as a hedge against the 24x7 duty cycle.

Can you confirm this somehow?
This is what I was told a long time ago, but then again, I bought into the SCSI bullshit too.
 
+1 I am interested as well in any details you may have regarding this.

So far I have found this about the Hitachi 2TB disks:


Hitachi 2TB Harddrive Owners Thread

New to the group here, and also new to these drives, as I planned to go with WD Blacks until I read about all the TLER stuff. I really like them, but I'm getting a little worried that maybe I've got a bad one or I picked the wrong drive. Long story short, I've got 3 of these on a 3ware 9550x-12 controller in a home NAS and they ran fine for about a month; then, twice in two weeks, I had one drive drop out of the array. After restarting and/or checking connections, the drive shows back up on the controller and I rebuild the array successfully. Unfortunately, I didn't write down which one dropped last time. As far as I know, I don't have any "power" settings that tell them to spin down or anything. I see links here in the thread for Hitachi's tools, but I'm not sure what I should check first. I could use some expertise on where some of you more experienced users start when troubleshooting issues like this. I'm willing to do the work, just not sure what direction to head in first.

joshnerl

--
 
I'd hate to buy 2/3TB "desktop" HDDs and find them dropping off my 3ware RAID controller.
 
Here's the thing. The "Enterprise Class" drives are otherwise identical to the desktop parts but cost more as a hedge against the 24x7 duty cycle.

My understanding was that, just like with CPUs, when they run the test samples they separate out the batches that perform tightest to spec and brand those as the higher tier.
 