Thinking about SW raid, what do you think?

Finally, after the help of Danny Bui and Nitrobass24, I've finalized the server build except for a RAID card.
While researching RAID cards, I started thinking about software RAID. I don't have any software RAID experience, but from what I've been reading it seems like a good idea. I used to think software RAID was crazy, but it now looks like a viable option: no proprietary lock-in with RAID cards, and if the hardware ever dies I don't have to worry about locating an identical RAID card that might not be in production anymore. Since this is a home file server, serving very few computers and likely not at the same time, the resource usage of software RAID shouldn't be a problem (I think). It will be a Core i3 build (good enough?).

So, depending on the RAID choice, the motherboard choice will change (pick one with an onboard SAS controller for software RAID).

What do you recommend? This will be a RAID 6 implementation with up to 24 drives, probably split across multiple arrays. I am thinking about Linux as the OS regardless of which type of RAID.
 
I'm curious as well to see what people think. I'm wrestling with the idea of a nice Linux implementation using RAID 60, OpenSolaris/FreeBSD using RAID-Z3, or Windows using FlexRAID. But on the other hand, I love the idea of having a good quality card running the array... Though I'd like to add a question to this: would it be harmful to run software RAID on a good quality system with 4GB of RAM and a decent quad core? Wouldn't it technically be faster at rebuilds if needed? Hopefully this thread will be the deciding factor in how my media server goes.
 
I've been running a Linux software RAID setup for more than a year now and have seen several disk failures due to bad connectors / kernel issues (at least in the beginning, with my previous motherboard).
I never lost a single bit, and rebuilds always worked out fine (I sometimes had to intervene by hand, but again that was more in the earlier days; in general, the tools involved in software RAID are pretty well worked out by now).
I've been running RAID 5, and for the last couple of months RAID 6 (in the past spread over a bunch of 4-port cards and a motherboard controller).
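In case anyone wonders what the intervening by hand looked like, it was usually just a few mdadm commands along these lines (the device names here are only examples, not my actual layout):

Code:
# see which array is degraded and which member dropped
cat /proc/mdstat
mdadm --detail /dev/md0

# mark the flaky member as failed, pull it, then add it back so the rebuild starts
mdadm /dev/md0 --fail /dev/sdc1
mdadm /dev/md0 --remove /dev/sdc1
mdadm /dev/md0 --add /dev/sdc1

# watch the resync progress
watch cat /proc/mdstat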
 
Software RAID like Linux MD RAID is great.

Most good NAS units use software RAID, including Synology and QNAP.

Performance is not an issue. My Synology DS1010+ only has a 1.6 GHz Intel Atom CPU. I'm running a 5x2TB RAID 5 array and I get 100+ MB/s sustained reads and writes.
 
What do you recommend? This will be a RAID 6 implementation with up to 24 drives, probably split across multiple arrays. I am thinking about Linux as the OS regardless of which type of RAID.
You pretty much got it right: Multiple RAID 6 arrays with Linux software RAID. No problem with that at all.
Though I'd like to add a question to this: would it be harmful to run software RAID on a good quality system with 4GB of RAM and a decent quad core? Wouldn't it technically be faster at rebuilds if needed? Hopefully this thread will be the deciding factor in how my media server goes.

No, it wouldn't be harmful. And kind of: extra CPU can help with the parity calculations during a rebuild, but the disks themselves are usually the bottleneck.
 
You pretty much got it right: Multiple RAID 6 arrays with Linux software RAID. No problem with that at all.

I have done this at work for years without issue, and I mean 6 to 8 years. I have about 10 to 14 arrays of 6 to 10 drives each.
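If it helps to see how little work it is, creating one of those arrays with mdadm is only a couple of commands. A rough sketch (device names, chunk size, and the config file path are placeholders; adjust for your distro and drives):

Code:
# build a 6-disk RAID 6 array out of example member partitions
mdadm --create /dev/md0 --level=6 --raid-devices=6 --chunk=512 /dev/sd[b-g]1

# put a filesystem on it and record the array so it assembles at boot
# (the config file is /etc/mdadm/mdadm.conf or /etc/mdadm.conf depending on distro)
mkfs.ext4 /dev/md0
mdadm --detail --scan >> /etc/mdadm/mdadm.conf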
 
I used to assist with the Linux RAID stack in the kernel years ago. I have used it in high-performance, high disk-count environments with no issues. It's a great piece of software and, up until now, my preferred way of creating large volumes.

Recently I have been working with RAID-Z and ZFS under Solaris. I have to say, I'm really impressed! The ease with which you can expand pools, modify layouts, etc. is brilliant! If you're interested in more info, take a look at sub.mesa's ZFS FAQ: http://hardforum.com/showthread.php?t=1500505

And FYI: probably the most important feature of ZFS (which is lacking in Linux software RAID) is block-level checksumming. It makes things like bit rot and silent corruption essentially a non-issue.
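If you want a feel for it, spinning up a checksummed double-parity pool and then verifying it is about this much work (pool and device names here are just examples; device naming differs between Solaris and BSD):

Code:
# create a 6-disk RAID-Z2 pool; da0..da5 are example device names
zpool create tank raidz2 da0 da1 da2 da3 da4 da5

# a scrub walks every block and verifies/repairs it against the checksums
zpool scrub tank
zpool status -v tank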


A word of caution about hardware RAID: go for a true hardware RAID solution from a reputable vendor (e.g. LSI). If your RAID controller blows up on you, generally speaking you'll need an identical controller to get the array back up and running (it DOES happen, more often than we'd like to admit). This is one area where software RAID shines: if your system board dies, you can always transplant the disks onto different hardware and rescue your data. Open standards/software make your data more transportable.
 
I've used mdadm for about 3 years now. I've gone from 6x500GB drives in RAID 5 to 7x2TB drives in RAID 6 plus hot spares. I have expanded about 6 times, changed the chunk size twice, and recovered from a bad disk once. I like the feeling of control I have versus a hardware RAID card (I have used those too) and really like the fact that I can just boot from a live CD, assemble the array, and recover the data.

I've recently gone from an Opteron 165 to a Xeon X3430 and speeds are maybe a little faster, but the main thing is that smbd doesn't bog down the machine anymore. I haven't really tuned it, but I can get speeds of over 300MB/s in reads and writes, and since it's primarily a network machine for me, that's good enough.
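For reference, the expanding and live-CD recovery I mentioned boil down to commands like these (device and mount point names are just examples):

Code:
# grow the array onto a newly added disk
mdadm /dev/md0 --add /dev/sdh1
mdadm --grow /dev/md0 --raid-devices=8

# from a live CD: scan for md superblocks, assemble, and mount read-only to copy data off
mdadm --assemble --scan
mount -o ro /dev/md0 /mnt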
 
Just note that you would want an HBA instead of a RAID controller if you choose non-Windows software RAID. Software RAID on passthrough disks behind a RAID controller would still require TLER disks, in my experience. This may not apply to all RAID controllers, but choosing something like an LSI 1068E-based HBA is recommended for a software RAID setup.

The Intel SASUC8i is a popular choice; you need to buy the cables separately.
 
I was always told as a kid that no question is a dumb question... so I have to ask: the Intel site says the card can support 122 drives in SAS mode. I wanted to be sure that the card wouldn't much care whether the drives are SATA or SAS (I recall reading somewhere that it shouldn't)?

Also, would you have any personal recommendations on a software RAID setup, be it ZFS, FlexRAID, Linux software RAID, or any others you can think of? And do you know if there is a substantial performance difference between a hardware RAID 60 setup using something like an Areca 1680ix with 24 drives and a software RAID 60 on passthrough disks? I'm just curious, since I've found that I really wouldn't mind spending $1000+ on a nice Areca setup, but I can't pin down all the benefits of it versus software RAID.

Thanks,
--pyr0
 
I hate Linux software RAID. When I used it, it was nothing but headaches and issues... drives would randomly drop out of the array and it would not force a rebuild. But when this happened I just mounted the individual drive and it was fine, so I blame mdadm for being crap. The drives are fine, as I am using them to this day on two different hardware RAID controllers.

Look around, you can get a decent hardware RAID controller for dirt cheap. Currently I am running an HP E200 controller which I picked up for $45 on eBay, and previously an Adaptec 2610SA, also under $100.
 
My experience with mdadm has been totally the opposite. It has always been rock stable, at least for the last 8 years, and across the 15+ arrays (100+ disks in a work environment) I have run with it over that time, I have nothing but good things to say about it.

Adaptec 2610SA, also under $100.

Yuck. Adaptec used to be great with their SCSI cards, but that was 10 years ago. Their current SATA/SAS stuff I consider lower quality.
 
A word of caution about hardware RAID: go for a true hardware RAID solution from a reputable vendor (e.g. LSI).

Good advice.

If your RAID controller blows up on you, generally speaking you'll need an identical controller to get the array back up and running (it DOES happen, more often than we'd like to admit).

That is only somewhat true. Areca, Adaptec, and LSI can all handle transplanted arrays at this point, so saying the "exact" same card is misleading. It is more like a card from the same vendor with the same or newer firmware, or a newer-generation card. So with hardware RAID you are vendor-locked, but on newer-generation cards you are not locked to the exact card. 8 years ago this was not the case, so if you are using something like Ultra SCSI cards you will probably have issues using new cards, first and foremost because of the physical interface.

And ditto on controllers dying. It happens, especially with off-brand controllers.

This is one area where software RAID shines: if your system board dies, you can always transplant the disks onto different hardware and rescue your data. Open standards/software make your data more transportable.

So here you are locked in on the software. I.e. you cannot stick the drives into a Windows box, running Windows, and see a rebuild of a ZFS array. Of course, you could take the Windows box down, connect the drives, and boot from different media. Generally, portability is a good thing though.
 
I hate Linux software RAID. When I used it, it was nothing but headaches and issues... drives would randomly drop out of the array and it would not force a rebuild. But when this happened I just mounted the individual drive and it was fine, so I blame mdadm for being crap. The drives are fine, as I am using them to this day on two different hardware RAID controllers.

Look around, you can get a decent hardware RAID controller for dirt cheap. Currently I am running an HP E200 controller which I picked up for $45 on eBay, and previously an Adaptec 2610SA, also under $100.

I have had no such issues, except when my RAM was bad.
You must have had either bad drives or some other hardware/driver issue to get that kind of behaviour. mdadm is rock stable on supported hardware (which is most of it). But keep away from those partially supported (in Linux) mainboard controllers (for example, those new SATA 3 mainboard add-on controllers).
 
So here you are locked in on the software. I.e. you cannot stick the drives into a Windows box, running Windows, and see a rebuild of a ZFS array. Of course, you could take the Windows box down, connect the drives, and boot from different media. Generally, portability is a good thing though.

Haha, fair point. I've never really looked at it this way. I guess no matter which way you go, you'll experience some kind of lock-in (e.g. ZFS = Solaris/BSD, Windows RAID = Windows). Having said that, I'd much rather be locked in at the OS level than at the controller level, as that will usually be the case anyway; file systems tend to be OS dependent regardless.
 
So here you are locked in on the software. I.e. you cannot stick the drives into a Windows box, running Windows, and see a rebuild of a ZFS array. Of course, you could take the Windows box down, connect the drives, and boot from different media. Generally, portability is a good thing though.
You can stick them into any BSD or Solaris box as long as its ZFS version is high enough.
zpool import makes sure things will work.
Not every solution has to involve Windows.
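Moving a pool between boxes really is just an export and an import (the pool name here is only an example):

Code:
# on the old box (optional, but makes the import cleaner)
zpool export tank

# on the new box: list importable pools, then pull the one you want in
zpool import
zpool import tank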
 
I have used Linux software RAID on and off for years. It's rock solid and I've never lost data. I'm currently using two 8x2TB RAID 6 arrays with good performance. In both cases, I'm using Linux KVM to pass the RAID volumes through to a Windows VM. Works great.
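If anyone wants to do the same, handing an md device to a libvirt/KVM guest is roughly this (the VM name, device, and target are just examples; the Windows guest needs virtio drivers for the virtio bus):

Code:
# attach the md block device to a guest called "winvm" as a virtio disk
virsh attach-disk winvm /dev/md0 vdb --targetbus virtio --persistent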
 
Good advice.
So here you are locked in on the software. I.e. you cannot stick the drives into a Windows box, running Windows, and see a rebuild of a ZFS array. Of course, you could take the Windows box down, connect the drives, and boot from different media. Generally, portability is a good thing though.
But any system can run any software. So if you're on Windows, you can use VirtualBox to access the ZFS data on your locally attached disks. You would need to create physical passthrough disks to let ZFS inside the VM access the raw disks, but it would work.
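The raw-disk passthrough in VirtualBox is one command per disk, something like this (the path and disk number are examples; run it with admin rights and triple-check which PhysicalDrive you point it at):

Code:
VBoxManage internalcommands createrawvmdk -filename C:\vms\zfsdisk1.vmdk -rawdisk \\.\PhysicalDrive1

Then attach the resulting .vmdk to the BSD/Solaris VM like any other disk.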

Connecting ZFS disks to a Windows machine can be dangerous though, since Windows prompts you to 'initialize' a drive if it doesn't find a valid partition table. Confirming that prompt would corrupt your disk, essentially the same as a quick format.

But generally, software allows you to do anything, while hardware stays limited to something you can touch. Software is, by nature, more flexible than hardware.

About hardware RAID requiring the exact same card in case of failure: if you just want to recover the contents, just boot Linux; it can read other RAID formats just fine, including virtually all Windows onboard RAID (aka "fakeRAID"). I'm not sure if all hardware RAID metadata formats are supported, but it's worth a try and you can always do it manually too. So you don't need to replace with identical hardware if you don't want to.
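A rough sketch of what that looks like from a live Linux environment; which tool picks the array up depends on the metadata format:

Code:
# onboard/fakeRAID sets: list what dmraid recognises, then activate them
dmraid -r
dmraid -ay

# Intel Matrix (IMSM) fakeRAID and native md arrays are handled by mdadm
mdadm --examine --scan
mdadm --assemble --scan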
 