Recommended RAID controller?

danswartz

I had gotten an Intel RAID controller which was basically an LSI 9265CV-8e, BBU included. Performance was nice - with a 6-drive RAID6 I was getting north of 500MB/sec (SAS nearline drives). Until I tried to add a drive to the array. It started rebuilding and took about a day. That I could live with. What got it RMA'ed was that it slowed down so badly as to be unusable (vSphere hosts timing out, guests dying, etc.). I did a test on the Linux storage appliance and got about 200KB/sec.

Anyway... I would like to keep the option open of switching to ZFS at some point, which would require JBOD/single-disk mode. All the controllers I've looked at support that. What they don't all support is the ability to apply write-back cache mode to JBOD/single disks without having to create N 1-disk virtual disks. The Areca 1882 supports that. The LSI unit I was using didn't (or at least I couldn't see any way to). It looks like the PERC H810 does (and includes CacheCade for free). I have not been able to find any helpful reviews of the H810. Any advice based on first-hand experience would be much appreciated...
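For reference, the ugly workaround of creating N single-disk virtual disks with write-back looks roughly like this on the LSI-based cards (a sketch only - the enclosure:slot numbers are placeholders and the exact option spelling varies between MegaCli versions):

# one single-drive RAID0 VD per physical disk, write-back cache enabled
MegaCli -CfgLdAdd -r0 [252:0] WB Direct -a0
MegaCli -CfgLdAdd -r0 [252:1] WB Direct -a0
# ...repeat for each remaining slot, then check which cache policy actually stuck:
MegaCli -LDInfo -LAll -aAll | grep -i "cache policy"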
 
Almost forgot: the motivation for single-disk/JBOD write-back cache is that I could run ZFS datasets/zvols with sync=standard (or always) and not need an uber-high performance SSD as SLOG.
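To make the sync angle concrete, a minimal sketch (the dataset name tank/vmstore is made up):

# with BBU-protected write-back cache under the pool, sync writes land in the
# controller's cache, so the default sync behavior is usually fine:
zfs set sync=standard tank/vmstore   # or sync=always for stricter semantics
zfs get sync tank/vmstore            # confirm the setting

# without that cache, the usual alternative is a dedicated SLOG device:
zpool add tank log /dev/disk/by-id/some-fast-ssd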
 
Do you mind me asking what you are planning on using this for? It's just that it seems like you are going to spend an arm and a leg to get something and only use a fraction of it at any one time... Is it iSCSI, or what connection? I see you mention VMs. Is there NFS and CIFS, or what else?
 
This is for a storage server currently serving up VMFS datastores to a dozen or so VMs via iSCSI. Not sure what your definition of 'an arm and a leg' is, but I've seen units well under $1000. I can go cheaper if I don't get something that provides BBU or flash backup for write-back cache. Until you've seen your writes go into the toilet due to write-through (or no) cache, you might think that's a luxury... I should have mentioned that the storage server is connected to the vSphere hosts over 10GbE, so having higher latency due to a sh*tty controller would make the investment in 10GbE pointless...
 
True, but there are alternative ways to gain performance, and a single 6-drive RAID6 or RAIDZ2 (not the same in performance, but the same in redundancy) is probably not the best option for that. And a rebuild on that controller really saps its ability to handle anything else.

But first I would pick a path and stick with it: either hardware RAID, and buy a GOOD controller, OR ZFS, btrfs, or some other software-based platform, and head entirely down that path.

Even an H200 with multiple vdevs (say, a zpool of mirrors) could get way more performance than you could need, with the option to throw an SSD cache drive on too if you ever actually needed it, even if the controller had no cache at all (see the sketch at the end of this post).

Adding a ton of onboard cache to a controller to make a single array perform faster (hardware or ZFS) is not necessarily my choice, especially when I could go buy 5 or more H200 or H310 cards for the cost of one H810.
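To sketch what that pool-of-mirrors layout might look like (device names are placeholders, not a recommendation for specific disks):

# three mirrored vdevs striped together - the ZFS analogue of RAID10 -
# sitting behind a plain HBA like the H200:
zpool create tank \
  mirror /dev/disk/by-id/diskA /dev/disk/by-id/diskB \
  mirror /dev/disk/by-id/diskC /dev/disk/by-id/diskD \
  mirror /dev/disk/by-id/diskE /dev/disk/by-id/diskF

# optional SSD read cache (L2ARC), added only if it ever turns out to be needed:
zpool add tank cache /dev/disk/by-id/ssd0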
 
I wasn't advocating RAID6 - just pointing out that I tried it as the (supposedly) best-performing configuration with that Intel controller. You're making all kinds of assumptions about not only my technical knowledge, but my situation. Five RAID controllers? In what slots? No one was talking about a 'ton of onboard cache' - most of the decent controllers have 1GB or so of RAM cache. I was asking for direct first-hand experience to recommend (or un-recommend) specific controllers, not for a tutorial in storage management. If you have nothing to contribute in that respect, please don't bother responding.
 
For hardware I like the Adaptec 7 series (or 8 series if budget allows); the cards with a Q in the name have some advanced cache features that would be right up your alley. AVOID the Adaptec cards with an E in the name - they are mutts and not useful for most enterprise solutions.

I do not like any implementation of an LSI hardware RAID controller, but I do like most LSI-based HBA cards if I am doing something in software (ZFS, or when I just need a ton of SATA/SAS ports).

I've very little Areca experience.
 
You should be using RAID 10, not RAID 6, for anything active and I/O-intensive. And yes, during a RAID 6 rebuild performance is crap - again a reason to go RAID 10 instead. RAID 6 is by far one of the worst-performing RAID levels on any controller, along with RAID 5.
 
Yes, I know. I am running RAID10 now (and always have). As I said already, I tried out RAID6 to test the manufacturer's assertion that RAID6 performs best. In point of fact, they were right (for the same number of drives): I tested a 3x2 RAID10 and a 6-drive RAID6, and the latter was significantly better for both read and write. I assume this is due to more spindles participating in the I/O operations? The write performance was fine, presumably due to write-back cache and HW XOR acceleration?

But like I said, I/O performance being degraded during a rebuild is one thing. Having it slow down by over 3 orders of magnitude is not. That can't possibly be normal. Think about it: a drive dies, and before you replace it, you have to pull all the data off the LUN or be willing to have it unavailable for a day or so? If you have to do that, you're better off just rebuilding the array from scratch with a new drive. And it wasn't just the I/O performance (I should have said this earlier): even just running a megacli command to query the controller was taking like 30 seconds to respond, when it had been almost instant before.
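Side note for anyone hitting the same rebuild stall: the rebuild priority on those LSI-based controllers is at least visible (and tunable) from MegaCli. A sketch only - adapter and drive numbers are placeholders and the option spelling can vary between MegaCli releases:

# show the global rebuild rate (% of controller time given to rebuilds):
MegaCli -AdpAllInfo -aAll | grep -i "rebuild rate"
# show progress of a rebuild on a specific drive (enclosure 252, slot 2 here):
MegaCli -PDRbld -ShowProg -PhysDrv [252:2] -a0
# lower the rebuild rate to favor host I/O during the rebuild (default is ~30):
MegaCli -AdpSetProp RebuildRate 15 -aALL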
 
Big fan of my Areca cards, especially now that I have the new one running right (my fault).
 
Seconded for Areca for sure; I had several at my old job for servers I built, and they always performed.
 
Thanks guys, I appreciate the advice. I just need to figure out the best bang for the buck via eBay now :)
 