Recommendations for SAS/SATA RAID 5 controller

Looking for some recommendations for a SAS/SATA RAID controller.

The short of it:
My buddy just replaced his 256GB SSDs with 512GB models and is giving me a killer deal on six Plextor drives!

I have this Thermaltake 6-bay enclosure and plan on filling it.

Motherboard is an ASRock Fatal1ty Z77 Professional-M. This particular board has UEFI, and I am not sure if this will cause an issue with booting to a RAID card.

Power supply is a 550W Corsair modular unit, which has been sufficient for my current machine. I can't imagine needing any more to power these Plextors; in fact, they draw less power than my current Samsungs. I also plan on getting a newer, lower-power GPU soon (and another 16GB of RAM), so... anyway..

One of the controllers I'm considering:
ServeRAID M1015 SAS/SATA Controller


Any other recommendations for comparable RAID cards out there that may meet my needs?


My current machine specs are in my signature. Primary usage of this machine is my Plex server, video encoding/decoding/transcoding and editing, and primary rig. Essentially, it's always on and always serving up some content and transferring data. All squeeeeeezed into this case.

Thanks!
 
I'd suggest sticking with RAID0, and backing up to a 2TB disk.

I don't think you will get anywhere near the speeds you want with R5 on any hardware in your current system. There are a few crazy ideas with ZFS, maybe, but a lot more work.

There aren't really any RAID cards that can keep up with the speeds of SSDs, other than maybe a FastPath-enabled LSI 9240.
 
Bump for any other suggestions for a good RAID controller for 6xSSD?

Ideally, on the cheap (~$200-ish or less on eBay).
 
I do not believe a good RAID controller for 6 SSDs exists in this price range, even on eBay, although I have seen some for $300 or so used. Get an LSI MegaRAID + FastPath key.

So, perhaps, something like the LSI MegaRAID 9260-8i. That's the card my buddy was using to control these drives.

Ah! Just saw your link... Thanks
 
So, perhaps, something like the LSI MegaRAID 9260-8i. That's the card my buddy was using to control these drives.

Ah! Just saw your link... Thanks

That's one of the best cards for SSD RAID arrays, but you must have the FastPath keys (which I assume "all software options enabled" covers).
 
That's one of the best cards for SSD RAID arrays, but you must have the FastPath keys (which I assume "all software options enabled" covers).

Holy shit, the keys are expensive! So the controller and the FastPath key together run around $500-ish. Well, that's what I get for buying a bunch of SSDs. At least I got those super-cheap.
 
You know most Z87 chipset motherboards support RAID 5 on their SATA III ports? It may not be as great as a dedicated RAID card, but replacing the board & chip might end up being the cheaper path, particularly since you can recoup some of the cost by selling off the old gear.

My current system is running four Corsair SSDs in an R5, and it's pretty decent to me.
 
This will work; however, 4K performance will most likely be worse than a single SSD's, which means most application loading and OS reads and writes will be slower than with just a single SSD, but still a lot faster than a hard drive RAID.
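
To put rough numbers on it (a toy model; every latency below is an assumption, not a measurement): at queue depth 1 a 4K read only ever touches one stripe member, and a RAID 5 4K write turns into a read-modify-write.

```python
# Toy model of 4K random I/O at queue depth 1 (QD1).
# All latencies below are illustrative assumptions, not measurements.

SSD_READ_US = 100    # assumed single-SSD 4K read latency (microseconds)
SSD_WRITE_US = 120   # assumed single-SSD 4K write latency
CARD_US = 40         # assumed extra latency through a hardware RAID card

def qd1_iops(latency_us):
    # At queue depth 1, throughput is simply 1 second / per-op latency.
    return 1_000_000 / latency_us

# Single SSD on a motherboard port:
print(f"single SSD : {qd1_iops(SSD_READ_US):6.0f} read IOPS,"
      f" {qd1_iops(SSD_WRITE_US):6.0f} write IOPS")

# RAID 5 on a card: a 4K read still hits just one member (plus card
# overhead); a 4K write must read old data + old parity (in parallel),
# then write new data + new parity (in parallel).
r5_read_us = SSD_READ_US + CARD_US
r5_write_us = SSD_READ_US + SSD_WRITE_US + CARD_US
print(f"RAID 5     : {qd1_iops(r5_read_us):6.0f} read IOPS,"
      f" {qd1_iops(r5_write_us):6.0f} write IOPS")
```

With those assumed numbers the striped array never beats the single drive on small I/O, because nothing about striping shortens an individual 4K operation.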
 
I'd suggest sticking with RAID0, and backing up to a 2TB disk.

I don't think you will get anywhere near the speeds you want with R5 on any hardware in your current system. There are a few crazy ideas with ZFS, maybe, but a lot more work.

There aren't really any RAID cards that can keep up with the speeds of SSDs, other than maybe a FastPath-enabled LSI 9240.

I agree. Just do software RAID 0 in Windows. Works great.
 
I'd run three pairs of SSDs in RAID 0, plugged into the motherboard RAID. Have three drive letters or use some sort of spanning.

You're not really going to notice a speed difference beyond two drives, unless you have a very specific use case (uncompressed 4K video editing or something).

I've tried SSD RAID with a 9240 and it really didn't make a difference.
 
Consider that the DMI interface only has a bandwidth of 2 GB/s, so simultaneous sequential access to more than 3 drives on the mainboard ports has diminishing returns.
 
An interesting article here talks about the pitfalls of SSD RAID.

Some of the things it mentions to maximize performance:

1. Avoid any RAID configuration that involves parity calculation (RAID-5 & 6). Stick with RAID-0 or RAID-10

2. Avoid RAID controllers as they are designed primarily for spinning disks. Use the OS to create your RAID volume to take advantage of TRIM.

3. Use only ~50% of each drive's capacity to really boost performance.

4. Distribute RAID members across multiple volumes (the Z77 controller and the ASM1061 controller) to avoid bottlenecks.

A chart:

[Chart: "Parity-less SSD RAID" benchmarks, drives at ~7% unallocated vs ~70% free space]
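
For scale, here is what those suggestions cost in usable space on the six 512GB drives in this thread (back-of-envelope Python, decimal GB):

```python
# Back-of-envelope usable capacity for six 512GB drives (decimal GB).
drives, size_gb = 6, 512

raid0 = drives * size_gb          # striping only: all capacity, no redundancy
raid10 = drives * size_gb // 2    # mirrored stripes: half the capacity
raid5 = (drives - 1) * size_gb    # parity: one drive's worth lost

print(f"RAID 0 : {raid0} GB usable")
print(f"RAID 10: {raid10} GB usable")
print(f"RAID 5 : {raid5} GB usable")

# Suggestion 3 (only fill ~50% of each drive), applied to the RAID 0 case:
print(f"RAID 0 at 50% fill: {raid0 // 2} GB actually used")
```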
 
Upon further reading:

You don't want to use the onboard SATA ports, for exactly the reasons omniscence mentioned. Anything on the ICH10 segment has to pass through the 2GB/s bridge, throttling performance.

So yeah, a PCI-e 2.0 add-on RAID card is necessary. But I'd still run them in JBOD on the card to let the OS handle RAID activity.

[Diagram: X58 chipset block diagram showing the DMI link]
 
Consider that the DMI interface only has a bandwidth of 2 GB/s, so simultaneous sequential access to more than 3 drives on the mainboard ports has diminishing returns.

Upon further reading:

You don't want to use the onboard SATA ports, for exactly the reasons omniscence mentioned. Anything on the ICH10 segment has to pass through the 2GB/s bridge, throttling performance.

So yeah, a PCI-e 2.0 add-on RAID card is necessary. But I'd still run them in JBOD on the card to let the OS handle RAID activity.

[Diagram: X58 chipset block diagram showing the DMI link]

DMI 2.0 on P67 and newer is 20Gb/s, not 2Gb/s like the X58 pictured. The OP is using a Z77 board.

With that said though, I'd still go LSI or break them into three sets of two.
 
1. Avoid any RAID configuration that involves parity calculation (RAID-5 & 6). Stick with RAID-0 or RAID-10
Correct
2. Avoid RAID controllers as they are designed primarily for spinning disks. Use the OS to create your RAID volume to take advantage of TRIM.
Very hard for most to discern; better to aim for newer controllers, or ones that do support TRIM.
3. Use only ~50% of each drive's capacity to really boost performance.
Old wives' tale; keep the drives under 90% full and all is good. Some may suggest using a smaller partition and leaving 5-10% of total capacity raw.
4. Distribute RAID members across multiple volumes (the Z77 controller and the ASM1061 controller) to avoid bottlenecks.
Very messy and a backwards step. You will see higher latency from spanning different controllers, along with a higher risk of something going south fast.


That being said, my suggestion would be an M1015 in RAID 0, and don't stress about TRIM. The array is not the boot/OS volume; format the bugger every few months and enjoy. Keep a full backup on a local 3-4TB drive and set a scheduled backup.
Keep the OS on a single SSD on a SATA III port, and if you're worried about it, add a second and use Windows to create a dynamic mirror. The end result is duplicated data, up to twice the read speed, and single-drive writes.
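
A minimal sketch of the scheduled-backup half, with hypothetical drive letters and paths (point Task Scheduler at something like this):

```python
# Minimal sketch: mirror the array to the big backup disk, then run this
# on a schedule via Windows Task Scheduler. Paths are hypothetical.
import shutil
from datetime import date
from pathlib import Path

SOURCE = Path("D:/media")          # the SSD array (hypothetical path)
DEST = Path("E:/array-backup")     # the 3-4TB backup drive (hypothetical)

def mirror():
    # dirs_exist_ok=True (Python 3.8+) lets repeat runs refresh the copy
    shutil.copytree(SOURCE, DEST, dirs_exist_ok=True)
    (DEST / "last-backup.txt").write_text(date.today().isoformat())

if __name__ == "__main__":
    mirror()
```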
 
TRIM isn't that big of a deal in normal usage. While having garbage cleanup at the OS level is convenient, since the OS knows when the system is idle, all modern drives (from the last 4 years or so) have internal garbage cleanup that is OS-agnostic and works slowly. The downside is that the drive doesn't know when the system is idle, so it cleans up more conservatively.

But regardless, all SSDs will "TRIM" themselves over time.
 
Something I recall reading before, but can't find the source offhand, is that the M1015 and similar cards can't handle 5+ disk RAID 0 arrays very well. Something to do with the transfer across the SFF ports; software RAID 0-ing two smaller SSD arrays together got the expected performance. Maybe someone can chime in and confirm one way or the other.
 
DMI 2.0 on P67 and newer is 20Gb/s, not 2Gb/s like the X58 pictured. The OP is using a Z77 board.

Neither agrikk nor I said anything about 2Gbit/s. In both posts you can read 2GB/s, which is short for gigabytes per second. Either 2 GB/s (the raw data rate) or 20Gb/s (the signalling rate, not the data rate) is too low for four fast SSDs. And you won't even get the full 2GB/s, due to PCIe overhead and sharing this bandwidth with the onboard Ethernet and USB controllers.
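
Spelled out, assuming roughly 500 MB/s sequential per SATA III SSD (an assumption, not a spec):

```python
# DMI 2.0 back-of-envelope. The link is x4 at 5 GT/s = 20 Gb/s of
# signalling, but 8b/10b encoding means only 8 of every 10 bits are data.
signalling_gbps = 20
raw_gbytes_s = signalling_gbps * 8 / 10 / 8     # = 2.0 GB/s of raw data

ssd_seq_mbs = 500   # assumed sequential throughput of one SATA III SSD

drives_to_saturate = raw_gbytes_s * 1000 / ssd_seq_mbs
print(f"DMI 2.0 raw data rate : {raw_gbytes_s:.1f} GB/s")
print(f"SSDs to saturate it   : ~{drives_to_saturate:.0f}")
# ...and the link is shared with onboard Ethernet/USB traffic, so in
# practice it chokes before four drives run flat out.
```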
 
Old wives' tale; keep the drives under 90% full and all is good. Some may suggest using a smaller partition and leaving 5-10% of total capacity raw.

If this is an old wives' tale, what is the explanation for the clear performance differences shown in the "Parity-less SSD RAID" chart above?

To me this chart shows a clear performance difference between drives with 7% raw unallocated space and drives with 70% free space. The author attributed the gain to the drive being free to allocate writes to empty blocks without having to wait for the garbage-collection process to free them up, which makes sense.

But regardless, all SSDs will "TRIM" themselves over time.

Yeah, but the TRIM process adds additional overhead to drive operation, right? So wouldn't this have an I/O impact on the high-performance array we are building here?
 
But regardless, all SSDs will "TRIM" themselves over time.
No, they won't. TRIM is there to increase the number of pages/blocks garbage collection can work with. Once the whole accessible block range has been written to once, GC has only the internal overprovisioned space to work with. Since garbage collection is a slow process, a lot of drives show a severe reduction in write performance once they are full; a lot of reviews of the famous 840 Pro also show this effect. Depending on the type of SSD, it is advisable to overprovision them by up to 30% in RAID configurations.
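
To make that concrete (a sketch; the percentage tiers are example points only, not recommendations):

```python
# Sketch: manual overprovisioning for SSDs behind a hardware RAID card
# (no TRIM). Leave part of each drive unpartitioned so garbage collection
# always has spare blocks to work with.
drive_gb, drives = 512, 6

for op_pct in (7, 15, 30):
    usable = drive_gb * (1 - op_pct / 100)     # partition size per drive
    array = usable * drives                    # RAID 0 across all six
    print(f"{op_pct:>2}% OP: {usable:.0f} GB per drive -> {array:.0f} GB array")
```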

Since TRIM is a somewhat asynchronous action, it can cause huge unpredictable delays that are deadly for hardware RAID. This is one of the reasons RAID vendors did not bother to implement TRIM.

Regarding controllers, SAS2008-based controllers (9211-8i, 9240-8i, M1015) are a bit too slow for larger SSD RAIDs. It is also said that SAS2208-based controllers (9266-8i, 9271-8i) without FastPath are already faster with SSDs than SAS2108-based controllers (9260-8i) with FastPath. So for maximum performance with large SSD RAIDs, a 9271-8i with FastPath and a BBU is the best option.
 