SAS speeds

I am reusing an old server for a client. It's a Dell T610 with a PERC 6i in it. We've purchased new drives, 600GB SAS, and they are 6Gb/s, but the PERC only supports 3Gb/s. The server is going to be used for ESXi. Am I better off spending money on, say, a PERC H700 controller that supports 6Gb/s, or does it really matter? I want the best performance possible, but if the difference is negligible I don't want to waste the money either.
 
I'd imagine it depends largely on what you're virtualizing and how much use it will get. A high traffic web server that's serving lots of large files (streaming or something?) might see a benefit. A machine that's just a license server running a bunch of codemeter instances or something probably won't.

I think I'd set the system up as-is, do a little testing, and upgrade the controller only if the PERC 6 seems to be a bottleneck.
 
Maybe I should give some more detail. The idea with this server is to use it for Veeam replication and, if the need arises, spin up a replicated VM should the primary fail. I am even considering adding enough memory (64GB) so that it could run all the replicated VMs if the primary host failed, but that wasn't my original goal and I didn't buy enough hard disks to match the original server. I realize that in a failover scenario it's OK to have some performance degradation. Not ideal, but this is not an enterprise customer with hundreds of employees; it's a small office with three locations and 75 users. The VMs are Exchange, a terminal server, a couple of application servers, and in the future likely a Sybase database server.
 
Are the drives connected through an expander, and if so how many drives vs. how many links between the expander and the card? Think that's probably the only way you'll max out 3Gb with HDDs.
 
The drives are hot plug, so there is a backplane with a direct connection to the RAID controller.
 
Are you positive the backplane itself supports 6Gbps SAS? It most likely does, but before you start buying other parts, you'll want to be 100% sure.

Also bear in mind that backplanes can be expanders. For example, a BP that only supports four drives with a direct connection to a single SAS port on your RAID controller is going to be a "direct" connection. However, a BP that supports six drives which is connected via the same single SAS port on the RAID controller means there's an expander involved.

I agree with RazorWind on replicating the server setup and testing. It's quite possible the real-world difference you'll see between 3Gbps and 6Gbps SAS isn't going to be that huge.
 
What if the RAID controller has two ports, each with its own cable connecting directly to the backplane; is there an expander involved in that scenario? This T610 is configured to support 8 drives only.
 
Your "two connector" PERC 6i RAID controller actually has eight native ports. The cable going between the RAID controller and BP carrier four native ports within that single cable. This is presuming that one end is a SFF-8087 connector; the other will be the same or a proprietary Dell connector. In which case, the BP can be thought of as a "pass through" device with each channel directly connecting a drive to a native port of the controller.

(To the hyper detailed here: I realize this isn't the best, or most exact, explanation; but it does the job)
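
To put rough numbers on that, here is a back-of-the-envelope sketch only; the lane count and encoding overhead are assumptions about a typical SFF-8087 direct-attach setup, not anything measured on this T610.

Code:
# Back-of-the-envelope math for a direct-attach SFF-8087 layout.
# Assumptions: 4 lanes per SFF-8087 cable, 3Gbps per lane (SAS1 / PERC 6i),
# and 8b/10b encoding (10 bits on the wire per 8 bits of data).

LANES_PER_CABLE = 4
LINE_RATE_GBPS = 3.0
ENCODING_EFFICIENCY = 0.8

usable_mb_per_lane = LINE_RATE_GBPS * 1000 / 8 * ENCODING_EFFICIENCY  # ~300 MB/s
usable_mb_per_cable = usable_mb_per_lane * LANES_PER_CABLE            # ~1200 MB/s

print(f"Usable per drive (one lane): ~{usable_mb_per_lane:.0f} MB/s")
print(f"Usable per 4-lane cable:     ~{usable_mb_per_cable:.0f} MB/s")

With two cables that's eight lanes, one per drive bay, which is why the BP behaves like a pass-through.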
 
So at the end of the day, am I crazy to spend the extra money for a faster RAID controller when it sounds like I won't even see an increase in performance?
 
Until you do real-world testing between the two boxes in question, it's all just conjecture / theoretical. My personal inclination is to say you won't see a huge difference. That said, I see some good deals on eBay for an H700. Provided your current cables are rated for 6Gbps, it wouldn't be terribly expensive to find out. Tell management it's to ensure as much parity between the servers as possible; I can't see them having too big an issue over ~$100. And, as an added bonus, you'll have the old PERC 6i as a backup "just in case".
 
Going from SAS1 to SAS2 won't give you any speed improvement unless you're using SSDs or expanders in your system.

Since you're not, the only gain you will get from a different RAID card would be expanded onboard memory cache and a faster processor for RAID 5/6 operations.
 
Correct, a single 600GB 10K or 15K SAS drive will NOT saturate a 3Gbps link, so you won't see any performance difference.

RAID 10 will be good, as you get the speed of RAID 0 and the redundancy of RAID 1. There's little reason these days to use anything other than RAID 10 vs., say, RAID 6: if you want performance, go RAID 10; if you want safer redundancy and you do more reads than writes, go RAID 6.
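
A quick sanity check on the saturation point, plus the capacity of the six-drive RAID 10; the ~200MB/s sequential figure is a typical spec-sheet number for a 600GB 15K drive, not a measurement from this box.

Code:
# Can one 15K SAS drive saturate a 3Gbps lane? (Assumed ~200 MB/s sequential.)
drive_seq_mb = 200
lane_usable_mb = 3.0 * 1000 / 8 * 0.8   # ~300 MB/s after 8b/10b encoding
print(f"Drive ~{drive_seq_mb} MB/s vs lane ~{lane_usable_mb:.0f} MB/s -> "
      f"{'saturated' if drive_seq_mb >= lane_usable_mb else 'not saturated'}")

# Usable capacity of six 600GB drives in RAID 10 (striped mirrors): half the raw space.
drives, size_gb = 6, 600
print(f"RAID 10 usable: {drives * size_gb // 2} GB of {drives * size_gb} GB raw")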
 
Just out of curiosity, what would saturate a 3Gbps link? Eight drives in RAID 10? I am only doing six 600GB 15K drives.
 
Only an SSD or expander will saturate it.

If you are using 15Krpm disks, you're not really worried about saturating it at all. They will do something like 200-250MB/sec sequential, so they can get close to saturating it, but if raw bandwidth was your goal you would be using 7200rpm or slower disks.

The point of 15Krpm disks is to handle more IOPS, around 200-250 IOPS vs. the 80 IOPS of a 7200rpm disk.

At 250 IOPS of random I/O, you're pushing less than 10MB/sec.

When you design your system, you need to know what you need: max IOPS or max bandwidth.
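
To put numbers on the IOPS-vs-bandwidth distinction (the I/O sizes below are illustrative assumptions, not figures from the thread):

Code:
# Random-I/O throughput at the rough IOPS figures quoted above.
for label, iops in (("15Krpm", 250), ("7200rpm", 80)):
    for io_kb in (4, 32):
        mb_per_s = iops * io_kb / 1024
        print(f"{label}: {iops} IOPS @ {io_kb}KB = {mb_per_s:.1f} MB/s")
# Even at 32KB random I/O a 15K drive moves well under 10 MB/s,
# nowhere near the ~300 MB/s a 3Gbps lane can carry.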
 
If you have LOTS of HDs and your controller is fast enough, RAID 50 is worth a look as well.

I have a 24-bay array of 900GB 10K drives with 20 drives in RAID 50 and 4 global hot spares (four 5-drive RAID 5s striped together).

Storage efficiency is higher than RAID 10, just slightly less reliable than RAID 6, and about 3 times more reliable than a 17-drive RAID 5 with the same storage volume.

BUT... write IOPS performance is 200% higher than RAID 6, and rebuilds take just under an hour.

FYI, on the same controller configured as one giant RAID 6 array, a failed drive takes 19 hours to rebuild.

Critical data is backed up offsite, of course.
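
For comparison, a rough storage-efficiency calculation for that 20-drive layout; the 4 x 5-drive RAID 5 span arrangement is inferred from the post, not pulled from the controller config.

Code:
# Usable capacity of 20 x 900GB drives under the layouts discussed above.
drives, size_gb = 20, 900
raid50_usable = 4 * (5 - 1) * size_gb   # 4 RAID 5 spans of 5 drives, 1 parity drive each
raid10_usable = drives // 2 * size_gb   # mirrors: half the raw space
raid6_usable = (drives - 2) * size_gb   # one big RAID 6: 2 parity drives

for name, gb in (("RAID 50", raid50_usable), ("RAID 10", raid10_usable), ("RAID 6", raid6_usable)):
    print(f"{name}: {gb} GB usable ({gb / (drives * size_gb):.0%} of raw)")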
 
No one recommends RAID 5 these days.

I would just drop two of the hot spares and use RAID 60 over 22 disks vs. RAID 50 over 20.

You gain a lot more reliability and keep the same 200% performance boost.
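
A quick capacity check on that suggestion; the 2 x 11-drive RAID 6 span layout is just an assumed way to arrange the 22 disks.

Code:
# RAID 50 over 20 disks vs RAID 60 over 22 disks, 900GB each.
size_gb = 900
raid50_usable = 4 * (5 - 1) * size_gb    # 4 x 5-drive RAID 5 spans, 1 parity drive each
raid60_usable = 2 * (11 - 2) * size_gb   # 2 x 11-drive RAID 6 spans, 2 parity drives each
print(f"RAID 50 (20 disks): {raid50_usable} GB usable, tolerates 1 failure per span")
print(f"RAID 60 (22 disks): {raid60_usable} GB usable, tolerates 2 failures per span")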
 
Just out of curiosity, what would saturate a 3Gbps link? Eight drives in RAID 10? I am only doing six 600GB 15K drives.

If there is no expander then 3Gbps is what each drive gets, so the number of drives doesn't matter. What you've got is 8*3=24Gbps for your RAID.

Now with expanders you can have drives "sharing" bandwidth and then going 6Gbps might make sense, but only with fast drives.
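
A small sketch of that difference; the 4-lane expander uplink is an assumed topology for illustration, not this server's actual wiring.

Code:
# Per-drive bandwidth: direct attach vs behind an expander, 8 drives on 3Gbps SAS.
drives = 8
lane_mb = 3.0 * 1000 / 8 * 0.8               # ~300 MB/s usable per 3Gbps lane
direct_per_drive = lane_mb                   # each drive has its own lane
expander_per_drive = (4 * lane_mb) / drives  # 4-lane wide port shared by all 8
print(f"Direct attach:   ~{direct_per_drive:.0f} MB/s per drive")
print(f"Behind expander: ~{expander_per_drive:.0f} MB/s per drive if all 8 stream at once")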
 