Onboard SATA III vs HBA card for 6Gb/s drives

Plecebo
n00b | Joined Jun 29, 2011 | Messages: 9
Hello Everyone!

I wish I had found this forum a few days ago; there's lots of great information in here regarding bigger storage installs.

Here is the short version of my question:
I am building out a 16-drive file server using either a combination of onboard SATA III ports and an LSI SAS2008 card, OR one of the HP SAS Expanders. I'm worried that the HP card might be slower for 6Gb/s drives than the onboard SATA III option.

Is that something I should be concerned about?

Here is the longer version:
I am in the process of upgrading my 8 x 1TB file server (mostly WD Green SATA II drives) into something a bit bigger since I'm running out of space. I currently manage things through Ubuntu Server with mdadm. Pretty standard, and aside from the space issue it works fine.

I found the Norco RPC-4220, which is a big step up as far as drive bays go, since I'm currently maxed out at 8.

I'm in the process of ordering 8 x 2TB Hitachi SATA III drives, splitting the order between Amazon and Newegg over the span of 2 weeks or so to avoid getting one big batch from the same production run. Is this a real concern, or should I just order them all at once?

I would like to use all 16 of my drives (8 new, 8 old) in the new 4220, and I would like to take advantage of any speed boost from SATA III as well.

I found the excellent reference here: http://blog.zorinaq.com/?e=10 which led me to believe two things.
  1. The onboard SATA III connections give more throughput than a controller card because of the bandwidth between the chipset and the processor versus going over the PCIe lanes. Is this a serious concern? I'm thinking not so much, or else why would they sell cards that can handle many (24) drives if that would completely saturate the PCIe lanes? (See the rough numbers just below this list.)
  2. The LSI SAS2008 is just about the best bet as far as compatibility and performance go.
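
A quick back-of-envelope check on point 1 (my own rough numbers, assuming a PCIe 2.0 x8 card and roughly 130 MB/s sustained per spinning drive):
Code:
# rough ceiling check: PCIe 2.0 x8 HBA vs 24 spinning drives
# assumptions: ~500 MB/s usable per PCIe 2.0 lane, ~130 MB/s sustained per drive
echo "x8 slot:   $(( 8 * 500 )) MB/s"    # ~4000 MB/s
echo "24 drives: $(( 24 * 130 )) MB/s"   # ~3120 MB/s, so the slot is not the first bottleneck
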
These two assumptions led me to purchase these items:

  • 1x LSI SAS2008 card
  • A motherboard with 8x 6Gb/s SATA connections and 3x PCIe 2.0 x16 slots. The intent is to fill the 8 onboard SATA ports with the newer Hitachi 6Gb/s drives using SAS reverse breakout cables, and use the LSI card to control the older 3Gb/s drives. This is essentially a high-powered gamer motherboard, which makes me nervous to stick in a server; I'd rather get a less flashy, more stable server-type board, but I couldn't find one with 8 SATA 6Gb/s ports.

Future expansion would require more LSI SAS2008 cards in the other PCIe 2.0 x16 slots.

Since reading this forum I see another option would be to pick up one of the HP SAS expander cards discussed in this thread: http://hardforum.com/showthread.php?t=1484614 for the same cost as two of the LSIs and be covered for 24 devices.

Is the bandwidth across the PCIe bus an issue if you were running 24 drives across it in a raidz2 setup? (I know I am not in that situation yet, but I want to future-proof a bit.)

This will mostly be a file server (I was looking into FreeNAS with raidz2, but further testing will be needed to see if that setup works for me) to share large media files (4GB-50GB files to an XBMC head). I also run a Subsonic streaming media server (which transcodes music/movies for me) and do some transcoding of my own media files, mostly HandBrake/H.264, so I'd like to keep the processor/RAM specs beefy if possible.

Any advice is appreciated. I've got most of the items on order, but I wouldn't mind switching things around a bit if it will be better for me in the long run. Should I switch to the HP card, or stick with the LSI?

Thanks!
 
You won't gain anything from SATA III with mechanical drives aside from slightly faster burst transfers out of the drive's cache to the mobo.

Where SATA-III really shines is with new-generation SSDs.

Getting a SAS expander isn't a bad idea for pass-through on software RAID. You should be just fine with the 24-port HP expander. In all honesty, it's what I would do, rather than having a mix of mobo ports and a separate controller card, if only for consistency within your Linux distro. I would switch to the HP personally, but either choice will net you good results.

I actually like your idea, and as far as the gaming-class mobo goes, it should be fine. If anything, it will be overkill for your needs and should be very stable, since it is built for high heat output and heavy usage.
 
Thanks for the feedback, Red Falcon. That is the way I am leaning, but it is nice to bounce the ideas off others with more experience than I have and get confirmation.

I think I'll buy the HP and run it all through the expander.

If anyone else has feedback, I'd be glad to hear it.
 
Yeah, it should be fine running through the expander as a pass-through node.

If there is a problem in the future, it will make things simpler to solve than having some of the HDDs go through the mobo and some through the expander.

Good luck and let us know how it turns out for you.
 
Placebo,

I am in a very similar situation, so I would be happy to exchange ideas and results.
May I ask what motherboard you found that has "8x 6Gb/s sata connections and 3 x 16x PCIe 2.0 slots" ?

best regards

Tibrebour
 
Hey Tibrebour,

Here are the parts I've got for the build so far:

Case: NORCO RPC-4220
Mobo: ASRock 890FX Deluxe4
Processor: AMD Phenom II X6 1090T Black Edition
Ram: G.SKILL Ripjaws X Series 8GB (2 x 4GB) 240-Pin DDR3 SDRAM DDR3 1600
Power Supply: CORSAIR Enthusiast Series CMPSU-650TX 650W ATX12V / EPS12V
Boot Device: Patriot Supersonic 32GB USB 3.0 Flash Drive
SAS Card: HP 468406-B21 PCI Express Full-height Plug-in Card SAS 24-port SAS RAID Controller
Backplane to SAS Card: 5x NORCO C-SFF8087-D SFF-8087 to SFF-8087 Internal Multilane SAS Cable - OEM
Backplane to Power: 1x NORCO C-P1T7 4 Pin Molex 1 Male to 7 Female Power Extension Splitter Cable - OEM

Here is my plan for the drives under FreeNAS:
(new) 8 x HITACHI Deskstar 5K3000 HDS5C3020ALA632 (0F12117) 2TB 32MB Cache SATA 6.0Gb/s 3.5" in a raidz2 configuration

(existing) 6x Western Digital Caviar Green WD10EADS 1TB 32MB Cache SATA 3.0Gb/s 3.5"
(existing) 2x Western Digital RE3 WD1002FBYS 1TB 7200 RPM 32MB Cache SATA 3.0Gb/s 3.5"

My 8 existing drives & system are set up as an md RAID 10 array and will remain that way until the new 8 x 2TB raidz2 array comes online and has all the data. At that point I'll move the 8 existing drives to the FreeNAS install, configure them as a separate raidz2 array, and use it for backup. (Rough sketch of the two pools just below.)
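
To illustrate the end state I'm aiming for, here's a minimal sketch of the two pools (the da0..da15 device names are just placeholders; the real names depend on how FreeNAS enumerates the drives, and I'll most likely build the pools through the GUI anyway):
Code:
# new 8 x 2TB Hitachis as the primary pool (hypothetical device names)
zpool create tank raidz2 da0 da1 da2 da3 da4 da5 da6 da7
# later, the 8 old 1TB WDs as a separate backup pool
zpool create backup raidz2 da8 da9 da10 da11 da12 da13 da14 da15
zpool status
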

I have parts trickling in from various vendors this week and will probably try to put it all together this upcoming weekend. I'm excited to see it all come together.

As the drives arrive I'm running a badblocks and SMART scan on them; so far so good.
 
Placebo,

Thanks for the very precise answer! :)

Yes, currently the only way to get 8x 6Gb/s SATA connections is to go with an AMD board.
Your configuration looks very safe.

Maybe one question: AM3+ motherboards accept AM3 processors, so why not choose an 890FX Deluxe5 or 990FX Extreme4 instead of the ASRock 890FX Deluxe4?
They're the same price, and both have 8x 6Gb/s SATA ports and look great.

Otherwise, I wonder how much speed you will get from the internal hard drives (connected to the motherboard) vs the ones connected to the SAS card ...
 
I've made some discoveries since my last post and thought I would share.

First of all (and of course it is right there in the first post of that thread if you read it), the HP SAS expander needs a separate controller card to function. Since I had already ordered the LSI SAS2008, I went ahead and used that. So my chain goes (or will, once I get all the cables in):

HD (all 20 of the drive bays are hooked up this way) -> (via 5x SAS cables) HP SAS Expander -> (Via 2x SAS cables) LSI SAS2008

As far as the storage goes that is pretty much all there is to it. This seems to work great so far with both the LSI and HP cards plugged into the PCIe x16 slots.
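
For anyone wanting to sanity-check the same kind of chain under Linux, a couple of commands I found handy (lsscsi isn't installed by default on Ubuntu; mpt2sas is the driver the SAS2008 should load):
Code:
sudo apt-get install lsscsi
lsscsi                     # every drive behind the expander should show up here
dmesg | grep -i mpt2sas    # confirms the kernel picked up the LSI SAS2008
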

As for the rest of the server, I've run into one item I'm still working on solving, and one "should have read the spec sheet a little closer" moment.

So far I've been unsuccessful at booting via the USB 3.0 ports on the board; the USB 2.0 ports have worked fine, but 3 > 2, so I'll keep working on that.

Also, the board doesn't have onboard video output *doh*, so while I'm configuring I've stolen my desktop's video card. Once I have the server set up and running the way I want, I'll remove the video card and run it headless.

Other than these minor issues things have been good. I'm still waiting for my last hard drive to show up (Newegg can take forever to ship stuff), then I'll be testing them all (badblocks & S.M.A.R.T.) before moving on to the FreeNAS installation and configuration.

For now here are some benchmarks:
/dev/sda is connected to the onboard SATA III (6Gb/s) ports
/dev/sde is connected via the HP SAS expander -> LSI SAS2008

drives formatted as ext2, unmounted
Code:
you@sol:~$ sudo hdparm -tT /dev/sda /dev/sde

/dev/sda:
 Timing cached reads:   6844 MB in  2.00 seconds = 3422.87 MB/sec
 Timing buffered disk reads: 394 MB in  3.00 seconds = 131.25 MB/sec

/dev/sde:
 Timing cached reads:   6828 MB in  2.00 seconds = 3414.84 MB/sec
 Timing buffered disk reads: 426 MB in  3.01 seconds = 141.75 MB/sec

Now mounted
Code:
you@sol:~$ df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sdd1              22G   12G  8.6G  59% /
none                  3.9G  236K  3.9G   1% /dev
none                  4.0G     0  4.0G   0% /dev/shm
none                  4.0G   52K  4.0G   1% /var/run
none                  4.0G     0  4.0G   0% /var/lock
/dev/sda1             1.8T   68M  1.7T   1% /mnt/sda
/dev/sde1             1.8T   68M  1.7T   1% /mnt/sde

dd write
Code:
you@sol:~$ time sh -c "dd if=/dev/zero of=/mnt/sda/ddfile bs=8k count=3000000 && sync"
3000000+0 records in
3000000+0 records out
24576000000 bytes (25 GB) copied, 213.286 s, 115 MB/s

real    3m44.569s
user    0m1.090s
sys     0m40.660s
you@sol:~$ time sh -c "dd if=/dev/zero of=/mnt/sde/ddfile bs=8k count=3000000 && sync"
3000000+0 records in
3000000+0 records out
24576000000 bytes (25 GB) copied, 190.825 s, 129 MB/s

real    3m21.716s
user    0m1.000s
sys     0m44.560s

and finally read tests
Code:
you@sol:~$ time dd if=/mnt/sda/ddfile of=/dev/null bs=8k
3000000+0 records in
3000000+0 records out
24576000000 bytes (25 GB) copied, 191.086 s, 129 MB/s

real    3m11.101s
user    0m1.450s
sys     0m38.960s
you@sol:~$ time dd if=/mnt/sde/ddfile of=/dev/null bs=8k
3000000+0 records in
3000000+0 records out
24576000000 bytes (25 GB) copied, 167.245 s, 147 MB/s

real    2m47.260s
user    0m1.350s
sys     0m39.720s

Looks like these two are pretty close, as was suggested they would be in the comments above.
For some reason I still have this inkling that running things through the onboard SATA ports would be faster. Interesting that the drive hooked up to the HP expander had faster read times than the other (I actually ran that test twice to verify).

Perhaps when I get my last drive I'll do a 4-drive RAID 0 comparison and see if I can saturate the I/O bandwidth to the card vs. the onboard SATA. (Rough plan for that test below.)
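
Roughly what I have in mind for that comparison (device names are placeholders, and both arrays get torn down afterwards):
Code:
# 4 drives on the onboard ports vs 4 behind the expander (hypothetical device names)
sudo mdadm --create /dev/md10 --level=0 --raid-devices=4 /dev/sd[a-d]   # onboard SATA
sudo mdadm --create /dev/md11 --level=0 --raid-devices=4 /dev/sd[e-h]   # HP expander -> LSI
sudo dd if=/dev/md10 of=/dev/null bs=1M count=25000 iflag=direct
sudo dd if=/dev/md11 of=/dev/null bs=1M count=25000 iflag=direct
sudo mdadm --stop /dev/md10 /dev/md11                                   # clean up
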

If there are other tests I should run, let me know; I'll leave this configuration up at least until I get the last drive.

Cheers
 
...
For some reason I still have this inkling that running things through the onboard SATA ports would be faster. Interesting that the drive hooked up to the HP expander had faster read times than the other (I actually ran that test twice to verify).
NOTE: Please also read the edit/addendum at the end before drawing any conclusions. Thank you.

The real lesson: All drives are not created equal.
Your sda drive is a "couch potato" (at least in the first 25GB). Try swapping the drives and re-run your read test. In future, if you want to do a comparative test of two controllers, use the same drive (or array). Yeah, it means more futzing around, but if you want useful test results ...

Note: For this type of testing, it is sufficient, and (I believe) preferable, to completely bypass the filesystem (and kernel buffers), i.e.,
dd if=/dev/sda of=/dev/null bs=1M count=25000 iflag=direct

Edit/Addendum (after some post-posting reflection):
My initial conclusion about the drive on sda being a "couch potato" (significant underperformer) is a strong possibility. It is not uncommon to find a 10-15% variance in performance among "equivalent" drives (same make/model/firmware). [Anyone setting up a striped array should endeavor to avoid such mismatched performance; otherwise you might as well not be using current-generation drives.]

Still, it is also possible [even likely; that ~130MB/sec reading is "familiar" evidence of guilt] that the OP's mobo SATA ports (at least the one under test) are only running at SATA I spec (1.5Gbps), whether by driver deficiency or manufacturer deception.

Further rigorous, controlled (single-variable) testing is needed. Regardless of whether the mobo ports should be faster than an add-in card's or not, they should definitely NOT be running at SATA I if they are documented to be SATA III.
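
A quick way to check the negotiated link speed on Linux (assuming the standard libata driver; exact message wording varies a bit by kernel version):
Code:
dmesg | grep -i "SATA link up"
# a true 6Gb/s port reports "SATA link up 6.0 Gbps"; seeing 1.5 Gbps would confirm the SATA I suspicion
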

[/Edit/Addendum]

-- UhClem
 
This is very interesting!

Just a basic question before going further: which software do you use for the badblocks & S.M.A.R.T. tests?

Tibrebour
 
Yikes, I missed some of these posts, I guess. Sorry about that, guys.

As an update on my end: I have things up and running under Ubuntu using md RAID 6 and the 8 new 2TB drives. Things have been going great so far; I got all my data moved over and I've been stress testing the server. Network speeds seem a bit low, so I'm looking into network-related issues, but other than that things are good. (The array setup is sketched below for anyone curious.)
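
For reference, the array setup is roughly the following (the /dev/sd[b-i] names are placeholders for the 8 Hitachis; adjust to match your own system):
Code:
sudo mdadm --create /dev/md0 --level=6 --raid-devices=8 /dev/sd[b-i]
cat /proc/mdstat                   # watch the initial sync
sudo mkfs.ext4 -m 0 /dev/md0       # create the filesystem (it can run while the array syncs)
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf   # persist the array across reboots
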

I tried FreeNAS 8.0 and really liked it. I didn't get to stress test the ZFS parts too much, but the configuration was SIMPLE. I ultimately decided to stick with Ubuntu because I am really unfamiliar with (read: have never used) any of the BSD variants, and while they intrigue me as top-notch OSes, they were not what I was familiar with. Simple things like installing/compiling software and where config files are located were different enough for me to stick with the safe bet. I'm sure I could have figured it out given time, but I wanted to finish the project, not learn a new OS, so I stayed with Ubuntu. I'll keep an eye on the kernel ZFS work and consider it for my next upgrade in a few years.

@UhClem

I can see that my methodology is a bit flawed. I found it quite difficult to locate reliable information on similar hardware setups and methodology for reliable comparison testing.

For this build I decided to move forward passing all the drives through the HBA, since I had already bought the parts (returning items can be such a PITA) and it gives me more flexibility down the road. I still have to move my 1TB drives over to the new enclosure, so perhaps I'll test with them in this configuration and report back, though I'm leaning towards just not worrying about it and moving forward. I've been happy with the RAID speeds, so why rock the boat?

@tibrebour
I'm using the Linux tools badblocks (http://linux.die.net/man/8/badblocks) and smartctl (http://linux.die.net/man/8/smartctl), specifically the commands below.

WARNING: This will destroy data on your drive. If you have data on the drive, look for one of the non-destructive methods in the badblocks man page.
Code:
sudo badblocks -svw /dev/sdx
I ran this simultaneously over all 8 drives (screen ftw). It took about 40 hours to complete, give or take. Obviously it is destructive, so you can't use your drives while this is going on, and you will have to start over with each drive afterwards (partition/format/assemble etc.).
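
If a drive already has data on it, badblocks also has a non-destructive read-write mode; it's slower, but it leaves the existing contents intact:
Code:
sudo badblocks -nsv /dev/sdx    # -n = non-destructive read-write test
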

Code:
sudo smartctl -t long /dev/sdx
This is a completely behind-the-scenes test (it runs on the drive itself) and you have to check back later to see the results. I'm not 100% sure, but I think the SMART tests can even run while the drive is in use.
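
To see how a test came out afterwards (standard smartctl invocations; substitute the right device):
Code:
sudo smartctl -l selftest /dev/sdx   # self-test log, with pass/fail and first-error LBA if any
sudo smartctl -H /dev/sdx            # overall SMART health verdict
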
 