WHS build questions

Saggy (Weaksauce) · Joined: May 26, 2008 · Messages: 91
After a lot of research, I am leaning toward building a WHS. Here are the main parts I am unsure of.

Lots of the server builds you find on Google are fairly cheap, using old parts, but I am thinking I want at least 10 to 15 drives; after researching, here is the deal:
I saw a post where someone said the Norco RPC-4224 24-bay case is actually not a good choice because it does not come with an expander, so you would need to buy a really expensive RAID card that can support 15 to 24 drives in my case.
or
get a http://www.supermicro.com/products/chassis/4U/846/SC846E1-R900.cfm which costs about $1,300 but includes power supplies and an expander. Then all I need is a $400 RAID card.

So the total cost for the Norco route is $400 case + $1,400 RAID card (I have a feeling I am wrong here) + $400 power supply = $2,200
or
Supermicro case + power supply + expander ($1,300) + $400 3ware 9690SA-4I-SGL 4-port = $1,700

Is this correct? Is it overkill? How else can I make this 15-to-24-drive WHS happen? I do want the system drive to be in RAID 1, though (as someone suggested in another thread). The other drives are not going to be in RAID.

thank you
 
Why do you want to use a RAID card with WHS?

ah, you're talking about VAIL??? ;)
 
Why do you need/want so many drives? You only want RAID on the boot drives - why would you buy a $1400 RAID card then? If you listed exactly what you are trying to accomplish and why, I'm sure the fine people here could help you better.
 
Is this correct? Is it overkill? How else can I make this 15-to-24-drive WHS happen? I do want the system drive to be in RAID 1, though (as someone suggested in another thread). The other drives are not going to be in RAID.

No dude, that's not correct at all. And yes that is complete and utter overkill.

This is the more common WHS or just large general-purpose file server setup:
$100 - Intel Core i3-530 CPU
$190 - Supermicro MBD-X8SIL-F-O Intel 3420 mATX Motherboard
$79 - Kingston 2 x 2GB ECC Unbuffered DDR3 1333 RAM
$200 - 2 x SuperMicro AOC-SASLP-MV8 PCI-Ex4 8 Port SATA Controller Card
$30 - 2 x 3ware SFF-8087 to Multi-lane SATA Forward Break-out Cable
$80 - Antec Truepower New TP-750 750W PSU
$380 - NORCO RPC-4224 4U Rackmount Server Case
---
Total: $1059 plus tax and shipping
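
If you want to sanity-check the math, here's a quick sketch that just adds up the parts list above (prices as quoted in this post, before tax and shipping; purely illustrative):

```python
# Quick sanity check of the parts list above. Prices are the ones quoted in
# this post (before tax and shipping); this is just a sum, nothing more.
parts = {
    "Intel Core i3-530 CPU": 100,
    "Supermicro MBD-X8SIL-F-O motherboard": 190,
    "Kingston 2 x 2GB ECC DDR3-1333 RAM": 79,
    "2 x Supermicro AOC-SASLP-MV8 controllers": 200,
    "2 x SFF-8087 forward breakout cables": 30,
    "Antec TP-750 750W PSU": 80,
    "Norco RPC-4224 case": 380,
}
print(sum(parts.values()))  # -> 1059
```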

That's well below your planned setup, and you actually get most of the entire server in that budget, not just the case, controllers, and PSU.
 
Danny, thank you so much for the detailed reply, this is really helpful.

EDIT: The Norco case uses six internal SFF-8087 mini-SAS connectors. How do I connect those to the SATA controller card you recommended? I guess I don't need the breakout cables; instead I just buy 4 SFF-8087 mini-SAS cables (for the 2 cards)? Can the mobo support 3 of these cards?
http://www.newegg.com/Product/Product.aspx?Item=N82E16816133034 This cable??

I see that you recommended 2 SATA controllers; each one, I assume, has 2 ports that, when used with 2 breakout cables, can attach 8 drives. I would then need 4 breakout cables for the 2 cards? If in the future I want to utilize all 24 drive bays, do I need to buy another one of these cards plus 2 cables?
Do you have any recommendations for a setup that can attach up to 24 drives? The reason is that I already have quite a few drives in USB enclosures and in my old NAS; together they will fill up quite a few bays when I migrate over to the new server, and I imagine I will use up all 24 bays soon.

Asposium, you give me too much credit. I thought I needed a RAID card because I need to mirror the system drive, and I thought I needed an expensive RAID card because I didn't know how else to attach 21 more drives to the motherboard.

To Grimham: what I am trying to do is migrate my old NAS (out of space, 4 drives) plus many already-filled USB drives to a single-server solution, so I can have my own little cloud at home. It will mainly be a file server. Since WHS can accomplish this with different-size drives, that's why I am leaning toward a WHS build. To ensure little downtime in case of a drive failure, I will use RAID 1 on the system drive. So at least one RAID card is needed that can create RAID 1 (plus hot spare) for the system drive; that's 3 drive bays already. I will also need enough SATA connections so I can attach the rest of the 21 drives (it will get there soon).

That's why, when I researched it, I found people saying the Norco cannot accomplish this without a RAID card that supports 24 drives. It turns out that is wrong, since I don't really need RAID on the rest of the drives (correct??). They also said that instead I could buy that $1,300 Supermicro case that comes with an expander, and then I would only need one $400 RAID card. I guess this setup is more for serious server needs in a server farm?

So in short, for my needs, what is the best possible build? Basically, what I don't know is how I can attach 24 drives, 3 of them in RAID 1 with a hot spare, to a build. I would like a robust hardware RAID 1 for the system drives, though; my experience is that motherboard RAID is bad?

once again, thank you all for the helpful replies.
 
The Supermicro board mentioned has an onboard SAS controller. Just get an HP SAS expander and plug it into the SAS controller. You won't need breakout cables, just one cable per expander port to a backplane on the Norco for 4 drives. The RAID 1 array can be done with the onboard Intel controller. Use a reverse breakout cable for this; it will use 4 SATA ports and go to one of the Norco backplanes.
 
How robust is the onboard RAID? I'd much rather use a dedicated RAID card; the one Danny mentioned isn't too expensive, only $100, so I guess I can live with that.
That is assuming I use the HP SAS expander, which is $400; it's actually more expensive than the RAID cards x 3. Which one is actually the better implementation, assuming money is no problem, since the difference is minuscule?
 
The Supermicro board mentioned has an onboard SAS controller. Just get an HP SAS expander and plug it into the SAS controller. You won't need breakout cables, just one cable per expander port to a backplane on the Norco for 4 drives. The RAID 1 array can be done with the onboard Intel controller. Use a reverse breakout cable for this; it will use 4 SATA ports and go to one of the Norco backplanes.
Wrong Supermicro board. The one listed above does not have the SAS controller. The one that does I can't find at any reputable site.

That's why, when I researched it, I found people saying the Norco cannot accomplish this without a RAID card that supports 24 drives. It turns out that is wrong, since I don't really need RAID on the rest of the drives (correct??). They also said that instead I could buy that $1,300 Supermicro case that comes with an expander, and then I would only need one $400 RAID card. I guess this setup is more for serious server needs in a server farm?
Yes.

How robust is the onboard RAID? I'd much rather use a dedicated RAID card; the one Danny mentioned isn't too expensive, only $100, so I guess I can live with that.
That is assuming I use the HP SAS expander, which is $400; it's actually more expensive than the RAID cards x 3. Which one is actually the better implementation, assuming money is no problem, since the difference is minuscule?
Understand this: unless you're planning on using SATA 6.0Gb/s SSDs, or the onboard RAID controller is dead, there is zero reason to get a dedicated RAID card for a RAID 1 array. ZERO REASON.

Also, the controllers I listed are NOT RAID controllers at all. They're just HBAs, or Host Bus Adapters, or basically just dumb storage controllers. They cannot provide any sort of hardware-based RAID at all. At best you get software RAID, but you don't want to use software RAID with WHS.

EDIT: The Norco case uses six internal SFF-8087 mini-SAS connectors. How do I connect those to the SATA controller card you recommended? I guess I don't need the breakout cables; instead I just buy 4 SFF-8087 mini-SAS cables (for the 2 cards)? Can the mobo support 3 of these cards?

I see that you recommended 2 SATA controllers; each one, I assume, has 2 ports that, when used with 2 breakout cables, can attach 8 drives. I would then need 4 breakout cables for the 2 cards? If in the future I want to utilize all 24 drive bays, do I need to buy another one of these cards plus 2 cables?

Do you have any recommendations for a setup that can attach up to 24 drives? The reason is that I already have quite a few drives in USB enclosures and in my old NAS; together they will fill up quite a few bays when I migrate over to the new server, and I imagine I will use up all 24 bays soon.

So in short, for my needs, what is the best possible build? Basically, what I don't know is how I can attach 24 drives, 3 of them in RAID 1 with a hot spare, to a build. I would like a robust hardware RAID 1 for the system drives, though; my experience is that motherboard RAID is bad?
Sorry, I listed the wrong cables and the wrong quantity. You need four of these cables:
$15 - Norco C-SFF8087-D SFF-8087 to SFF-8087 Multilane SAS Cable

You use those four cables to connect the two SAS controller cards to four of those SFF-8087 ports on the backplane of that case. That'll net you 16 hard drives right there. Then get this cable:
$13 - Norco C-SFF8087-4S SFF-8087 to Multi-lane SATA Reverse Break-out Cable

Use that to connect the four SATA ports on that Supermicro motherboard to a fifth SFF-8087 port on the backplane. Make sure to mark which HDD bays are serviced by that SFF-8087 port, since you'll be running the RAID 1 array off the motherboard's onboard RAID.

Now, get one more SuperMicro AOC-SASLP-MV8 card and two more SFF-8087 cables. Hook up the 3rd Supermicro card to the last SFF-8087 port on that backplane. That's it: you now have 24 hard drives connected, with 2-3 of those drives in a RAID 1 array. Yes, you do have one SFF-8087 port left over on the 3rd card, but that's gonna be useful for future expansion. I'll talk more about this in a bit.
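
Just to keep the port math straight, here's a rough tally of that internal hookup (counts only; it assumes 4 drives per SFF-8087 backplane port on the 4224, as described above):

```python
# Rough tally of the internal hookup described above. Assumes 4 drives per
# SFF-8087 backplane port on the RPC-4224 (six 4-drive backplanes total).
DRIVES_PER_PORT = 4

two_mv8_cards = 2 * 2 * DRIVES_PER_PORT  # two AOC-SASLP-MV8 cards, 2 ports each -> 16 drives
onboard_sata  = 4                        # mobo SATA via the reverse breakout (RAID 1 + spare live here)
third_mv8     = 1 * DRIVES_PER_PORT      # 3rd card, one port to the last backplane -> 4 drives

print(two_mv8_cards + onboard_sata + third_mv8)  # -> 24, with one port still free on the 3rd card
```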

OK, now you mentioned the HP SAS Expander. Note that it's exactly that: an expander card. It cannot be used alone; it needs to be connected to a compatible controller card in order to use all of its ports. Luckily for you, the AOC-SASLP-MV8 card I linked IS compatible with the HP SAS Expander. So if you want to take advantage of the HP SAS Expander, read this article:
http://www.servethehome.com/sas-expanders-build-jbod-das-enclosure-save-iteration-2/

Now, remember that spare SFF-8087 port on that 3rd Supermicro controller card earlier? Connect that spare port to this:
$30 - 1-Port SFF-8087 to SFF-8088 Adapter

Then grab an SFF-8088 to SFF-8088 cable ($103) to connect that adapter to the SFF-8088 port on the HP SAS Expander, which has been set up according to that article. There you go: now you have a total of 48 hard drives, plus 12 drives to spare.
 
Basically you only want RAID for the boot drive.
You need a controller that supports up to 24 drives for WHS.

It's a little more expensive than Danny's setup, but it uses the right cables (the 4224 has SFF-8087 ports, so no breakout cables ;)) and I included 6 of them so all 24 drives can be connected. Also, you are not using any expansion slots. The LSI SAS controller also allows dual linking, so you can connect both onboard SAS ports to the SAS expander for 12Gbps of full-duplex throughput. And if you decide in the future that you want a more powerful RAID card to configure your drives in RAID 6, you only need to buy a 4-port version (much cheaper) and connect it to the SAS Expander.

I would get these items for your build
$280 - Supermicro X8SI6-F, because it has 8 built-in SAS2 6.0Gbps ports. The LSI2008 ROC it uses supports expanders like the HP SAS Expander (recommended below). ATX size, LGA1156 for use with Core i3 CPUs, so there's plenty of horsepower when needed but still low energy use and low heat. Has built-in IPMI 2.0 with virtual media over LAN and KVM-over-LAN support (you can access the BIOS and install the OS from a remote computer; no keyboard, mouse, or monitor needed).
$300 - HP SAS Expander - PM Synergy Dustin for the exact price
$100 - Intel Core i3-530 CPU
$79 - Kingston 2 x 2GB ECC Unbuffered DDR3 1333 RAM
$80 - Antec Truepower New TP-750 750W PSU
$380 - NORCO RPC-4224 4U Rackmount Server Case
$90 - 6x NORCO C-SFF8087-D SFF-8087 to SFF-8087 Internal Multilane SAS Cable - OEM

Total - $1309
 
Basically you only want RAID for the boot drive.
You need a controller that supports up to 24 drives for WHS.

It's a little more expensive than Danny's setup, but it uses the right cables (the 4224 has SFF-8087 ports, so no breakout cables ;)) and I included 6 of them so all 24 drives can be connected. Also, you are not using any expansion slots. The LSI SAS controller also allows dual linking, so you can connect both onboard SAS ports to the SAS expander for 12Gbps of full-duplex throughput. And if you decide in the future that you want a more powerful RAID card to configure your drives in RAID 6, you only need to buy a 4-port version (much cheaper) and connect it to the SAS Expander.

I would get these items for your build
$280 - Supermicro X8SI6-F, because it has 8 built-in SAS2 6.0Gbps ports. The LSI2008 ROC it uses supports expanders like the HP SAS Expander (recommended below). ATX size, LGA1156 for use with Core i3 CPUs, so there's plenty of horsepower when needed but still low energy use and low heat. Has built-in IPMI 2.0 with virtual media over LAN and KVM-over-LAN support (you can access the BIOS and install the OS from a remote computer; no keyboard, mouse, or monitor needed).
$300 - HP SAS Expander - PM Synergy Dustin for the exact price
$100 - Intel Core i3-530 CPU
$79 - Kingston 2 x 2GB ECC Unbuffered DDR3 1333 RAM
$80 - Antec Truepower New TP-750 750W PSU
$380 - NORCO RPC-4224 4U Rackmount Server Case
$90 - 6x NORCO C-SFF8087-D SFF-8087 to SFF-8087 Internal Multilane SAS Cable - OEM

Total - $1309
Basically a cleaner-looking setup than the mess I made above :D

Though a few questions so I know what to do next time:
1) Does the RPC-4224 have extra room somewhere for two non-hot-swappable hard drives? I know the RPC-4020 does, but I couldn't tell with the RPC-4224.

2) Does the LSI2008 ROC support basic RAID 0 and RAID 1? As in, can I create a RAID 0 or RAID 1 array on that card using the card's BIOS? That's the one thing I couldn't figure out.
 
It seems that nitrobass24's setup is more flexible if I ever want to do RAID. This is great, guys.
So if I go with nitrobass24's recommendation: I read in the post you linked to that the board you recommended can support "dual link", so I would connect 2 cables from the board to the expander? Which means I will actually need 8 cables: 2 from the board to the expander, 6 from the expander to the drive cage?
Thanks for the heads-up to buy from Synergy Dustin; I hope he ships to Canada. I will PM him later when things are finalized.

Just in case, do you have any recommendations for the 4-port RAID card if I ever wanted to go with RAID? The new MS news about DE removal is making me nervous; I might just go ahead with RAID and another OS.
The Areca ARC-1880i card from the post is fairly expensive. Is this the card? http://www.newegg.ca/Product/Product.aspx?Item=N82E16816151071 I live in Canada, by the way, and I have already located the other parts. Any good 4-port RAID card that can do RAID 6, preferably dual-linkable (just trying to maximize the hardware; it will probably be useful when rebuilding the RAID)?
Do you have any other recommendations for the RAID card besides this one? Seems like dual link is the way to go. Hopefully ones that have FreeBSD drivers, to maximize potential. The 1880 series has FreeBSD drivers.

If I were to get the RAID card from the get-go, I probably won't need this motherboard, right? I can just use the board Danny suggested and put the extra money into the RAID card?

I just want to say thank you all for the excellent replies and suggestions. I learned so much; and for Danny's detailed reply to my questions, thank you.
 
So if I go with nitrobass24's recommendation: I read in the post you linked to that the board you recommended can support "dual link", so I would connect 2 cables from the board to the expander? Which means I will actually need 8 cables: 2 from the board to the expander, 6 from the expander to the drive cage?
Yes
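
Spelled out, the cable count for that dual-linked layout looks like this (a simple tally based on the 2-in / 6-out arrangement described above, with 4 drives per backplane port):

```python
# Cable tally for the dual-linked HP SAS Expander layout: two SFF-8087 cables
# from the X8SI6-F's onboard SAS ports into the expander, six from the
# expander out to the RPC-4224's six backplanes (4 drives each).
to_expander   = 2   # dual link: board -> expander
to_backplanes = 6   # expander -> drive cages

print(to_expander + to_backplanes)   # -> 8 cables total
print(to_backplanes * 4)             # -> 24 drive bays reachable
```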

Just in case, do you have any recommendations for the 4-port RAID card if I ever wanted to go with RAID? The new MS news about DE removal is making me nervous; I might just go ahead with RAID and another OS.
The Areca ARC-1880i card from the post is fairly expensive. Is this the card? http://www.newegg.ca/Product/Product.aspx?Item=N82E16816151071 I live in Canada, by the way, and I have already located the other parts. Any good 4-port RAID card that can do RAID 6, preferably dual-linkable (just trying to maximize the hardware; it will probably be useful when rebuilding the RAID)?
Do you have any other recommendations for the RAID card besides this one? Seems like dual link is the way to go. Hopefully ones that have FreeBSD drivers, to maximize potential. The 1880 series has FreeBSD drivers.

Just check the list of compatible cards in the HP SAS Expander thread:
http://hardforum.com/showthread.php?t=1484614

AFAIK, there are no dual-linkable 4-port hardware RAID controllers.

If I were to get the RAID card from the get-go, I probably won't need this motherboard, right? I can just use the board Danny suggested and put the extra money into the RAID card?
Yeah you could probably do that.

Just want to point this out: if you're thinking about FreeBSD, then I highly recommend reading up on these links on ZFS:
Building your own ZFS fileserver
FreeBSD ZFS NAS Web-GUI

If you switch to FreeBSD + ZFS, you can ditch the true hardware RAID card, since one isn't really needed for FreeBSD + ZFS RAID. In fact, I would only recommend the true hardware RAID route if you're using a Windows-based OS. Most of the time you can get pretty good performance, as well as reliability, using Linux mdadm RAID or FreeBSD + ZFS RAID.

A few other options that don't require a true Hardware RAID card:
Some alternatives for those who are interested; perhaps we can start pooling info on the closest replacements for WHS that offer drive pooling and the ability to use drives of any size.

greyhole in amahi: http://wiki.amahi.org/index.php/Greyhole
http://code.google.com/p/greyhole/
http://www.amahi.org/

Flexraid:
http://en.wikipedia.org/wiki/FlexRAID
http://www.openegg.org/FlexRAID.curi

Unraid:
http://www.lime-technology.com/
http://lime-technology.com/wiki/index.php?title=UnRAID_Wiki
 
Basically a cleaner-looking setup than the mess I made above :D

Though a few questions so I know what to do next time:
1) Does the RPC-4224 have extra room somewhere for two non-hot-swappable hard drives? I know the RPC-4020 does, but I couldn't tell with the RPC-4224.

2) Does the LSI2008 ROC support basic RAID 0 and RAID 1? As in, can I create a RAID 0 or RAID 1 array on that card using the card's BIOS? That's the one thing I couldn't figure out.

1) No, they removed the top tray that is present in the 4020 and 4220 to make room for a 6th row of hot-swap drives.

2) The built-in LSI ROC supports RAID 0/1/10. It also supports RAID 5 with the purchase of an additional key ;)
 
There are no 4-port dual-linkable RAID cards, because you need 8 ports to dual link :)
No, if you go ahead and get a RAID card, then you don't need the mobo I listed.

As far as RAID cards go, I would highly recommend an Areca card. My first choice would be an 1880i ($600), but it's kind of expensive. If you want to save some money you can do what I did: grab a used 1680x for around $450 and just use an external SFF-8088 cable to the external port on the HP SAS Expander. The downside is that you are getting a last-generation card (though still one of the best available today), and you can't dual link with the HP SAS Expander.

Greyhole is still very experimental, and I would not put any data you care about on it.
FlexRAID is a great idea, but until FlexRAID Live becomes available I don't see it being a very good option either.

UnRAID is just a software implementation of RAID 4. It uses a dedicated parity disk that has to be at least as large as the largest data disk. Since you were considering WHS (JBOD), single-drive performance may be all you need/want, so this might be a good fit. The only problem with UnRAID is that the licensing scheme sucks: $120, and you can only use up to 20 drives.
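
For anyone unfamiliar with how a single dedicated parity disk can cover a whole array, here's a toy XOR sketch of the RAID 4 idea (a concept demo only, not how UnRAID actually implements it):

```python
# Toy illustration of the single-parity idea behind RAID 4 / UnRAID: one parity
# "disk" stores the XOR of all data "disks", so any ONE lost disk can be rebuilt.
# Concept demo only -- not UnRAID's actual implementation.
from functools import reduce

data_disks = [b"\x01\x02\x03", b"\x10\x20\x30", b"\xaa\xbb\xcc"]  # pretend disk contents

xor = lambda a, b: bytes(x ^ y for x, y in zip(a, b))
parity = reduce(xor, data_disks)                      # what the parity disk stores

lost = 1                                              # pretend disk 1 died
survivors = [d for i, d in enumerate(data_disks) if i != lost]
rebuilt = reduce(xor, survivors, parity)              # XOR parity with the survivors
assert rebuilt == data_disks[lost]                    # the lost disk comes back intact
```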

ZFS - free, decent performance, good drive protection. The BSD OS is difficult to use IMO (I'm a Windows guy). You cannot expand arrays; you have to build a new array and add that to the drive pool. It's only free if your time is worthless.


Basically, the conclusion I came to when Vail still had DE was that past about 10 drives DE was not efficient with my storage, but I still wanted all the features Vail had to offer. I opted for the RAID 6 route and pass a 10TB volume to Vail (Vail supports GPT disks natively). I add it to the storage pool, but I turn off all duplication.
 
Thanks for the writeup nitro!

Definitely gonna be using that writeup in many future "What file server OS should I use?" threads that we're gonna get.
 