The [H]ard Forum Storage Showoff Thread - Post your 10TB+ systems

Here's my new setup. My HP EX470 had a meltdown last month so I broke down and built my own.

13.5TB advertised, 12.28TB formatted

Case: Cooler Master Elite 335
PSU: Antec EA430W
Motherboard: Asus M4A785-M
CPU: Athlon II X2 245
RAM: 4GB Corsair ValueRam DDR2-800
GPU: Onboard ATI 4350
Controller Cards: Supermicro AOC-SASLP-MV8, eSATA card for Sans Digital TR4M-B (unused)
Optical Drives: None
NIC: Intel PWLA8391GT
Hard Drives: 13.5TB WD Green drives (see pic for model numbers)
Battery Backup Units: Belkin F6C900 900VA
Operating System: WHS

I mostly use this setup for HD video and Blu-ray rips, but I've also got about 75GB of music and a few hundred thousand pictures. I back up my documents, photos, and my favorite videos to a pair of 1TB drives that I keep offsite in a classified facility where they are very safe.

I've got room for 3 more drives in the case, and then I have the Sans Digital enclosure for 4 more drives if necessary. I'd prefer not to use that since the fan on it is rather loud.
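
By the way, the gap between the advertised and formatted numbers above is just decimal vs. binary units, not lost space. A quick sanity check (my own arithmetic, assuming drive makers count 1TB = 10^12 bytes while Windows counts in 2^40):

Code:
ADVERTISED_TB = 13.5                     # sum of the drive labels
bytes_total = ADVERTISED_TB * 10**12     # vendors use decimal terabytes
formatted_tb = bytes_total / 2**40       # Windows reports "TB" but counts binary
print(f"{ADVERTISED_TB} TB advertised ~= {formatted_tb:.2f} TB in Windows")
# -> 13.5 TB advertised ~= 12.28 TB in Windows, matching the numbers above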

Pics attached: the server (disregard the netbook) and the console.
 
Here's my new setup. My HP EX470 had a meltdown last month so I broke down and built my own.
So your 8TB-tops WHS machine melts down and you build one with 2x the capacity?

VERY [H] indeed! :p

Case: Cooler Master Elite 335
Very clean case. Most 33x series cases I've seen are hideous, except maybe one or two; good to see Cooler Master can still make low-priced ATX cases.

All in all, interesting rig. Now, can we see the hardware p0rn? lol

Cheers.

Miguel
 
Miguel: In a number of your posts, you spoke clearly about the importance of using the share shortcuts on your WHS instead of 'D:\' when you're RDC'ed in. My question is: typically, when you move files within one HD, the move (an OS pointer move) is instant, but when you're transferring files between two drives, you have to wait for all the data to be copied. When we use the shortcut, are we going to have to wait for that slow process even though the files are on the same drive, just because it all has to be re-routed through the NIC?

Additionally, are there any threads around here about the benefits (and individuals' recommendations) of RAID 6 (hardware or software) vs. WHS DE? I am curious: when building a massive home server with a pretty generous budget, would one choose RAID over WHS?

Thanks for the help!
 
Additionally, are there any threads around here about the benefits (and individuals' recommendations) of RAID 6 (hardware or software) vs. WHS DE? I am curious: when building a massive home server with a pretty generous budget, would one choose RAID over WHS?

Thanks for the help!

If you want parity on everything, RAID6 is cheaper and more efficient than drive duplication for online copies of data on a large scale (a 24-port 1280ML costs roughly the same as 5-6 2TB drives, so if you're beyond that number, RAID controllers offer better uptime for the cost). Neither option is a backup in itself, though, so you have to weigh the simplicity of WHS against RAID expansion or the higher up-front cost of buying 8-12 drives at a time (I'd suggest doing 2-3 smaller arrays rather than one 24-drive array, for example).
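
For a rough feel for the efficiency difference, here's some back-of-envelope math (my numbers, assuming 2TB drives; not from the post above). WHS duplication stores every file twice, while RAID6 gives up two drives' worth of parity regardless of array size:

Code:
def usable_duplication(drives, size_tb):
    return drives * size_tb / 2      # every file stored twice

def usable_raid6(drives, size_tb):
    return (drives - 2) * size_tb    # two drives' worth of parity

for n in (6, 12, 24):
    print(n, "x 2TB:", usable_duplication(n, 2.0), "TB duplicated vs",
          usable_raid6(n, 2.0), "TB usable in RAID6")
# -> 6 vs 8, 12 vs 20, 24 vs 44

The bigger the pool, the more the parity approach pulls ahead, which is the "beyond that number" point above.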
 
When we use the shortcut, are we going to have to wait for that slow process even though the files are on the same drive, just because it all has to be re-routed through the NIC?
The data is never routed through the NIC if you're transferring from, say, "C:\" to "\\WHS\Shares".

The "\\WHS" reference is going to be interpreted by the OS as a "me" destination. That will cause the data to be routed to the SMB/CIFS filter, then to the DE filter, and finally to the destination drive, all without leaving the computer. Hell, you can even implement this on a computer without a NIC, just add a standard M$ "dummy" TCP/IP driver (also called the loopback device) and you can still communicate over SMB.

The slower performance comes, to my knowledge (as I've said before, when I talk about something it's usually from bits and pieces I've read through the years, not actual courses taken, so I might be wrong), from the HIDEOUS SMB/CIFS implementation on pre-Vista OSes, as well as the extra DE layer.

In short, pre-Vista Microsoft OSes create 64KB-sized SMB packets, each one carrying about 48KB of actual data (if memory serves me right). Each packet has to be created, sent to the DE filter so its place is determined, then unpacked again. This causes a HUGE I/O load on the system. Just think how many 48KB-sized packets you need to handle a 10MB file...

Vail should address this issue somewhat. Vista-based OSes can create SMB packets of up to 2MB, so the I/O load will probably be closer to "normal" behavior.
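
To put numbers on that (using the 48KB and 2MB figures above, which are from memory, so treat them as ballpark):

Code:
FILE_MB = 10
old_chunks = FILE_MB * 1024 / 48     # ~48KB of payload per legacy SMB write
new_chunks = FILE_MB / 2             # ~2MB per request on Vista-era SMB
print(f"{FILE_MB}MB file: ~{old_chunks:.0f} chunks before vs ~{new_chunks:.0f} after")
# -> 10MB file: ~213 chunks before vs ~5 after

Each of those chunks is a round trip through the SMB and DE filters, which is where the overhead adds up.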

Additionally, are there any threads around here about the benefits (and individuals' recommendations) of RAID 6 (hardware or software) vs. WHS DE? I am curious: when building a massive home server with a pretty generous budget, would one choose RAID over WHS?
My guess is, it depends on what you want to do, and why.

Let me explain it: first of all, DE has NOTHING to do with RAID. DE deals with the block contents, RAID deals with the clusters/stripes. The ability to handle chunks of data directly as written by the OS is what allows DE to work just like a JBOD array and still provide duplication services.

RAID arrays, on the other hand, can be much more redundant than WHS. Dual and triple parity (with hot spares) is not something WHS can handle. WHS is good for single redundancy and a high probability of data retrieval if you lose the whole array (you can actually pull the disks and the data *should* be readable on other OSes). Short of RAID1, you can't really do that with RAID. WHS was not designed to be an always-available solution; RAID was.

Performance-wise, they are terribly different, too. RAID, especially RAID5 or RAID6 with a good dedicated storage processor, is TERRIBLY fast. Just the other day (there's a link on this very thread, I believe) I read about a RAID6 array that easily handled 400MBps+ of in-array copying (meaning the array was feeding itself with data from other drives in the same array). WHS is only as fast as a single disk allows it to be, which is just fine for a domestic setup (I don't really know that many people with multi-Gbps home networks...): Gigabit IS your top speed in a home, and a single drive can handle it just fine, even with more than one data stream. Just a couple of hours ago I moved data at 80MBps+ from my WHS machine, and the limiting factor was actually the HDD combo (a Samsung 1TB F1 on the WHS and a 500GB Samsung F3 on the desktop). The same happened with my previous HDD: 60MBps+ was the limit of the Samsung 321KJ 320GB drive.

It really boils down to what you want/need. But do keep in mind that an Atom-based WHS with a PCI-based storage controller and one Port Multiplier (meaning something like 8 drives hanging from the PCI bus) is a perfectly capable WHS. At home I don't think you REALLY need dedicated storage controllers (meaning anything with a dedicated CPU, not any of those "dumb" cards with the ability to do single-disk arrays). But that's me.
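
If you want to see why Gigabit is the ceiling, here's the rough math (my own illustration; the 110MB/s and 30GB figures are assumptions, not measurements):

Code:
GIGABIT_REAL_MBPS = 110          # assumed practical SMB throughput on GbE
RIP_GB = 30                      # assumed size of a typical Blu-ray rip
seconds = RIP_GB * 1024 / GIGABIT_REAL_MBPS
print(f"~{seconds/60:.0f} minutes to move a {RIP_GB}GB rip over Gigabit")
# -> ~5 minutes, whether the array behind the NIC can do 120MB/s or 400MB/s

A single modern drive already saturates that link, which is why the 400MBps array buys you nothing over the network.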

Cheers.

Miguel
 
In short, pre-Vista Microsoft OSes create 64KB-sized SMB packets, each one carrying about 48KB of actual data (if memory serves me right). Each packet has to be created, sent to the DE filter so its place is determined, then unpacked again. This causes a HUGE I/O load on the system. Just think how many 48KB-sized packets you need to handle a 10MB file...

So, as I understand it, hypothetically, if I were to move a file from the D:\ drive to D:\share\video (WARNING: don't do it), the transfer would be instant (because the OS is simply moving a pointer to where the data is located on the drive), but if I were to move a file from \\WHS\ to \\WHS\share\video, which is the correct and recommended method, it will ultimately take substantially more time to transfer?
 
So, as I understand it, hypothetically, if I were to move a file from the D:\ drive to D:\share\video (WARNING: don't do it), the transfer would be instant (because the OS is simply moving a pointer to where the data is located on the drive), but if I were to move a file from \\WHS\ to \\WHS\share\video, which is the correct and recommended method, it will ultimately take substantially more time to transfer?
Potentially, yes. Since SMB file transfers are not FAT/MFT-aware, the system might need to do all the leg work.

Unless there is something about the Distributed Link Tracking Client service that can speed that transfer, though I don't know how that works.

The best way to check that is to try it with a non-sensitive file, preferably on a non-production machine. Or, to check the hit without DE being involved, just do that from ANY Windows machine with the "Client for Microsoft Networks" network client installed.
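
If you want to measure it rather than eyeball it, here's a minimal sketch of that test (the paths are placeholders for your own shares; use a throwaway file, and only try the direct D:\ move on a non-production box):

Code:
import shutil, time

def timed_move(src, dst):
    start = time.time()
    shutil.move(src, dst)   # rename when possible, copy+delete otherwise
    return time.time() - start

# t_unc = timed_move(r"\\WHS\Videos\test.bin", r"\\WHS\Software\test.bin")
# t_raw = timed_move(r"D:\shares\Videos\test.bin", r"D:\shares\Software\test.bin")
# print(t_unc, t_raw)

The UNC move should scale with file size, while the same-volume move is near-instant.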

Cheers.

Miguel
 
Potentially, yes. Since SMB file transfers are not FAT/MFT-aware, the system might need to do all the leg work.

Unless there is something about the Distributed Link Tracking Client service that can speed that transfer, though I don't know how that works.

The best way to check that is to try it with a non-sensitive file, preferably on a non-production machine. Or, to check the hit without DE being involved, just do that from ANY Windows machine with the "Client for Microsoft Networks" network client installed.

Cheers.

Miguel
Just checked, and it indeed does not do the transfer from \\whs\share1 to \\whs\share\video instantly.
 
Just checked, and it indeed does not do the transfer from \\whs\share1 to \\whs\share\video instantly.

This, to me, is a definite limitation of DE (compared to RAID). I am frequently moving large files (Blu-ray rips) around my storage pool (from encode folders to video folders). Is it correct for me to assume that RAID can effortlessly move a file from, say, D:\share to a simple sub-folder like D:\share\video without that delay? For me, just moving a file from one folder to the next inside my WHS using the \\WHS\ shortcuts feels almost like transferring the file from another computer to my WHS.
 
Here's my new file server. It's running Debian Linux 5.0.3.

Case: Ultra M923 ATX Black
Enclosure: ICY DOCK MB455SPF-B 5 in 3 SATA I & II Hot-Swap Internal Backplane
Motherboard: Asus P5K Premium
CPU: Intel Q6600
Controller Cards: Dell Perc 5/i (Running Raid5)
Optical Drives: None
Hard Drives: 8x 2TB (6 WD Caviar Green, 2 Seagate; 1 is the hot spare)

It hosts tons of HD movies and videos recorded off cable and satellite. I also store all the HD video I've shot with my former camera, the Sony FX1, and my current one, the Sony EX1. I've also dumped data from old drives in case I ever need to look for some Word document or picture from an old PC.

My previous file server used 7x 1TB drives with Linux software RAID.
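
For anyone doing the capacity math on the new box (my arithmetic, not from the post): with one hot spare, 7 drives are in the RAID5 set and one drive's worth goes to parity.

Code:
drives, size_tb, spares = 8, 2.0, 1
usable = (drives - spares - 1) * size_tb   # RAID5: one drive's worth of parity
print(f"{usable:.0f}TB usable (decimal), ~{usable * 10**12 / 2**40:.1f}TB as the OS reports it")
# -> 12TB usable, ~10.9TB reported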

(pics attached)
 
Here's my new file server. It's running Debian Linux 5.0.3.

Controller Cards: Dell Perc 5/i (Running Raid5)
Very nice. I'm thinking I'll pick up a Perc 5/i for my next filer box. Any problems getting it to work with Debian? Any particular reason for using the on-card RAID5 versus Linux md RAID5?
 
The Linux software RAID was fairly easy to manage, but I wanted a hardware RAID card to get faster rebuild times in case a drive died. md at least could be managed somewhat via the web-based GUI in Webmin, unlike the PERC card, which I manage via the command line. There is some Dell OMSA web-based management software for it, but I couldn't get it to install. I think it had something to do with my motherboard being a regular desktop board and not a server board.
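
For anyone sticking with md who doesn't want Webmin, here's a minimal sketch (mine, nothing to do with the PERC) that just polls /proc/mdstat so you can watch a rebuild from a shell:

Code:
import re, time

def rebuild_status(path="/proc/mdstat"):
    with open(path) as f:
        text = f.read()
    # mdstat progress lines look like: "... recovery = 12.6% (...) finish=95.2min speed=102400K/sec"
    m = re.search(r"(recovery|resync)\s*=\s*([\d.]+)%.*?finish=([\d.]+)min", text)
    return m.groups() if m else None

while True:
    status = rebuild_status()
    print(status or "no rebuild in progress")
    if status is None:
        break
    time.sleep(60)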

It worked right out of the box with Debian. I do have to say, though, just this afternoon the array did an auto rebuild from the hot spare after saying one of the drives died. Sorta scary, since the drive seems to be fine. Really not sure what happened, but I was constantly getting an error in the message log about a SATA drive going offline, which is gone now, so maybe the drive really is bad.
 
Check your power supply specs. I had this issue with a simple RAID 1 setup: I kept getting a dropped drive, but when checking the drive on another box, no errors. Eventually I swapped out the PSU for a beefier one (the same rated power but much better quality) and it solved all the issues.
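
A rough rule of thumb on why the PSU matters (assumed figures, check your drives' datasheets): a 3.5" drive can pull around 2A on the 12V rail while spinning up, so the peak adds up fast once you stack drives.

Code:
DRIVES = 8
SPINUP_AMPS_12V = 2.0      # assumed per-drive spin-up peak, varies by model
peak_watts = DRIVES * SPINUP_AMPS_12V * 12
print(f"~{peak_watts:.0f}W on the 12V rail just for spin-up with {DRIVES} drives")
# -> ~192W before the board, CPU and fans take their share

Staggered spin-up (if your controller supports it) or a supply with real 12V headroom avoids exactly the kind of random drops described above.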
 
My server doesn't qualify. I am sad. :( <--- see. Sad. [ok, it's time for bed at this point, sleep deprivation is making me go slightly loopy]
 
The Linux software RAID was fairly easy to manage, but I wanted a hardware RAID card to get faster rebuild times in case a drive died. md at least could be managed somewhat via the web-based GUI in Webmin, unlike the PERC card, which I manage via the command line. There is some Dell OMSA web-based management software for it, but I couldn't get it to install. I think it had something to do with my motherboard being a regular desktop board and not a server board.

It worked right out of the box with Debian. I do have to say, though, just this afternoon the array did an auto rebuild from the hot spare after saying one of the drives died. Sorta scary, since the drive seems to be fine. Really not sure what happened, but I was constantly getting an error in the message log about a SATA drive going offline, which is gone now, so maybe the drive really is bad.

Try the LSI MegaRAID Storage Manager; you should be able to access the array using a local IP address and the correct login credentials in the server window, then administer it as if the array were on your own PC.
 
Okay, so what's the current record? Is the OP absentee and not updating the list anymore? I ask because my next build, 100TB in a single chassis, should be done in a few weeks.
 
Okay, so what's the current record? Is the OP absentee and not updating the list anymore? I ask because my next build, 100TB in a single chassis, should be done in a few weeks.
If you want a huge chassis for something like that, I have a GHS-2000. I've calculated that I could fit the backplanes of 2 Norco cases in there with room to spare. I can't be bothered modding it, however. I know of at least one case out there that needs no modding to fit 100TB (but it will cost you).
 
Okay, so what's the current record? Is the OP absentee and not updating the list anymore? I ask because my next build, 100TB in a single chassis, should be done in a few weeks.

If you're actually building a 100TB server, then I think you will have the biggest storage server here at [H].
 
100TB?!?! DAMN!!! I demand pics of the chassis.

I'm almost at the 10TB mark; once I go over, I'll submit my server. It's at 9TB right now.
 
Okay, so what's the current record? Is the OP absentee and not updating the list anymore? I ask because my next build, 100TB in a single chassis, should be done in a few weeks.
Yeah, I got tired of updating the list.
Thinking about changing the thread to just let anyone post their systems, and not keeping up with who has the most storage.
 
I browsed all the way through this thread. Many of the builds are impressive. I had never seen the use for that much storage... until I upgraded from a 250GB drive to a 1TB drive and began copying my media to it for use as a media server for my PS3... and well, it's nearly full. I can see myself joining the club in the not-so-distant future.
 
Any day now and it's my time to join the 10TB+ in one system club :)
I had to RMA my first Norco 4020 as it started burning on one of the backplanes and 3 others shorted out, but I got a new one and it's time to start building!

If anybody has ever wondered what a toasted Norco backplane looks like, here it is. Sorry for the poor quality, but the pics were taken with my cellphone.
(pics attached: front, back, and the inside of the casing)
 
54.66 TB

(pics attached)

23.16 TB
This is my main rig for general computing and gaming.

Coolermaster HAF 932
Antec Signature 850W
Asus P6T Deluxe V1
Core i7-920
12GB Corsair DDR3-1333
XFX Radeon 5970 BE
Areca ARC-1231ML w/ BBU
Highpoint 4320 with external mini-SAS bracket
LG 8x BD-RW
11x Seagate 2TB LP (ST32000542AS) in RAID5 with hotspare
2x Intel X25-M 2G 80GB (SSDSA2M080G2GC) in RAID0
2x Hitachi 500G (HTS545050B9A300) in RAID0
Windows 7 x64 Ultimate

24.5 TB
This is simply a JBOD unit for disks, no computer inside.
I turn this on or off as needed for offline backup.

Norco 4020 (all fans replaced w/ Panaflos)
PCP&C 750W Silencer
Chenbro 28 port SAS expander
3x Seagate 7200.11 1.5TB (ST31500341AS) in RAID5
4x Samsung F1 1TB (HD103UJ) in RAID5
8x Samsung F2EG 1.5TB (HD154UI) in RAID5
4x Seagate 7200.11 1TB (ST31000340AS) in RAID5

7.0 TB
This is my download station / NAS / glorified external displayport monitor

27" iMac
Core i5 2.66GHz
8GB DDR3-1066
Radeon 4850
1x WD Black 1TB
Snow Leopard
OWC Mercury Elite-AL Pro QX2 connected by Firewire 800
and also shared by Gigabit Ethernet for NAS functionality using
4x Seagate 7200.11 1.5TB (ST31500341AS) in RAID5
 
Is there a thread for custom-built cases for file servers? Also, I noticed that system parts are labeled, but very few people have labeled their 5.25" drive bays. I've been looking for a 5-in-3 hot-swap module, but I want a vertical version; all I seem to find are horizontal ones. I also had an idea about the load on a cable when drives are daisy-chained: instead of running it as a radial, why not wire it ring-main style, so the cable can be a grade lower and there's less strain on the feed, since with a ring the load would be evenly shared?

EDIT:
Never mind about the vertical 5-in-3 hot-swap module; I didn't think about the measurements, lol. I do have an idea, but it will have to be a custom case holding 4 of these units. If I'm right, the total space would be 40TB in one case, unformatted. Not sure, but if I ever get the money I will build it. I had an idea for something larger, but the 8-in-2 bays for 2.5" drives are way too expensive, and I think 2.5" drives are only just hitting the 1.5TB mark.
 
Yes, sorry, I have dyslexia. Writing is not my strong point; my strong point is working out how things work and building things.
 
For my build, the top is an Athena Power 3-in-2 SATA multibay and the bottom is an Icy Dock 3-in-2 SAS multibay. The temps in the Athena Power one are consistently about 5 degrees hotter than in the Icy Dock one (same drives), but the Icy Dock one beeps at me on bootup because somehow the fan sensor thinks it's dead (the fan still works, though). Also, I really had to shove the Icy Dock one into the case because it barely fit.

I used to use the Kingwin 4-in-3 tool-less bays, but their cooling performance is bad and the wiring is a mess (each bay needs its own adapter to provide the indicator lights).

One thing to consider about the 5-in-3 multibays is that you need a case w/out the metal tabs since there won't be room for the notches on the sides. Most modern cases have those tabs.
 
The tabs wouldn't be too much of a problem. My idea was to get a duplicator case and have 4x 5-in-3 bays, but mounted vertically; it wasn't until I tried to tilt a DVD drive that I realised the difference between the width and the height. I was then thinking of the 4-in-3, but that loses me drives, and the idea is to have this as a tower on one side of the desk with the main PC on the other side.
 
Something else I would like to ask: I've been looking at an 8-channel Serial ATA II 64-bit PCI-X card. I have read that these are backward compatible with a standard PCI slot. What I would like to know is: do I lose speed, or will I lose some of the SATA ports?
 
Something else I would like to ask: I've been looking at an 8-channel Serial ATA II 64-bit PCI-X card. I have read that these are backward compatible with a standard PCI slot. What I would like to know is: do I lose speed, or will I lose some of the SATA ports?

This should probably be its own topic, as it isn't related to this thread. But are both the PCI slot and the PCI-X card you are using 3.3V or 5V? Plain PCI = 133MB/s for the whole bus, which is basically the sequential read speed of a single current-generation hard drive. Performance-wise, this would be very bad.
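
The bus math behind that, if it helps (theoretical peaks; real throughput is lower, and you generally keep all 8 ports either way, you just share the bus):

Code:
def bus_mb_per_s(width_bits, mhz):
    return width_bits / 8 * mhz

pci  = bus_mb_per_s(32, 33)     # plain PCI slot
pcix = bus_mb_per_s(64, 133)    # assuming a PCI-X 133 slot
print(f"PCI: {pci:.0f} MB/s total, ~{pci/8:.0f} MB/s per drive with 8 drives busy")
print(f"PCI-X: {pcix:.0f} MB/s total, ~{pcix/8:.0f} MB/s per drive with 8 drives busy")
# -> PCI: ~132 MB/s (~16 per drive), PCI-X: ~1064 MB/s (~133 per drive)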
 
How much computing power is needed to drive a hardware RAID 5/6 array? Meaning, what kind of processor, mobo, and memory will I need to ensure smooth performance? I'm contemplating building my own system and I don't want to spend any more than necessary on the CPU, mobo, and RAM. The server will only be used as a file server and for streaming content.

Is FreeNAS the best OS for me to use? I've been doing a lot of reading lately and I think I want a Linux distribution with ZFS support, but since ZFS is an unproven file system, I'm not sure if I should even use it.

In the next few weeks, I'll be getting an MS TechNet subscription, so I'll also have access to Windows Server 2008. The question is whether this OS is overkill for what I need to do. I just need redundancy for my data.
 
Is FreeNAS the best OS for me to use? I've been doing a lot of reading lately and I think I want a Linux distribution with ZFS support, but since ZFS is an unproven file system, I'm not sure if I should even use it.

There is no ZFS for Linux. There are alternatives but they are still immature. ZFS can be used with FreeBSD and OpenSolaris.
 
You can post, but it won't get added to the second post. Also, it doesn't seem like EnderW even cares to update that thing.
 