The [H]ard Forum Storage Showoff Thread - Post your 10TB+ systems

I really like that ^^^^ case.



Pros:

Air comes in over the HDDs and is pulled out at the back/top
PSU on the bottom
 
dsc01401storage.jpg


Seagate 1TB x14 (in the StorageTek tray [bottom, above UPS])
Seagate/Samsung 1TB x6 (in the SuperMicro server [top])
 
My updated setup. It's an HTPC with a bunch of shared hard drive space.

Xeon X3210
Asus P5K-E
4GB DDR2-800
HD 4650
Silicon Image 4-port PCI SATA card
XFX 650w PSU
Fractal Design Define XL

It's quieter than my old Lian Li setup, and far easier to move drives around in. Right now all ports are full: 10 ports (6 mobo, 4 PCI card) - four Hitachi 5K3000 2TB, two Samsung F3 1TB, two WD Blue 500GB, a 32GB boot SSD, and a DVD drive. Eventual upgrades (maybe Christmas) will be a Blu-ray drive, possibly adding four more 5K3000s and removing the Blues, and getting a PCIe SATA card.

IMG_4236.jpg
 
Just re-assembled my media server!

Four 2TB drives, five 1TB drives, I'm now rocking 13 TB :D

YZO7D.jpg


This machine hides behind a 42" plasma TV in the family room, quietly displaying Windows Media Center for the house's primary TV. The entire house is wired with gigabit Ethernet, allowing me to stream to three additional rooms (the bedrooms) which have their own low-power media center PCs.
 
It would be nice if they mentioned what those warmer or cooler temperatures were. Interesting nonetheless.


I believe the ambient temperature was around 80-85F; I don't know what temperature the drives were running at, though. Assuming Google's density is insane (which it probably is), I'd be willing to bet a bit higher. I'm sure some searches could turn up the answer you're looking for, though.
 
Ok guys. Time for me to take the lead again. Should have them installed and running live (and some new pictures) within a week or so.

Here is what 90TB of retail disks looks like =) Picture taken right outside of Fry's:



Also, the City of Industry Fry's are a bunch of lame asses. I had to go to two Fry's since none of them had 30 in stock. After getting 18 at one, I luckily called City of Industry (to have them put some aside) just to have them tell me they wouldn't sell me more than 2 at a time.

They told me some BS like it was in the fine print in the newspaper or some shit, but I know it's crap. Fry's has done limit-quantity-2 deals before, and in those cases you can't check out more than 2 at once. One Fry's was nice and let me do it in several transactions in the past, but this time the other two Fry's where I bought 18 and 12 drives had no problem checking out all of those disks in one transaction each.

I hate the City of Industry Fry's!
 
I like your case. I assume it comes in red or metallic grey too? Assuming you just plan to duct the A/C instead of having separate fans? ;)
 
Update from Post 1261 in this very thread: I've just gotten my new server operational after nearly a week of testing, configuring and copying 6TB of data from my old server... :(

Amount of storage in the following system: 18.6TB after formatting and RAID-6 array creation

Case: Bitfenix Shinobi w/ X-Case 5in3 Hotswap
PSU: Thermaltake Toughpower 775W Modular PSU
Motherboard: MSI 890FXA-GD65 (only consumer board I could find that likes the controllers)
CPU: AMD Athlon II X2 250
RAM: Kingston HyperX 8GB DDR3
GPU: Onboard GPU
Controllers: 2x IBM ServeRAID M1015 SAS/SATA Controllers
System Drives: Western Digital 2.5" SATA-II 160GB running in RAID-1 with EXT4 (Software RAID)
Data Drives: 12x Hitachi 5K3000 3.5" SATA-III 2TB Coolspin in RAID-6 with JFS (Software RAID)
UPS: APC 1000 UPS
NIC: 2x Intel Gigabit PRO/1000 CT PCIe (bonded)
Operating System: Ubuntu Server 11.04 x64

prometheus_outside.jpeg

prometheus_internal.jpeg


I still have some things to do to finish the build, such as trying to tame those SAS>SATA cables, as well as flipping the PSU so that the fan faces inwards rather than outwards, as the exhaust is rather hot ATM :eek: Also need to swap the fan on the 5in3 enclosure as it's currently pretty loud and two of the drives are spiking at nearly 48C!! :eek:

Central storage server for all of my media/files etc., hence the RAID-6 requirement. I did consider ZFS under Solaris, however I'd already committed to this setup and didn't want to waste the time I'd already spent getting things configured the way I want. I still have the ability to add another four drives to the system (via a spare port on the 2nd M1015), however that would require a case change.

The speed of this thing is incredible considering that I've come from a cobbled-together WHSv1 system that used PCI controllers. The following was a quick hdparm/dd test on the system once the array had been properly set up:

Code:
root@prometheus:/media/store# hdparm -tT /dev/md0
/dev/md0:
 Timing cached reads:   4754 MB in  2.00 seconds = 2376.87 MB/sec
 Timing buffered disk reads: 3260 MB in  3.00 seconds = 1085.49 MB/sec
 
root@prometheus:~$ dd bs=1M count=40000 if=/dev/md0 of=/dev/null
40000+0 records in
40000+0 records out
41943040000 bytes (42 GB) copied, 36.6576 s, 1.1 GB/s
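
For anyone curious about the write side, something along these lines should give a comparable number (an untested sketch; aim it at a scratch file on the mounted array rather than at /dev/md0 itself, or you'll trash the filesystem):

Code:
# Sequential write test against a scratch file on the array's mount point, with
# fdatasync so the result isn't just the page cache; clean up the file afterwards.
dd bs=1M count=10000 if=/dev/zero of=/media/store/ddtest.bin conv=fdatasync
rm /media/store/ddtest.bin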

Overall definitely VERY happy with the upgrade and the money spent to get to this, even though it's not as big as some of the builds here ;)

Backups will be handled by my previous server being rebuilt to run Ubuntu again, but with smaller arrays using the existing drives.
 
praetorian, do you hate your hard drives? I only ask because the Shinobi has a rather limited intake and putting 12 hard drives in there is crazy. I have 10 drives in my R3 and even they are in the 40-45C region, with 2 fans blowing directly at them and relatively good airflow.
 
I must admit that I'm slightly concerned about the temperatures of three of the drives, two of which are in the 5in3 enclosure, which has its own 80mm high-flow fan. The drives at the front are all reading between 35-38C even under heavy load, the one exception being the drive that sits between the fan edges.

I will be adding another 120mm fan to pull air in from underneath the system, which should help overall temperatures, and I'm trying to sort out cooling for the 5in3 stack... If not, then watch this space.
 
I am surprised to see the temperatures you describe; either you live in a very cold country, or the hard drive sensors must be bad. The air intake of the Shinobi is extremely limited according to most reviews, but maybe it works well enough in your case.

And 5-in-3 bays? I once had a 4-in-3 bay and sold it ASAP. 45-50C temperatures were standard in that thing.
 
Well, I live in the UK, so it's typically wet or cold here most of the time, except when we get a brief spot of sunshine to give us all our Vitamin D :D Also, these are 5400/5900RPM drives, so they naturally run slightly cooler.

I know what you mean about the reviews as I did read them all before I purchased the individual components. Most of the reviews of the case however were done using quad-core i7 monsters that would be very out of place doing simple file serving etc :)

As for the 5in3 bay, it's a hotswap caddy system rather than just a normal 5in3 stacker. I've used them before, and a lot of them are used within this very thread (leaving aside everybody using Norcos). The current temps in it are around the same as the internal drives, with the exception of the two showing higher due to being nearer the inside edge of the fan, which I can do nothing about. I'm going to change its fan once I find an 80mm that is both quiet and shifts a lot of air, which I'm well aware are usually totally incompatible requirements ;)

Apologies if I'm teaching you to suck eggs with my comments as that's not my intention....
 
My Western Digitals are 5400RPM as well. But it could be down to fan choice; my fans run at only 700RPM :). This is only 16TB, no RAID (6xWD20EARS + 4xWD10EADS)... And yes, that port multiplier is an ugly solution :D.
componentsfssmall.jpg

backfssmall.jpg


Going to expand by 8TB using an iSCSI storage box with an Intel H61 board + Intel SASUC8I + 4xWD20EARX + Seasonic X-660 + 4GB RAM + FreeNAS. Yes, I know the PSU is overkill, but it was just sitting there from the moment I received it back from RMA.
 
That could be half of the problem unless they are high flow, quiet fans.

The two in front of the drives and the rear one are BitFenix Spectres, all pulling air inwards, and the top two fans are 140mm BitFenix Spectres acting as exhausts. The fan currently in the 5in3 is an unknown brand, but it's definitely loud and not pulling enough air :( I'll probably swap it for one of my Yate Loons or for something like a Xilence 80mm, which were also good for high airflow but didn't sound like an old-school Delta :D :D

Nice rig though faugusztin :)
 
The speed of this thing is incredible considering that I've come from a cobbled-together WHSv1 system that used PCI controllers. [...]
41943040000 bytes (42 GB) copied, 36.6576 s, 1.1 GB/s

That is pretty nice speed. Here is what I got on this Coraid system that I am using for burn-in and to move over some data for my dad (so it's over at his place), and thus I needed more than just a disk enclosure. RAID really slows down the machine in the case of a low-end (dual-core 1.86 GHz Core 2 Duo) CPU:

Code:
Sequential read:

sabayonx86-64 ~ # dd bs=1M count=25000 if=/dev/md0 of=/dev/null
25000+0 records in
25000+0 records out
26214400000 bytes (26 GB) copied, 46.3462 s, 566 MB/s


Sequential Write:

sabayonx86-64 ~ # dd bs=1M count=10000 if=/dev/zero of=/dev/md0
10000+0 records in
10000+0 records out
10485760000 bytes (10 GB) copied, 37.8154 s, 277 MB/s


CPU usage while doing sequential read:

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
16511 root      20   0  9292 1752  608 R   98  0.2   0:32.57 dd
  292 root      15  -5     0    0    0 S   17  0.0   0:08.01 kswapd0


CPU usage while doing sequential write:


  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
16126 root      15  -5     0    0    0 R   82  0.0   0:49.14 md0_raid5
16451 root      20   0  9292 1752  608 D   51  0.2   0:15.95 dd


Direct I/O sequential read (reduces CPU usage):

sabayonx86-64 ~ # dd bs=1M iflag=direct count=25000 if=/dev/md0 of=/dev/null
25000+0 records in
25000+0 records out
26214400000 bytes (26 GB) copied, 35.5604 s, 737 MB/s


CPU usage while doing direct I/O read:

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
16671 root      20   0  9292 1752  608 D    7  0.2   0:00.95 dd
    1 root      20   0  3720  608  504 S    0  0.1   0:01.51 init
    2 root      15  -5     0    0    0 S    0  0.0   0:00.00 kthreadd


I think the read speed (direct I/O) is bottlenecked by the PCI-X bus, as it's using two of the Marvell 8-port SATA PCI-X controllers, but it's not too bad performance-wise.
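
For what it's worth, the bus-ceiling theory looks plausible on paper (rough numbers, and assuming both cards share a single 64-bit/133 MHz PCI-X segment):

Code:
# 64-bit bus @ 133 MHz: 8 bytes x 133 MHz ≈ 1,064 MB/s theoretical for the segment,
# and real-world PCI-X throughput sits well below that, so ~737 MB/s of direct-I/O
# reads through two controllers on one segment is in bus-limited territory.
echo "8 * 133" | bc    # = 1064 (MB/s theoretical)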

I must say that getting the OS installed and booting off a RAID-6 mdadm array was a real pain in the ass, mainly because the damn installers on some distributions seem to refuse to even try to install onto mdadm =(

I had to install onto a different drive and then rsync the data back over. I am technically booting off this 32MB IDE flash disk thingy that the machines came with (used by Coraid), basically one of these things:

http://www.ravirajtech.com/diskonmodule.html

It *barely* fit the kernel, initrd, and other stuff on it but I was finally able to get the machine booted:

Code:
mdadmbrick ~ # df -H
Filesystem             Size   Used  Avail Use% Mounted on
rootfs                  31G   7.5G    24G  25% /
udev                    11M   2.8M   7.8M  27% /dev
/dev/md126              31G   7.5G    24G  25% /
rc-svcdir              1.1M   144k   906k  14% /lib64/rc/init.d
tmpfs                  523M   209k   523M   1% /dev/shm
/dev/sda1               31M    28M   1.8M  94% /boot
/dev/md127              39T   4.8G    39T   1% /data
mdadmbrick ~ #

I must say it is amazing to see almost 40TB of usable storage with only 15 disks. 3TB disks do make quite the difference.
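
The numbers line up, for what it's worth (RAID-6 reserves two disks' worth of space for parity):

Code:
# usable ≈ (N - 2) x disk size for RAID-6:
echo "(15 - 2) * 3" | bc    # = 39 (TB decimal), matching the 39T that df -H reports for /dev/md127 above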

Really high CPU usage/load and a pretty slow initialization (gonna take over a day and a half):

Code:
mdadmbrick ~ # uptime
 07:47:52 up  2:19,  8 users,  load average: 1.47, 1.91, 2.79
mdadmbrick ~ #      


top - 07:48:09 up  2:19,  8 users,  load average: 1.44, 1.88, 2.77
Tasks: 176 total,   4 running, 172 sleeping,   0 stopped,   0 zombie
Cpu(s):  0.5%us, 53.4%sy,  0.0%ni, 19.9%id,  0.0%wa,  0.0%hi, 26.2%si,  0.0%st
Mem:   1020840k total,   746680k used,   274160k free,       12k buffers
Swap:        0k total,        0k used,        0k free,   139652k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 5266 root      20   0     0    0    0 R   74  0.0  90:57.69 md127_resync
 5264 root      20   0     0    0    0 R   21  0.0  31:15.40 md127_raid6
 4466 root      20   0     0    0    0 R   21  0.0   0:55.28 kworker/0:2
 4212 root      20   0     0    0    0 S   21  0.0   0:55.49 kworker/0:1
 4688 root      20   0     0    0    0 S   13  0.0   0:24.30 kworker/1:3
28602 root      20   0     0    0    0 S   13  0.0   3:11.42 kworker/1:4
21243 root      20   0 59976 4628 1284 S    1  0.5   1:23.54 udisks-daemon


mdadmbrick ~ # cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath] [faulty]
md127 : active raid6 sdb2[0] sdp2[14] sdo2[13] sdn2[12] sdm2[11] sdl2[10] sdk2[9] sdj2[8] sdi2[7] sdh2[6] sdg2[5] sdf2[4] sde2[3] sdd2[2] sdc2[1]
      38063464192 blocks super 1.0 level 6, 128k chunk, algorithm 2 [15/15] [UUUUUUUUUUUUUUU]
      [=>...................]  resync =  8.2% (241248952/2927958784) finish=2202.1min speed=20333K/sec

md126 : active raid6 sdb1[0] sdp1[14] sdo1[13] sdn1[12] sdm1[11] sdl1[10] sdk1[9] sdj1[8] sdi1[7] sdh1[6] sdg1[5] sdf1[4] sde1[3] sdd1[2] sdc1[1]
      29996928 blocks level 6, 128k chunk, algorithm 2 [15/15] [UUUUUUUUUUUUUUU]

unused devices: <none>
mdadmbrick ~ #

I still think that with RAID-6, mdadm software RAID really does use quite a bit of CPU and can bog down the system. I can definitely see why hardware RAID is worthwhile just for the parity off-loading. ZFS did seem to be much faster though, so I guess Linux mdadm's algorithms aren't as efficient or something.
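
One thing that might be worth checking before blaming the algorithms outright (just a guess at where the time goes): the kernel benchmarks its RAID-6 parity routines at boot and prints the winner, so you can at least see whether it picked an SSE-accelerated path:

Code:
# Which parity/XOR routines did the kernel settle on? (exact output varies by kernel version)
dmesg | grep -i raid6
dmesg | grep -i 'xor:'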
 
Unfortunately with the M1015s I'd have had to purchase two of the hardware keys to even get to RAID-5, which I wasn't prepared to do as a) they're expensive and b) I prefer having at least two-disk redundancy. I'd definitely agree though that the parity offloading would be beneficial :) ZFS was the other choice, which I almost went with, however I'm not a fan of Solaris (well, not in a home environment anyway; work is a different situation) and a proper port to Linux is still a long way off, so until then I'll roll the dice with mdadm and JFS (thanks to you, houkouonchi, for the inspiration!)

I was getting roughly 30,000K/sec when doing the initial build/sync of the array until I played with the tuning parameters in sysctl. Once I'd done that I was hitting 120,000-130,000K/sec, which lowered the build/sync from 18 hours to 6 hours :)

Must admit that with my current configuration most of the 8GB of RAM is swallowed as disk cache, but then the only other things running on the system are SABnzbd and Transmission, so I'm not really worried. And I can always slam in another 8GB reasonably cheaply :D
 
I was getting roughly 30,000k per second when doing the initial build/sync of the array until I played with the tuning parameters in sysctl. Once I'd done that I was hitting 120,000-130,000k per second which lowered the build/sync from 18 hours to 6 hours :)

Ok, do tell what you did =P I have very little mdadm experience. Mine seems mostly CPU limited though. Not sure if your tweaks helped I/O wise or CPU too? I know I am only syncing at about half of what the controller should be capable of.
 
Cheeky ;) The CPU will be a limiting factor, however you can help by giving the I/O a boost (in my situation it helped, at least), as it was about the only thing I could do short of overclocking the processor:

Tinker with the following:

Code:
echo 50000 > /proc/sys/dev/raid/speed_limit_min
echo 200000 > /proc/sys/dev/raid/speed_limit_max

The effect should be instantaneous and should give you a nice boost in speed, as you're altering the default values of the RAID I/O speed limits shipped with your specific distro. Now I'm not sure what the default values are on Sabayon, but on Ubuntu both of those are set at 1000, which is pretty dire. Also, if you're not physically next to the machine I'd be wary of trying them, as it could freeze the machine....
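
The same knobs are exposed through sysctl as well, if you'd rather go that route or want the change to survive a reboot (values here are just the ones from the echo example above):

Code:
sysctl -w dev.raid.speed_limit_min=50000
sysctl -w dev.raid.speed_limit_max=200000
# to persist, add to /etc/sysctl.conf:
#   dev.raid.speed_limit_min = 50000
#   dev.raid.speed_limit_max = 200000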

There are other tricks you can try, like enabling the write-intent bitmap whilst syncing and then disabling it once it's finished, however I've never seen any advantage from that option.
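
For reference, that bitmap toggle is just a couple of mdadm commands (a sketch; adjust the md device name to suit, and remove the bitmap again once the sync is done):

Code:
mdadm --grow --bitmap=internal /dev/md0   # add a write-intent bitmap to the live array
mdadm --grow --bitmap=none /dev/md0       # drop it again afterwards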
 
Ah ok. I was actually aware of that (or the similar thing?) in /sys/block/mdX/md/sync_speed_min and sync_speed_max. I am guessing that does the same thing as I saw no difference when I changed it.

I guess I am just limited to what this old core 2 duo can do.
 
To be honest you can fiddle with everything from read-ahead settings on both the drives and the array, to disabling NCQ, to the stripe cache size etc., but they're all tweaks which could go either way :(
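
If anyone does want to experiment, these are the sort of knobs involved (example values only, and as said above they can just as easily make things worse):

Code:
blockdev --setra 4096 /dev/md0                     # array read-ahead, in 512-byte sectors
echo 8192 > /sys/block/md0/md/stripe_cache_size    # RAID-5/6 stripe cache, in pages per member disk
echo 1 > /sys/block/sdb/device/queue_depth         # queue depth of 1 effectively turns off NCQ for that disk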

mdadm is fairly kind in what it does and shows you one way or another whether it's worked. Wish things like Samba were just as kind rather than a pain in the backside :D
 
Not the greatest - not even in a case yet! (waiting for the Lian Li PC-Q25 to be released)

6 x 2TB Samsung 204UIs
1 x 120GB HDD
Asus mini-ITX E-350 board with 6 SATA 3 ports
2-port SATA card for the 120GB disk
4GB RAM

2011-08-24160320.jpg


2011-08-24160314.jpg
 
Hi,

I'm planning to build a 36TB storage system over the next few months.
For obvious cost reasons the build will be done step by step.

As starting components I'm thinking of:

Mobo: Asus P6X58D Premium (I already have it, and will swap it out when the new LGA-socket motherboards come out)
RAM: 6GB (I already have it)
PSU: Seasonic X-750 (I already have it)
CPU: Core i7 930
Two Adaptec 5805 RAID controllers (one is already available)

Case:

Lian Li PC-A70F

Hard disk expansion kit:
1x Lian Li EX-36A2

2x HD-532

Build

On each Adaptec controller I will install 8x3TB in RAID-6 (I will start with one controller).

For the system I want to use 2x 2.5" hard disks in software RAID-1.

What do you think?
Are the expansion kits compatible with the case?

Regards,

ugo
 
@ugo1 I think this is more a "showoff" thread and not the ideal place to ask for ideas etc. / Also: Once done - show photos or it didn't happen ;)
 
My first (real) 10TB+ server.

I just picked up 8 x 2TB WD20EARX drives to put in my server (to replace the 8 x 1TB drives), so admittedly it's not actually done yet, but it will be tomorrow, I promise! (I will post pics too)

I have had over 10TB for a while but never had a setup that actually formatted a single partition of over 10TB.

Specs:

Intel Core 2 Quad Q6600 @ 2.7GHz with Prolimatech Megahalems
Asus P5Q Premium
8GB Ram
Geforce 7600GT
Silverstone TJ07 case
Silverstone 1KW + 800W PSUs
Icy Dock 5 in 3 hotswap drive bay (x 2)
Dell PERC 5/i with BBU
USB 3.0 PCIe card (for backups)

Storage:
60GB Vertex 2 (OS, Server 2008 R2)
240GB 3.5" Vertex 2 (Virtual machines on Hyper-V)
300GB WD RE4 (Just for torrent VM)
1TB WD Green (For uhh "backups")
8 x 2TB WD20EARX RAID 5 (should format to about 12TB)
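
That estimate is about right once you allow for decimal vs. binary terabytes (rough math, assuming one disk's worth of parity for RAID 5):

Code:
# (8 - 1) x 2 TB = 14 TB decimal; in binary units that's:
echo "scale=1; (8 - 1) * 2 * 10^12 / 2^40" | bc    # ≈ 12.7 (TiB), which Windows reports as roughly "12.7 TB"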

I'm going to build the array tomorrow once I have made sure I have everything backed up. I sure hope all the drives survive; wish me luck!
 
Just adding HDDs to my media storage server to bring it to a total of 10; here's the complete list of the build for now:

Total Advertised: 20TB
Total Available: ~14.2TB


FREENAS v8.02 Release AMD64
"Single ZFS Pool striped across a pair of 5x2TB RAIDZ1"
1x WD Caviar Black WD2001FASS
3x WD Caviar Green WD20EARS
6x WD Caviar Green WD20EARX
AMD Phenom II X2-555BE ~ X4-B55
Arctic Cooling Freezer 64 Pro
Kingston 16GB DDR3-1333
BIOSTAR TA880GB+
SYBA SY-PEX40008 [ Sil 3124 using latest v6.6.00 BIOS ]
Onboard Realtek RTL8111E
Seasonic M12II-620Bronze [ has 9 sata power ports ]
NZXT Source 210 Elite
APC Back-UPS RS 800
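
For reference, that "pair of 5x2TB RAIDZ1" pool comes out to something like this in zpool terms (a sketch only, with a made-up pool name and da0-da9 device names; FreeNAS builds the same layout through its GUI):

Code:
zpool create tank \
    raidz1 da0 da1 da2 da3 da4 \
    raidz1 da5 da6 da7 da8 da9
zpool status tank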

img1924w.jpg


img1904u.jpg


img1914ck.jpg
 
Nice system ctantra :) Why 16GB of RAM?

@ houkouonchi: 200TB of total storage? :eek: What is its usage? Blu-ray backup?
 
Nice system ctantra :) Why 16GB of ram?

Because of FreeNAS ZFS? That filesystem likes as much cache memory as possible :).

I made a FreeNAS system yesterday as well. Intel DH61CR, Pentium G620, 4GB DDR3, SanDisk 4GB USB key, 4xWD20EARS, 4 fans, Fractal Define R2 - system consumption is 35W at idle when all drives are asleep using "camcontrol stop daX". I don't need that much memory, as I use it only as an iSCSI target. Even that 4GB is overkill for this :D :
memory1h.png
 
@Edel:

I put in 16GB because of ZFS with RAIDZ1. Ideally you want 1GB of RAM for every 1TB of storage. You can still use ZFS with less RAM, but performance will be affected; with less than 4GB, ZFS disables prefetching by default, and that can slow things down.
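
On FreeBSD/FreeNAS you can check whether that's happened, and override it via a loader tunable if you really want to (a sketch; whether forcing prefetch on a low-RAM box is a good idea is another question):

Code:
sysctl vfs.zfs.prefetch_disable                            # 1 = prefetch disabled, 0 = enabled
echo 'vfs.zfs.prefetch_disable="0"' >> /boot/loader.conf   # takes effect on the next reboot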

Besides, DDR3 is super cheap now, it's time to install as much RAM as possible. :D

Here's the physical memory utilization while copying a Blu-ray ISO from my VAIO to FreeNAS:

copyblurayiso.jpg
 
Just adding HDDs to my media storage server to bring it to a total of 10; here's the complete list of the build for now:

Total Advertised: 20TB
Total Available: ~14.2TB
[...]
This is what my home media server is going to look like but I'll be using unRaid instead most likely. :)
 