I really like that ^^^^ case.
Pros:
Air comes in over the HDDs and is pulled out the back/top
PSU on the bottom
Yep... very good airflow; it keeps the HDDs running cool.
It would be nice if they mentioned what those warmer or cooler temperatures actually were. Interesting nonetheless.
I like your case. I assume it comes in red or metallic grey too? Assuming you just plan to duct the A/C instead of having separate fans?

Ok guys, time for me to take the lead again. I should have them installed and running live (and some new pictures) within a week or so.
Here is what 90TB of retail disks looks like =) Picture taken right outside of Fry's:
Also, the City of Industry Fry's are a bunch of lame asses. I had to go to two Fry's since none of them had 30 in stock. After getting 18 at the first one, I luckily called City of Industry (to have them put some aside), just to have them tell me they wouldn't sell me more than 2 at a time.
They told me some BS about it being in the fine print in the newspaper or some such, but I know it's crap. Fry's has done limit-2 deals before, and in those cases the register won't let you check out more than 2 at once. One Fry's was nice and let me split it into several transactions in the past, but this time the other two Fry's had no problem checking out 18 and 12 disks in a single transaction each.
I hate the City of Industry Fry's!
praetorian, do you hate your hard drives? I ask because the Shinobi has a rather limited intake, and putting 12 hard drives in there is crazy. I have 10 drives in my R3 and even they are in the 40-45C region, with 2 fans blowing directly at them and relatively good airflow.
I am surprised by the temperatures you describe; either you live in a very cold country, or the hard drive sensors must be bad. The Shinobi's air intake is extremely limited according to most reviews, but maybe it works well enough in your case.
And 5-in-3 bays? I once had a 4-in-3 bay and sold it ASAP. 45-50C temperatures were standard in that thing.
The speed of this thing is incredible considering that I've come from a cobbled-together WHS v1 system that used PCI controllers. The following is a quick hdparm and dd test on the system once the array had been properly set up:
Code:
root@prometheus:/media/store# hdparm -tT /dev/md0

/dev/md0:
 Timing cached reads: 4754 MB in 2.00 seconds = 2376.87 MB/sec
 Timing buffered disk reads: 3260 MB in 3.00 seconds = 1085.49 MB/sec

root@prometheus:~$ dd bs=1M count=40000 if=/dev/md0 of=/dev/null
40000+0 records in
40000+0 records out
41943040000 bytes (42 GB) copied, 36.6576 s, 1.1 GB/s
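For anyone wanting to reproduce a test like this, a minimal sketch (the device name /dev/md0 is assumed; run as root, and drop the page cache first so the dd read actually hits the disks rather than RAM):

```shell
# Drop the page cache so dd measures the array, not cached data
sync
echo 3 > /proc/sys/vm/drop_caches

# Sequential read: 40 GB in 1 MiB blocks from the md device
dd if=/dev/md0 of=/dev/null bs=1M count=40000

# hdparm's built-in cached (-T) and buffered (-t) read timings
hdparm -tT /dev/md0
```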
Overall I'm definitely VERY happy with the upgrade and the money spent to get here, even though it's not as big as some of the builds here.
Backups will be handled by my previous server, which is being rebuilt to run Ubuntu again but with smaller arrays using the existing drives.
Sequential read:
sabayonx86-64 ~ # dd bs=1M count=25000 if=/dev/md0 of=/dev/null
25000+0 records in
25000+0 records out
26214400000 bytes (26 GB) copied, 46.3462 s, 566 MB/s
Sequential Write:
sabayonx86-64 ~ # dd bs=1M count=10000 if=/dev/zero of=/dev/md0
10000+0 records in
10000+0 records out
10485760000 bytes (10 GB) copied, 37.8154 s, 277 MB/s
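Worth noting: that write test writes zeros straight over /dev/md0, which destroys any filesystem on the array, so it only makes sense on a freshly built, unformatted device. A non-destructive variant writes a file on the mounted filesystem instead (/mnt/array is a placeholder mount point, not from the post above):

```shell
# WARNING: dd of=/dev/md0 overwrites the raw array.
# Non-destructive sequential write test on the mounted filesystem;
# conv=fdatasync makes dd flush to disk before reporting the rate.
dd if=/dev/zero of=/mnt/array/ddtest.bin bs=1M count=10000 conv=fdatasync
rm /mnt/array/ddtest.bin
```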
CPU usage while doing sequential read:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
16511 root 20 0 9292 1752 608 R 98 0.2 0:32.57 dd
292 root 15 -5 0 0 0 S 17 0.0 0:08.01 kswapd0
CPU usage while doing sequential write:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
16126 root 15 -5 0 0 0 R 82 0.0 0:49.14 md0_raid5
16451 root 20 0 9292 1752 608 D 51 0.2 0:15.95 dd
Direct I/O sequential read (reduces CPU usage):
sabayonx86-64 ~ # dd bs=1M iflag=direct count=25000 if=/dev/md0 of=/dev/null
25000+0 records in
25000+0 records out
26214400000 bytes (26 GB) copied, 35.5604 s, 737 MB/s
CPU usage while doing direct I/O read:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
16671 root 20 0 9292 1752 608 D 7 0.2 0:00.95 dd
1 root 20 0 3720 608 504 S 0 0.1 0:01.51 init
2 root 15 -5 0 0 0 S 0 0.0 0:00.00 kthreadd
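The CPU difference makes sense: a buffered read copies every block through the page cache into dd's buffer, while iflag=direct (O_DIRECT) bypasses the cache entirely, which is why dd's CPU share drops from ~98% to ~7%. A side-by-side sketch, same assumed device as above:

```shell
# Buffered: data passes through the page cache (an extra copy per block)
dd if=/dev/md0 of=/dev/null bs=1M count=25000

# Direct I/O: O_DIRECT skips the page cache, cutting dd's CPU time;
# throughput can also rise when the CPU copy was the bottleneck
dd if=/dev/md0 of=/dev/null bs=1M count=25000 iflag=direct
```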
mdadmbrick ~ # df -H
Filesystem Size Used Avail Use% Mounted on
rootfs 31G 7.5G 24G 25% /
udev 11M 2.8M 7.8M 27% /dev
/dev/md126 31G 7.5G 24G 25% /
rc-svcdir 1.1M 144k 906k 14% /lib64/rc/init.d
tmpfs 523M 209k 523M 1% /dev/shm
/dev/sda1 31M 28M 1.8M 94% /boot
/dev/md127 39T 4.8G 39T 1% /data
mdadmbrick ~ #
mdadmbrick ~ # uptime
07:47:52 up 2:19, 8 users, load average: 1.47, 1.91, 2.79
mdadmbrick ~ #
top - 07:48:09 up 2:19, 8 users, load average: 1.44, 1.88, 2.77
Tasks: 176 total, 4 running, 172 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.5%us, 53.4%sy, 0.0%ni, 19.9%id, 0.0%wa, 0.0%hi, 26.2%si, 0.0%st
Mem: 1020840k total, 746680k used, 274160k free, 12k buffers
Swap: 0k total, 0k used, 0k free, 139652k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
5266 root 20 0 0 0 0 R 74 0.0 90:57.69 md127_resync
5264 root 20 0 0 0 0 R 21 0.0 31:15.40 md127_raid6
4466 root 20 0 0 0 0 R 21 0.0 0:55.28 kworker/0:2
4212 root 20 0 0 0 0 S 21 0.0 0:55.49 kworker/0:1
4688 root 20 0 0 0 0 S 13 0.0 0:24.30 kworker/1:3
28602 root 20 0 0 0 0 S 13 0.0 3:11.42 kworker/1:4
21243 root 20 0 59976 4628 1284 S 1 0.5 1:23.54 udisks-daemon
mdadmbrick ~ # cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath] [faulty]
md127 : active raid6 sdb2[0] sdp2[14] sdo2[13] sdn2[12] sdm2[11] sdl2[10] sdk2[9] sdj2[8] sdi2[7] sdh2[6] sdg2[5] sdf2[4] sde2[3] sdd2[2] sdc2[1]
38063464192 blocks super 1.0 level 6, 128k chunk, algorithm 2 [15/15] [UUUUUUUUUUUUUUU]
[=>...................] resync = 8.2% (241248952/2927958784) finish=2202.1min speed=20333K/sec
md126 : active raid6 sdb1[0] sdp1[14] sdo1[13] sdn1[12] sdm1[11] sdl1[10] sdk1[9] sdj1[8] sdi1[7] sdh1[6] sdg1[5] sdf1[4] sde1[3] sdd1[2] sdc1[1]
29996928 blocks level 6, 128k chunk, algorithm 2 [15/15] [UUUUUUUUUUUUUUU]
unused devices: <none>
mdadmbrick ~ #
I still think that with RAID 6, mdadm software RAID really does use quite a bit of CPU and can bog down the system. I can definitely see why hardware RAID is worthwhile just for the parity off-loading. ZFS did seem much faster, though, so I guess mdadm's algorithms aren't as efficient or something.
I was getting roughly 30,000K/sec during the initial build/sync of the array until I played with the tuning parameters in sysctl. Once I'd done that I was hitting 120,000-130,000K/sec, which cut the build/sync from 18 hours down to 6.
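For reference, the usual knobs here are the md resync speed limits (sysctl) and, for RAID 5/6, the stripe cache size; the values below are illustrative examples, not the poster's exact settings:

```shell
# Raise the per-device resync speed floor and ceiling (in KB/s)
sysctl -w dev.raid.speed_limit_min=50000
sysctl -w dev.raid.speed_limit_max=200000

# Larger stripe cache (4K pages per device) helps RAID5/6 resync and writes,
# at the cost of more RAM; md0 is an assumed device name
echo 8192 > /sys/block/md0/md/stripe_cache_size

# Confirm the resync speed has changed
cat /proc/mdstat
```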
That's cute.
^^ Why do you need a 120GB drive? What's wrong with running the OS off a USB stick?
Nice system, ctantra. Why 16GB of RAM?
This is what my home media server is going to look like, but I'll most likely be using unRAID instead.

Just adding HDDs to my media storage server to bring the total to 10; here's the complete list of the build for now:
Total Advertised: 20TB
Total Available: ~14.2TB
FREENAS v8.01 BETA4 AMD64
"Single ZFS Pool striped across a pair of 5x2TB RAIDZ1"
1x WD Caviar Black WD2001FASS
3x WD Caviar Green WD20EARS
6x WD Caviar Green WD20EARX
AMD Phenom II X2-555BE ~ X4-B55
Arctic Cooling Freezer 64 Pro
Kingston 16GB DDR3-1333
BIOSTAR TA880GB+
SYBA SY-PEX40008 [ using latest v6.6.00 BIOS ]
Onboard Realtek RTL8111E
Seasonic M12II-620Bronze [ has 9 sata power ports ]
NZXT Source 210 Elite
APC Back-UPS RS 800
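The pool layout described above ("a pair of 5x2TB RAIDZ1" striped into one pool) corresponds to a zpool create along these lines; the device names are hypothetical FreeBSD-style labels, not ctantra's actual disks:

```shell
# One pool, two RAIDZ1 vdevs of five 2TB disks each; ZFS stripes
# across the two vdevs automatically. Eight data disks of ~1.8TiB
# each, minus overhead, lines up with the ~14.2TB usable figure.
zpool create tank \
  raidz1 ada0 ada1 ada2 ada3 ada4 \
  raidz1 ada5 ada6 ada7 ada8 ada9
zpool status tank
```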
This is what my home media server is going to look like but I'll be using unRaid instead most likely.
I will do that when it happens. Right now I need to build my gaming computer for Battlefield 3.

Wow..., just let me know when it is done. What case do you plan to use?