Disk selection, what would you do?

Strahan
n00b · Joined Oct 28, 2007 · Messages: 47
I bought 4 Western Digital green 1TB drives, WD10EACS. I have them running in a RAID5 array right now with an Areca controller. I've been having issues with write performance, but I haven't nailed it down to what the problem is yet.

Anyway, I only got 4 drives for my 16-drive case due to funds. Now that I have the rest of the funds, I'm ready to get the last 12. However, in the interim I've read that the WD10EACS isn't a good choice for RAID. Originally I figured OK, it won't be the best, but at $250 a drive I can deal with it. However, Newegg has now raised the price to $270, and it's only $10 more to move to the Samsung HD103UJ Spinpoint, which has lower seek times, 7200rpm and a 32MB cache. I don't know if it also has RAID issues, but it's faster at least.

Newegg said they'd take the 4 drives back, but with a $160 restocking charge. So I can either buy 12 more WD10EACS drives for $3240 and be done, or ship back the first 4, wait for the refund of ~$920, then buy 16 Samsung drives for $4480. Total cost to move to the Samsungs (including the restocking hit) would be $4640, vs. a total of $4320 for the WDs.
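For what it's worth, here's a quick sanity check of that arithmetic using the prices quoted above (assuming the refund comes back at the current $270/drive price, which matches the ~$920 figure):

```python
# Sanity check of the cost comparison above, using the thread's quoted prices.
wd_price = 270         # current Newegg price per WD10EACS
samsung_price = 280    # Samsung HD103UJ, quoted as "$10 more"
restock_fee = 160      # Newegg restocking charge on the 4 returned drives

# Option A: keep the first 4 WDs and buy 12 more.
wd_total = 4 * wd_price + 12 * wd_price
print(f"WD total: ${wd_total}")

# Option B: return the 4 WDs and buy 16 Samsungs. The refund cancels the
# sunk cost of the first 4 drives except for the restocking fee.
refund = 4 * wd_price - restock_fee
print(f"Refund: ${refund}")
samsung_total = restock_fee + 16 * samsung_price
print(f"Samsung total: ${samsung_total}")
print(f"Premium for switching: ${samsung_total - wd_total}")
```

So the switch costs a $320 premium overall.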

What would you do? Is anyone familiar with those Samsung drives? Are they decent? Samsung doesn't seem like a big name to me as far as drives go; I usually go Seagate, WD or Maxtor, so I don't know how reliable/good they are.

Thanks!
 
I would stick with the WD green drives. The performance of a 16-drive array will be plenty already, and having cooler-running, less power-hungry drives is better than a bit more performance.
 
I'm not too familiar with the Samsung models, but I know a couple of things regarding WD. Yes, any non-RE drive has problems with RAID due to the TLER issue. What this means is: when the drive goes into deep error recovery and doesn't respond to the controller for more than ~8 seconds, the controller drops it from the array. I don't know how often this occurs, but I simply do not trust using a non-RE drive in an array.
 
You can use the WDTLER utility to toggle TLER on or off, and while not all WD drives have TLER in firmware, most do. I'm fairly sure the green drives do, but since he already has some, it would be easy to grab the utility and check. I have WD3200AAKS drives, which are intended for desktop use, but they have TLER in firmware (off by default) and I was able to enable it using that utility.
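To make the TLER interaction concrete, here is a toy model. The timings are illustrative: the ~8-second controller timeout is the figure from the post above, and the ~7-second TLER cap is an assumption, not a measured spec.

```python
# Toy model of the TLER interaction described above (timings illustrative).
# A desktop drive may spend tens of seconds in deep error recovery on a bad
# sector; a hardware RAID controller that gets no response within its timeout
# marks the drive dead and drops it from the array.

CONTROLLER_TIMEOUT = 8.0   # seconds before the controller gives up (per the post)

def drive_dropped(recovery_time, tler_enabled, tler_limit=7.0):
    """Return True if the controller would drop the drive during recovery."""
    if tler_enabled:
        # TLER caps recovery time: the drive reports the error early and lets
        # the controller rebuild the sector from parity instead.
        recovery_time = min(recovery_time, tler_limit)
    return recovery_time > CONTROLLER_TIMEOUT

# A 30-second deep-recovery event: fatal without TLER, harmless with it.
print(drive_dropped(30.0, tler_enabled=False))  # drive kicked from the array
print(drive_dropped(30.0, tler_enabled=True))   # error handled via parity
```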
 
Interesting. With that TLER tool, can I boot the DOS disk and run it while the drives are in the array connected via the Areca card, or do I have to plug them one by one into the mobo SATA ports?

Thanks!
 
Update:

I turned off my server and got the boot disk ready. After realizing I'd neglected to spec a floppy drive into the file server, I wasted 20 minutes finding one (it HAD to be black, of course, heh), then found out the friggin' FDD port was like 20 miles from the FDD bay. Got it all sorted out:

dsc02854.jpg


Booted up, ran tler-on and was a little confused when I saw this:

dsc02853.jpg


WD1600JS?? I have WD10EACS. Ahh, it's seeing the OS drive, which happens to be a WD, hehe. Whew. I guess it can't see the Areca controller and hit all the drives on it. Sooo, I pulled all the drives and swapped them one by one for the OS drive:

dsc02855.jpg


Booted back up and eventually got:

dsc02856.jpg


Well, that certainly looks promising! It gave no errors, so it looks like it worked :) Now when I get the next 12 drives in, I guess I'll just have to boot each drive individually and set the TLER. Gonna be a PITA, but much less of a PITA than restoring the array from tape when/if it craps, hehe.

Thanks for the advice!
 
PS: interesting to note that it apparently worked on the WD1600JS as well. Actually, I wonder if maybe I should reboot one more time and toggle it back off for that drive, since it's the OS drive and not in the RAID. Dunno if having it on will be an issue or not.
 
Yes, you should disable TLER on your boot drive. The utility doesn't recognize most RAID cards, so you have to connect the drives to onboard SATA ports, but it will hit all drives connected at the same time.
 
You are the first person to come back and show screen shots of that utility in action. Thanks. :)
 
Super Strahan!

I am in the exact same situation as you are.
I've got 4 WD10EACS drives and an Areca RAID controller, with more WD10EACS drives on the way. I didn't put them in the RAID yet, but I will be sure to enable TLER before I do.

You wrote that you were having issues with write performance in the original post.
Did you manage to fix that, or could the drives' compatibility with the Areca controller be the cause?
 
What is the server being used for? It is probably worth it to spend the extra money to get sata drives that are meant for a production environment. They're a little more, but are likely designed to be what you're looking for.
 
I am rather curious about the EACS's performance as well. I am still torn between it and the samsung drives.
 
Well, that certainly looks promising! It gave no errors, so it looks like it worked :) Now when I get the next 12 drives in, I guess I'll just have to boot each drive individually and set the TLER. Gonna be a PITA, but much less of a PITA than restoring the array from tape when/if it craps, hehe.

Thanks for the screenshots. I went ahead and created a bootable USB drive (using this guide) and the utility works fine on my WD3200JD and 160GB Raptor. Now I'm going to get the 8x WD drives instead of Seagate. I'll save myself $350, have lower power consumption, and put out less heat.

Also of note: I was considering the Samsungs myself, but a lot of them are cropping up with a master cylinder error.

The WD10EACS is $254 shipped from ZipZoomFly, they do a much better packaging job than Newegg.
 
I too have used the WDTLER app. Works good.

Question: I have five WD 500GB drives set up as RAID5 right now. Would I be able to add one of those green 1TB drives to the array? If so, would it be really risky, in your opinion, in terms of losing all the data that is on the 500GBs?
 
If you happen to be running linux software raid you can partition it into two 500gb partitions and add both to the raid array. ;)
 
If you happen to be running linux software raid you can partition it into two 500gb partitions and add both to the raid array. ;)

Not a good idea, his array performance would tank, and if the 1TB drive dropped, he would lose the entire array.
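A quick way to see why (the member layout below is hypothetical): RAID5 carries one member's worth of parity, so a single physical disk hosting two array members takes the array past its fault tolerance when that disk dies.

```python
# Why two md members on one physical disk is dangerous: RAID5 tolerates the
# loss of exactly one member, RAID6 of two. Counting members per physical
# disk makes the failure mode obvious.

def array_survives_disk_failure(members_on_failed_disk, raid_level=5):
    parity_members = {5: 1, 6: 2}[raid_level]
    return members_on_failed_disk <= parity_members

# Each 500GB drive holds one member; the partitioned 1TB drive holds two.
print(array_survives_disk_failure(1))  # a normal single-member disk dies
print(array_survives_disk_failure(2))  # the double-partition 1TB disk dies
```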
 
Super Strahan!

I am in the exact same situation as you are.
I've got 4 WD10EACS drives and an Areca RAID controller, with more WD10EACS drives on the way. I didn't put them in the RAID yet, but I will be sure to enable TLER before I do.

You wrote that you were having issues with write performance in the original post.
Did you manage to fix that, or could the drives' compatibility with the Areca controller be the cause?

No, actually I didn't. I have to retest my performance now that I've blown away the array and rebuilt it as a 16-drive R6; my first tests were on the 4-drive R5. I can't see how it would've changed, though; if anything, it may have gotten worse. I'll do the tests again and report back. Even if it is still bombing on the benchmarks, I can't complain about real-world usage. It seems to operate fine; just the numbers suck. Of course, it's going to bother me until I figure it out, hehe. Areca suggested removing the battery module and testing again. Once my current batch of FTP downloads is done (I'm working on filling up that array, hehe), I'll have to power it down and give that a shot too.
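For reference, the usable space and fault tolerance of the two layouts mentioned, with simple n-minus-parity arithmetic (assuming identical 1TB members):

```python
# Usable capacity for the layouts discussed above: RAID5 spends one member
# on parity, RAID6 spends two (in exchange for surviving two drive failures).

def usable_tb(drives, raid_level, drive_tb=1):
    parity = {5: 1, 6: 2}[raid_level]
    return (drives - parity) * drive_tb

print(usable_tb(4, 5))    # the original 4-drive R5
print(usable_tb(16, 6))   # the rebuilt 16-drive R6
```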


What is the server being used for? It is probably worth it to spend the extra money to get sata drives that are meant for a production environment. They're a little more, but are likely designed to be what you're looking for.

Data warehouse, basically. Just storing a crapload of media and serving it out. I thought about upgrading to the "enterprise" version of the drives, but the cost difference was like $60 at the time, and $60 x 16 = $960. I've already blown most of my budget on the rack and equipment, so I had to compromise there. My current plan is, as I download/rip stuff, I keep track of it, and when it hits 200GB of new content, I write it to tape for archival. That way, even if the consumer-grade drives croak and I have to rebuild the array, I don't lose anything (except a lot of time doing restores, hehe).


Sell me those 4 1TB drives :D

Sorry, they were being added into the new array with the new 12 I ordered :)


Thanks for the screenshots. I went ahead and created a bootable USB drive (using this guide) and the utility works fine on my WD3200JD and 160GB Raptor. Now I'm going to get the 8x WD drives instead of Seagate. I'll save myself $350, have lower power consumption, and put out less heat.

Also of note: I was considering the Samsungs myself, but a lot of them are cropping up with a master cylinder error.

The WD10EACS is $254 shipped from ZipZoomFly, they do a much better packaging job than Newegg.

That's a good price. It irks me that I paid $270 each at Newegg. I KNEW I should've waited. Ah well, c'est la vie. Also, yes, Newegg's packing for the drives wasn't the best. They gave me a big styrofoam drive tray with some peanuts thrown in around it, not filled to the point of there being no voids between the styro and the box walls. I guess their logic was that the styro is thick enough to compensate -shrug-. It came in handy for storing a bunch of old drives, too :)
 
No, actually I didn't. I have to actually retest my performance now that I've blown away the array and rebuilt it with the 16 drive R6.

Ok, thanks.

I just started to put together a RAID with 8 WD10EACS's, but I found a problem with one of the drives that I haven't added to the RAID yet, and I'm not sure I dare to.

I found that the drive froze (and my computer too) for about a minute when checking PAR info. I ran the WD LifeGuard diagnostics on it; it found some bad sectors, offered to fix them, and reported that it did so successfully. The thing is, the drive still hangs, but now only for about 15 secs when I check the same PAR info, and I haven't enabled TLER on that drive yet.
I guess I can't RMA the drive since it tests OK, but I'm afraid to use it in the RAID. I guess it relocated some bad sectors but is still struggling to recover data from others, given the slow access in that area. If I enable TLER, it will probably give up earlier, but can't I somehow make sure it will not use the "slow" sectors, or am I worrying too much here?

What do you guys think? Thanks!
 
RMA the drive to wherever you bought it from and get a new one. Newegg will do it, even if the drive hasn't failed. They don't charge a restocking fee if you do an exchange. This would be a legitimate RMA too.
 
Ditto. I'd RMA it. Hell, I'd RMA it even if I had to pay the fee, just for the peace of mind. When I considered returning the WDs for Seagates, I told them right up front there was nothing wrong with them, that I'd just changed my mind, and they still would have let me return them. So it doesn't matter if it works or not; I just had to pay a restocking fee. As Dew said, though, you should be OK there too for an exchange.

Speed-wise, I ran the performance test tool, and for a 1GB test file, unbuffered, 100% write, 100% random access, I got a whopping 16.2MB/s. Changed it to 100% sequential and it went to 93.4MB/s. I haven't tried disconnecting the cache battery yet; I'll have to get to that tomorrow.
 
Sound advice; I'll RMA it. It has been running the extended test for 10 hours now. Time to see if WD's service is as good as Maxtor's used to be.

Speed-wise, I ran the performance test tool, and for a 1GB test file, unbuffered, 100% write, 100% random access, I got a whopping 16.2MB/s. Changed it to 100% sequential and it went to 93.4MB/s. I haven't tried disconnecting the cache battery yet; I'll have to get to that tomorrow.

I really hope you'll be able to fix this problem. I'll post my R/W speeds Monday, when the last couple of WD10EACS's arrive. Hopefully it's not a general Areca problem, as I would have it as well then. I have 7 Samsung T166's in the RAID too; can't wait to see how the WDs will match up against them.
 
Removing the battery, btw, only marginally increased speeds. With the battery backup off, random write speed went up to 17.3MB/s and sequential write went to 102.1MB/s. These figures are all with cache disabled too. It seems to operate fine as far as usability goes, though; copying files to/from the array seems to go pretty painlessly. I'll be interested to see what your numbers look like.

The testing tool I used was Passmark PerformanceTest. I launched it, went to Advanced > Disk, then created one thread with a 1GB test file, 8192-byte block size, Win32 API (uncached), 100% writing, then 100% random for one test and 100% sequential for the other. Just in case you want to use the same program for a valid comparison :)
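For anyone wanting a rough cross-platform analogue of that setup, here is a Python sketch that reproduces the access pattern only (one writer, 8192-byte blocks, random vs sequential offsets into a preallocated file). It does not bypass the OS cache the way Passmark's Win32 uncached mode does, so the absolute numbers won't be comparable to the figures in this thread; the file name and sizes are arbitrary.

```python
# Rough analogue of the Passmark disk test described above: write fixed-size
# blocks at random vs sequential offsets and report throughput. Goes through
# the OS page cache, so treat the numbers as illustrative only.
import os
import random
import time

BLOCK = 8192                   # Passmark block size from the post
FILE_SIZE = 8 * 1024 * 1024    # small stand-in for the 1GB test file
PATH = "pt_testfile.bin"       # arbitrary scratch file name

def write_test(sequential, blocks=512):
    buf = b"\0" * BLOCK
    with open(PATH, "r+b") as f:
        start = time.perf_counter()
        for i in range(blocks):
            if sequential:
                off = (i * BLOCK) % FILE_SIZE
            else:
                off = random.randrange(FILE_SIZE // BLOCK) * BLOCK
            f.seek(off)
            f.write(buf)
        f.flush()
        os.fsync(f.fileno())   # force the writes to actually hit the disk
        elapsed = time.perf_counter() - start
    return blocks * BLOCK / elapsed / 1e6  # MB/s

# Preallocate the (sparse) test file, then run both patterns.
with open(PATH, "wb") as f:
    f.truncate(FILE_SIZE)
seq = write_test(sequential=True)
rnd = write_test(sequential=False)
print(f"sequential: {seq:.1f} MB/s, random: {rnd:.1f} MB/s")
os.remove(PATH)
```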
 
So I was just searching for people using these drives in a RAID5 array, and I was about to pull the trigger and buy 5-6 of them. Just to let you guys know, you can get these from here for only $220 (apply the 5% coupon also). It's in an external eSATA/USB enclosure, but inside is a WD 1TB HDD with a 3-year warranty. Pretty good deal, though.
 
The testing tool I used was Passmark PerformanceTest. I launched it, went to Advanced > Disk, then created one thread with a 1GB test file, 8192-byte block size, Win32 API (uncached), 100% writing, then 100% random for one test and 100% sequential for the other. Just in case you want to use the same program for a valid comparison :)

Hi, sorry for the late reply, but rebuilding the RAID and volume set took forever and a day.

Well, I ran TLER-ON on 5 WD10EACS drives and added 3, then 2 more, to the RAID, testing each time.

Passmark PerformanceTest (standard test)

With 7 Samsung HD501LJ 500GB (3TB RAID):
Sequential read: 263 MB/s
Sequential write: 226.3 MB/s
Random seek - RW: 165 MB/s

With 1 WD10EACS 1TB (non-RAID):
Sequential read: 48.3 MB/s
Sequential write: 70.1 MB/s
Random seek - RW: 144.5 MB/s

With 3 WD10EACS 1TB (2TB RAID):
Sequential read: 110.5 MB/s
Sequential write: 114.8 MB/s
Random seek - RW: 40.0 MB/s

With 5 WD10EACS 1TB (4TB RAID):
Sequential read: 214 MB/s
Sequential write: 216.6 MB/s
Random seek - RW: 19.1 MB/s

I forgot to test with your specific settings in Passmark, but as you can see, I'm getting fair transfer rates but very low random seek + R/W numbers, just like you. I will be adding 2 more WD10EACS drives to the RAID at some point soon, but I fu*ked up and set MBR on the initial RAID, so I have to move all the data off and convert the volume. Lesson learned: I will never drink and install RAIDs again :p
 
The testing tool I used was Passmark Performance Test. I launched it then went to advanced, disk then created one thread with a 1gb test file, 8192 byte block size, Win32 API (uncached), 100% writing, then 100% random for one test and 100% sequential for the other. Just in case you want to use the same program to have a valid comparison :)

OK, I tried this as well now, and I'm getting:

100% write, 100% random, over 60 secs: 6.3 MB/s.
100% write, 100% sequential, over 60 secs: 100.7 MB/s.
 
My 8x WD10EACS drives will be here on Tuesday. I should have everything done by Sunday as far as benchmarks go.

I'll run bonnie++ on my current 8x WD3200JD/SD array after I back up the data and wipe the filesystem. Then I'll run the tests on the 8x WD10EACS (TLER on) after the R5 finishes building, with a fresh filesystem.

If anyone else has some tests they would like to see (Linux only, please), let me know.
 
My 8x WD10EACS drives will be here on Tuesday. I should have everything done by Sunday as far as benchmarks go.

I'll run bonnie++ on my current 8x WD3200JD/SD array after I back up the data and wipe the filesystem. Then I'll run the tests on the 8x WD10EACS (TLER on) after the R5 finishes building, with a fresh filesystem.

If anyone else has some tests they would like to see (Linux only, please), let me know.

hdparm?
 
I didn't record the hdparm results for the 8x WD3200 drives, but they were around 90MB/sec IIRC.

I enabled TLER on all the WD10EACS drives without issue.

Here is a HDTach comparison of a single empty WD3200SD versus a single empty WD10EACS:
WD10EACS-vs-WD3200.GIF


I did a bonnie++ run on the WD3200 array before breaking it, I'll post the results together tomorrow after the array finishes initializing.
 
Well, it will be a while before you get those numbers from me.

Looks like I got a bad batch (all drives were from the same week). The array kept failing after a few hours of initialization, so I shut the whole computer off. When I brought it back up, the drives took a VERY long time to be recognized. Once in Linux, the driver was failing to recognize the drives (the channel would attempt to start, then fail). I hooked a drive to my desktop again to see if formatting the drive and letting the array re-import it would do the trick. That is when I found that the drives were no longer spinning up. They will spin up on the RAID controller, but that's about it. I hooked one of my old 320s up and it works fine.

Verdict: 8 dead WD10EACS. Yay for RMAing $2000 worth of hard drives. ZZF is going to love me, not.
 
Tough luck. What's the production date on those drives?

I'm up to 8 WD10EACS drives now, and they are all working nicely, minus one that had a bad sector from the factory.
 
I am no expert, but could it be that the TLER utility does not work correctly on the WD10EACS? I mean, TLER was an issue a while back, but I don't have information about the current-gen drives. Have you tried reverting the drives?

In the end, maybe using that util broke them?
 