ARECA Owner's Thread (SAS/SATA RAID Cards)

Yes you can (on reads). If it were only reading from four disks, that would work out to 150 megabytes/sec per disk, which the disks are *NOT* capable of. Most RAID controllers I have seen will only read from half the disks in a RAID10, but Areca will read from all of them.
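To put rough numbers on that (purely illustrative - the array speed and disk count below are assumptions, not anything measured in this thread):

Code:
# Illustrative RAID10 read arithmetic: if the controller only reads from one
# side of each mirror, every disk has to carry twice the load.
array_read_mb_s = 600.0   # assumed observed array read speed
disks = 8                 # assumed 4 mirrored pairs

print(array_read_mb_s / (disks / 2))  # 150.0 MB/s per disk if only half the disks are read
print(array_read_mb_s / disks)        # 75.0  MB/s per disk if all members are read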

This tale is even more bizarre than I thought - the same ARC-5020 box I was posting about before, now hooked up via a port-multiplier-capable JMicron ExpressCard that lets me see both volumes on it.

dr_U_dr_Mcopy.jpg


E.g. the volume at the heads of the drives, U:, has the slow reads I'd complained about before, but the one at the tails, M:, is OK. And M: is 95% full now, U: about 80%. I had thought that one of the HDs had some odd problem, but this pretty much eliminates that. It has to be something the controller does, right?
 
Hello all.
I have an ARC 1210 running a mirrored array for my OS drive. Can I add 2 more drives and migrate to RAID 1+0? It doesn't matter too much if the system is down for a bit while this process goes on. Reinstalling isn't really an option for me at this point in time so I'm only going to go RAID 10 if I can do this.

Thanks for any input.
 
Hi

I plugged 4x Samsung Eco F4 2TB HD204UI into my 1680x; at the moment I am creating a RAID5 and it is at 96% initializing.

And while I was copying some data onto it, I got a few timeouts (4-6x per disk) on some of the disks.
So I guess these disks aren't that good for the Areca 1680 :(

Going to do some more testing after the initializing is done.


I've got 6x Samsung F3 Eco 2TB and they have had 0 timeouts after 2 months so far; why did they stop making them :(
And also 8x WD GP 1TB (old model) that work great.

Is there any low-power disk that works with the 1680 + HP SAS expander?? Or are the Hitachi disks the only ones that work??

Using :

Areca 1680x
+HP sas expander
 
Crap! I was also aiming for the F4.

Merlinen, if you find any good drives please let me know. I was looking at the Hitachis from Dustin, but a low-power drive would be nice.

Did you get your 1680 from fyxim? Trying to find a reseller in Sweden but I think I will just get it in the states in the end...
 
Hello all.
I have an ARC 1210 running a mirrored array for my OS drive. Can I add 2 more drives and migrate to RAID 1+0? It doesn't matter too much if the system is down for a bit while this process goes on. Reinstalling isn't really an option for me at this point in time so I'm only going to go RAID 10 if I can do this.

Thanks for any input.

You are really better off doing it from scratch. The migration is probably going to take longer than a file transfer. Since it's an OS drive too, I'd imagine it might have some issues with migration (migration simply clones the same image to the new RAID set/drive). This means you'd also need to expand the volume set through third-party software like Partition Magic, and that's not going to work well on an OS partition with MBR.
 
You are really better off doing it from scratch. The migration is probably going to take longer than a file transfer. Since it's an OS drive too, I'd imagine it might have some issues with migration (migration simply clones the same image to the new RAID set/drive). This means you'd also need to expand the volume set through third-party software like Partition Magic, and that's not going to work well on an OS partition with MBR.

Right! I forgot. The size of the array will double. Well, that rules that out. I have no desire to reinstall the OS for my server ATM.

Thanks for the heads up. Don't know what the hell I was thinking.
 
I'm struggling with expanding a volume set. I've gone from an 8x2tb disk raid 6 array (12,000GB capacity) to an 11x2tb raid 6 array (18,000GB capacity). Raid expansion went fine, but I cannot seem to get the areca card to 'action' the volume set expansion.

Each time I change the volume set to be 18,000GB and then initiate the modification, the card responds 'success' but nothing actually changes.

I had the same problem when I went from 6 to 8 disks, but after trying several times the card eventually did start to modify the volume set.

Here's a screen shot of what I see prior to modifying the volume set - is the red-colored text (18,000GB) indicating some kind of problem? I tried changing the stripe size from 64 to 128 but that didn't seem to help.

1680-ix8-volume-expand-problem.png


Any ideas?
 
I have the areca 1880i card. Does areca make an expander for this specifically or is it the same as for the older 1680 series. I was going to go with the hp sas expander as it seems verified to work now but just wanted to make sure I wasn't overlooking anything.
 
I'm struggling with expanding a volume set. I've gone from an 8x2tb disk raid 6 array (12,000GB capacity) to an 11x2tb raid 6 array (18,000GB capacity). Raid expansion went fine, but I cannot seem to get the areca card to 'action' the volume set expansion.

Each time I change the volume set to be 18,000GB and then initiate the modification, the card responds 'success' but nothing actually changes.

I had the same problem when I went from 6 to 8 disks, but after trying several times the card eventually did start to modify the volume set.

Here's a screen shot of what I see prior to modifying the volume set - is the red-colored text (18,000GB) indicating some kind of problem? I tried changing the stripe size from 64 to 128 but that didn't seem to help.

http://xenway.com/_alex/1680-ix8-volume-expand-problem.png

Any ideas?
You need to change the value in the "volume capacity" field to 18TB. Changing the other values really won't make a difference.
I have the areca 1880i card. Does areca make an expander for this specifically or is it the same as for the older 1680 series. I was going to go with the hp sas expander as it seems verified to work now but just wanted to make sure I wasn't overlooking anything.
They're coming out with one, but the HP one is far cheaper. It definitely works with the 1880i as I'm currently running 2 HP SAS expanders on mine.
 
Well, don't know why it suddenly started working, but now it is resizing the volume. Exactly the same steps as before (when it claimed it was modifying the volume, but really wasn't). I did reboot my machine so perhaps that sorted out the gremlins. Smells of some buggy code somewhere in there... hopefully just in the web gui.
 
"WebGUI" is vague - were you using it through ARCHTTP (the proxy program you install) or directly to the card with an ethernet cable and an IP address configured? Reason I ask is because I find the direct connect to card with an ethernet cable to be way more bulletproof. The WebGUI via ARCHTTP is just plain screwy sometimes, with issues like you describe. I've had my own experiences with clicking "Submit" and it doesn't take, and have to refresh the page and do it again. Thus I don't really use ARCHTTP except in a pinch. When you consider the WebGUI is really just a middleman and overlay for passing console commands to the card, its amazing it works at all.
 
Now I have done some more testing.

I expanded my RAID5 of 4x Samsung F4 2TB disks to 6x and moved some data while it was expanding, with 0 timeouts so far; I have also done some restarts/power on/off.

I did get the timeouts during the first 4-5h, but after that I have had 0 timeouts, so I don't know if it was just some temporary failure or what.
 
@odditory:
WebGUI as in the ArcHTTP interface (web browser front end for card).

Buggy is the word I would use.
 
@odditory:
WebGUI as in the ArcHTTP interface (web browser front end for card).

Buggy is the word I would use.

Try the out-of-band web interface by plugging an ethernet cord into the back of the card and accessing it that way. It's much more reliable.
 
I've also noticed a few times when the web UI didn't show the same information as the command-line interface to it. It had to do with arrays moved from other cards though, not anything created with the web interface itself.
 
Now I have done some more testing.

I expanded my RAID5 of 4x Samsung F4 2TB disks to 6x and moved some data while it was expanding, with 0 timeouts so far; I have also done some restarts/power on/off.

I did get the timeouts during the first 4-5h, but after that I have had 0 timeouts, so I don't know if it was just some temporary failure or what.

I have 4 of those hooked up as well, but for some reason my performance doesn't go above 30 or so MB/s once I run out of cache. Also the processor load and memory load on the fileserver rise significantly while copying. I don't mind some extra load, but I do mind extra load and crappy performance... Areca support hasn't replied yet either.
Ordered 5 Hitachi 7k2000 drives to test those, to see if maybe the drives are the problem.
 
I have 4 of those hooked up as well, but for some reason my performance doesn't go above 30 or so MB/s once I run out of cache. Also the processor load and memory load on the fileserver rise significantly while copying. I don't mind some extra load, but I do mind extra load and crappy performance... Areca support hasn't replied yet either.
Ordered 5 Hitachi 7k2000 drives to test those, to see if maybe the drives are the problem.

Strange, I have moved a little over 7 TB to the array and from what I have seen it has kept a speed of around 150-250MB/s, but I think it hasn't gone any higher because the source disk I copy from isn't that fast.
 
For those of you wondering about 3TB drives in the 1680/121X series...... Areca already has >2TB drive support included in the current firmware. 3TB drive testing with actual hardware doesn't start until next week.

Have to admit I was quite impressed with this; they could have left the owners of older cards out in the cold at this point.
 
Hi All,

I finally received my Areca 1880 card and have it connected with an HP SAS expander and 11 x 2TB Hitachi drives. Unfortunately, I am having some trouble getting it up and running. I was able to create a single RAID6 raidset/volume without any issues. Then, at first, I did a background initialization and booted in Server 2008 R2, did a quick format and started to try to copy data to it (perhaps a mistake).

In any case I left it on overnight copying data and doing the background initialization, and in the morning the server was unresponsive at 9:48 (as you will see in the log); read errors were reported in the Areca log, and then it said the volume was degraded and that the disk in slot 5 had failed. Several reboots later, I was able to delete the RAID set and volume and re-create them. The disk in slot 5 was available and not listed as failed, and I was able to add it to the new RAID set. This time I chose to do a foreground initialization and I let it fully complete. Once it was done, it appeared in Server 2008 R2 disk management and I started a full format of the drive. This sat for a while with no status % indication, and then the alarms went off; again in the event logs there are read errors with slot 5, then the volume/RAID set is degraded and it claims that the device in slot 5 has failed. After a reboot, RAID Set Hierarchy shows the hard drive in slot 5 as "free" and not in the volume/RAID set (the volume state is still listed as degraded). The drive status is then "normal" and not "failed".

Any ideas about what might be going on here? Somehow I really doubt that the disk has failed. If it were a failed disk, wouldn't it have issues being added into the RAID array again and show issues during initialization? Do I just need to upgrade the firmware on this drive? Below is a picture of my Areca log:

706uS.jpg
 
run a Hitachi DFT (drive fitness test) on that drive in slot #5 and see what that reports. just because a drive is able to finish a raid initialization doesn't mean it's not having issues. the signature of a physical/mechanical problem is exactly that kind of randomness.

might also be a flaky cable (I ran into that one time) but I doubt it.
 
The Hitachi DFT is the bane of my existence. As per usual I can't get it to see any drives on my Intel S3420GPLX motherboard (or on the Areca/SAS expander). In fact it says in the documentation "SATA controllers based on Silicon Image Promise and Intel chipsets are not currently supported"; handy.

--Edit--

I was able to jury-rig it up to another machine that has a supported motherboard and I now have the advanced test running on it; my guess is that it will take at least 8 hours or so. So, assuming that it passes the tests and no errors are found, what should I look at next? A firmware update of the drive (seems like a slim chance of being the issue, considering the other 10 are Hitachis which probably have varying levels of FW and none of them have an issue)? I was thinking that I might need to look into controller configuration issues (APM settings, SES2 etc) but it seems like more than 1 drive would consistently have problems if it was some sort of overall controller card setting, right? I have SES2 enabled and all the APM settings at the defaults except I have spin down idle set to 60 minutes.

Looking back at the screenshot I posted there, I now notice that there was a timeout on slot 7 as well, but to my recollection I never saw the disk in slot 7 marked as failed, as disk 5 has been twice. I am not sure if that means anything.

Somehow I don't think it is a problem with the cable or port on the back plane. That area had been previously filled with another drive (using the same slot and cables) without any issues.

While the drive is being tested I deleted and re-created the RAID set with just 10 drives and I am doing another foreground initialization. I will see tomorrow if it has issues formatting this volume after it is available in Server 08 R2.
 
I am curious what the info for the drive looks like. This can be had with the cli (disk info drv=X) or by clicking the drive on the main status page under the web interface. Here is an example:

Code:
livecd ~ # ./cli64 disk info drv=1
Drive Information
===============================================================
Device Type                        : SATA(4433221103000000)
Device Location                    : Enclosure#1 Slot#1
Model Name                         : ST31500341AS
Serial Number                      : 9VS45XXX
Firmware Rev.                      : CC1H
Disk Capacity                      : 1500.3GB
Device State                       : NORMAL
Timeout Count                      : 0
Media Error Count                  : 0
Device Temperature                 : 28 C
SMART Read Error Rate              : 114(6)
SMART Spinup Time                  : 100(0)
SMART Reallocation Count           : 100(36)
SMART Seek Error Rate              : 68(30)
SMART Spinup Retries               : 100(97)
SMART Calibration Retries          : N.A.(N.A.)
===============================================================
GuiErrMsg<0x00>: Success.

I would compare those values between disk 5 and one of your other drives.

AFAIK when the controller gives a read error it means that the drive ran into a bad sector and the disk gave up trying to recover it (pending re-allocation) and thus the drive reported back it couldn't read the data.

Timeouts just mean the drive is unresponsive. Assuming the HP SAS expander and disks are compatible with the controller I would guess the drive just ran into a bad sector and it took a while to re-allocate the sector and thus it locked up the card for a while.
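If you want to compare those counters across the whole enclosure rather than clicking each drive one by one, a small wrapper around that same CLI command works; here's a rough sketch in Python (the ./cli64 path and the drive count are placeholders, and the parsing just assumes the "key : value" layout shown in the output above):

Code:
# Rough helper: pull per-drive error counters via the Areca CLI (./cli64 disk info drv=N).
import subprocess

FIELDS = ("Device State", "Timeout Count", "Media Error Count")

def disk_info(drv):
    # Same command as shown above
    out = subprocess.run(["./cli64", "disk", "info", f"drv={drv}"],
                         capture_output=True, text=True).stdout
    info = {}
    for line in out.splitlines():
        if ":" in line:
            key, _, val = line.partition(":")
            info[key.strip()] = val.strip()
    return info

for drv in range(1, 12):  # placeholder: 11 drives
    info = disk_info(drv)
    print(drv, {f: info.get(f, "?") for f in FIELDS})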
 
Well as I expected, it completed the DFT in the other machine with 'Operation completed successfully' Disposition Code = 0x00.

I am going to leave it in the secondary machine and try a full format it there and see if it runs into any errors. In the meantime I am trying a full format of the RAID6, 10 drive array I have (IE without the POSSIBLY 'questionable' drive). I am going to let that run today while I am at work.

@houkouonchi
I will let the formats above finish (if I can stand waiting for the 10 drive RAID6 volume full format) and then throw the drive back into the norco and compare the info for the drive to the others and see if there are any major differences.

I guess if the formats both finish without any issues, I will try taking a known 'good' drive that hasn't had any issues in one of the other norco slots and place it in the slot where the drive that had issues was and try to simulate the process that got it to fail again. I wish I could think of a quicker way (I hate having to wait for another 6 hours for an additional full initialization and then however long it takes to try to format) but that may be the only way to eliminate variables.

---EDIT----

I am a bit puzzled now. I had the 'troublesome' disk in a different machine all day doing a full format (it is at 12%) and it hasn't generated any errors, failed, stopped formatting etc. The RAID6 array has also been doing a full format without the 'slot 5' drive and is currently at 2% without any issues, failures etc. Last time the format had issues they started before it even reached 1% formatted so I feel like if I waited it would probably (eventually) succeed. This points to some issue with the slot (backplane, cable to the backplane, backplane connector, etc) or with the drive. I am going to halt both formats, put the 'questionable' drive back in and look at the disk information for it to see if it is any different than the others.

I am not sure what I will do exactly next.
 
@houkouonchi

Here is the information for the drive that has had issues:
Code:
Device Type     SATA(5001438006223784)
Device Location     Enclosure#2 Slot#5
Model Name     Hitachi HDS722020ALA330
Serial Number     SERIAL NUMBER
Firmware Rev.     JKAOA3EA
Disk Capacity     2000.4GB
Current SATA Mode     SATA300+NCQ(Depth32)
Supported SATA Mode     SATA300+NCQ(Depth32)
Error Recovery Control (Read/Write)     Disabled/Disabled
Disk APM Support     Yes
Device State     Normal
Timeout Count     0
Media Error Count     0
Device Temperature     42 °C
SMART Read Error Rate     100(16)
SMART Spinup Time     118(24)
SMART Reallocation Count     100(5)
SMART Seek Error Rate     100(67)
SMART Spinup Retries     100(60)
SMART Calibration Retries     N.A.(N.A.)

Here is the information from a working drive:
Code:
Device Type 	SATA(5001438006223785)
Device Location 	Enclosure#2 Slot#6
Model Name 	Hitachi HDS722020ALA330
Serial Number 	SERIAL NUMBER
Firmware Rev. 	JKAOA3EA
Disk Capacity 	2000.4GB
Current SATA Mode 	SATA300+NCQ(Depth32)
Supported SATA Mode 	SATA300+NCQ(Depth32)
Error Recovery Control (Read/Write) 	Disabled/Disabled
Disk APM Support 	Yes
Device State 	Normal
Timeout Count 	0
Media Error Count 	0
Device Temperature 	55 °C
SMART Read Error Rate 	100(16)
SMART Spinup Time 	119(24)
SMART Reallocation Count 	100(5)
SMART Seek Error Rate 	100(67)
SMART Spinup Retries 	100(60)
SMART Calibration Retries 	N.A.(N.A.)
 
I am a bit puzzled now. I had the 'troublesome' disk in a different machine all day doing a full format (it is at 12%) and it hasn't generated any errors, failed, stopped formatting etc. The RAID6 array has also been doing a full format without the 'slot 5' drive and is currently at 2% without any issues, failures etc. Last time the format had issues they started before it even reached 1% formatted so I feel like if I waited it would probably (eventually) succeed. This points to some issue with the slot (backplane, cable to the backplane, backplane connector, etc) or with the drive. I am going to halt both formats, put the 'questionable' drive back in and look at the disk information for it to see if it is any different than the others.

I am not sure what I will do exactly next.

the ONLY way to go about it is process of elimination. try a different drive, backplane, cable, port on the expander, etc. i've had scenarios where each one of those variables was the culprit, after I wasted lots of time assuming other culprits. most recently I had an areca 1680ix-24 card where "slot #9" went bad.

personally I wouldn't waste time doing a full format after a full array initialization - redundant. quick format is fine. i realize you're trying to exercise all the drives to see if one drops, but to me it's unnecessary wear & tear.

keep trying to narrow down the issue and see if the problem follows the drive/backplane/cable/whatever.
 
the ONLY way to go about it is process of elimination. try a different drive, backplane, cable, port on the expander, etc. i've had scenarios where each one of those variables was the culprit, after I wasted lots of time assuming other culprits. most recently I had an areca 1680ix-24 card where "slot #9" went bad.

personally I wouldn't waste time doing a full format after a full array initialization - redundant. quick format is fine. i realize you're trying to exercise all the drives to see if one drops, but to me it's unnecessary wear & tear.

keep trying to narrow down the issue and see if the problem follows the drive/backplane/cable/whatever.

Very true; I just need to keep moving drives around until I see some consistency of failure (whether it is drive/slot in the norco etc). For the moment, I put the drive that had "failed" back in the server and swapped hot swap drive bays with another drive. That way if the drive that had "failed" has an issue again I can be reasonably sure it is an issue with the drive itself but if the drive I moved that WAS working now starts to 'fail' I know it is that backplane connector/cable/port on the areca card.

I rebuilt the array again and am in the process of doing another full initialization. I am glad you aren't in favor of doing a full format; that was a pain in the ass and would have taken days to finish. You are right, I WAS trying to 'exercise' the drives, but I was hoping that wasn't the only way to feel confident about whether the setup was good or not. Once it finishes the initialization I am going to just do a quick format, set up shares, and copy data to it overnight or something. At the least it will take less time, and at best I don't have any issues again and then my data migration back to the new box is partially complete.
 
Well, after contacting Areca support with no reply whatsoever, I e-mailed my supplier. He sent me an RMA number right away, as he suspects something wrong with the card. Sending it off tonight, fingers crossed!
 
@CollieFjep, I would be interested in what your results are; maybe I need to return my card as well. I need to do some further testing, but my array performance (Hitachi 2TB drives on an Areca 1880i + HP SAS expander) appears to be somewhat lacking as well.

Copying from the array to itself I only get 30-60MB/sec which seems really low to me.

Copying from a machine on the network to the array I literally get 2MB/sec - 6MB/sec which is atrociously bad. On the old WHS setup I would get 30-50MB/sec on a regular basis. If I copy from a client machine to the C:\ drive on the server (NOT on the array) I get normal performance; 50-60MB/sec.

What is odd though is if I start the copy to the new server from another machine FROM the server and don't start it from the client, it goes at 24MB/sec instead of 2-6. That doesn't make a lot of sense to me but that does appear to be the case.

I am not ruling out some potential network issues with the new server, however, even the copy performance from the server to itself seems slow. I wasn't getting the performance I expected when I was copying all my data to it (38MB/sec or so) but I chalked that up to just poor E-SATA performance (I was copying from 2 5-bay E-SATA enclosures).

Could it be any of my settings? I went with RAID6, 128K stripe size, 16K clusters, write-through. I would have to check again but I believe other than that my settings are more or less defaults. I did not upgrade the firmware on all of the Hitachi drives; some of them are on 3EA but not all of them.
 
Using 8 SAS drives, Seagate 'Constellation' 2TB ST32000444SS
Both controllers have the Max hardware cache of 4GB

ARC-1680ix Raid5 write, 128K stripe, 8M write buffer size
ARC-1680-RAID5-write128k.png


ARC-1880ix Raid5 write, 128K stripe 8M write buffer size
ARC-1880-RAID5-write128k.png


ARC-1680ix Raid5 read, 128K stripe, 8M read buffer size
ARC-1680-RAID5-read128k.png


ARC-1880ix Raid5 read, 128K stripe, 8M read buffer size
ARC-1880-RAID5-read128k.png


These are writes to the raw physical device, not files, to rule out any
effect of the system buffer cache. One normally expects a downward
curve on MB/sec as one progresses from cylinder zero to the spindle,
since modern drives have more sectors on outer cylinders, and the
outer cylinders will show more MB/sec than cylinders near the spindle.

On the ARC-1680ix, this pattern is apparent, although the MB/sec
bounces around quite a bit on the right end, if you average it, you
will get the usual down sloping curve. The left end though is a little
flat, suggesting that even the 1680ix IOP348 processor might be maxed
out on calculating raid5 parity on the outer cylinders. Raid5 reads on
both the 1680ix and 1880ix show a nice curve starting at about 900MB/sec.

The 'cache rush' (cache quickly filling) on start of the write test
is very apparent on the 1880ix, with a huge MB/sec spike right
at the start. The 1680ix also has a cache rush, but not nearly
as pronounced. Our own tests on the 1880ix
in raid0 (no parity generation) show it really screams at moving data
around compared to the 1680ix. Others have shown 2500-3000MB/sec
on the 1880ix in raid0 with SSD drives.

It seems that Raid5 writes to the 1880ix are more or less a straight
(flat) line, not the usual downward curve, averaging around 450 MB/sec,
suggesting that something is limiting Raid5 parity generation or other
bottleneck? Maybe there is new firmware coming or maybe I am
doing something wrong? The 1680ix (at cylinder 0) reads at about 900MB/sec
and Raid5 writes at about 800MB/sec, showing the controller is almost
keeping up with what the drives are capable of. The 1880ix (at cylinder 0)
also reads at about 900MB/sec while only Raid5 writing at about 450 MB/sec,
showing about a 50% drop in write performance, suggesting something is
getting bogged down in Raid5 writes. Any comments? thanks
--ghg
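For anyone wanting to reproduce this kind of raw-device test without HDTune, here's a minimal read-only sketch (the device path, block size, and sample count are placeholders; note that ghg's graphs above come from writes to the raw device, which this deliberately avoids since a raw write would destroy data):

Code:
# Minimal raw-device sequential READ throughput check, starting at LBA 0
# ("cylinder 0"). Needs root/admin rights. DEVICE is a placeholder, e.g.
# /dev/sdb on Linux or r"\\.\PhysicalDrive1" on Windows.
import time

DEVICE = "/dev/sdb"          # placeholder - point this at the array's raw device
BLOCK = 8 * 1024 * 1024      # 8 MB per read, roughly matching the 8M buffer above
SAMPLES = 128                # ~1 GB total

read_bytes = 0
with open(DEVICE, "rb", buffering=0) as dev:
    start = time.time()
    for _ in range(SAMPLES):
        chunk = dev.read(BLOCK)
        if not chunk:
            break
        read_bytes += len(chunk)
    elapsed = time.time() - start

print(f"{read_bytes / elapsed / 1e6:.0f} MB/sec over the first {read_bytes / 1e6:.0f} MB")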
 
Using 8 SAS drives, Seagate 'Constellation' 2TB ST32000444SS
Both controllers have the Max hardware cache of 4GB

ARC-1680ix Raid5 write, 128K stripe, 8M write buffer size
ARC-1680-RAID5-write128k.png


ARC-1880ix Raid5 write, 128K stripe 8M write buffer size
ARC-1880-RAID5-write128k.png


ARC-1680ix Raid5 read, 128K stripe, 8M read buffer size
ARC-1680-RAID5-read128k.png


ARC-1880ix Raid5 read, 128K stripe, 8M read buffer size
ARC-1880-RAID5-read128k.png


These are writes to the raw physical device, not files, to rule out any
effect of the system buffer cache. One normally expects a downward
curve on MB/sec as one progresses from cylinder zero to the spindle,
since modern drives have more sectors on outer cylinders, and the
outer cylinders will show more MB/sec than cylinders near the spindle.

On the ARC-1680ix, this pattern is apparent, although the MB/sec
bounces around quite a bit on the right end, if you average it, you
will get the usual down sloping curve. The left end though is a little
flat, suggesting that even the 1680ix IOP348 processor might be maxed
out on calculating raid5 parity on the outer cylinders. Raid5 reads on
both the 1680ix and 1880ix show a nice curve starting at about 900MB/sec.

The 'cache rush' (cache quickly filling) on start of the write test
is very apparent on the 1880ix, with a huge MB/sec spike right
at the start. The 1680ix also has a cache rush, but not nearly
as pronounced. Our own tests on the 1880ix
in raid0 (no parity generation) show it really screams at moving data
around compared to the 1680ix. Others have shown 2500-3000MB/sec
on the 1880ix in raid0 with SSD drives.

It seems that Raid5 writes to the 1880ix are more or less a straight
(flat) line, not the usual downward curve, averaging around 450 MB/sec,
suggesting that something is limiting Raid5 parity generation or other
bottleneck? Maybe there is new firmware coming or maybe I am
doing something wrong? The 1680ix (at cylinder 0) reads at about 900MB/sec
and Raid5 writes at about 800MB/sec, showing the controller is almost
keeping up with what the drives are capable of. The 1880ix (at cylinder 0)
also reads at about 900MB/sec while only Raid5 writing at about 450 MB/sec,
showing about a 50% drop in write performance, suggesting something is
getting bogged down in Raid5 writes. Any comments? thanks
--ghg
Can you try RAID6 on the 1880 please?
 
@pyrodex: generally speaking if you're responding to the last post, just respond without duplicating that entire post over again in a quote - people know you're referring to the preceding post. keeps threads a lot cleaner. :)
 
Same as the last posting, except these are RAID6 as somebody requested.
Also stripe size is 128K, which is the maximum. The 1880ix raid5 write slowness
doesn't appear to be affected by stripe size; I tried 4K, 64K, 128K
and they all seemed similar.

eight Seagate 2TB Constellation drives (ST32000444SS)

1680ix Raid6 write, 128K stripe, 8M write buffer
ARC-1680-RAID6-write128k.png


1680ix Raid6 read, 128K stripe, 8M read buffer
ARC-1680-RAID6-read128k.png


1880ix Raid6 write, 128K stripe, 8M write buffer
ARC-1880-RAID6-write128k.png


1880ix Raid6 read, 128K stripe, 8M read buffer
ARC-1880-RAID6-read128k.png


So, raid6 does pretty much what is expected.. all are 'slower' by one
drive.. (compared to Raid5)

The random seek test (yellow dots), looks pretty slow on the 1880ix
as well.. not really sure how the internals work on the hdtune pro seek (access)
test.. but did notice the yellow dots all over the place.

--ghg
 
I think there's something else going on with your configuration because the IOP isn't the limitation - I've seen other benches with an 1880ix-12 w/ 4GB cache and write tests were just fine, and I know the integrated LSI expander on 1880ix cards is solid. Maybe try a different PCIe slot, or cables, or whatever else you can think of. Also, I assume you're also running the STORPORT driver and not scsiport. Also try switching "Disk Write Cache Mode" in Areca management GUI under System Configuration from "Auto" to "Enabled". If I recall, Auto means write-cache is enabled only if BBU is present, but I'm not 100% sure. Here are some benches with an 1880i connected to an HP SAS expander:

8 x Hitachi 2TB deskstar (RAID5) 128k stripe, 2MB Block Size in HDTune

Areca_1880i_HP_SAS_Expander_Dual_Link_Hitachi_2TB_x_8_RAID5_HDTune_2MB_Write.png


16 x Hitachi 2TB deskstar (RAID5) 128k stripe, 2MB Block Size in HDTune:

Areca_1880i_HP_SAS_Expander_Dual_Link_Hitachi_2TB_x_16_RAID5_HDTune_2MB_Write.png


24 x Hitachi 2TB deskstar (RAID6) 128k stripe, 2MB Block Size in HDTune:

Areca_1880i_HP_SAS_Expander_Dual_Link_Hitachi_2TB_x_24_RAID6_HDTune_2MB_Write.png
 
It's got the caching issue. Look at the access times on his compared to yours.
 
I posted this in my build log thread but as it pertains to the discussion I thought I would also add it here:

When I do a read test (2MB blocks) with HDTune Pro I get:
516.3 MB/s min
925.4 MB/s Max
781.5 MB/s Avg
Access time: 13.4ms
Burst Rate: 969.0 MB/s
CPU Usage 2.2%
I can post a screen shot if necessary. It started at 900MB/sec or so and dropped down steadily on the graph to 500MB/sec or so by the end. I am doing reads here (as opposed to the other graphs with writes) but are my access times unusual as well? Is the 'caching' issue resolvable (settings)? Or is this an RMA situation?

I connected the HP Expander to the 1880i using two 8087 cables; one I had previously and one that came in the 1880i box. The performance test above is with just ONE cable connected (I disconnected one to see if it solved my issues, it did not). I would have to take another look at it this evening when I get home to determine for sure which slots on the SAS expander I used. I recall looking at the SAS expander thread to decide which to use.

Is there any way to run write tests with HDTune Pro without deleting the partition and/or is there another adequate benchmark tool to use?

I was using the scsiport driver; could this be causing my performance issue? I just went with the driver it automatically located on the driver disc. I have swapped the driver to storport and rebooted, but it doesn't appear to be making any performance difference: 60MB/sec copying to the array from itself, 5-6MB/sec from the network, and 30MB/sec or so when initiating the copy from the server with files from another computer.

As a side note, Odditory has much the same configuration as I do (1880i + HP SAS expander with Hitachi drives in RAID6; he has 24, I have 11; I am sure we have different motherboards though). I would be curious how our system configuration, HDD power management, RAID set/volume set settings etc. compare. I wish there was a way to dump the configuration settings to a text file, but I don't believe that there is. I would really like to compare your settings that are "working" to mine and change accordingly.

@Odditory

What firmware level are your Hitachi drives? Are they all on 3EA or on a variety of FW levels?
 
Has anyone provided any stats comparing the storport and scsiport drivers? I just got an Areca 1680 and, as usual with IT, it needed a bit of coaxing. One issue I had was that if I tried to use the storport driver, Windows 2008 complained that my BIOS was not up to date and the installation failed. The scsiport driver worked first time. I'm getting reasonable speed between this and my P800 during a copy operation with background initialization going:

P800Areca.png


Would the storport driver make this better?

Finally, I had one disk failure which, I guess like a few posts above, I need to check out. The worrying thing is that when I replaced this and rebooted, another disk went "missing", so I only had 10 out of my 12 disks active. I switched off; the disk was spinning so it was getting power. I've reinserted it and all is well. Could that be down to staggered spin up? I'm used to a P800 with minimal options; the Areca has tons in comparison!! Thanks.

Sorry - one other stupid question - my fan is at 0 rpm according to the status page; does it only come on when it is needed?
 
Thanks to everybody for chipping in.

Using the storport driver, and caching is set to 'enabled' (not auto), so it
is definitely doing write-back, not write-through.

My guess as to what is going on:
The 1880ix on raid0 write to 8x Seagate 2TB Constellation drives screams
at 1200 MB/sec at cylinder 0. A single 2TB Constellation drive reads/writes
at 150 MB/sec at cylinder 0, so 8x is 1200 MB/sec and the 1880ix does that.

This means the onboard expander is handling that data rate, as are the host and the
PCIe slot (the card is in an x16 graphics slot; the test machine is a Dell 570 AMD quad core).

Doing raid5 writes, this drops to about 450 MB/sec.. where it 'should be'
7x 150MB/sec = 1050 MB/sec (one drive is parity), so throughput should be
7x on an 8-drive raid5 (or 6x on a raid6) - that is, if the raid5 engine is fast
enough.

My guess is that the Constellations are so fast (150 MB/sec), that the
1880ix raid engine cannot quite keep up at that rate, so the drive 'miss revs'
which drops write speed in raid5 by around 50%, which is what I am seeing.

I tried some old 'slower' drives (8x WD2500's) which run at 60 MB/sec at cylinder 0.

Those showed raid0 write of about 450 MB/sec, and raid5 write of about 400MB/sec,
with the usual down sloping curve from cylinder 0 to the spindle, no flatspots.

This suggests the 1880ix raid engine completely kept up with eight of the 60MB/sec
older drives.

On Odditory's post of the 8x Hitachi 2TB Deskstar, that was a 'nice' curve
showing the raid engine kept up over the whole drive.. It would be nice to
know the exact model of that Hitachi 2TB Deskstar, and especially the
single drive write speed at cylinder 0. Based on that hdtune graph, which to
me looks like about 800MB/sec (after the cache rush) at cylinder 0, so
800/7 = 114.28 MB/sec for the guestimated single drive cylinder 0 write speed???

Looking at the 12x and 24x raid 5 writes, they 'go flat' which indicates that the
raid engine is not keeping up with drive hardware speeds (or maybe some
expander limit). The 12x Hitachi 2TB shows the first 3/4 of the write graph
'flat topped' at just above 1250 MB/sec, and the 24x drives show the
whole graph flat at 1250 MB/sec..

We know the 1880ix (in raid 0, no parity needed), can deliver 2600-3000 MB/sec
from other benchmarks I have seen using SSDs.

If the raid5 parity engine could completely keep up, then using a single
drive speed of 114.28 MB/sec for the Hitachi 2TB, then the 12x raid5 write
speed should be 11 x 114.28 = 1257 MB/sec and the 24x drives should be
23 x 114.28 = 2628 MB/sec or double what the bench showed.

This tells me that for Hitachi 2TB drives that run 114.28 MB/sec at cyl 0,
11 or 12 drives is the maximum for an 1880ix-24 while maintaining
maximum raid5 write performance.

Odditory, does a raid5 read on the 24x Hitachi 2TB array deliver around 2628 MB/sec
at cylinder 0?

thanks again for all your input.. Any comments?
--ghg

ps. All the near-zero access times you guys are posting...
I just upgraded from HDTune Pro 4.01 to 4.60, and now
my 'terrible' access times look like yours, at nearly zero, so
I suspect it is some artifact with the new HDTune Pro
on raid controllers.. Access times look like an SSD!

--ghg
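A quick sketch of the throughput model being argued in this post, with ghg's own figures plugged in (the per-drive speeds are his estimates, not measurements of anyone else's drives):

Code:
# Expected sequential write speed for parity RAID, assuming the parity engine
# and expander keep up: (members - parity_drives) * single-drive speed at cyl 0.
def expected_write(members, parity_drives, single_mb_s):
    return (members - parity_drives) * single_mb_s

print(expected_write(8, 1, 150))       # 8x Constellation raid5 -> 1050 MB/s (bench showed ~450)
print(expected_write(12, 1, 114.28))   # 12x Hitachi 2TB raid5  -> ~1257 MB/s
print(expected_write(24, 1, 114.28))   # 24x Hitachi 2TB raid5  -> ~2628 MB/s (bench flat at ~1250)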
 