ARECA Owner's Thread (SAS/SATA RAID Cards)

We know the 1880ix (in RAID 0, no parity needed) can deliver 2600-3000 MB/sec, based on other benchmarks I have seen using SSDs.
They were not really benching SSDs; they were benching the RAID controller cache.

Why they even bother posting cache benchmarks in the storage section is beyond me, since it just spreads misinformation about what certain RAID controllers are really capable of with actual drives rather than with cache.
 
^ A big +1 on that, because then you get the inevitable follow-up threads with people complaining that their performance numbers aren't adding up.
 
My replacement should be in this afternoon, then a day or so for a foreground init, then some performance tests with my new 5x F4 2TB EcoGreens. I'll keep you posted on whether this solves my performance issue.
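When I run the tests I'll make sure the test size dwarfs the controller cache and use direct I/O, so I'm measuring the drives and not the cache. Something along these lines on Linux (the device name and size here are just placeholders):

Code:
# reads ~64 GiB sequentially; iflag=direct bypasses the OS page cache,
# and the sheer transfer size swamps the controller's onboard cache
dd if=/dev/sdb of=/dev/null bs=1M count=65536 iflag=direct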
 
A person at work here thought up what might be a good use of an ARC-1880ix-24:

Deploy Solaris x86 on a really fast Intel machine with lots of cores and RAM.
Connect up 24 Seagate 2TB Constellation drives, configure the 1880ix
as 24 pass-through drives, and let Solaris ZFS combine/software-RAID all
the drives into a giant filesystem. That way it will not be held back by
a slow RAID engine.

Allocate 20 or so drives to the software array (3000 MB/sec max, which is my guess of where the 1880ix would top out in non-parity operations), and leave the last four drives as hot spares.
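If it helps anyone picture it, here is roughly what that pool could look like from the Solaris side once the 1880ix hands the disks through. This is purely a sketch under my own assumptions - two 10-disk raidz2 vdevs plus the four spares, and the c0tXd0 device names are hypothetical:

Code:
# build a pool from 20 pass-through disks as two 10-wide raidz2 vdevs,
# with the remaining four disks assigned as hot spares
zpool create tank \
  raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0 c0t6d0 c0t7d0 c0t8d0 c0t9d0 \
  raidz2 c0t10d0 c0t11d0 c0t12d0 c0t13d0 c0t14d0 c0t15d0 c0t16d0 c0t17d0 c0t18d0 c0t19d0 \
  spare c0t20d0 c0t21d0 c0t22d0 c0t23d0

# sanity-check the layout
zpool status tank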

I have a 4-core test system with 6GB RAM and 8x 2TB Constellation drives that transfer at 150MB/sec at cylinder 0. We'll see what happens; fun project.

Linux can do that as well. Back in 2001 I built a huge Linux software RAID with 156 80GB hard drives. (Areca did not exist back then.)
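The same idea works with Linux md, if Solaris isn't your thing. A minimal sketch of the 20-data-drive-plus-spares layout described above - the RAID level and the /dev/sd* names are just my placeholder assumptions:

Code:
# 20 active members in a software RAID 6 plus 4 hot spares (24 disks total)
mdadm --create /dev/md0 --level=6 --raid-devices=20 --spare-devices=4 /dev/sd[b-y]

# put a filesystem on it and watch the initial sync
mkfs.xfs /dev/md0
cat /proc/mdstat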

Has anybody else done that?
--ghg
 
hi everyone,

I was forwarded to this thread by Odditory from the HP SAS expander thread, hoping someone in here with knowledge in this area can answer/help me. Here is the thing: I have a RAID 6 and a RAID 5 (16 drives total) on an ARC-1680ix-24 that have been running great. I also have a spare ARC-1260 (16 ports) that I am thinking of swapping in for the 1680 controller; I plan to use the 1680 in another box that can handle 24 drives. Is it possible to move arrays between these two controllers? Will the 1260 recognize my RAID 6 and RAID 5 during boot-up and not lose data? Thanks.
 
It's normally no problem moving arrays between Areca controllers, and as for your question about drive order in the other thread, it's true that you don't have to maintain the original drive order that existed when the array was created - as long as all array members are present, the Areca card will bring the volume up fine. Also make sure that if you put multiple arrays onto the same controller, the volumes are assigned different LUNs. You can change the LUN in the "Modify Volume Set" screen. Some people bring up two arrays with the same default LUN (0/0/0) and then have a minor freakout when the card only acknowledges one of the volumes at bootup.

A word of caution: I would *strongly* recommend that anyone moving drives between cases or controllers keep the original drive order, the reason being that if you're ever faced with having to recover data from a broken array, it becomes infinitely more difficult with the wrong order. At the least, consider marking the drives with their original slot number using a permanent marker before mixing them up.

I found this out the hard way recently when Murphy's law struck and a 24-drive RAID 6 array had its raidset/volume metadata get corrupted while I was backing up the data to another array (a flaky motherboard ended up being the culprit, not the Areca card). Because over time I had migrated the drives between several different cases (Supermicro -> Norco 4220 -> Norco 4224), the drive order got jumbled, because I was lazy and didn't understand the value of maintaining the order. Had I done so, recovering the data would be easy and I could create a NO-INIT array and volume, but now I'm faced with having to analyze and compare every drive at the sector level to determine their original order - I'll get the data back, but it's absurdly time consuming.
 

Interesting. I know the older Adaptec cards could not deal with out-of-order drives, but since the RAID metadata is stored on the drives, I guess it makes sense that some cards can these days.

Apparently we will need a minor firmware update for 3TB support on the 3Gbit SAS cards; not that there is a big hurry, given the sorry state of 3TB availability at the moment.
 
FYI, I've petitioned Areca to make a few enhancements to their WebGUI and firmware to avoid people in corruption situations making things unintentionally worse:

- In the System Configuration screen, add an "Auto Rebuild Raid" option with an "Enabled/Disabled" dropdown. There are situations where you do not want the card rebuilding the volume automatically.

- Enhance the way raidset/volumeset information signatures are written to the disk. Currently raidset and volumeset info gets written to sectors 1 and 2, and I believe they use the last sector of the disk for something as well. There are plenty of unused bytes left in those sectors to maintain a "backlog" of previous raidset/volumeset signatures, so that in certain data-recovery circumstances you'd be able to roll back to, say, the second, third, or even eighth-oldest signature.

- Ability to backup and restore raidset/volumeset signature information through a file downloaded via the WebGUI.
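Until something like that exists, a stopgap I'd consider is simply dumping those sectors yourself with dd while a drive is somewhere the OS can see it raw (pass-through/JBOD, or hanging off another HBA) - it won't work on a hidden member behind the Areca, and it's only a sketch of the idea based on the sector layout described above, not an Areca-sanctioned procedure. The device name is hypothetical:

Code:
# save the first few sectors of the member drive (raidset/volumeset area) to a file
dd if=/dev/sdb of=sdb-areca-head.bin bs=512 count=4

# and the last sector, in case metadata lives there too
SECTORS=$(blockdev --getsz /dev/sdb)        # total 512-byte sectors
dd if=/dev/sdb of=sdb-areca-tail.bin bs=512 skip=$((SECTORS - 1)) count=1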
 
So, here's a good question that I wonder if anyone can answer: what's the compatibility with LSI's new 6Gbit expanders? I'm looking at getting an 1880x and a Supermicro SC847E26-RJBOD1 (or the 36-bay version) for a project. Any thoughts?
 
Seeing as how the integrated expander on the 1880ix cards *is* the LSISAS2x36 chip, the only thing holding compatibility back would be any firmware anomalies, if you're talking about using an 1880x with the Supermicro chassis. My overall experience with SAS-2.0 interoperability between products is that it's far better than the screwiness that was SAS-1.0 interop. I'd be very surprised if you encountered any compatibility issues given they're both SAS-2 spec products.
 

Thanks for confirming this and for the detailed info, Odditory. Unfortunately, last night I was curious and impatient, so I went ahead and removed all the existing cabling without noting the drive order :mad: I did boot up the system and log on to the card management interface to see if it would recognize the arrays, but all I saw was every drive listed as available (don't remember the exact term). I then shut down the system, as I don't want to make it worse and lose my data. I plan to boot it up again with just one array connected when I get home later. If that's a no-go, then I will put the 1680 back and hope everything will be OK. If anyone has other suggestions, please let me know. Thanks.
 
Don't worry, you're not going to lose any data. I've rearranged drives many times, reconnected, and been fine.

"Data dies hard," and the only way to lose data on an array is to do something to intentionally overwrite it, like re-initializing a volume or zero-filling an array member drive when connected in JBOD mode. In other words, you'd have to work at killing your array data enough for it to be unrecoverable - the same as when you quick-format a JBOD drive and the previous partition data still lives on the raw sectors until they get overwritten by something else.

But you're not nearly at that point. Just make sure all the drives are connected when you power on, and the card should read the signatures from the drives and mount the volume.
 
OK - I am new here and hope I am not going to break any of the rules, which I did read. I joined this forum today after many searches for a solution to a problem with the expansion of my Areca ARC-1231 RAID-6 array. I bought the system 2 years ago and it came to me with 12 1TB Seagate Barracuda disk drives (ES.1) - the ones with a lot of problems. I was lucky enough to have bought the system from a reputable dealer - Sam at www.datoptic.com - he has never let me down and I have never lost any files. This was the 3rd RAID system I bought from him. Anyway, after I got the system, the Seagate disk drives began to die. About one drive went bad every few weeks, and I mailed it back to him and he sent a replacement. Then one day, after I had the system for about 3 or 4 months, two disk drives went bad at the same time. I made the mistake of removing one of the good drives, and even though I put it back in the same slot, the system would not accept it. I wound up having to mail all the disk drives to the dealer, and he eventually sent them to Areca in Taiwan; when they came back, all the files were there. Happy story.

What does that have to do with the expansion? I am getting to that. I have been extremely unhappy with my Seagate drives for this and another reason. Seagate had a problem which they tried to hide (at least I think they did) which caused the drives to go into a wait or freeze for 5 to 50 seconds. Multiply that by 12 drives in a RAID array and you have a long wait, which I saw very often, and this made the system lock up until the timeout elapsed. For this reason I decided to get rid of Seagate and upgrade at the same time to Hitachi 2TB drives. I only have one RAID system and it is too big to back up, so I have been doing this without a backup all this time. Yes - I know you are supposed to have backups. I have been a computer programmer (mostly on IBM mainframe computers) since 1966.

I used the procedure of taking out one of the old disk drives at a time and letting the system rebuild each drive before repeating with the next disk drive until all the drives were rebuilt with the 2TB Hitachi drives (about 70 hours). Then I tried to expand the system without luck. The problem was that the solution to how to do this was not documented and the keywords and phrases I used to search for a solution did not bring me to either of the two threads which did have the solution (after I knew what to look for).

That is the real reason for this post: to document the solution so that it can be found by Google. One of the two places which did publish the solution was here on HardForum, and I am proud to be a new member now. I have copied the link to that thread here:

http://hardforum.com/showthread.php?t=1356904

Now I will document here what was not in that thread. If the moderators of this forum think this should have been part of that thread, please feel free to move this post.

I went to "Modify Volume Set" where there is a "Max Capacity allowed" which was set to 10000.0 GB (10TB) and could not be changed. I seached for "Max Capacity allowed" along with areca and found no solution. I searched the areca website and read their firmware changes with no luck. I am on firmware 1.46 and considered upgrading to 1.48 - but was told this would not help. I am including this information - not because it is needed but to allow someone searching for that to find this solution.

The solution turned out to be undocumented.

1. Go to "RAID Set function" -> "Rescue Raid Set"

3. Use key word: RESETCAPACITY (RAID Set name)
ex: RESETCAPACITY Raid Set # 000

4. Go to "Volume Set function" -> "Modify Volume Set" and you will see the new capacity.

This did not work - it was necessary to modify step 3 to use:
RESETCAPACITY Raid Set # 00

Then you go to "Modify Volume Set" and change the volume capacity right under "Max Capacity Allowed" to the new maximum, and it will then begin the expansion rebuild, which for me took about 5 hours.

Now there was mention in the other thread which I referenced above about another undocumented and better known fix - namely:

if raid not found, perform these commands:
"RESCUE"
reboot
"SIGNAT"
"LeVeL2ReScUe" (case sensitive) (!!!)
reboot (array should be back)
"SIGNAT"

That did not work before I had to send my disk drives to Taiwan.

Now I think I have had my say. Slap my wrist if I did this wrong - but please keep enough of it to allow someone to find this who only knows about "Max Capacity allowed" not being changeable.
 
That's bizarre - and I'm not sure exactly what you're trying to document a solution for.

I assume you realize that "Max Capacity allowed" for a volume set depends on how big your raid set is. When you add drives for capacity expansion, it's a 2-step process: you first expand the raid set, and then once that finishes you expand the volume set to grow into that new space. Did you do that? Some people don't understand the distinction between raidset and volumeset because a lot of people use the "Quick Create" function for their first array, which does both processes in one shot, and so they assume subsequent expansions/modifications are also a one-step process.

I'd also caution anyone against using the "Level2Rescue" command without first contacting Areca support, in situations where you're trying to recover the raidset signature and rewrite it to all member drives. That command is undocumented for a reason; they only instruct people to use it case-by-case because it has the power to make a bad situation worse -- I speak from personal experience. There are some scenarios where you definitely don't want to run that command.
 
I removed all the SATA cables from the controller and connected one array at a time. The controller sees all drives in the array as "Free" under Drive Information - no raid set. Same thing with the second array: all listed as free drives. Not sure what to do now.
 
ARC 1680ix-24

2010-10-28 22:49:37 E6 TP=74;SN=PMA6 Recovered
2010-10-28 22:49:37 E6 @ Recovered
2010-10-28 22:49:07 E6 TP=74;SN=PMA6 Removed
2010-10-28 22:49:07 E6 @ Removed
2010-10-28 22:48:37 E6 TP=74;SN=PMA6 Failed
2010-10-28 22:47:37 E6 TP=74;SN=PMA6 Recovered
2010-10-28 22:47:37 E6 @ Recovered
2010-10-28 22:47:07 E6 TP=74;SN=PMA6 Failed
2010-10-28 22:47:07 E6 @ Failed
2010-10-28 22:45:37 E6 TP=74;SN=PMA6 Removed
2010-10-28 22:45:37 E6 @ Removed
2010-10-28 22:45:07 E6 TP=74;SN=PMA6 Failed
2010-10-26 20:11:36 Enc#6 SES2Device Time Out Error


Any idea what the heck this is? I see the "device" above as E6, but I don't see any media errors listed for any drives. This is dealing with my SAS JBOD enclosure, but it doesn't seem to specify which drive, or whether it's the enclosure itself having issues.

Running a check volume now on it.


my drive list is:

E3SLOT 01 | 2000.4GB| RaidSet Member |WDC WD20EADS-00R6B0
E3SLOT 02 | 2000.4GB| RaidSet Member |WDC WD20EADS-00R6B0
E3SLOT 03 | 2000.4GB| RaidSet Member |WDC WD20EADS-00R6B0
E3SLOT 04 | 2000.4GB| RaidSet Member |WDC WD20EADS-00R6B0
E3SLOT 05 | 2000.4GB| RaidSet Member |WDC WD20EADS-00R6B0
E3SLOT 06 | 2000.4GB| RaidSet Member |WDC WD20EADS-00R6B0
E3SLOT 07 | 2000.4GB| RaidSet Member |WDC WD20EADS-00R6B0
E3SLOT 08 | 2000.4GB| RaidSet Member |WDC WD20EADS-00R6B0
E3SLOT 09 | 2000.4GB| RaidSet Member |WDC WD20EADS-00R6B0
E3SLOT 10 | 2000.4GB| RaidSet Member |WDC WD20EADS-00R6B0
E3SLOT 11 | 2000.4GB| RaidSet Member |WDC WD20EADS-00R6B0
E3SLOT 12 | 2000.4GB| RaidSet Member |WDC WD20EADS-00R6B0
E3SLOT 13 | 2000.4GB| RaidSet Member |WDC WD20EADS-00R6B0
E3SLOT 14 | 2000.4GB| RaidSet Member |WDC WD20EADS-00R6B0
E3SLOT 15 | 2000.4GB| RaidSet Member |WDC WD20EADS-00R6B0
E3SLOT 16 | 2000.4GB| RaidSet Member |WDC WD20EADS-00R6B0
E3SLOT 17 | 2000.4GB| RaidSet Member |WDC WD20EADS-00R6B0
E3SLOT 18 | 2000.4GB| RaidSet Member |WDC WD20EADS-00R6B0
E3SLOT 19 | 2000.4GB| RaidSet Member |WDC WD20EADS-00R6B0
E3SLOT 20 | 2000.4GB| RaidSet Member |WDC WD20EADS-00R6B0
E4PHY#0 | 2000.4GB| RaidSet Member |Hitachi HDS722020ALA330
E4PHY#1 | 2000.4GB| RaidSet Member |Hitachi HDS722020ALA330
E4PHY#6 | 2000.4GB| RaidSet Member |Hitachi HDS722020ALA330
E4PHY#7 | 2000.4GB| RaidSet Member |Hitachi HDS722020ALA330
E4PHY#8 | 2000.4GB| RaidSet Member |Hitachi HDS722020ALA330
E4PHY#9 | 2000.4GB| RaidSet Member |Hitachi HDS722020ALA330
E4PHY#10 | 2000.4GB| RaidSet Member |Hitachi HDS722020ALA330
E4PHY#11 | 2000.4GB| RaidSet Member |Hitachi HDS722020ALA330
E5PHY#4 | 2000.4GB| RaidSet Member |Hitachi HDS722020ALA330
E5PHY#5 | 2000.4GB| RaidSet Member |Hitachi HDS722020ALA330
E5PHY#6 | 2000.4GB| RaidSet Member |Hitachi HDS722020ALA330
E5PHY#7 | 2000.4GB| RaidSet Member |Hitachi HDS722020ALA330
E5PHY#8 | 2000.4GB| RaidSet Member |Hitachi HDS722020ALA330
E5PHY#9 | 2000.4GB| RaidSet Member |Hitachi HDS722020ALA330
E5PHY#10 | 2000.4GB| RaidSet Member |Hitachi HDS722020ALA330
E5PHY#11 | 2000.4GB| RaidSet Member |Hitachi HDS722020ALA330


The 20 E3 drives are hooked straight up to the card in RAID 6.
The 16 E4/E5 drives are hooked up via a SAS JBOD box in RAID 6.


SAS Expander under "Hardware Monitor"

E#6:XYRATEX RS1603-SAS-01 06
────────────────────────────────
TP=74;SN=PMA65667022633   2400   OK
TP=74;SN=PMA65667022634   2400   OK
 
Has anyone else noticed that Areca cards are not good at dropping bad drives?

They seem to just keep the bad drives regardless of the problems they cause, and I end up having to manually pull problem drives since Areca has no drive-removal feature.
 


Hi Odditory,

Apparently you missed the fact that I did not add any disk drives, and therefore there was no need for the "expand raid set", which only works when you add new disk drives to expand an array. I only replaced my current disk drives with larger-capacity disk drives - thus obviating the need for an "expand raid set". However, I still needed to do the "modify volume set" but was unable to, because the "Max Capacity allowed" did not automatically pick up the new space from the new disk drives. That required the new and undocumented command.

I agree about not trying to use the Level2Rescue without supervision and I did have that help at the time. I tried to shorten the saga as much as possible and left out the fact that Seagate now has a firmware fix for their disk drives (NOT WELL DOCUMENTED).

I also left out the fact that after you successfully expand the raid array on the Areca, you are not done if you are using Open-e NAS-R3 or their DDS R6. You have to expand the volume there as well, and you may be in for a nasty surprise, as this operating system comes with a 16TB capacity limit. I had to purchase an "expansion license" for 4 more TB, and that cost me $209. Once you tell NAS-R3 to expand the volume, you will lose access to all your files until you pay for the upgrade license. If I should want to double my capacity with new 4TB drives in a few years, I would need to part with another hefty fee of about $1300.00. I am not a large commercial company - just a home user - and this seems like highway robbery to me. I was unaware of this limit when I purchased the system, but now I am. I just learned about FreeNAS and will replace Open-e on my next upgrade.

http://en.wikipedia.org/wiki/FreeNAS

I hope this satisfies your question about what I was trying to document. If not, I am willing to try again, and willing to be contacted by phone.
 
I get it -- I missed the part about you swapping in 2TB's for the 1TB's. Something like that works great with ZFS, but you can't really blame the controller, because as you know each 2TB drive you swapped in was treated as a 1TB member drive to match the other members. I would expect pretty much any hardware raid controller to act like that, and so of course the "Max Capacity Allowed" was capped based on all your 2TB drives being treated as 1TB member drives.

I'm glad to hear Areca created the RESETCAPACITY command for situations like this and thanks for writing about it, glad it worked out for you. Pretty unorthodox though!
 
Update: I decided to put the 1680 controller back in since I didn't know what else to try, and both arrays are detected again by the original controller. Does anyone know why my 1260 controller wouldn't see either array created by the 1680 card? Is it not backward compatible? Thanks.
 
That's a good question - I've never had a 1260. I would send an email to support (at) areca (dot) com (dot) tw. They are usually pretty responsive - most times I'll get an answer within a few hours, assuming it's within business hours in Taiwan.

@pgjensen: I would also write to Areca support with that log -- it's bizarre. Is that log output from the VT100 interface? How long have you been having the problem? My guess is maybe there's contention between the integrated expander on the 1680ix-24 and the expander in the JBOD chassis - maybe the firmware isn't allowing them to coexist peacefully. But if it's been working for a while and just cropped up all of a sudden, that's weird. I guess I'd try disconnecting all the drives connected to the ports on the Areca card and see if it goes away, then vice versa.
 
I took Odditory's advice, and Areca support came back with the following.

Dear Sir/Madam,

you can not move a raidset from SAS controller to SATA controller no matter which drive type the array used.
because array created by SAS controller have more complex array structure than arrays created by SATA controllers.

so you can move a raidset from SATA controller to SAS controller but not SAS controller to SATA controller.




With that response, I guess I'll either buy another SAS controller or stick with the setup I have :( Again, thanks for your help, Odditory.
 
Just an update here on my part: my supplier sent me a replacement unit for my 1880ix-24. The new card is showing a huge improvement over my last card, which would give me about 30MB/s throughput once the write cache was filled. The new card is doing over 400 MB/s even with background initialization running after expansion. The only thing I have to look into now is that I don't really see a performance boost from the cache, but all I have to test from right now is a 5-disk RAID 5 set on a HighPoint RocketRAID 3560 that's in the same system until the data is moved. So a 400MB/s average read speed from there isn't too shabby...
 
Can anyone try dual arrays (raid 6 and raid 1) on a new 1880 to see the potential performance hit?
 
Seeing as how the integrated expander on the 1880ix cards *is* the LSISAS2x36 chip, the only thing holding compatibility back would be any firmware anomalies, if you're talking about using an 1880x with the Supermicro chassis. My overall experience with SAS-2.0 interoperability between products is that it's far better than the screwiness that was SAS-1.0 interop. I'd be very surprised if you encountered any compatibility issues given they're both SAS-2 spec products.

would you feel the same about the Astek A33606-PCI?
 
Sure, but I haven't tested one. I have to admit that Astek expander looks nice: Molex-powered and dual SFF-8088 ports for cascading; the only thing it lacks is two uplink ports for dual-linking with the host card, something the HP does have. Then all you have to do is add a Supermicro JBOD power control board for $40, a PSU, a Norco RPC-4224, and cables, and you're done. Unfortunately that expander costs twice the price of the HP expander - the only real downside, since people are spoiled now by the relatively low cost of the HP.
 
Looks like a new beta firmware has been posted for the ARC-1880 series. I've applied it to one of my two 1880i's but haven't done any testing yet to notice a difference. I'm waiting to hear back from Areca on the changelog, since I'm trying to wrap up a formal review/write-up of the card.

ftp://ftp.areca.com.tw/RaidCards/BIOS_Firmware/ARC1880/Beta/Build101006/

there is also one for the 1680

Code:
ftp://ftp.areca.com.tw/RaidCards/BIOS_Firmware/ARC1680/Beta/Build101105/
 
Hi Guys,

A note about my continuing saga (struggle) with a 12-drive RAID-6 system; reference my first post 5 days ago (11-09-2010). I have to suppose most of you are familiar with a company called Open-e (www.open-e.com) - they are one of the companies who sell operating systems to make your RAID system look like a single hard drive to your Windows system. Their systems (called NAS-R3 or DDS R6) come with a license for up to 16TB. I guess I did see that when I first explored my system when it was new and shiny - you know how you go out to kick the tires on your new car. Well, of course I promptly forgot about this until I upgraded from 12 1TB Seagate ES.1 drives to 12 2TB Hitachi drives and configured it for a single-volume RAID-6 of 20TB. I had a few problems doing this on the Areca side (but did get it done - see my previous post if you plan to do that); and then came the easy part - a volume set expansion on my NAS-R3 operating system. I told it to expand and BAM - I am dead in the water. I cannot access my RAID system at all. No grace period - no nothing. I called Sam at www.datoptic.com, where I originally bought the system, and he said, "Oh, yeah. I forgot about that." There is a 16TB limit on the license.

So I took a look at the www.open-e.com website - Products, Storage Extension Keys, Buy Now - and WHAM, LOOK AT THAT: they want $209.00 to extend my system to 20TB, and my system is now a white elephant paperweight until I buy an extension. I called Sam and asked if he could get a better price - he did ($199.00, and it took Open-e a whole day to give him the license).

OK - so you think this story is over? NO - it is just getting good. I gave the system the new license; will this make it work? Yes and no. First, YES, because I can now use my original 10TB of RAID data. But NO, because I cannot expand my system to the 20TB I set it up for. No way. So Sam called Todd (an Open-e guru) and we had a 3-way chat. First, Todd explained that NAS-R3 is a 32-bit operating system and that it is clearly documented on their website that it cannot support volumes of over 16TB. You have to upgrade to DDS R5 (or R6) in 64-bit mode to use a single volume of more than 16TB. Not only that, but you have to copy the data from your old system to the new 64-bit system. I still don't understand why the actual data is written differently on a 64-bit system - maybe one of you gurus could enlighten me on that. Anyway, this can't happen unless you have two systems: one to put the new 64-bit system on, and the old one to copy the data from. Holy cow! I am just a home user with only one system, and I went out on a limb just to do this much. In fairness to Todd and Open-e, he did offer me a free upgrade from NAS-R3 to DDS R5 (something he valued at several hundred dollars - DDS R6 would be $525). I asked him to take the license off my system and refund my money. He said this was not possible, even though I had had it for less than an hour - he was signed on to my system remotely and had full access.
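(On the 16TB number itself, my best guess is that it is the usual 32-bit limit: the OS indexes a block device with a 32-bit count of 4 KiB pages, and 2^32 x 4,096 bytes = 16 TiB, so a 32-bit build simply cannot address a single volume beyond that. Why their 64-bit product then needs the data copied over rather than upgraded in place, I still cannot say.)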

I learned one other thing from Todd which I am thankful for - he explained that just having a RAID-6 system does not mean you do not need a backup. He said he has customers who have had cables go bad, which wrote bad data to the RAID, and the entire data set was lost. Now I am also looking for a cheap way to back up 20TB of data, maybe once a month. If anyone knows of a good way to do this, please let me know either by email at [email protected] or in this thread. Thank you in advance for any information on ways to deal with this much data. I do not have - nor can I afford - a second 20TB system for backup. I am looking for other options, please.

One other little fact made it into my little brain while I was looking at the open-e extension key prices. If I feel I want to do another upgrade to 4TB disk drives in a few years - no matter if I am on their new system or not - I will have to pay them an additional $1323.00 at today's price for the privilege of the license upgrade. I just can't see myself paying this. I know they are in business to make money, but... Caveat emptor - right? Let the buyer beware.

Well, I decided not to stick with Open-e (Todd was very mad - I wasted his precious time for something which was only $200; for a large company maybe this is chump change, but not for me). I found out there is a free operating system to replace Open-e called FreeNAS (reference the article on Wikipedia - http://en.wikipedia.org/wiki/FreeNAS ). Sam told me it did not work well enough to trust at the time I purchased my system - but it does now. I have mailed my system back to Sam and he will take care of moving my data from the old system to the new one after installing FreeNAS for me, and the charge is no more than the Open-e charge would have been. And I won't have to pay $1300 to someone for using larger disk drives in the future.

I am not sure this is the correct place for this post, but I feel it is a continuation of my previous post. Moderators, please feel free to move it if you wish. The purpose of this post is to let people know about the single-volume size limit and the price of upgrading, if they do not already know this.

By the way - a license for 16TB does not mean you can have several 16TB volumes on the system. It means the combined size of all your volumes cannot exceed 16TB.

I hope this will help someone else to avoid the pitfalls I fell into. Sam now sells new systems with FreeNAS installed - but is willing to install any system you ask for. Caveat emptor. A word to the wise...;)
 
I've had a 1260 running two arrays for a while now. Recently I've had two different 2TB Seagate ST32000542AS drives show up as 'removed' and then 'inserted' again, about 4 seconds apart each time. This is a RAID 5 of four 2TB drives. The machine was in a rather idle state at the time; nothing was beating on the drives (neither these nor any others on the controller). The events occurred at different times. I was not near enough to the drives to hear whether they spun down or not. The controller immediately began rebuilding, which took about 8 hours each time (and succeeded).

Any ideas on what happened? Or suggestions on how to debug it further?
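One avenue I haven't ruled out is checking SMART on the two drives through the card - recent smartmontools builds can talk to drives behind Areca controllers. A sketch on Linux; the /dev/sg node and drive number are placeholders that will differ on your system:

Code:
# query SMART for drive #3 behind the Areca controller
smartctl -a -d areca,3 /dev/sg2

Attributes like Reallocated_Sector_Ct, Current_Pending_Sector and UDMA_CRC_Error_Count would be the ones to watch - a climbing CRC count usually points at cabling/backplane rather than the drive itself, which would fit random drop-and-reappear behavior.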
 
Not sure if this has been covered earlier in this thread; I can't find it, at least...

ARC-1880ixl-8 (2x SFF-8087, 1x SFF-8088) - no expander? (can't see one, at least)
ARC-1880ixl-12 (3x SFF-8087, 1x SFF-8088) - no expander? (can't see one, at least)
ARC-1880ix-12 (3x SFF-8087, 1x SFF-8088) - expander, no slots that don't go via the expander?
ARC-1880ix-16 (4x SFF-8087, 1x SFF-8088) - expander, no slots that don't go via the expander?
ARC-1880ix-24 (6x SFF-8087, 1x SFF-8088) - expander, no slots that don't go via the expander?
 
The external ports on the 'ix' cards have always been direct to the IOP instead of the expander. I've been a bit curious myself on the 1880ixl cards though. I couldn't spot a chip for an expander anywhere and I thought the IOP only had 8 lanes.
 
there is also one for the 1680

Code:
ftp://ftp.areca.com.tw/RaidCards/BIOS_Firmware/ARC1680/Beta/Build101105/

According to Areca, the new beta firmware for the 1880 and 1680 mainly adds support for SEDs (Self-Encrypting Drives) rather than performance tweaks, so I'd say there's no need to upgrade for now.
 
The external ports on the 'ix' cards have always been direct to the IOP instead of the expander. I've been a bit curious myself on the 1880ixl cards though. I couldn't spot a chip for an expander anywhere and I thought the IOP only had 8 lanes.

Good question. AFAIK the IOP is 8 lanes natively, but I think they're employing "SAS multiplexing" to get 12 lanes to converge into 8, though I'm not exactly sure how it works in their implementation. My hunch is that the link-switching burden simply falls on the IOP in this scenario. Case in point: I had an ARC-1680 - the first rev of the 1680 series - which had 1x SFF-8088 and 2x SFF-8087 and no onboard expander, and I could have drives connected to all three connectors.

I believe SAS muxing is part of the SAS spec, but I haven't done too much reading on how it works.
 
I knew about the mux on the original 1680 cards, but I wasn't sure if all the ports could be used simultaneously. Not really sure why they changed things for the 1880ixl cards as the 1680ixl cards have an expander chip (smaller than the 1680ix cards though). Hard to figure out the implementation just by looking at the card. Circuit diagram would help, but I doubt they'd ever hand that over. Could be as simple as 4 or 8 (depending on the model) 2:1 multiplexers with the select line being tied to a clock, but that would cut the available bandwidth in half, so I'm really not sure how they're doing it.
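For rough context: each 6Gb/s SAS-2 lane nets about 600 MB/s after 8b/10b encoding, so 8 lanes straight into the IOP is roughly 4,800 MB/s of aggregate raw bandwidth to split among however many phys they're muxing. Back-of-the-envelope numbers on my part, not anything from Areca's docs.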
 
Hi

Got to the end of an initialisation, and it immediately went into a rebuild. No error messages in the list.

Capture1.png


The drive being fingered as causing the issue (as far as I can guess) doesn't appear to have any problems.

Capture2.png


Any ideas as to why this happened, or how I could find out?

Thanks
 