ARECA Owner's Thread (SAS/SATA RAID Cards)

Does anyone know if there is a way to reset the admin password? I bought a card second-hand off of eBay and the default password is not working. I can get into the card BIOS and that password works, but not the one for the web management console.

There is a hardcoded master password on some of the older cards; it can only be used in the BIOS.

mno974315743924

See if that works. It works on the 11xx and 12xx cards for sure; not sure about the newer cards, but give it a shot.
 
Yes, that password works. Or, I should say worked. I was able to use that password to get into the BIOS and change the BIOS password. What I can't log into is the web management console through the NIC interface. I know the default username/password is admin/0000, but that is not working. I'm trying to find a way to reset it. The previous owner must have changed it.
 
The password should be consistent, be it BIOS access, telnet, or the web server, last I checked.
 
Wow, I feel like I'm going crazy here. I must not be communicating this effectively. I have been using Areca controllers for over 10 years and there have always been two separate passwords:

1) Master password (default MNO974315743924) - This is the password used to protect the option ROM BIOS and needs to be entered once you are in the BIOS to make any changes.

In this case, for the ARC-1260 that I just picked up off eBay, this password worked fine; I was able to create the RAID set and then changed the password using the BIOS password option.

2) WebGUI username/password (default admin/0000) - this is the password used to log into the web management console, either through the locally installed ArcHTTP management tool or when logging into the built-in out-of-band web management interface that runs on cards with a built-in NIC. This web GUI can be used for online access to configure/monitor/update firmware without requiring a reboot, as well as for accessing the management console from remote computers.

This is the password that is not working. I have tried using the default WebGUI password, the default master password, and the new master password that I set. None of them work. In this case, I was able to complete what I was trying to do (upgrade firmware) by booting to DOS and upgrading it that way. However, I would REALLY like to be able to use the web management console. I was hoping there was a way to change this password without being able to log in, such as by using the CLI or some other method.
 
As Blue_Fox said, there is only 1 password. If you change the BIOS password then that same password should work in the WebUI. If it doesn't then maybe there is something wrong with the card. I have 4 different model Areca cards and they all work that way.
 
Hi,

I really need your help, guys.

I have a very big library (over 100 TB) on a Synology DS3615xs and I need to back it up to another server constantly (daily sync), so I found this used server:

Supermicro 4U Server

CSE-847

36x 3.5″ Bays (24 in front / 12 in back)
36x Trays Included

X8DTN+ Motherboard
2x Intel Xeon E5620 2.4GHz Quad Core 12MB Cache 5.86 GT/s
8x 4GB DDR3 Memory
SIMLP-3+ IPMI Remote Access Card
2x 1400W Power Supplies


The main thing here is that it doesn't have its own RAID card, and I know nothing about RAID cards, so I need your help finding the right RAID card for this server (36 HDDs) at a reasonable price. It needs to support SAS and be a low-profile card so it can fit into the chassis.


And sorry if I put this in the wrong thread.
 
Do you know which backplane that chassis has (pictures or full model name should say)? That's going to be a crucial piece of information concerning which RAID card you can use or if it will even be possible to use a single one.
 
http://www.supermicro.com/products/chassis/4U/847/SC847E1-R1400LP.cfm

This is the same server that I want to buy (the seller didn't specify the backplane included, but I believe it will be the same).

I need a SAS, low-profile card that can control at least 36 HDDs.

Thanks for the reply.

Update:

I contacted the seller and he told me the server has a BPN-SAS2-826EL1 backplane.
 
As Blue_Fox said, there is only 1 password. If you change the BIOS password then that same password should work in the WebUI. If it doesn't then maybe there is something wrong with the card. I have 4 different model Areca cards and they all work that way.

That's odd, none of the cards I have worked with have worked that way. Of course, they have all been older legacy cards (ARC-1220s and ARC-1260s mostly). In this case it is an ARC-1260. Either way, the password I set in the BIOS is not working in either the ArcHTTP login or for the built-in McRAID web console.
 
http://www.supermicro.com/products/chassis/4U/847/SC847E1-R1400LP.cfm

This is the same server that I want to buy (the seller didn't specify the backplane included, but I believe it will be the same).

I need a SAS, low-profile card that can control at least 36 HDDs.

Thanks for the reply.

Update:

I contacted the seller and he told me the server has a BPN-SAS2-826EL1 backplane.
Just about any Areca SAS card will work then. Areca 1880i/1882i/1883i would be optimal. If you do not need RAID functionality or want non-Areca options, any LSI SAS card should work well too like the 9211-8i or newer.

That's odd, none of the cards I have worked with have worked that way. Of course, they have all been older legacy cards (ARC-1220s and ARC-1260s mostly). In this case it is an ARC-1260. Either way, the password I set in the BIOS is not working in either the ArcHTTP login or for the built-in McRAID web console.
Any luck with telnet?
 
Just about any Areca SAS card will work then. Areca 1880i/1882i/1883i would be optimal. If you do not need RAID functionality or want non-Areca options, any LSI SAS card should work well too like the 9211-8i or newer.

Thanks for the reply.

I searched for these cards online and they are expensive. If you know of a card in the $200-300 range that would be great (supports RAID, low profile, supports up to 100 HDDs).

I have a storage server (a Synology NAS) and need to build another storage server for backups.
 
Any problems with buying used off eBay? Here's an example of a decent modern card in that price range that supports 128 disks: http://www.ebay.com/itm/151916078200

Then pick up a low profile adapter for $10: http://www.ebay.com/itm/131382787620

There are a few other LSI card models under $300 if you look around too. The Areca ones I mentioned can pop up in that price range from time to time too.
 
This is a good price, thanks.

But is there a redundancy option so that if a RAID card fails it doesn't take the volumes and the data on the HDDs down with it? Or should I just replace it with another card and everything goes back to normal? (I am a real beginner here, but I hope you get the idea.)
 
There are, but none are going to fit in your budget. RAID cards seldom fail though, so I wouldn't worry about it too much.
 
I will buy a card in the $200-300 range for now, but I'm considering adding "RAID backup or whatever it's called" later and want to know if that is a possible thing.

So if a RAID card fails, do all the arrays and data fail too with no chance to repair them? (What are the chances of this happening?)
 
It's a lot more than you think. You need SAS drives, dual-port backplanes, etc., on top of software that can handle that. It's not something one does for home use.

To answer your other question, generally, no, if a RAID card dies, you usually don't lose your data. Just attach a similar replacement RAID card (i.e., same brand) and import the disks, and your array should still be intact.
 
I see.

So if a RAID card dies (which is unusual, right?), I should still have my data?

Also, I read about something called "CacheVault flash cache protection and heat-tolerant battery backup". Will this prevent a card from dying, or what is its job?
 
I have a couple of 188x cards with battery backup. The batteries are failing and I'm wondering if anyone knows a source for the battery only without having to buy a whole new package from Areca...Thanks!
 
I see.

So if a RAID card dies (which is unusual, right?), I should still have my data?

Also, I read about something called "CacheVault flash cache protection and heat-tolerant battery backup". Will this prevent a card from dying, or what is its job?
Data is all stored on the disks, so yes. There are rare exceptions, but I wouldn't worry about it too much since you should have backups (RAID is not a replacement for backups after all).

The battery backup and CacheVault option is to prevent data loss when using the on board cache (cards often have 1GB of DDR3) in the event of a power outage. The battery keeps the cache powered for a day or two until you are able to restore power.
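Since the end goal in this thread is a daily sync from the Synology to the new box, here is a minimal sketch of one way to schedule it, assuming Python 3 and rsync are available on the source machine. The share path and hostname below are made-up placeholders, not anything specific to this setup:

Code:
#!/usr/bin/env python3
# Hypothetical nightly sync job (e.g. run from the Synology's Task Scheduler or cron).
# The source share and destination host/path are placeholders -- adjust for your shares.
import subprocess

SRC = "/volume1/media/"                     # example Synology share
DEST = "backup-server:/mnt/library/media/"  # example rsync-over-SSH target

# -a preserves permissions/timestamps, -H keeps hard links,
# --delete mirrors deletions so the backup matches the source,
# --partial lets interrupted transfers of large files resume.
subprocess.run(["rsync", "-aH", "--delete", "--partial", SRC, DEST], check=True)

DSM's Task Scheduler can run a script like this on a nightly schedule, so the sync stays hands-off once it's set up.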
I have a couple of 188x cards with battery backup. The batteries are failing and I'm wondering if anyone knows a source for the battery only without having to buy a whole new package from Areca...Thanks!
I believe it's just a set of 3 AA or 18650 batteries in there. I'm pretty certain I've seen someone replace the batteries in there themselves. Might have even been in this thread? Should be able to find it with a bit of searching I think.
Does anyone know if there is a maximum drive size supported by the 1880?
None that I'm aware of. Even the oldest Areca cards had 2TB+ support. Plenty of people running 8TB drives in them.
 
Data is all stored on the disks, so yes. There are rare exceptions, but I wouldn't worry about it too much since you should have backups (RAID is not a replacement for backups after all).

The battery backup and CacheVault option is to prevent data loss when using the on board cache (cards often have 1GB of DDR3) in the event of a power outage. The battery keeps the cache powered for a day or two until you are able to restore power.

Yes, I will have two systems with identical data on them (one of them is the Synology DS3615xs with 36 bays, and the new one, which I will buy soon, is the Supermicro with 36 bays).

There is one thing left:
Can I use WD Reds in the Supermicro server? (I don't need speed or performance, only reliability and compatibility with the SAS backplane.)

I read on the Supermicro page that I can only install SAS or "enterprise SATA" disks.

So is the Red an enterprise disk (the normal Red, not the Pro one) or not?

I will use the server for daily backups from the Synology one and as a Plex transcoding server for HD videos.
 
Supermicro recommends enterprise drives, however just about anything will work (including those WD Red drives). I have a mix of desktop drives in my 36 bay chassis connected to an Areca 1880i with no issues.
 
Well, I just received a message from the seller saying the server will only support up to 2 TB drives, and this really confuses me (I will only install 6 TB drives to match my other Synology server).

This is the server on eBay:

http://www.ebay.com/itm/Supermicro-...506344?hash=item35f3614ae8:g:OMkAAOSwEeFVSlze

The backplane, as the seller told me, is: BPN-SAS2-826EL1

And I already bought a RAID card: an LSI SAS 9271-8i

Can this server really not take drives over 2 TB?

Thanks.
 
The older backplanes had issues with larger drives (regardless of whether or not they were enterprise), but that part number is for a newer one. I've had no issues with 2TB+ drives in mine (newer model). I imagine with a bit of searching, you'll find reports confirming this too.
 
I know this is late, but thought I'd chime in anyways. I'm running a mix of 2TB (Hitachi), 4TB (Seagate) and 6TB (WD Red) on a BPN-SAS2-826EL1 backplane with no issues.
 
I own an ARC-1231ml-4G - I had purchased a 3rd-party 4GB DDR2 stick for it when I originally got the card. It's been a few years, but I recall there being some back-and-forth about whether or not the card supported 4GB.

I'm just wondering if the ARC-1883ix-8G I just purchased might also support a 16GB stick of DDR3-1600 ECC/unbuffered memory...

Thanks for your time
 
The 1231 definitely supported 4GB sticks. I don't think I've seen anyone reporting 16GB sticks working on the 12Gb/s models, as they've literally only just become available. There's only a single one available on Newegg, so they're very rare. I don't see why they wouldn't work, however.
 
That's odd, none of the cards I have worked with have worked that way. Of course, they have all been older legacy cards (ARC-1220s and ARC-1260s mostly). In this case it is an ARC-1260. Either way, the password I set in the BIOS is not working in either the ArcHTTP login or for the built-in McRAID web console.

For my 1220, the web login was Admin & <blank> (as in, nothing)... give it a shot, it can't hurt.
 
I have an ARC-1882IX-16 that will only run at a PCI-E link speed of 2X/8G. My motherboard is a Supermicro X10SRL-F. I have an Intel X520 card that is running at 8X/5G as expected, as well as an ARC-1882LP that is also running at 8X/5G as it should. I have moved the cards around, but the 2X speed follows the 1882IX-16. I'm on the latest firmware on both the mobo and the 1882. I also forced the 1882IX to run in PCI-E 2.0 mode, but I was still only getting 2X/5G.

Defective controller, or some compatibility issue between Areca and Supermicro?

The CPU on the Supermicro is an E5-2683 (ES, or engineering sample), but I can't imagine that having anything to do with it, could it? I only mention it because I know the PCI-E slots are connected directly to the CPU starting with the LGA-2011 sockets.

Intel X520 running at 8X:

[screenshot: PCI-E info1.PNG]


Areca 1882 running at 2X:

[screenshot: PCI-E info2.PNG]
 
If you move it to another slot in the same board (for example, the same slot the Intel is now in), does it make any difference?
 
No difference. I also just took the card and dropped into another rig with an Asus Maximus VI Hero mobo and it still comes up as 2X/8G. So the issue seems specific to the controller for sure.
 
Is it possible it only needs 2x link width with the current RAID array on the 1882? If more lanes were needed, maybe the card would use more?

Noob answer, but just trying to be helpful.
 
So my issue is that file transfers are maxing out at around 750 MB/s. I should be able to get twice that and saturate my 10 Gig connection between servers. So I'm thinking the 2x is the bottleneck.

I've got 4 RAID sets on the controller in this server: 4x Samsung 840 Pro in RAID 0, 12x 6TB WD Red in RAID 6, 12x 4TB Seagate in RAID 6, and 24x 2TB Hitachi in RAID 6 (2nd enclosure connected via a 4x 6Gbps SFF-8088 cable). I should be seeing at least 1,500 MB/s copying from the 24-drive RAID 6 array to the RAID 0 SSD array. I have also tried copying to a RAM disk and again, my speed is stuck at around 750 MB/s.

A friend of mine has a pair of 1880 controllers (I have 1882 controllers) and he's maxing out his 10 Gig network all day long syncing between servers.

I have started the process of returning this particular controller and have purchased another one. Hopefully that will take care of the issue.
 
Update: I provided Areca with a screenshot of the BIOS POST message, which is as follows:

[screenshot: 1882BIOSboot.PNG]


Given that it shows both 2X/8G and # Channels 8, they agreed that the controller appears defective. Hopefully once the new controller gets here, the issue will go away.

Just for grins, I ran CrystalDiskMark on 3 of the arrays and got some odd results:

4 x Samsung 120 GB 840 Pro SSD RAID0:

[screenshot: crystalssdraid0.PNG]


24 x Hitachi DeskStars 2TB RAID0 (HDS722020ALA330):

[screenshot: crystalraid024xhitachi.PNG]


12 x WD RED 6TB RAID6:

[screenshot: crystalraid612xwd6tbred.PNG]


Those 4k numbers are pretty awful with the 6TB reds...
 
So my issue is that file transfers are maxing out at around 750 MB/s. I should be able to get twice that and saturate my 10 Gig connection between servers. So I'm thinking the 2x is the bottleneck.

I've got 4 RAID sets on the controller in this server: 4x Samsung 840 Pro in RAID 0, 12x 6TB WD Red in RAID 6, 12x 4TB Seagate in RAID 6, and 24x 2TB Hitachi in RAID 6 (2nd enclosure connected via a 4x 6Gbps SFF-8088 cable). I should be seeing at least 1,500 MB/s copying from the 24-drive RAID 6 array to the RAID 0 SSD array. I have also tried copying to a RAM disk and again, my speed is stuck at around 750 MB/s.

A friend of mine has a pair of 1880 controllers (I have 1882 controllers) and he's maxing out his 10 Gig network all day long syncing between servers.

I have started the process of returning this particular controller and have purchased another one. Hopefully that will take care of the issue.

The PCIE lanes don't explain the limit. 2x PCIE 1.0 would be 500 MB/sec; 2x PCIE 2.0 is 1,000 MB/sec. You're falling right in the middle, so you can't be on 1.0 or you wouldn't see 750, and since you must be on 2.0 or greater, you must have at least 1,000 MB/sec of bus bandwidth, which is greater than your 10GbE.

This is the same result you got with ZFS too. Have you tried doing a ramdisk to ramdisk test? The limit could very well be on the network side, either drivers, hardware, or cable related.
 
Thanks. Wasn't sure what the translation was between PCIe GT/sec and MB/sec.

The card is running PCIE 3.0, so 2 x 8 GT/sec, which, if I'm doing the math right, translates to roughly 1,970 MB/sec (about 985 MB/sec per lane after 128b/130b encoding). So you're right, that should be plenty.

I'll see about doing some ramdisk to ramdisk tests. I also need to redo my iperf tests now that I'm running a direct link between the Intel X520 NICs in the servers, which rules out my switch being the bottleneck.
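For anyone who wants to sanity-check these numbers, here is a rough back-of-the-envelope calculation. It is only a sketch that ignores PCIe packet/protocol overhead, so real-world throughput lands somewhat lower:

Code:
# Approximate one-direction PCIe payload bandwidth per link, plus 10GbE for comparison.
GT_PER_LANE = {"1.0": 2.5, "2.0": 5.0, "3.0": 8.0}           # transfer rate per lane (GT/s)
ENCODING = {"1.0": 8 / 10, "2.0": 8 / 10, "3.0": 128 / 130}  # line-code efficiency

def lane_mb_per_s(gen):
    # 1 GT/s is 1 Gbit/s of raw line rate per lane; scale by encoding, convert to MB/s
    return GT_PER_LANE[gen] * ENCODING[gen] * 1000 / 8

for gen in ("1.0", "2.0", "3.0"):
    for lanes in (2, 8):
        print(f"PCIe {gen} x{lanes}: ~{lanes * lane_mb_per_s(gen):,.0f} MB/s")

print(f"10GbE wire speed: ~{10_000 / 8:,.0f} MB/s")  # ~1,250 MB/s before overhead

That works out to roughly 500 / 1,000 / 1,970 MB/s for a 2x link at PCIe 1.0 / 2.0 / 3.0, so even a downgraded 2x link on a gen 2 or gen 3 board still out-runs a single 10GbE port, which matches the reasoning above.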
 
I got my replacement 1882IX-16 card and it runs at the expected 8X/8G PCIE speed, so all is well on that front now.

I went ahead and set up a 32GB RAM disk and did some tests writing to it from the 3 arrays on the new 1882 controller:

12x6TB WD Red array:

[screenshot: WDRaid6toRAMdisk.PNG]


12x4TB Seagate array:

[screenshot: SeagateRaid6toRAMdisk.PNG]


4x120GB SSD RAID0 array:

[screenshot: Raid0toRAMdisk.PNG]


I would have expected the SSD RAID0 array to be significantly faster than the RAID6 arrays, but I suppose with data striped across 12 disks in RAID6, it performs about the same as 4 SSDs in RAID0.

But regardless, it looks like any of these 3 arrays will be able to saturate my 10Gb/s network link. Just waiting on the RAM to get here for the other server, and I'll get it up and running and begin testing across the network.

Just for grins, I ran Crystal on the RAM disk:

[screenshot: crystalRAMdisk.PNG]
 
But regardless, it looks like any of these 3 arrays will be able to saturate my 10Gb/s network link. Just waiting on the RAM to get here for the other server, and I'll get it up and running and begin testing across the network.

Interesting fodder for computer geeks. I guess it is necessary to set up a RAM disk to properly test these arrays; anything else would be too slow.

Do you have any further info on your 10Gb network links? This is just starting to hit the mainstream and is catching my interest. Is CAT6A cable adequate?
 
In my case I use Intel X520 SFP+ NICs (single port is about $60 and dual port about $120 on eBay). These are 8X PCIe 2.0 low-profile cards (they typically include full-height brackets as well). They can connect via SFP+ twinax cables, or via fiber using SFP+ transceivers, with the type needed depending on the distance you have to cover.

In my case, I have SFP+ transceivers and fiber cables to connect to my switch, which is a UniFi 48 port GigE with 2 SFP and 2 SFP+ ports.

My setup is as follows:

Workstation
|
Intel X520 SFP+ single port 10.0.1.53
|
|
|
UniFi switch
|
|
|
Intel X520 SFP+ dual port #1 10.0.1.51
|
Production Server
|
Intel X520 SFP+ dual port #2 10.0.2.51
|
|
|
Intel X520 SFP+ dual port #1 10.0.2.50
|
Backup server

Since I only have 2 SFP+ ports on my switch, only my workstation and production server can be directly connected to it. I then have a direct 10Gbps link between the 2 servers using a twinax cable on a separate subnet. The backup server also has a GigE connection to the switch on the main subnet for Internet access and general remote access.
 
In my case I use Intel X520 SFP+ NICs . . .

Interesting, thanks. I guess we should not be surprised at fiber winning out over copper even in the home. I suppose I can employ my CAT6A as pull ropes to get the fiber through the walls. :D
 