ARECA Owner's Thread (SAS/SATA RAID Cards)

Hello Everyone!

I have been a happy owner of an ARC-1880 (firmware V1.49) for many years. I find myself needing to build a new (much larger) volume. I'm looking at 10TB or greater per disk, RAID6, growing to roughly 7 drives with a global hot spare. I have several questions...
1. Is anyone aware if this card supports (or has issues with) newer high capacity disks?
2. Are there HDD brand/capacity recommendations or limitations?
3. I currently have 7x 2TB drives, leaving space for 5 more drives in my current chassis. Is there a preferred method for building this larger-capacity volume and migrating the data over?
You should upgrade to 1.56, which includes a number of drive-compatibility fixes made since 1.49. As for size, I have a few boxes with 1880s running Seagate 16TB EXOS drives with no issues. That being said, YMMV with any drive not on the HCL. As for migrating your data, the easiest way is to set up 5 new higher-capacity drives as a RAID6 and then move your data over. At that point you can pull the 7x 2TB drives, or keep them as additional storage.
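A minimal sketch of the copy step, assuming a Linux host with the old array mounted at /mnt/old and the new RAID6 at /mnt/new (both paths are made up for the example; robocopy /MIR is the rough Windows equivalent):
Code:
 # First pass: preserve permissions/ACLs/xattrs/hard links, show overall progress
 rsync -aHAX --info=progress2 /mnt/old/ /mnt/new/
 # Second pass before retiring the old array: checksum compare, list any differences
 rsync -aHAXc --dry-run --itemize-changes /mnt/old/ /mnt/new/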
 
I am still running a 1680ix with an HP SAS expander. Mainly for home HD/4K video via Plex and a bunch of other VMs and stuff on a 16-core JDM_waaat build with a GA-7PESH2.
I have 3 arrays in a Norco 4224 and have stuck with Hitachi/HGST almost exclusively (see below for models).
RAID 6: 11x 2TB, RAID 5: 4x 6TB, and RAID 6: 4x 10TB - all rock solid.
The 10TB drives were renewed from Amazon to save some money.
PS: SATA300 doesn't get saturated by spinning platters, so an upgrade to SATA600 wasn't worth it for my purposes.
The only issue is that the newer 10TB+ drives have a power-disable pin you will need to cover with a nonconductive material (Kapton tape) to use them in an older chassis (see picture) - unless you don't use hot swap, in which case you can use the power adapter they include.
I've heard the gripes about RAID rebuild times, but frankly mine have been rock solid, and when I went through a major upgrade it took 12-18 hours for the larger populated arrays.
[Attached images: Screen Shot 2021-06-22 at 2.38.39 PM.png, IMG_1655.jpg]
 
Is anyone aware of a decent rack-mountable chassis that sports 12x 3.5" hot-swap bays, supports an mATX (19x19) mobo and a full-height PCI card (ARC-1880), and is shallow (no more than 19" deep)?
 
Does anyone know if an ARC-1882ix-12 will work in a Dell T620?

I can't seem to get the system to boot with the card installed.

Any suggestions?
 
What other PCIe cards are in the box? Does it just black-screen with no display, or does it get to a point and then lock up? Do you get a response from Caps Lock/Num Lock if it locks up? Have you tried other PCIe slots? Do you have 1 or 2 CPUs in your T620?
 
Hi all,

I have a question. I can read the SMART values of the individual disks, but I am not able to work out which physical disk each one is :-(

For example, which drv number is SLOT 01 - 10, 15, 20...?

How do I find out? I thought the SMART page would show me the serial number of the disk.

Thank you for the help
 

Attachments: disc slot_01-12.png, smart_drv_12.png
Log into the card either through McBIOS or archttp and go to Physical Drives => View Drive Information, and it will show you the model and serial numbers.
 
Hi,
no, there is nothing like that :-(. Please send screenshots. Thanks
 
I can't find it that way either on my 1883, but give this a try: in archttp, go to Information => RAID Set Hierarchy. From that page, you can click on any of the devices to get its serial number and other info.
 
Here is a link to the 1883 manual. Go to pages 76-77 and you will see screenshots showing how to find it, exactly as I laid out above.
 
That works for McBIOS, but NOT archttp. I suspect Ondrej was trying to access it via archttp, which is why I gave that method. As you can see, there's no such functionality in archttp.
 
That's weird. I am running archttp 2.6.0 on an 1884, and when I choose enclosure #4 (one of my expanders), select the drive as you have above, and click Submit, I get a popup that looks like McBIOS, showing model, serial, etc.
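If you have Areca's command-line utility (cli32/cli64) installed, that is another way to map drv numbers to physical drives - a rough sketch from memory, so double-check the exact syntax against your version's HELP output:
Code:
 CLI> disk info
 (lists each drive number with its enclosure/slot, model and capacity)
 CLI> disk info drv=12
 (shows the full details for drive 12, including the serial number)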
 
I just picked up some Toshiba MG08ACA14TE 14TB HDs - does anyone know if the ARC-1883ix would support these drives? I have an old ARC-1880ix and it can use the drives in pass-through, but I can't get it to build any RAID with them. So I was thinking of swapping the smaller drives off the 1883ix and moving them to the 1880ix, and then I could build a new RAID with the 14TB drives. I'm just not sure whether the card will work with these or not.

I'm trying to avoid doing all the work and then having to go all the way back if they don't work.

Another option is to maybe wait for the newer 1886 cards, or go with another RAID card altogether...?

Thanks for any information you might be able to pass along.
 
Hi,
I have an ARC-1882IX-24 controller, and when I insert a Samsung 250GB SATA SSD into the enclosure where I have my other disks, the web interface shows a TIMEOUT error, then a FAILED error lights up and the whole web interface "freezes".

The SSD is fine, but it's causing problems with the controller.

The SMART values are OK (250GB Samsung SSD), so I don't know why it immediately throws a TIMEOUT error :-(

Has anyone else come across this?
 
Hello,

Has anyone used an ARC-1882ix-12 or other versions in Dell servers?

I have an ARC-1882ix-12 and a Dell T620, and can't seem to get it stable. The card shows up as gone in Device Manager even though it's still in the machine. Its fan is running, but I can't access the web interface either.

Hopefully someone has some ideas?
 
Hi all, I recently updated from an 1882i to a spanking new 1883x (I'd been buying 12TB Hitachi SAS 3.0 drives for a while in anticipation of this move). The process also swapped out an Intel RES2SV240 for an Adaptec 82885T (AFAICT it's completely identical to the Intel RES3FV288, for about $100 less), and new 12Gb/s cables of course. The expander and drives are all in an external enclosure using the 5-to-3 iStar trayless cages, which gives 15 usable slots. I currently have 12x 12TB drives and am quite happy.

Sadly, all of this is in one RaidSet, which is currently just RAID5 with a hot spare. Yah, I know, I know - that's a horrible situation and I'm TRYING to get (back) to a more sane and robust layout. This is where the problems start, and my search for advice begins:

For "historical reasons" (relating to the sequence of operations while migrating from 6tb to 12tb drives awhile back), this RAID5 has two VolumeSets on it - "media01" (36tb) and "media02" (84tb), which were concatted via Linux LVM into one large XFS volume. The underlying filesystem (XFS) is only 84tb, and I've fiddled with the LVM layout to get everything onto the one larger VolumeSet ("media02"), leaving the smaller one ("media01") completely empty. I've even managed to remove the smaller VolumeSet ("media01") from the card config, thus "freeing up" the space. I was HOPING to be able to then use that free space to do a VolumeSet modify and convert RAID5 "media02" into a RAID6, followed by growing the volume with whatever space was left over.

I cannot find any way to do those things, however; and more confusingly, I can't even seem to use the free space to simply GROW the "media02" volume (e.g. keep it RAID5 and grow it). When I try to modify the existing volume, the "max size" offered is the current size. The only option the card is giving me would be to create a new 32TB volume again, putting things right back where I started.

I'm guessing that "media01", since it was created first, is physically located at the start of the drives, and thus "media02" can't be extended (backwards) with those extants. If that is correct, is there any way to ... "slide" the media02 volume so that it begins at the start of the RaidSet instead of the middle? Granted my logic is right, this should make both the "convert to RAID6" option and the simpler "grow the RAID5" approach available.
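For anyone wanting to do the same LVM consolidation before dropping a VolumeSet, the rough sequence looks like this - a sketch only, with /dev/sdb standing in for the PV on "media01" and vg_media for the volume group (both names are made up for the example):
Code:
 # Move every allocated extent off the PV that sits on media01
 pvmove /dev/sdb
 # Drop the now-empty PV from the volume group, then clear its LVM label
 vgreduce vg_media /dev/sdb
 pvremove /dev/sdb
 # media01 can now be deleted on the card without touching the filesystem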
 
Hi all, I've been running the 1880i for about 10 years, and it has been great! It's currently still booting in Legacy, and I would like to finally make the switch to UEFI. After trying several different hardware and firmware configurations, I've had no luck trying to get this up and running! I've seen that a few other people have had problems with UEFI, but that was many years back in this thread. Whenever I switch to UEFI in my motherboard BIOS or disable CSM, the system no longer detects the RAID card. When I changed the BIOS selection in the raid card from Auto to UEFI, this thread was extremely valuable for learning how to un-brick my card!

After trying several of the things I've found in this thread, I've still had no luck getting this up and running in UEFI.

Things I've tried:
-->CSM disabled (disabling causes RAID card to not be detected)
-->CSM enabled with UEFI (causes RAID card to not be detected)
-->Areca Bios selection: Auto, UEFI, EFI (system not bootable with UEFI or EFI setting)
-->Changing the hardware slot that the card is installed in (both of the available x16 slots behave the same)
-->Manually setting the PCI speed from autodetect to generation 1 or 2 (no effect)
-->Upgrading the motherboard to the latest firmware (no effect)
-->Checking for more recent 1880 firmware (1.56 is latest, and was already installed)

Current system:
Areca 1880i Running Firmware 1.56 (latest available)
ASRock Z490 Phantom Gaming Mobo Firmware V1.2 (latest available non-beta)
Intel(R) Core(TM) i9-10850K CPU

Has anyone else had problems running this card in UEFI? Was there a solution?
 
On my X399 mobo, I have an 1882ix and an 1880x, and while they don't show up in POST, Windows recognizes them and I have had no issues. I have CSM turned on, but I'm not sure if that makes a difference. Love these cards!
 
I got it working!!!

I'm sure many here are adept enough at all of this that this isn't exciting knowledge, but I wanted to post here for anyone who might find this useful.

My computer enthusiasm was in its heyday at about the time that I bought this card, ~10 years ago. I booted from a RAID volume, and I didn't have any SSDs in my system. 10 years later, I had almost the same configuration, and I was REALLY needing an SSD. My goal was to get a couple of SATA SSDs in RAID 1 booting into Windows 11 and have my old HDDs in a separate larger RAID as storage. I had the impression that Windows 11 required UEFI and secure boot, as described in several articles. As described earlier, none of my volumes nor my DVD drive was recognized as a bootable UEFI device in my BIOS.

I configured my mobo with Intel Platform Trust enabled, Secure Boot disabled, and CSM enabled with legacy support for the Areca card. After making a Windows 11 bootable USB installer, I found that it would install to the RAID1 SSD drives with these settings! After feeding the Windows 11 installer the newest storport drivers (2021/04/29 on Areca's website) during the drive selection screen, it successfully found all of my RAID volumes. It installed to the RAID1 SSD with no problems, and Windows 11 runs well! It seems like Secure Boot and UEFI weren't needed in this case. A new boot option, 'Windows Boot Manager,' showed up after installation, and it seems to be the correct boot option to load Windows 11.

I had read earlier in this thread that the long counting sequence for booting up the Areca card was avoided when UEFI was successfully implemented. I'm still experiencing a ~42 second counter every time I start my computer, so I tried a few configurations to get UEFI working. After disabling CSM, I found that there were once again no bootable devices or boot options recognized by my motherboard. However, setting the Areca card from Auto to UEFI resulted in 'Windows Boot Manager' being the one and only boot option recognized. The system now boots in UEFI without that count-up! ...However, it still seems to take the full ~42 seconds anyway, so it didn't save any time.
 
I have a set of disks that used to be in an Areca RAID. They've been disconnected for quite a while (several years), and I don't know in what hardware order they were connected.

Do I have to reconnect them in their original order, or will the controller be able to figure it out and reassemble the RAID on its own? If not, is there a way to check them individually to determine their placement?

And, perhaps more importantly, is there anything I should NOT do during this process? I understand the basics, like don't connect them to windows or do any kind of checking/testing/partitioning/formatting.

I'm interested in extracting the files from the array and then recycling the drives. I don't expect to keep it running.

I still have the Areca controller and the PC. It sat unused in a rack the whole time, so I'd be firing up the whole setup again. I'd likely start the PC on its own first (without the controller) to confirm that it's still working OK.
 
When I moved to a new controller, I carried my existing array over and there was no hope of me connecting the drives in the right order. The new controller saw the array fine, and showed the drives out of order. It worked fine, was expanded from 5 disks to 8, and was converted from RAID 5 to RAID 6. There were some odd performance issues that went away when I remade the array, but I do not think that was a problem with order; it was simply the old controller's method vs the new. I would say make sure all drives are working by going into the BIOS before Windows loads. Make absolutely certain all the drives are connected fully with data and power. By staying out of the OS you reduce the possibility of a partially online array writing any data, nullifying the disks that weren't detected as part of the array and forcing a rebuild (or trashing the entire array if enough disks were missing). Myself, I would disconnect the boot drive entirely until the controller BIOS looked good.
 

Ok, good to hear someone's 'real world' scenario. I seem to recall it booted from the array, but I could be mistaken. But I'll definitely be visiting the motherboard BIOS first (no doubt the clock is wrong), so I'll check whatever it thinks the boot order is. I'll do this BEFORE putting any of the array drives into the machine (removable sleds). Then I'll check the Areca card firmware to see what it thinks, and then power down to load the drives. Once again, into the Areca firmware first to see what's what and take it from there.

With luck I'll be getting to this in about a week, I'll report back my results. THANKS!
 
Yeah, sorry if I wasn't clear - I did mean the Areca BIOS; you want to get in there. Even if that array is bootable, if you get into the config right away it shouldn't write any data. With any luck everything will be detected and you'll have no issues!
 
No worries, I couldn't recall if the board had its own, but now remember that it did. I'll want to make sure it sees itself as operating properly, as I believe it also had its own battery backup attached. I'm guessing it'd probably be good to power the machine up for a while to get that battery recharged, or perhaps just disconnect it to take that variable out of the equation.
 
I made a cable up for my ARC-1680 using these instructions - I connected them like so:

RJ11_________DE9
Pin 1 ----------- Pin 7 (RTS)
Pin 2 ------------- Pin 2 (RXD)
Pin 3 -------------- Pin 3 (TXD)
Pin 4 -------------- Pin 5 (GND)
(Pin 5 & 6 not connected)

I can get to a prompt that says "Password" when I plug it into the expander; however, it seems to send too many characters at once, and a "*" appears when I press Enter - not sure if I'm getting the password wrong or there is some other issue.

My terminal settings are VT102 (there's no VT100 option in minicom, but it should be compatible anyway), 115200 8 N 1.

Anyone have experience with getting into the expander CLI console?


Cheers!

Edit:
This is what happens when I type the default "0000" password and press enter until it actually sends the enter key:
Code:
 Password:**
  Password:**
  Password:****
  Password:*******
  Password:****
  Password:*******
  Password:**************
  Password:**
  Password:
  Password:****
I know this is old, but I came across it and didn't have enough info to get it working at first.
This is for the ARC-1680IX - it might apply to other cards with expanders.


Using ExtraPuTTY:

Connect and press Enter.
The password is 0000.


Try the sys command for info:
[Screenshot: putty sys.png]


To program you need to erase first
CLI>ER CODE
Erase Flash Region ...OK


Followed by
CLI> FL CODE
Use HyperTerminal or TeraTerm utility with Xmodem/1K mode to
transfer romXXXXX.bin.


Pick the .bin file via PuTTY. It must be Xmodem 1K - if you don't follow this, it will brick your board.
Putty will display a progress bar


Transfer file
[Screenshot: putty file transfer.png]


[Screenshot: loading bin.JPG]

Repeat for data files
CLI>ER DAT2
CLI>FL DAT2

Use HyperTerminal or TeraTerm utility with Xmodem/1K mode to
transfer mfghbaYYYYY.rom.


[Screenshot: rom file upload.JPG]

Pick the .rom file via PuTTY as before. It must be Xmodem 1K - if you don't follow this, it will brick your board.
PuTTY will display a progress bar.
A reboot will be required.
All done.

Hope this helps someone.
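If you're on Linux rather than Windows, roughly the same transfer can be done with picocom plus the lrzsz tools - a sketch only, I haven't tried it against the expander, and /dev/ttyUSB0 is just an example device path:
Code:
 # Open the console at 115200 8N1, with XMODEM-1K (sx -k) as the file-send command
 picocom -b 115200 --send-cmd "sx -k" /dev/ttyUSB0
 # Log in (0000), run ER CODE / FL CODE as above, then press Ctrl-A Ctrl-S
 # and give the path to romXXXXX.bin to start the transfer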
 
Performance of the 1680iX with just 1 drive attached. I did not think it would make any difference going directly vs via the card, but the performance is way better via the RAID card. I tried with cache disabled and read-ahead disabled and it made no difference, which again was surprising, as I would have expected those to be causing the speedup. Either way, fantastic performance from a PCIe gen 1 card, and this is sitting in slot 5 of a Dell T5810, which is only wired for x4.

This is a Hitachi HDS721010CLA320 1TB drive that came out of my old array. Both write speeds and read speeds are amazing for such a drive. The small-file performance is simply awesome via the 1680iX and is as expected via the motherboard SATA port. Look at the 4kB reads and writes: 134MB/s / 105MB/s compared to 18MB/s / 26MB/s on the MB SATA port. Also, the max read speed gets really crazy at 500-600MB/s. I tried both as RAID and pass-through and the performance was the same. I was surprised it allowed me to create a one-drive RAID 0. It was definitely a striped partition, as Win10 complained about it when I connected it directly.

[Benchmark screenshot: Areca vs SATA]
 
The 1680IX with one drive, now tested in SLOT 4 of the Dell T5810, which allows the card to run in x8 mode. Still better performance, which I was not expecting. Read and write performance improved even at small file sizes. Reads now max out at 1.2GB/s! No wonder people liked these for performance setups. I/O reads doubled for files larger than 32KB. The card I am testing seems to have been recycled from the NASA oceanweb program.

[Benchmark screenshots: areca from nasa 2.jpg, pcie 8 vs 4.png, pcie 8 vs 4 IO.png]
It can't compete with an NVMe, but it still performs well considering it is a 12-15 year old product.
[Benchmark screenshot: nvme vs arc.png]
 
1680IX with one drive now tested

You're basically just benchmarking the cache on the card, which explains the significant speedup versus the drive connected directly to the mobo SATA port. While the Areca card is undoubtedly a 'better' SATA controller, the maximum speed available to a drive on a single SATA 6 Gb/s port caps out at around 550-600 MB/s simply due to the limits of the interface. Your card manages to benchmark a single drive at > 1 GB/s read speed, which is *impossible* from a single SATA port. Thus, these results can only be obtained by pulling data from the on-card cache.

That's not to say the results are invalid, but they *are* very situational. For example, you're testing in ATTO with the 256 MB file size option. The Areca 1680ix by default comes with a 512MB cache, though that can be expanded up to 4GB. The point is, the entire 256 MB ATTO test file is able to be held within the 512 MB cache; if you run a larger benchmark, results will likely dip back down much closer to the raw performance of your single drive.
 
Yes, I understand the cache is the main source of the performance gain. The cache on this specific card is 2GB. Still, it's impressive to see tech that is so old, running PCIe gen 1, perform so well.

Here is a 22GB file copying over to an NVMe disk. There was a delay at the start of the copy - I assume that is Windows and/or the card doing read-ahead.

[Screenshot: copy to nvme.JPG]


Here is a write from the NVMe to the RAID card - after about 10GB the speed drops to 130MB/s.

[Screenshot: from nvme to T.JPG]

Large file ATTO

[Screenshot: nvme vs arc 8gb.png]
 

Attachments: 8gb file.JPG, nvme vs arc 8gb.png
Loved the Areca cards back in the day! Made a stack of spinning rust as snappy as a PCI-E SSD!

Now all we need is a card that can take 16 nvme and hold a boatload (1-2TB) of DDR5-6000+ for cache! Would probably need a fast IOP chip, maybe an Apple M1 LOL. All connected to PCIE 5.0. Or a LOT of lanes on Threadripper. Individual temp monitoring and throttle detection, annunciator channels (discrete)...man this thing would cost an arm and a leg and a kidney. But it would be worth it for a die hard storage nerd! For laptops we need xpoint that isn't gonna melt a hole in our pants. ;-)
 
I have NCQ for SATA turned off, I think mainly for historical reasons. Should I turn it on? Do I gain any performance? Any downsides? I run 8 SATA drives in RAID 6.
 
I actually never considered this with my newest arrays. I have 8x 4TB HGST drives and 8x 8TB WD Reds, each in their own RAID 6 array on my 1883. Both show the current SATA mode as SATA600+NCQ(Depth32) for the drives, so I guess I have it on. No clue how it impacts performance; I will say my sustained CRC checks and data transfers on both arrays went from about 350MB/sec to 480MB/sec when I upgraded the system from a Ryzen 1700X to a 5600X. I was surprised the system upgrade made that much of an impact. Integrity checks take about 14 hours for the 4TB-disk array and 20 hours for the 8TB one. Anyway, take this as some data points for what a system does with NCQ enabled.
 
Does anyone use Areca RAID cards with Dell servers? I am wondering whether the ARC-1882ix-12 counts as an internal or external RAID card?

I have a Dell T620 and the card "disappears" from the system even though it's still installed.

Thank you
Bill
 
I have an Areca 1680ix-24 (ARC-1680), Firmware V1.51 2012-07-04, SAS FW 4.5.3.0

I just bought some Seagate Exos X16 16TB 12Gb/s SAS 512e/4Kn ST16000NM002G HDDs.

I had to figure out why the drives wouldn't turn on: my HDD power cable had 3.3V power. I covered the Power Disable pins on the SAS connector and then the drive powered on. After confirming this, I decided to cut the wire supplying 3.3V so my array is not relying on a 3x8mm piece of tape.

Anyway, if anyone was wondering whether these, or drives like these, work on a 13-year-old SAS HBA, wonder no more. They seem to be working just fine. I have already done a 20-hour test and will be doing a battery of tests over the next week.

I will be configuring a 4-disk RAID-5 with 1 Hot Spare. I figure that's better than 5-disk RAID-6 as it leaves the one drive free.

The purpose is an Emby server (like Plex) on Windows 10. Yes, SAS is overkill, but the SAS model is like $200 cheaper than the SATA one.

I'll post more info as it comes.

Edit: Grammar xD
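For the drive testing, a typical burn-in on a Linux box looks something like the sketch below - the smartctl areca device type, the /dev/sg0 path, and the drive number are all placeholders that depend on how the OS sees the controller, so treat it as an outline only:
Code:
 # Kick off a long SMART self-test on drive 3 behind the Areca controller
 smartctl -d areca,3 /dev/sg0 -t long
 # Check the result (and the rest of the SMART data) once it finishes
 smartctl -d areca,3 /dev/sg0 -a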
 
The biggest concern I'd have with 16TB member disks in RAID5 is URE during rebuild!
 
+1. I moved to RAID6 when I increased my 4TB-disk array from 5x to 8x, and I use RAID6 on my 8x 8TB disk array. I do plan on retiring the 4TB array in a few years with 16TB drives, but we are quickly approaching the point where you near a 100% chance of a URE during any rebuild. At least that's the case with 10^14-rated disks; nowadays it's 10^15 or go home.
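To put rough numbers on that for the 16TB RAID5 above (back-of-the-envelope only, treating the URE spec as a uniform rate): rebuilding a degraded 4-disk set means reading the 3 surviving drives in full, about 48TB.
Code:
 # Expected UREs while reading 3 x 16TB during a rebuild, at 10^14 vs 10^15 bits per URE
 $ awk 'BEGIN { bits = 3 * 16e12 * 8; print bits/1e14, bits/1e15 }'
 3.84 0.384
So with 10^14-class disks you would expect several read errors per rebuild, while 10^15-class disks bring it down to well under one.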
 