Interesting Areca problem: not showing all the disks

Ockie

So this is my problem.


I have a 1280ML Areca SATA II RAID controller that I want to run in JBOD, which should be no problem. However, I'm having an odd issue.


The controller's control panel and BIOS see all my 1TB WD GP drives with no problems; I can use the controller tools to identify each drive, power it, and monitor it (the controller says the drives are normal).

However, when I use Disk Management, I cannot see all the drives, nor are they listed under My Computer (obviously); only 5 drives show up in Disk Management. I have reseated the controller, the cables, and everything else, including the drives. I assume the connections are fine, since the controller does identify the drives and can show me monitoring information on them.
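
For anyone wanting to compare counts, here's a rough sketch (Python 3 assumed; wmic ships with Windows 2003) that lists what Windows itself enumerates:

    import subprocess

    # wmic ships with XP/2003 and later; this lists the physical disks Windows
    # enumerates, to compare against the 24 the controller's BIOS reports.
    out = subprocess.check_output(
        ["wmic", "diskdrive", "get", "Index,Model,Status"], text=True
    )
    rows = [line for line in out.splitlines() if line.strip()]
    print(out)
    print("Windows enumerates %d physical disk(s)" % (len(rows) - 1))  # minus the header row

If that count stays at 5 while the controller BIOS shows 24, the gap is between the miniport driver and the OS, not between the drives and the controller.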


The drives contain data and have been used before; we are talking about 24 drives here, so something is definitely up. I also reflashed the controller, with no luck.


Any ideas? Anyone else had a similar problem?


EDIT: I also get a PCI Failure error at boot (but I can press any key to continue), so I'm wondering if this controller isn't fubared.
 
Try hooking a non-showing drive to an internal controller and see if it works. I had 8 WD10EACS drives spontaneously fail on me.
 
Are you sure the Areca is compatible with your mobo? A PCI error usually indicates some kind of slot/mobo issue (at least it does with my 1261).
 
Are you sure the Areca is compatible with your mobo? A PCI error usually indicates some kind of slot/mobo issue (at least it does with my 1261).

I'll move it to a PCIe 1.0 slot and give it a go. It's just interesting because it didn't do this for the first 5-10 boots.
 
Try hooking a non-showing drive to an internal controller and see if it works. I had 8 WD10EACS drives spontaneously fail on me.

I don't believe that 19 drives can fail at once in a 24-drive system.
 
Come on, try running them in RAID 5/6 (maybe they'll show up); you all know that we want you to :p
 
Come on, try running them in RAID 5/6 (maybe they'll show up); you all know that we want you to :p

As much as I want to, I have nowhere to put all this data, and I have GP drives, which are notorious for getting kicked out of arrays.




I have a theory about this controller and the GP drives: the controller defaulted to a RAID setting with nothing configured, and I'm wondering, given the infamous TLER issue, whether the RAID card blacklisted the drives before I switched to JBOD. I don't know; I'm starting to grasp at straws here.
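
For background, the TLER setting maps to SCT Error Recovery Control, which smartmontools can query on drives that support it (whether these particular GP drives honor it is another question). A minimal sketch, assuming smartctl is installed and the drive shows up as a plain disk on a test bed; the device path is just an example:

    import subprocess

    DEV = "/dev/sdb"  # example path on a Linux test bed; adjust for your setup

    # Query the current SCT ERC read/write timeouts (reported in 0.1 s units).
    subprocess.run(["smartctl", "-l", "scterc", DEV], check=False)

    # If the drive supports it, cap error recovery at 7.0 s (the classic TLER
    # value) so a RAID controller doesn't time the drive out mid-recovery:
    # subprocess.run(["smartctl", "-l", "scterc,70,70", DEV], check=False)

A drive that spends longer on error recovery than the controller's timeout looks dead to the card, which is exactly the kicked-from-array behavior GP drives are known for.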

I just moved the controller to a different slot; no go, same error, same number of drives showing up. I've been through the entire manual, and nothing indicates this type of issue. For those who want to know, the motherboard is a Supermicro X7DWN+, so I highly doubt there is a compatibility problem (others also have this same setup working).


I'm thinking it's either the controller or a motherboard issue, but since the other slot was tried too, I'm guessing the controller is fishy.
 
Update:

I reseated everything again; no go.
I pulled the controller; the system boots fine with no errors.
I pulled all the hard drives and tested them with the Areca one by one; the Areca now doesn't see any of the drives, even with reboots.
I pulled all the hard drives and grabbed a test bed machine; each drive checks out fine and is working 100% (see the sketch at the end of this post).


This leaves me with the controller as the issue. Everything else is working fine and has been tested.


This is the weirdest problem. I also tried using the motherboard's onboard controller, but it does not see the drive either (I tested one on it); I changed the BIOS settings and still no go. So I'm quite at a loss here. I'm going to dig around to see if I can find a spare SATA controller and toss it in, just to eliminate any software-side issues... but what complicates things is the fact that 5 drives did manage to show up and operate, just not all of them.
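
For the curious, the one-by-one check on the test bed boils down to confirming each drive answers raw reads at all; a minimal sketch of that idea (Python 3 on Windows, run as Administrator; the drive index is just an example):

    CHUNK = 1024 * 1024  # 1 MiB, a multiple of the 512-byte sector size

    ok = True
    # Example index: pick the physical drive you're actually testing.
    with open(r"\\.\PhysicalDrive1", "rb") as disk:
        for mib in range(64):  # read the first 64 MiB
            if len(disk.read(CHUNK)) != CHUNK:
                print("short read at MiB %d" % mib)
                ok = False
                break
    if ok:
        print("first 64 MiB read cleanly")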
 
Hold the phone, Ockie... don't toss that card just yet.

Two questions:

1) Was each drive configured in the 1280ML's web GUI as a "pass through" drive? As in, Web GUI -> Physical Drives -> Create Pass Through. Note this process does NOT write any data to the disk, so any existing data/partition stays intact whether you "create" or "delete" a pass-through drive.

Setting the controller to JBOD mode alone doesn't make all the drives suddenly show up in Windows' diskmgmt.msc, AFAIK.

2) Is the controller seated in the first PCIe slot nearest the memory slots? In other words, the PCIe slot you'd normally seat the video card in on a desktop machine.
 
Hold the phone, Ockie... don't toss that card just yet.

Two questions:

1) Was each drive configured in the 1280ML's web GUI as a "pass through" drive? As in, Web GUI -> Physical Drives -> Create Pass Through. Note this process does NOT write any data to the disk, so any existing data/partition stays intact whether you "create" or "delete" a pass-through drive.

Setting the controller to JBOD mode alone doesn't make all the drives suddenly show up in Windows' diskmgmt.msc, AFAIK.

2) Is the controller seated in the first PCIe slot nearest the memory slots? In other words, the PCIe slot you'd normally seat the video card in on a desktop machine.

1) I couldn't create a pass-through drive; it said there were no available disks. However, it did see all the disks.

I set the controller to JBOD and restarted multiple times; no go. Also, only 5 drives appeared. Once I started moving the "working" drives around, they also disappeared, even when re-inserted into their original slots.

2) I tried all the PCIe slots. Same results.
 
I think I have found the problem: incompatibility with the GP drives.

Out of curiosity, I dropped my Raptors into the system and pulled all the GP drives. The system boots just fine, no errors, even the PCI test passes, and for the first time the controller shows the drive (the Raptor) in its POST information upon boot, which it never did before.

Also, I can hot-swap and hot-plug the drive, and it will reappear and remain fully functional.


I'm going to guess that there is something up with these GP drives and this controller.
 
1) I couldn't create a pass-through drive; it said there were no available disks. However, it did see all the disks.

I set the controller to JBOD and restarted multiple times; no go. Also, only 5 drives appeared. Once I started moving the "working" drives around, they also disappeared, even when re-inserted into their original slots.

2) I tried all the PCIe slots. Same results.

Hmm... are you using the web GUI or the BIOS (at-boot) interface to try to create the pass-through drive? It SHOULD be allowing you to create a pass-through drive with any of the ones that are inserted. When you say it "sees" all the disks, do you mean you see them as in the following pic, when you click "Raidset Hierarchy" and scroll down to "IDE Channels"? One last question: did you install the STORPORT driver on your Windows 2003 server, or the SCSIPORT driver? You need to be using the STORPORT driver, just FYI.

[screenshot hddlz1.jpg: Raidset Hierarchy view]
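
If you want to double-check which stack actually loaded, driverquery ships with 2003; a rough sketch (matching on "arc" is an assumption, so check the service name in your Areca driver's INF):

    import subprocess

    out = subprocess.check_output(["driverquery", "/v"], text=True)
    for line in out.splitlines():
        lowered = line.lower()
        # "arc" is a guess at the Areca miniport's service name; adjust to
        # whatever the driver's INF actually registers.
        if "arc" in lowered or "storport" in lowered or "scsiport" in lowered:
            print(line)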
 
Hmm... are you using the web GUI or the BIOS (at-boot) interface to try to create the pass-through drive? It SHOULD be allowing you to create a pass-through drive with any of the ones that are inserted. When you say it "sees" all the disks, do you mean you see them as in the following pic, when you click "Raidset Hierarchy" and scroll down to "IDE Channels"? One last question: did you install the STORPORT driver on your Windows 2003 server, or the SCSIPORT driver? You need to be using the STORPORT driver, just FYI.

[screenshot hddlz1.jpg: Raidset Hierarchy view]

I see the very same list as you do. I use the web GUI tool, but I can also use the BIOS tool... same results.

This is the exact response I get:
"No Disk Available For Pass Through"


EDIT: It appears as if the RaptorX is now producing the same results as the GP drives.



This is what I can see using the web GUI or even the BIOS tool:

[screenshot: controller view of the drives, bottom cropped]

I'm using STORPORT; actually, I tried both.
 
I wrote much earlier in your Galaxy 5 thread that I initially had problems with the drives being seen. After hours of trying different things, I realized the controller (or cable) seemed happiest, in terms of seeing all the drives, when the cable was inserted firmly into the miniSAS receptacle on the controller and then backed out just a hair. When the cables were pressed in firmly, I would lose drives in blocks of 4, which made it evident the issue was on the miniSAS end, since 4 drives at a time would be missing.
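
Since each SFF-8087 connector carries four lanes, drives missing in blocks of 4 implicate a connector rather than the drives themselves. A tiny sketch of that mapping (the 1-to-24 channel numbering is an assumption; match it to your card's labeling):

    # Example: channels the controller isn't showing.
    missing = [9, 10, 11, 12]

    for ch in missing:
        connector = (ch - 1) // 4 + 1  # connectors numbered 1-6 on a 24-port card
        lane = (ch - 1) % 4 + 1
        print("channel %2d -> connector %d, lane %d" % (ch, connector, lane))

    # If all four lanes of one connector are absent, suspect that cable or
    # receptacle before suspecting four drives failing at once.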

But you say you're seeing the drives... hmm... do you have any other brands of drives you can try? Lastly, perhaps I should just test your controller with my drives.
 
I wrote much earlier in your Galaxy 5 thread that I initially had problems with the drives being seen. After hours of trying different things, I realized the controller (or cable) seemed happiest, in terms of seeing all the drives, when the cable was inserted firmly into the miniSAS receptacle on the controller and then backed out just a hair. When the cables were pressed in firmly, I would lose drives in blocks of 4, which made it evident the issue was on the miniSAS end, since 4 drives at a time would be missing.

But you say you're seeing the drives... hmm... do you have any other brands of drives you can try? Lastly, perhaps I should just test your controller with my drives.

I am thinking of overnighting you this controller; I'll talk to you in PMs about it.

EDIT: I just checked the machine after you made those configuration changes, and it appears the Raptor is now functioning the way the GP drives should. Now I'm really questioning the GP drive compatibility.

I'm going to call Areca and see if there is some known compatibility issue with these drives.
 
I don't know if the controllers are close to identical, but I'm using 9 WD10EACS drives with an Areca 1230 in a RAID 5 setup, so a compatibility problem seems unlikely.

Sounds more like a bad controller.
 
This is what I can see using the web GUI or even the BIOS tool:

It shows you've already got the disk in a raidset. If the disk is in a raidset, you can't use it as pass-through without breaking up the raidset first.

You cut the bottom off that screenshot. What does it say about Usage on Ch17?
 
It shows you've already got the disk in a raidset. If the disk is in a raidset, you can't use it as pass-through without breaking up the raidset first.

You cut the bottom off that screenshot. What does it say about Usage on Ch17?

It says JBOD. When the controller is set to JBOD, it automatically places all the drives in that raidset as I insert disks.

Update:

To eliminate any problems, and also to accommodate a RAID 6 implementation, I have decided to toss the WD GP 1TB drives. I just overnighted a bunch of Seagate 1TB drives, so tomorrow I will know exactly what the problem is.
 
Well, I'm pretty sure that if you want to access individual disks, you have to turn off automatic JBOD activation and manually configure the drives as pass-through.
 
Well, I'm pretty sure that if you want to access individual disks, you have to turn off automatic JBOD activation and manually configure the drives as pass-through.

This is how my Areca controller works anyways.

Remember that if you used a drive with the JBOD function before, the Areca writes that setting to the drive, so even if you change the BIOS, the drive stays the same when you add it again later. You have to delete the settings on that specific drive in the BIOS to reset its status; otherwise it will be a JBOD drive in whatever Areca controller you plug it into. And you should try to set it to PASSTHRU, not JBOD.
 
I know Arecas do have compatibility issues with certain motherboards, but I doubt that is the issue in this case, since there are other members with pretty much identical setups here... I know my ARC-1160ML2 HATED my Tyan K8SR board; it would fail out the array/card whenever I tried to write to it, in both *nix and Windows, but everything works beautifully in my H8DM8-2.
 
I know Arecas do have compatibility issues with certain motherboards, but I doubt that is the issue in this case, since there are other members with pretty much identical setups here... I know my ARC-1160ML2 HATED my Tyan K8SR board; it would fail out the array/card whenever I tried to write to it, in both *nix and Windows, but everything works beautifully in my H8DM8-2.

Yeah, I thought it was a compatibility error too, but odditory, among many others, is running this exact card and motherboard combination, so I don't think that's the situation.

Thanks for all your tips guys.


I overnighted 5 drives, so tomorrow I will know what the deal is. I bought Seagate drives this time around, so I will go straight to a RAID 6 implementation and figure this all out.
 