Poor performance with 2x Vertex 3 stripped on RocketRAID 2720SGL

computermod14
Limp Gawd
Joined: Nov 6, 2003
Messages: 327
Just picked up the RocketRAID 2720SGL and configured my 2 Vertex 3's in RAID-0. Below are my results...

[benchmark screenshot]


Something isn't right...

The controller shipped with v1.0 firmware, so I upgraded to v1.5, which is the latest. I have not seen any improvement with the upgrade. I'm also using the latest driver from their site, not the one from the disc.

Both of the Vertex drives are on the latest firmware as well, which happens to be 2.22.

Also, when booting Windows, the drive LED on my case does not even start flashing until the Windows logo has pulsed about four times. With my onboard SATA II ports I got about 500/350 speeds, and the LED would start flashing as soon as all the colors of the logo came together.

Anybody else have any experience with this controller?
 
A couple of things. First, it's your system drive, so something like Windows services or antivirus could be interfering with the benchmarks. Second, how big are your V3s? Third, what motherboard is it, and which slot is the RocketRAID in (for example, is it an x16 physical / x1 electrical slot)? It is not a particularly impressive controller at all, and honestly, if your motherboard supports RAID 0 (and that is the mode you plan on using) and it's a recent Intel chipset, you might be better served using your integrated ports for RAID 0 instead of the RR. Only having SATA2 doesn't really matter, because the average workload on an OS drive is mostly random, higher-queue-depth activity that doesn't exceed SATA1 speeds, let alone SATA2.
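
Back-of-the-envelope, here is roughly what different slot widths can feed a card like this. A quick Python sketch; the per-lane rates and the ~20% 8b/10b encoding overhead are the standard PCIe gen1/gen2 figures, and real-world usable bandwidth will be somewhat lower still:

[code]
# Rough PCIe bandwidth sanity check for a RAID card in various slots.
# PCIe 1.x/2.x use 8b/10b encoding, so usable payload is ~80% of the
# raw line rate (2.5 GT/s per lane for gen1, 5.0 GT/s for gen2).

GT_PER_LANE = {1: 2.5, 2: 5.0}  # gigatransfers/s per lane, by generation

def usable_mb_s(gen: int, lanes: int) -> float:
    """Approximate usable bandwidth in MB/s after encoding overhead."""
    raw_gbit = GT_PER_LANE[gen] * lanes  # raw line rate in Gbit/s
    return raw_gbit * 0.8 / 8 * 1000    # 80% payload, Gbit/s -> MB/s

for gen, lanes in [(1, 1), (1, 4), (2, 1), (2, 8)]:
    print(f"PCIe {gen}.x x{lanes}: ~{usable_mb_s(gen, lanes):.0f} MB/s usable")

# An x1 electrical link (~250-500 MB/s) would cap a 2-SSD stripe well
# below the ~800-900 MB/s people report for this card.
[/code]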
 
A couple of things. First, it's your system drive... Second, how big are your V3s? Third, what motherboard is it, and which slot is the RocketRAID in?

1. It's a clean install with no AV.
2. They are 120GB as in my sig...
3. As in the sig it's a Rampage III Formula and all slots are x16 according to the manual.

I've read a lot of good things about this budget controller, and there are plenty of RAID 0 configs out there that back that up. I've seen multiple 2x SSD setups achieving 800-900 read/write. What I have above is pitiful; something is missing...
 
I'd say break the RAID. The performance increase is not something you'd notice (unless you REALLY need the bandwidth). TRIM support is more important, and you should get better 4K reads on a single drive. Also remember that larger SSDs have faster speeds than smaller drives.
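
If you do go back to a single drive, you can check whether Windows is actually issuing TRIM with fsutil. A minimal sketch, Windows-only and best run from an elevated prompt; the exact output text varies a bit by Windows version:

[code]
import subprocess

# Query Windows' delete-notification (TRIM) setting via fsutil.
# "DisableDeleteNotify = 0" means TRIM commands are being issued.
result = subprocess.run(
    ["fsutil", "behavior", "query", "DisableDeleteNotify"],
    capture_output=True, text=True, check=True,
)
print(result.stdout.strip())
print("TRIM enabled" if "= 0" in result.stdout else "TRIM disabled")
[/code]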
 
As in the sig it's a Rampage III Formula and all slots are x16 according to the manual... What I have above is pitiful; something is missing...

Your board doesn't clock the x16 slots any lower than x8 with multiple cards, so it is not a lane-bandwidth problem. I agree with the other poster that it is not worth raiding the two SSDs; you will see little to no perceptible difference in general use as an OS drive.
 
I agree with the other poster that it is not worth raiding the two SSDs; you will see little to no perceptible difference in general use as an OS drive.
And I disagree. LOL!

OP, I'm not sure what you were expecting or how your drives performed as singles.

It seems your card gets decent reviews, so the only thing I can think of is maybe a driver update?

TRIM support is no biggie unless you're using 85% of capacity.

I'd do a little AS SSD testing on the single drives and see what percentage increase you got with dual drives.
 
I'm curious what the goal is for the OP. Are you going for the highest benchmarks? I'm not sure why you would move away from the onboard SATA2 RAID controller to the add-in board.
 
It is probably the same as for the majority of people: MADZ BENCHMARKZ NUMBERZ! For the additional point of failure you add, the real-world benefits are much less pronounced, and in my experience barely noticeable in 95% of circumstances.
 
Running SSDs on a third-party controller calls for a controller designed with SSDs in mind, and those are not common (or cheap).

Even if you were able to get 800-900 MB/s of bandwidth, it wouldn't mean you'd be getting good performance in practice, because SSDs aren't about bandwidth but response time, and your controller will get in the way of good response times.
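
To put rough numbers on the response-time point: throughput is just queue depth divided by latency (Little's law), so at the low queue depths an OS drive actually sees, latency dominates and headline sequential bandwidth barely matters. A quick illustration with made-up round-number latencies, not measured values:

[code]
# Little's law: IOPS = queue_depth / latency. Latencies below are
# illustrative round numbers, not measurements of any real hardware.

def mb_per_s(queue_depth: int, latency_s: float, io_kb: int = 4) -> float:
    iops = queue_depth / latency_s
    return iops * io_kb / 1024  # KB/s -> MB/s

for name, latency in [("SSD direct, ~0.1 ms", 0.0001),
                      ("SSD behind a RAID card, ~0.3 ms", 0.0003)]:
    print(f"{name}: QD1 4K reads ~ {mb_per_s(1, latency):.0f} MB/s")

# ~39 MB/s vs ~13 MB/s: tripling the response time cuts low-queue-depth
# performance to a third, no matter what the sequential numbers say.
[/code]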
 
I got this card with the goal of doing RAID 5 with five 120GB OCZ Agility 3 drives. I first tried setting it to write-through and got a better result (300 MB/s average read) than write-back (120 MB/s average read), and neither of these is anywhere near a single drive's read/write speed. I'm not sure how much of a performance hit RAID 5 causes, but I wanted something that would balance speed with fault tolerance, and so far I only have the latter. Part of the issue is that the drives are only detected at 300 MB/s, so I suspect the cables are to blame (cables really need to list their max supported speed when you buy them).
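
For reference, the back-of-the-napkin RAID 5 math for a five-drive set; these are the textbook numbers, and actual controller behavior varies:

[code]
# Rough RAID 5 numbers for n identical drives (illustrative only).
n, size_gb = 5, 120

usable_gb = (n - 1) * size_gb  # one drive's worth of space goes to parity
fault_tolerance = 1            # survives exactly one drive failure
# Classic small-write penalty: each random write = read old data +
# read old parity + write new data + write new parity = 4 back-end I/Os.
write_penalty = 4

print(f"{n} x {size_gb} GB in RAID 5: {usable_gb} GB usable, "
      f"tolerates {fault_tolerance} drive failure, "
      f"~{write_penalty}x back-end I/O per small random write")
[/code]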
 
Part of the issue is that the drives are only detected at 300 MB/s, so I suspect the cables are to blame.
Let me assure you... cables are NOT your problem. ;)
 
Well, even though I don't think much of HighPoint controllers in general, other people are reporting much better results with your card. Either you have a bad card, a bad cable set, or some other issue in the system. Try changing the cable set first and go from there.
 
So I finally got new cables, and now they recognize the drives as SATA 600. Still getting poor performance in RAID 5.
[benchmark screenshot]

and

[CrystalDiskMark screenshot]


FW is 1.2 on the card, and the drives are actually Agility 3s. This is nowhere close to what I would expect out of them, though.
 
This is nowhere close to what I would expect out of them, though.

I would look for a real RAID card (meaning LSI or Areca with a BBU) if you want to use RAID 5. The last CDM benchmark is abysmal. It looks like you have a software RAID 5 of USB sticks on a Pentium 4.
 
So I finally got new cables, and now they recognize the drives as SATA 600.
My tired eyes must be missing it.

Where do you see SATA 600 being recognized?
 
That's the wrong card to be using for RAID 5...

The manufacturer should be strung up for even advertising that it will do it, tbh.
 
Everything I've ever been told is that if you're spending less than, say, $400-500 on a RAID card (thinking LSI 9260-4i with FastPath), putting your solid-state drives in RAID doesn't net you any performance gain.

In my experience, a RAID card with that little PCB is more or less a software or fake RAID implementation and passes the processing off to the CPU.

I'm sure you will get better speeds even on the onboard Intel ports.
 
The RAID card's BIOS lists their speed.

OK, I think I made an improvement.

- I found that PCI_E5 on my mobo only supports PCIe 1.1 x4 speed, so I moved the card to one of the PCIe 2.0 x16 slots, which should fully support it, and my speeds are a lot better (see http://www.msi.com/product/mb/P55-GD80.html#/?div=Detail for the PCIe details; rough math below).
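
Rough math on why the slot move matters, assuming the 2720SGL negotiates the PCIe 2.0 x8 link it is specced for (illustrative figures with the usual 8b/10b overhead):

[code]
# Usable bandwidth before vs. after the slot change (approximate).
def usable_mb_s(gt_per_lane: float, lanes: int) -> float:
    return gt_per_lane * lanes * 0.8 / 8 * 1000  # 8b/10b overhead -> MB/s

before = usable_mb_s(2.5, 4)  # PCI_E5: PCIe 1.1 x4
after = usable_mb_s(5.0, 8)   # PCIe 2.0 x16 slot, card linking at x8
print(f"PCIe 1.1 x4: ~{before:.0f} MB/s, PCIe 2.0 x8: ~{after:.0f} MB/s")

# ~1000 MB/s vs ~4000 MB/s: the old slot alone could cap a 5-SSD array.
[/code]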

I should note this is using write-through, because write-back did not seem as reliable.

[benchmark screenshot]

[second benchmark screenshot]



The question is what boost I would see from a sub-$500 card that supports up to eight SATA 3 drives in RAID 5 (I currently have 5 SSDs).

I did a RAID 0 just for fun.
[RAID 0 benchmark screenshot]
 
Well, that will make a difference. Back in post #2 I asked and you said it was in an x16. In any case, worst case, if you are using any of the x16 slots it should never have clocked lower than x8, and with your card there would be no performance difference between x16 and x8. It is possible one of the other slots has an IRQ issue, or you have a bad slot.
 
Well, that will make a difference. Back in post #2 I asked and you said it was in an x16...

I think your reply was to the OP, whereas shilohj is a different person with a different setup. I started to reply yesterday, but when I realized it was two separate people, I ended up not posting.
 
I think your reply was to the OP, whereas shilohj is a different person with a different setup.

I got confused by that myself.
 
Sorry, I wasn't trying to thread-jack or confuse anyone; I just thought it would be better to have two posts about the same card in the same thread.
 
I think your reply was to the OP, whereas shilohj is a different person with a different setup.

Ah. *puts on Roseanne Roseannadanna voice* Never mind.
 
I'd say break the RAID... TRIM support is more important, and you should get better 4K reads on a single drive.

I've wondered why people bother with RAID 0 on SSDs. The only real purpose I can see is to get one bigger drive, and even then you might prefer JBOD to two drives split equally. Not to mention RAID 0 doubles the chance of failure.
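
The arithmetic behind that last point: if each drive fails independently with probability p over some period, a two-drive stripe fails with probability 1 - (1 - p)^2, which is roughly 2p for small p. A quick sketch with a made-up p:

[code]
# RAID 0 failure odds: the stripe dies if EITHER drive dies.
# p is a made-up illustrative failure probability, not a measured AFR.
p = 0.02
stripe_failure = 1 - (1 - p) ** 2
print(f"single drive: {p:.1%}, two-drive stripe: {stripe_failure:.2%}")

# 2.0% -> 3.96%: roughly double for small p, as the rule of thumb says.
[/code]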
 
[benchmark graph screenshot]


I think they might be right about the TRIM. This is three disks on the same card as yours after four or five months. It used to be a constant 1400, but at a few points in the test there it slips below 1000. I don't really care, though; the system still mops the floor with my other PC that only has one SSD.

As to your problem, I would say it looks like you are using the wrong driver for the card? It's only a guess, but I had similar issues with the card, as the manual sucks. After I got the latest drivers off their web site it ran really well, though.

As I said, it's been running for four to five months now, and I was considering wiping the drives to get the performance back, but then I thought, screw it, I'm not that geeky; I'll just get another drive, throw that in, and that will do the same thing. They're only $80 now, aren't they?
 