I've been using BackupExec and Arcserve with LTO2, LTO3 and LTO4 tape libraries (HP SCSI-connected and Dell iSCSI- and SAS-connected) for many years and I have never seen my backups go over 2000 MByte/minute on write (read/verify can be up to 5000 MByte/minute). We even use multiplexed data streams from...
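For context, those per-minute figures can be sanity-checked with a quick conversion. The LTO native speeds in the comments are my assumption of the usual spec numbers (roughly 40/80/120 MB/s for LTO-2/3/4), not something from the post:

```python
def mb_per_min_to_mb_per_sec(rate_mb_per_min):
    """Convert a backup-software throughput figure (MB/minute) to MB/second."""
    return rate_mb_per_min / 60.0

write_rate = mb_per_min_to_mb_per_sec(2000)   # observed write speed
verify_rate = mb_per_min_to_mb_per_sec(5000)  # observed read/verify speed

print(f"write:  {write_rate:.1f} MB/s")   # ~33 MB/s
print(f"verify: {verify_rate:.1f} MB/s")  # ~83 MB/s
# 2000 MB/min is only ~33 MB/s, below even LTO-2's assumed native speed,
# which would suggest the source disks, not the tape drive, are the bottleneck.
```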
Would RAID 50 in this configuration give you a speed bump (compared to RAID 5 or RAID 6), with the added advantage of smaller RAID 5 sets (so smaller risk)?
I found a company on eBay that is/was selling several old models ("PS100/PS200/PS400" I think) and selling them with 2TB Hitachi SATA disks (and you get the original 250GB disks as well).
The ad mentioned they were using a group of 7 or 9 of these for their lab setup, all with 2TB OEM drives...
Thanks for your reply Phoenix42, much appreciated.
Can you also hint about the maximum supported disk sizes? Are there any hardware or firmware limitations or deliberate lockouts as experienced by some of the posters in this thread?
I don't have a test box yet and am not going to phase out any of our production machines at the moment.
I'm currently looking at the PS5000XV, PS300E & PS5000E or anything else I can find available with or without disks below ~$5000 used on eBay.
We have a small EqualLogic group in production and I was looking to get the cheapest unit I could find to setup a test environment for firmware and feature testing.
Performance is not an issue, but I would like to get to at least several terabytes of usable space, so if I could just add some 1, 2...
I know this is a long shot, but does anyone know if you can replace the disks in an (older model) EqualLogic unit with larger-capacity off-the-shelf disks (either selected consumer disks like Hitachi 7200rpm or the new WD 'Red' drives, or enterprise/RAID-compatible disks)?
I'm contemplating...
Have you looked at Sun/Oracle ZFS appliances at all (7xxx models)?
Especially in a setup like yours you can probably get them down to something like 30% of the list price.
In Windows 7, just do a full format; that way the drive will be completely overwritten with zeroes.
There is no point in doing multiple passes of random-data writes on modern drives.
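To illustrate what that single zero pass amounts to, here is a minimal Python sketch. It overwrites a throwaway file rather than a real device; on an actual drive you would open the raw device node instead (which needs admin rights), but the write loop is the same idea:

```python
import os

def zero_fill(path, total_bytes, block_size=1024 * 1024):
    """Overwrite a file (or raw device) with zeroes in a single pass.
    One zero pass is all a modern drive needs; extra random-data
    passes add nothing."""
    block = bytes(block_size)  # a block of zero bytes
    written = 0
    with open(path, "r+b") as f:
        while written < total_bytes:
            n = min(block_size, total_bytes - written)
            f.write(block[:n])
            written += n
        f.flush()
        os.fsync(f.fileno())  # push the zeroes out to the device

# Demo on a throwaway file standing in for a drive:
demo = "zero_demo.bin"
with open(demo, "wb") as f:
    f.write(os.urandom(4096))          # pretend this is old data
zero_fill(demo, 4096)
with open(demo, "rb") as f:
    assert f.read() == bytes(4096)     # everything is zeroes now
os.remove(demo)
```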
Just some tips on using this controller:
I have now set "Surface Scan Analysis Priority" in the controller settings to "High" (I think this is a new setting introduced in a relatively recent firmware, maybe ~6 months ago).
This causes a forced surface scan on every I/O. Since I'm running about...
This might be obvious, but Sun/Oracle still sells storage solutions based on ZFS:
http://www.oracle.com/us/products/servers-storage/storage/nas/overview/index.html
If you contact Adaptec support with your current card model, firmware version and RAID setup details they should be able to tell you what new controller models are compatible with your current RAID config.
Simply put, enterprise-class SSDs use a different memory technology (Single Level Cell versus Multi Level Cell for consumer models) that can guarantee many more write cycles. I don't believe there are any significant (theoretical) performance differences.
But did you use the option "Write image file to disc", or did you just burn the .img file as a file on the CD-R? There is a significant difference between the two. Although, with ImgBurn you'll probably get a warning if you try to burn an .img file as a normal file.
Have you tried...
You need to create a bootable CD-ROM. You can't do that with the methods you described; just copying the file(s) onto a CD-ROM isn't enough.
Try using the CD writing program ImgBurn (http://www.imgburn.com/) and use the option "Write image file to disc".
This. I have a 256GB M4 that holds OS/Applications and (Steam) games. My pictures, music and video files are on a 2TB internal harddisk.
I had an 80GB Intel SSD first but it was getting a bit too small, especially when installing a lot of games (it should be enough for just the OS and normal...
All major manufacturers have their own free hard disk test and diagnostics software. Just have a look at the support section of your hard drive manufacturer's website.
I believe RAID6 is preferable over RAID5 + hotspare. If something goes wrong during a RAID5 rebuild to the hotspare (an unrecoverable read error), your complete array will still go down.
With RAID6 you will have the extra parity that can save you from a single unrecoverable read error...
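That rebuild risk can be put into rough numbers. A back-of-the-envelope sketch, assuming the commonly quoted consumer-drive spec of one unrecoverable read error (URE) per 1e14 bits read (real-world rates vary a lot, so treat this as an order-of-magnitude estimate only):

```python
def rebuild_ure_probability(drives_read, drive_size_tb, ure_rate=1e-14):
    """Probability of hitting at least one URE while reading the
    surviving drives in full during a RAID 5 rebuild."""
    bits_read = drives_read * drive_size_tb * 1e12 * 8  # TB -> bits
    p_clean = (1 - ure_rate) ** bits_read               # chance of no URE at all
    return 1 - p_clean

# Example: RAID 5 of 8x 2 TB consumer drives; a rebuild must
# read the 7 surviving drives end to end.
p = rebuild_ure_probability(drives_read=7, drive_size_tb=2)
print(f"chance of a URE during rebuild: {p:.0%}")
# With these assumptions it comes out to roughly a 2-in-3 chance,
# which is exactly the failure mode the second parity drive in
# RAID 6 protects against.
```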
Most RAID controllers perform very badly without additional cache memory, and the HP Smart Array controllers are no exception.
This is the current performance of my 8x 2TB (Seagate ST32000542AS) RAID5 array on an HP Smart Array P410 controller with 512MB BBWC cache:
This is with the array cache set...
I'm running a Windows-based home file server with an HP P410, mainly because I had some experience with HP Smart Array controllers already and because the P410 is 100% compatible with the HP SAS expander I intended to add to my system at a later date.
Looking back I would probably have been...
HP's implementation of RAID6 / RAID60 on its 'Smart Array' controllers has an option called "alternate inconsistency repair policy" that can detect and correct errors. Obviously this is on block/sector level, not file system level.
From the Smart Array configuration utility help:
"RAID 6/60...
I use JDigest:
http://code.google.com/p/jdigest/
It integrates nicely into Windows Explorer. Requires Java, though.
To calculate checksums for a complete directory structure I also use Total Commander:
http://www.ghisler.com/
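For comparison, the same kind of whole-tree checksumming that JDigest and Total Commander do can be sketched in a few lines of Python using only the standard library (the function name here is my own, not from either tool):

```python
import hashlib
import os

def checksum_tree(root, algo="sha256"):
    """Walk a directory tree and return {relative_path: hex_digest}
    for every file, hashing in chunks so large files don't need
    to fit in memory."""
    digests = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in sorted(filenames):
            path = os.path.join(dirpath, name)
            h = hashlib.new(algo)
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(1 << 20), b""):
                    h.update(chunk)
            digests[os.path.relpath(path, root)] = h.hexdigest()
    return digests
```

Saving the returned dict and diffing it against a later run gives you a simple bit-rot check for an archive directory.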
You don't even need to buy additional drives*:
-copy your current 10TB of data to 4 of the 12 individual 3TB drives.
-remove all 12 1TB drives.
-create a new RAID6 array with the remaining 8 3TB drives.
-copy the data from the 4 individual drives to the new array.
-add the 4 remaining 3TB drives...
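The capacities in the steps above can be sanity-checked quickly (nominal drive sizes in TB; usable space after filesystem overhead will be somewhat less):

```python
def raid6_usable_tb(n_drives, drive_tb):
    """RAID 6 keeps two drives' worth of parity."""
    return (n_drives - 2) * drive_tb

data_tb = 10
staging_tb = 4 * 3                        # 4 individual 3 TB drives hold the data
interim_array = raid6_usable_tb(8, 3)     # 8-drive RAID 6 during the copy
final_array = raid6_usable_tb(12, 3)      # after adding the last 4 drives

assert data_tb <= staging_tb      # the 10 TB fits on the 4 staging drives
assert data_tb <= interim_array   # and on the interim 8-drive array
print(f"interim usable: {interim_array} TB, final usable: {final_array} TB")
# prints "interim usable: 18 TB, final usable: 30 TB"
```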
I had the same question when planning my storage server and went with 32KB in the end.
I don't think there is a really significant performance difference unless you are doing something very specific with the storage.
The Deskstar or the Ultrastar? Our Equallogic with 1TB SATA drives uses Hitachi as well but they are not the consumer model but a model from the Enterprise range.
I'm not talking about another drive failing during a rebuild but a planned online capacity expansion.
Reading bexamous' post I think it should be possible to have a drive failure without corrupting the whole existing array if indeed the controller copied the blocks to the new disk first and...
I can't get my head around this; maybe it depends on the RAID controller and its implementation of the capacity expansion algorithms as well.
When expanding an existing RAID 5 set with one extra disk, what will happen if the extra disk or a disk in the current set fails during the expansion...