What RAID to use for large multimedia storage for HTPC

jcc39

Weaksauce
Joined
Apr 10, 2003
Messages
84
Hey guys, as the title states, I currently have four 250GB Western Digital IDE drives that I plan to put into an HTPC server type application. I can get more drives if needed. Basically the computer will be used to store my DVDs, MP3s and other multimedia, and will stream the content to the other PCs in my house through gigabit LAN. I have a fast CPU and plenty of RAM for this PC. RAID 0 is out because of no protection, RAID 1 is out because it's too slow. I would like to hear what you guys would suggest as far as RAID 3, 4, 5, 0+1, etc. goes, why, and what stripe size I should use. I have all the hardware I need except for the RAID controller card; what type would you recommend and why? Remember this is for an IDE setup. Thanks for any help.
 
RAID-5 is about your only choice. Support for RAID-3 and RAID-4 is extremely limited nowadays. RAID-0+1 will get you awesome fault-tolerant performance, but chews up half the space.

RAID-5 gives you the highest-capacity fault tolerance, and capacity is by far the most important goal in a media storage application. Performance requirements are not all that high, as you will most likely be servicing one write at a time and only a handful of concurrent reads. Even a SLED setup would deliver adequate performance, unless you have more clients than I'd imagine, in which case you're probably in violation of our rules about illegal activities ;)
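To put numbers on the capacity trade-off, here's a quick sketch comparing usable space for four 250GB drives under the levels being discussed (idealized figures; real arrays lose a bit to metadata):

```python
# Usable capacity for n drives of a given size under each RAID level.
def usable_gb(n, size, level):
    if level == "RAID0":
        return n * size        # stripe everything, no redundancy
    if level == "RAID5":
        return (n - 1) * size  # one drive's worth of space goes to parity
    if level == "RAID10":
        return (n // 2) * size # half the drives mirror the other half
    raise ValueError(level)

for lvl in ("RAID0", "RAID5", "RAID10"):
    print(lvl, usable_gb(4, 250, lvl), "GB")
# RAID0 1000 GB, RAID5 750 GB, RAID10 500 GB
```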
 
For the controller card...
Try to find a Promise SX4000 or SX6000... or maybe a HighPoint RocketRAID (those are the least expensive options; more bucks = more options).

Main thing is for it to have a hardware XOR engine.
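The XOR math that engine offloads is simple enough to sketch in a few lines (toy one-byte "blocks"; a real controller does this across whole stripes):

```python
# RAID-5 parity is the XOR of the data blocks in a stripe.
# Lose any one drive, and XORing the survivors with the parity rebuilds it.
d0, d1, d2 = b"\x41", b"\x42", b"\x43"  # data blocks on three drives
parity = bytes(a ^ b ^ c for a, b, c in zip(d0, d1, d2))

# Simulate drive 1 dying, then rebuild its block from the rest:
rebuilt_d1 = bytes(p ^ a ^ c for p, a, c in zip(parity, d0, d2))
print(rebuilt_d1 == d1)  # True
```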

If this is for a dedicated server that will stream the video over a network - then you might be able to get by with a software RAID-5.



 
RAID5 would likely be your best bet. Possibly RAID10 if you really feel you need the performance and buying a few more HDs isn't an issue.

If this thing is simply serving multimedia, hardware RAID might not be necessary. I see your main concerns as fault tolerance and capacity. Read/write performance likely won't be a concern; you'd have to be playing a ton of movies at once to really stress the read capabilities of the drives, and writing to the array likely won't happen very often.

As for stripe size, I'd suggest the largest supported size. If you're serving audio/video files, they're likely going to be large. Smaller stripes typically only help when you're dealing with many smaller files.
 
jcc39 said:
...RAID 1 is out because it's too slow.
Read speed is the same as or better than a single disk, and write speeds aren't far behind, even if it's a software array. But yeah, RAID5 is the way to go.

What you should do is sell your PATA disks, buy up SATA disks, then get a SATA card. PATA cables are bulky.

HighPoint's 8-port RAID card (SATA though, the 1812a) has been nothing short of average. Haven't had to touch it in three months and don't plan to.
 
Hey guys, thanks for the replies. A lot of useful information, and more than expected. Here's a little more info about my setup. Basically I am getting a CM Stacker case and going to mount as many hard drives in there as I can. It will sit in the closet next to my router and will store anything from backups of my DVDs, to MP3s, to backups of important documents and files. The main use for this server will be to store my DVD movies, which will be played back by a dedicated HTPC with a nice vid card and CPU in my home theater room, hooked up to my front projector. Other than that there may be one or two more PCs in my bedroom or office accessing the storage for MP3s or office files. I will also be using the HTPC to back up any new DVDs I purchase to the server, so there will be some writing to the drives, but not much.

It looks like the consensus is RAID 5. I found a brief overview of it here; let me know if that kind of sums up how it works. Now onto a few more questions. From what I remember about RAID, you should only run one drive per IDE channel, even though each channel can run two drives (master/slave); otherwise you essentially eliminate any performance gain of using RAID at all. After looking around I see that most of the IDE RAID controller cards I can afford are 2-channel/4-drive or 4-channel/8-drive. So if I plan to run my four 250GB drives in a RAID 5 setup, will running a 4-channel card vs. a 2-channel card give much of a performance gain for my application? Or should I just go with the 2-channel card and run two drives per channel?

Software RAID was mentioned; can anyone give me some more info or links on how that works and how it's set up? I would assume WinXP Pro supports this; do I need a mobo that has multiple IDE controllers to make it work? Last questions: since this server will be dedicated just to storage, what are the minimum specs I could use as far as CPU, RAM and video card go so that there won't be any stuttering when other PCs are accessing the data and streaming it over the network? Also, will regular 100 Mbit LAN work or do I definitely need gigabit? Thanks so much for any help. For those of you that are interested, I will keep you guys updated with pics and whatnot as I start putting the system together.
 
The technique described here is a violation of the Windows XP EULA, and may not be discussed here. - DL

If you are just planning on throwing drives at a computer case, this might be the way to go for you. Your access times are going to be limited by your network, and you aren't going to have a boatload of users making requests to the server.

So just get a motherboard and plug in an extra IDE controller... that would give you 4 on mobo and 4 on controller...8 drives total. Plus you can always add another controller when you want.



 
I run a software HighPoint RAID5 on my server for streaming video and audio. The processor is <900MHz with 512MB RAM (what I had lying around; it was originally 256) and it can stream one HD feed and two movies over my 100Mb switch at the same time. As far as people talking about only running one drive per controller port: if you are looking at raw read/write times, maybe, but for streaming video and audio it won't make a difference that you can notice. I have a 4-port card and run eight 200GB drives off it.

I had a drive fail on Friday night; I put a new drive in and after about 40 hours the array was rebuilt. Granted, it sucked while I was watching a movie (I ended up copying it over to the local machine to finish watching), but I put the new drive in and started the rebuild before I went to bed. You just have to pray that another drive does not fail during your rebuild. I have now had three drives (all my WDs) fail in my array. The fuller your array is, the longer it takes to rebuild (processor speed also affects it). I have 80GB free on my ~1.3TB array.
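That mix of streams fits comfortably on 100Mb, as a back-of-the-envelope check shows (the bitrates below are rough assumptions: ~10 Mbit/s for a DVD stream, ~20 Mbit/s for an HD feed, and ~80 Mbit/s of usable Fast Ethernet throughput after overhead):

```python
# Rough check: can 100 Mbit Ethernet carry one HD feed plus two movies?
DVD_MBIT = 10     # DVD MPEG-2 tops out near 10 Mbit/s
HD_MBIT = 20      # assumed bitrate for an HD feed
USABLE_MBIT = 80  # realistic Fast Ethernet throughput after overhead

load = 1 * HD_MBIT + 2 * DVD_MBIT
print(load, "Mbit/s needed; fits:", load <= USABLE_MBIT)
# 40 Mbit/s needed; fits: True
```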

Software RAID will put all the workload on your mobo/CPU. A hardware RAID card has a processor on it to do all the work. Usually the hardware cards are more expensive and faster, but speed is not really something you need if it is just serving music and movies.

If you are putting all these drives in a case (why, I don't know), leave the sides off and put some fans on them. If it is in your closet, who cares what it looks like. I originally had a piece of flat stock with a bunch of holes drilled in it holding all my HDDs. Worked fine, looked like crap. Now I've built a rack and made things a bit prettier, but the performance on the other end is identical.

PATA, SATA, doesn't matter. Buy the drives that have a good warranty and are cheapest. Buying them all at once from the same lot/batch could be risky if that lot/batch is bad. That is my only real advice.
 
In general I would agree with RAID 5, but for the fact that you will be stressing your drives a lot more than with RAID 0+1 or RAID 10. And if you are talking about four drives in a single HTPC system, then heat is going to be an issue and the MTBF will be lowered (even more so since, as an HTPC system, I would imagine you would not want lots of noisy fans disturbing that surround sound!).

RAID 5 requires more drive reads and writes than RAID 10 (i.e. when reading, if the parity check for a stripe fails, all the drives have to be read to calculate the missing value; when writing, the other drives have to be read and then the parity value calculated and written), which may reduce your MTBF.
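The small-write penalty can be counted out explicitly. This is a sketch of the classic read-modify-write sequence (a full-stripe write can skip the reads, so the 4-I/O figure is the worst case):

```python
# RAID-5 small write: read old data and old parity, then write new data
# and new parity, where new_parity = old_parity XOR old_data XOR new_data.
def raid5_small_write_ios():
    return 2 + 2  # 2 reads + 2 writes

def raid10_write_ios():
    return 2      # just write the block to both mirror halves

old_data, new_data, old_parity = 0x41, 0x5A, 0x17
new_parity = old_parity ^ old_data ^ new_data
print("RAID5 I/Os:", raid5_small_write_ios(),
      "RAID10 I/Os:", raid10_write_ios(),
      "new parity:", hex(new_parity))
```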

Remember that in RAID 5, if two drives fail (a probability that increases as the number of drives increases) you will lose the whole array; in RAID 10/01 you can lose up to half the drives in the array and still not lose the data.

If you are going to go with RAID 5, you are almost better off either having a 5th drive ready for swap-out, or only using 3 of the drives in the array and leaving the other as a 'hot spare' so that the array is rebuilt straight away. If one drive fails, it is usually a sign that one of the others is on the way out!
 
cyberjt said:
In general I would agree with RAID 5, but for the fact that you will be stressing your drives a lot more than with RAID 0+1 or RAID 10. And if you are talking about four drives in a single HTPC system, then heat is going to be an issue and the MTBF will be lowered (even more so since, as an HTPC system, I would imagine you would not want lots of noisy fans disturbing that surround sound!).

RAID 5 requires more drive reads and writes than RAID 10 (i.e. when reading, if the parity check for a stripe fails, all the drives have to be read to calculate the missing value; when writing, the other drives have to be read and then the parity value calculated and written), which may reduce your MTBF.

Remember that in RAID 5, if two drives fail (a probability that increases as the number of drives increases) you will lose the whole array; in RAID 10/01 you can lose up to half the drives in the array and still not lose the data.

If you are going to go with RAID 5, you are almost better off either having a 5th drive ready for swap-out, or only using 3 of the drives in the array and leaving the other as a 'hot spare' so that the array is rebuilt straight away. If one drive fails, it is usually a sign that one of the others is on the way out!
The OP stated the drives will be in a dedicated server box streaming to the HTPC in a different room. Ugly and loud (but cool) is thus a possibility.

I don't think there's a significant reliability advantage to any particular RAID level. If drives weren't designed to be read from and written to frequently, they wouldn't make it in today's market ;) However, to the OP: please stay away from standard desktop WD Caviars in RAID - ask DiscoStu about his luck with WD2500JBs in RAID :eek: As long as he designs proper cooling into the disk enclosure and uses a capable, quality power supply, all of his drives will give years of service. If reliability is a concern, the OP can always pick up Barracudas, MaxLines, or Caviar RE drives. The makers of these drives stand behind them for five years.

RAID 0+1/10 may be able to sustain two simultaneous failures, as long as the "right" pair of drives fails. After one drive fails, your probability of losing the "wrong" drive, and therefore the entire array, is 33%, and Murphy's Law makes this 'advantage' untrustworthy.
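That 33% comes straight from counting the survivors in a four-drive RAID 10 (hypothetical drive labels for illustration):

```python
# Pairs (A1, A2) and (B1, B2); suppose A1 has already failed.
# The array only dies if A2, A1's mirror partner, fails next.
survivors = ["A2", "B1", "B2"]
fatal = [d for d in survivors if d == "A2"]
p_loss = len(fatal) / len(survivors)
print(f"{p_loss:.0%}")  # 33%
```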

There are ways to extend the fault tolerance and/or shorten the rebuilds on large RAID-5 arrays, such as RAID-50 and RAID-6. If he were looking to do more than four drives, both of those levels would deserve consideration. You also gloss over RAID 10's lower capacity potential ;)
 