Build-Log: 100TB Home Media Server

@treadstone

If you're looking for a plain-Jane SAS/SATA card that can work with an expander, try the SuperMicro AOC-SASLP-MV8. There's an owners' thread within this Data Storage Systems forum.
 
Hey treadstone, can you set up your system as 3x 16-drive RAID 0 arrays and then use Windows software RAID 0 across those to get a 100TB RAID 0 array? Please benchmark that and report the results. I am curious to see what kind of sequential speed is possible on such a setup, and it would be a kick to see the results. I bet you would be limited by the board bus or the northbridge/southbridge links.

Btw, nice project. Personally, for a long-term solution I feel you need to change your strategy.
I would set up the server as a pure SAN system with a low-power CPU and a SAN operating system like Openfiler installed, and set up an iSCSI target.

Then I would have a separate server running your Windows shares or streaming/encoding software that connects to the iSCSI target on the 100TB server. I would personally use virtualization as well, so you could have multiple servers utilizing this storage, and it helps from a redundancy/backup point of view: if your virtualization platform went down, you could restore the VM to another workstation, connect to the iSCSI target, and keep going. I think RAID 60 would be a good option, so why have you decided against it?

Good luck with your project it will turn out awesome!
 
If you are looking for Raid information, check this out:

http://blog.kj.stillabower.net/?p=93

Last summer I used a Core 2 Quad to Monte Carlo simulate the reliability of different Raid configurations 1,000,000 times each. It took about 2 weeks. If you are going to use Raid, RAID6 is the only way to go.
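
For anyone curious what a simulation like that looks like, here's a minimal Python sketch of the idea (this is not the code from that blog post; the MTBF, rebuild time and mission length are placeholder assumptions, and failures of replacement drives are ignored):

Code:
import random

def array_survives(n_drives, parity_drives, mtbf_h=500_000, rebuild_h=24, mission_h=5 * 8760):
    """One Monte Carlo trial. The array is lost if more than `parity_drives`
    drives are down (failed and still rebuilding) at the same time."""
    fail_times = sorted(random.expovariate(1 / mtbf_h) for _ in range(n_drives))
    rebuilding = []                      # completion times of rebuilds in progress
    for t in fail_times:
        if t > mission_h:
            break
        rebuilding = [done for done in rebuilding if done > t]   # drop finished rebuilds
        rebuilding.append(t + rebuild_h)
        if len(rebuilding) > parity_drives:
            return False                 # too many concurrent failures -> array lost
    return True

def survival_rate(n_drives, parity_drives, trials=100_000):
    return sum(array_survives(n_drives, parity_drives) for _ in range(trials)) / trials

print("RAID 5, 10 drives:", survival_rate(10, 1))
print("RAID 6, 10 drives:", survival_rate(10, 2))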

 
Just FYI this exists:

mhddfs under Linux. It's sorta like a simple version of WHS. You can have 48 separate drives, each with its own file system, and all mhddfs does is 'merge' all the directory trees into one.

So if each drive has a 'movies' folder, when you look at the merged view you'll see one 'movies' folder with all the movies in it.

When you write to the movies folder, a new file will be put on the first drive that has enough space. Files are not split across drives at all. You can always add/remove drives. If the whole thing blows up in your face, each drive works just fine on its own. Technically you could probably format all the drives with NTFS and dual boot with Windows... so if you booted into Windows you'd have 48 separate drives; reboot into Linux and you'd have 48 separate drives plus a 49th drive that is all of them merged together.
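
To make that concrete, here's a toy Python sketch of the placement policy described above — purely illustrative, not mhddfs itself, and the mount points are hypothetical:

Code:
import os
import shutil

DRIVES = ["/mnt/disk1", "/mnt/disk2", "/mnt/disk3"]   # hypothetical mount points

def merged_listing(subdir):
    """Union view: one 'movies' folder built from every drive's 'movies' folder."""
    names = set()
    for drive in DRIVES:
        path = os.path.join(drive, subdir)
        if os.path.isdir(path):
            names.update(os.listdir(path))
    return sorted(names)

def place_file(src, subdir):
    """New files land whole on the first drive with enough free space."""
    size = os.path.getsize(src)
    for drive in DRIVES:
        if shutil.disk_usage(drive).free > size:
            dest = os.path.join(drive, subdir)
            os.makedirs(dest, exist_ok=True)
            return shutil.copy2(src, dest)   # a file is never split across drives
    raise OSError("no single drive has enough free space")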

Downside to it? It's Linux, and Linux sometimes blows with HW support... not sure what controller you have. Also, mhddfs is a FUSE filesystem, so it probably has some performance issues, but for network storage I doubt it's something you would notice.
 
If you are looking for Raid information, check this out:

http://blog.kj.stillabower.net/?p=93

Last summer I used a Core 2 Quad to Monte Carlo simulate the reliability of different Raid configurations 1,000,000 times each. It took about 2 weeks. If you are going to use Raid, RAID6 is the only way to go.


I was thinking of doing something similar. I probably will at some point, but really cool work.
 
You can also use Linux with aufs2 in order to merge the contents of your drives into one virtual drive. In fact, I have been using it for a long time and it works just fine. There probably is something like that for Windows too, or maybe you can use Libraries.
 
Yes, they can be writable, and it works quite intelligently. For instance, if I have a movie on disk 3 and I go to its "union" folder and copy a subtitle to it, the file gets written to the same disk as the movie. I generally avoid copying big files to the union folder and prefer to write to the individual drives as that feels safer, but so far everything has worked the way I expected it to.

PS: Every disk has the same root directories (Movies, Music, Cartoon, Software ...), and when I rip a new movie I just write it to the first disk that has enough free space.
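
Roughly, the write behaviour being described works like this (a toy Python sketch of the idea, not aufs's actual branch-selection code; paths are hypothetical):

Code:
import os
import shutil

DISKS = ["/mnt/disk1", "/mnt/disk2", "/mnt/disk3"]   # hypothetical union branches

def write_to_union(src, rel_dir):
    """If a disk already holds rel_dir (say, that movie's folder), the new file
    (the subtitle) goes to that same disk; otherwise fall back to any disk
    with enough free space."""
    for disk in DISKS:
        target = os.path.join(disk, rel_dir)
        if os.path.isdir(target):
            return shutil.copy2(src, target)
    for disk in DISKS:
        if shutil.disk_usage(disk).free > os.path.getsize(src):
            target = os.path.join(disk, rel_dir)
            os.makedirs(target, exist_ok=True)
            return shutil.copy2(src, target)
    raise OSError("no disk has enough free space")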
 
Thanks guys for the suggestions. Also, I apologize for the lack of updates. I've been incredibly busy with sooooo many other things that I had very little time to actually work on the server. Anyway, I think I have found the solution I was looking for:

FlexRAID

I had looked at mhddfs as a possible solution for the single-volume view of the 96TB I wanted, but after installing Ubuntu Server and trying to set up mhddfs, I decided that it wasn't really what I wanted. Especially considering that I am not too familiar with Linux and would have to spend a lot of time I don't have learning how to use and configure Linux properly!

I did a lot of searching and reading on how else I could implement the single-volume view in Windows. That's when I stumbled across FlexRAID. I basically did a Google search for an mhddfs-equivalent Windows driver and found some old posts from 2008 where some guy in another forum mentioned he was working on incorporating similar features to mhddfs in a Windows driver. Unfortunately, he didn't make it clear that he was incorporating this into FlexRAID. It took me a while longer to find some updated posts and links to his website, where I finally found answers to most of my questions.

So I blew away the freshly installed Ubuntu Server and decided to go back and re-install Windows Server 2008 R2. After installing and setting up FlexRAID and FlexRAID-View, I had everything I wanted... that is, until I got to the network sharing of the FlexRAID-View folder!

Apparently there are still some issues with any Windows OS newer than XP. From XP you can access the server over the network and see the shared FlexRAID-View folder exactly the way I wanted it, but as soon as I try to access the same shared folder from a Vista or W7 machine, I get a network permission error and can't access the share :(

I posted this issue on the author's forum, but to date he has not responded... :(

He had released some patches for this issue, but they don't really seem to resolve the problem.

Also, since I switched the Areca ARC-1680i controller into JBOD mode (I will no longer be running my storage pool in any hardware RAID mode), I have to disable TLER on all of my 52 WD20EADS drives again. So that's what I am doing while I am posting this... 2 at a time... whoo hoo, what fun... :)

The nice thing about FlexRAID is that I can run the 'snapshot' type parity generation either after I add a movie to my collection or on a scheduled basis. Since the data in the storage pool doesn't change that often (only when I add a movie or a music album), this is the perfect setup for me. It keeps the power consumption to a minimum (since there is usually only a single drive in use) and still provides me with some sort of data redundancy. I tested the data recovery via FlexRAID by copying a movie into the FlexRAID-View folder, running the parity generation, and then deleting the 22GB ISO image I had just copied. Then I ran the FlexRAID recovery, and a few minutes later the 22GB ISO image was back!... Perfect :)
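
For anyone wondering what the 'snapshot' parity idea boils down to, here's a toy Python illustration — this is NOT FlexRAID's actual engine, just the concept of computing a single parity image over the data drives once the data has settled, so a lost drive (or, with some bookkeeping, a deleted file) can be reconstructed from the rest plus the parity:

Code:
from pathlib import Path

def xor_blocks(blocks):
    """XOR a list of byte strings together, padding shorter ones with zeros."""
    out = bytearray(max(len(b) for b in blocks))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

def build_parity(drive_images, parity_path):
    """Snapshot step: XOR every data drive image into one parity image."""
    parity = xor_blocks([Path(p).read_bytes() for p in drive_images])
    Path(parity_path).write_bytes(parity)

def recover_drive(surviving_images, parity_path, out_path):
    """Recovery: the missing drive is the XOR of the parity and the survivors."""
    blocks = [Path(parity_path).read_bytes()]
    blocks += [Path(p).read_bytes() for p in surviving_images]
    Path(out_path).write_bytes(xor_blocks(blocks))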
 
Yeah, Linux isn't always the easiest to deal with, and mhddfs isn't robust enough to make it worth the hassle.

BTW the FlexRAID guy shows up on these forums every so often. Not sure how many here use it tho. Most often I see it being used by people on avsforum.
 
I know he pops in here every now and then. I've seen his posts on avsforum and on this forum. From the looks of it, he hasn't been online or checked his messages since the 19th of last month! You would think that he would check his own forum a bit more frequently... Hope nothing serious (as in bad) happened to him.

Anyway, FlexRAID works like a charm, exactly the way I wanted the system to be set up. The only minor thing that still needs to be worked out is the network sharing issue for Windows 7. Once that is fixed, I'm going to be really happy...
 
Currently I think I average about 25 Blu-rays (complete discs stored as .ISO images) per TB.

So this should give me room for about 2250+ movies... :)

Just curious. What device/player do you stream the BD ISOs that have DTS-HD or TrueHD audio to?

TIA
 
Just curious. What device/player do you stream the BD ISOs that have DTS-HD or TrueHD audio to?

TIA

I use my family room HTPC. I built this HTPC back at the beginning of 2008. It's housed in an Origen AE S21T enclosure in black:

http://www.origenae.co.kr/en/htpc_s21t.htm

I recently changed the video card to an AMD/ATI one (sorry, forgot the model number; I will check on it later when I'm back home) that has the ability to stream DTS-HD and Dolby TrueHD via HDMI to the Denon AVR-4310CI.

I have 9 WD10EACS 1TB drives inside the S21T but only 7 of them are actually hooked up to the motherboard. There aren't enough ports on the motherboard to hook them all up at the same time. The motherboard has 8 SATA ports. I used one for the LG Blu-ray/HDDVD drive and the remaining 7 for the HDDs. The HDDs are all full with Blu-ray movies. Once I have completed moving all of the ISO images off the HTPC onto the server, I will use the HDDs in the HTPC to store recorded HDTV shows and movie trailers on them.

For Blu-ray playback, I use Cyberlink Power-DVD.

Unfortunately, the Oppo BDP-83 doesn't allow for ISO Blu-ray movies to be played back... that would be nice...
 
Interesting... I was on a quest to build over 100TB as well... we have similar builds..
But it seems like you chose consumer parts instead of enterprise parts.. :D

For RAID, I was going to go the one Areca RAID card plus SAS expander route, but after talking to Areca support, I was convinced that two RAID cards (two processors) would be a lot faster than having only one, so I went with dual Areca 1680 RAID cards with 4GB cache each.

I am not sure whether I should go with RAID 6 or RAID 60 now. I am going to benchmark the speed difference; I am pretty sure RAID 60 would be faster, but storage-wise I want something that's solid..

Here is my space after RAID:
RAID 6: 40TB per RAID card, so 80TB total
RAID 60: 36TB per RAID card, so 72TB total

4 global hot spares in this setup, 2 per RAID card.
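
Those numbers line up with 24-port cards, 2TB drives and 2 hot spares per card — a quick sanity check (my assumptions spelled out, including two RAID 6 spans per card for the RAID 60 case; not figures from the post):

Code:
ports_per_card = 24          # Areca 1680ix-24
hot_spares     = 2           # per card
tb_per_drive   = 2

array_drives = ports_per_card - hot_spares             # 22 drives in the array

raid6  = (array_drives - 2) * tb_per_drive              # one RAID 6 set, 2 parity drives   -> 40
raid60 = (array_drives - 2 * 2) * tb_per_drive          # two RAID 6 spans striped, 4 parity -> 36
print(raid6, raid60)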


So my quest to get over 100TB isn't quite there yet.. but I have two of these puppies.. not sure whether I should just expand my space with both, or do an Active-Passive setup with them to ensure the data is always there..

And yea.. I will have two volumes.. I wish I could merge them into one.. going to play with StarWind once the RAID finishes building.. it's been 30 hours so far and progress is only at 52%..



Here is my spec..

Chenbro RM91250
Intel Server motherboard S5520HC
Intel Xeon E5520 x 2
Kingston PC3-8500 ECC 6GB per CPU
Intel X25-M 160GB Solid-State for OS
100x Hitachi UltraStar 2TB HDD
Areca 1680ix-24 4GB ram x 2
Areca Backup Battery Module
Intel Quad-Pro NIC
Intel Remote Management Module


I was going to use the WD Green hard drives.. but they're just not reliable in this huge data environment.. out of the 3 drives.. I ended up getting the Hitachi UltraStar.. since that's what EqualLogic uses inside their boxes.



Putting HDs one by one into the tray..


Intel series



Areca Raid card with 4gb cache



View of the motherboard



Close view of dual Areca raid card
 
Now that wasn't cheap. Looks like you spent about $35-40k on everything...which I have to say is pretty bloody high. Looks good though. You should rotate one of the heatsinks on the motherboard however. The fan is facing inwards instead of towards the rear. If you're going all out, why the X25-M instead of X25-E? As for the consumer vs enterprise thing, the other Chenbro build only cost ~1/3rd of what yours did (if you paid US retail prices) and we like to save money. :p
 
He can't rotate that heat-sink 'cuz the fan attached to it will hit the memory (that heat-sink doesn't look to have much clearance at the bottom). He'll have to figure out how to flip the fan attached to it and hope that 'pulling' the airflow through it is efficient enough. Should be OK that way since he's using a tower-cooler on a relatively low power Xeon - not exactly a challenge to cool the E5520 at stock speed/voltage.

I would be more concerned about cooling the 1680ix. They get pretty hot under load and one of them doesn't seem to have much airflow possible. The hottest part of it is trapped behind the other Raid card and the BBU.

I'd also suggest he get some fans on the rear grills to pull heat from the area around the CPU. Don't know about the other fans he has in the case - it's possible he already has enough positive pressure to just push it out - but rear fans behind the CPUs will help ensure that he doesn't get any unfortunate dead-air pockets. They would also lower the overall air pressure in the case and help the fans pulling air over the drive bays. I'm guessing silence isn't a priority for this build ;).
 
That case has more than adequate cooling for pretty much anything you could stuff inside it.
 
That case has more than adequate cooling for pretty much anything you could stuff inside it.

Fair point, but if you look at his pictures of how the Areca cards lay out, the main heatsink only has about 1 inch of airspace in front of it before you get to the (very hot) back of the other card. Because of the BBU there is no real possibility for airflow out the back through the holes in the mounting bracket. And because he drapes the 8087 cables over the top using right-angle connectors he blocks the top too. He at least needs to monitor the temps on that raid card and see how it handles it under load - the Intel IOP does not handle overtemp very gracefully at all and you put the entire array at risk if it overheats.

IIRC, the 1680ix has a small fan over the IOP heatsink. That might be enough.
 
Now that wasn't cheap. Looks like you spent about $35-40k on everything...which I have to say is pretty bloody high. Looks good though. You should rotate one of the heatsinks on the motherboard however. The fan is facing inwards instead of towards the rear. If you're going all out, why the X25-M instead of X25-E? As for the consumer vs enterprise thing, the other Chenbro build only cost ~1/3rd of what yours did (if you paid US retail prices) and we like to save money. :p

X25-M vs X25-E.. to be honest with you.. I completely missed the E series :mad:
My mind was so blinded by the mainstream one, I didn't see any other.

I was thinking about building it cheaper, but then I thought if I am going to build something, I might as well build it right the first time..

About the fan.. that's the part I was wondering about too, but I set it up according to Intel's airflow chart.. I thought all of them should have the same airflow, but I guess Intel doesn't think so..


He can't rotate that heat-sink 'cuz the fan attached to it will hit the memory (that heat-sink doesn't look to have much clearance at the bottom). He'll have to figure out how to flip the fan attached to it and hope that 'pulling' the airflow through it is efficient enough. Should be OK that way since he's using a tower-cooler on a relatively low power Xeon - not exactly a challenge to cool the E5520 at stock speed/voltage.

I would be more concerned about cooling the 1680ix. They get pretty hot under load and one of them doesn't seem to have much airflow possible. The hottest part of it is trapped behind the other Raid card and the BBU.

I'd also suggest he get some fans on the rear grills to pull heat from the area around the CPU. Don't know about the other fans he has in the case - it's possible he already has enough positive pressure to just push it out - but rear fans behind the CPUs will help ensure that he doesn't get any unfortunate dead-air pockets. They would also lower the overall air pressure in the case and help the fans pulling air over the drive bays. I'm guessing silence isn't a priority for this build ;).

Haha, silence.. you've got to be kidding me :p I can barely hear myself now.. all the fans, like the OP said, are running at full RPM..



If you look at one of the pictures, you will see there are 4 fans blowing some crazy air across the entire motherboard. If I put my hand around the CPU and RAID card, I can feel tons of air moving through. The only bad part I can see in my build right now is the Areca battery unit; it is kinda blocking the outflow of air, and since it is not mesh, the air sort of just gets blocked there..

 
Fair point, but if you look at his pictures of how the Areca cards lay out, the main heatsink only has about 1 inch of airspace in front of it before you get to the (very hot) back of the other card. Because of the BBU there is no real possibility for airflow out the back through the holes in the mounting bracket. And because he drapes the 8087 cables over the top using right-angle connectors he blocks the top too. He at least needs to monitor the temps on that raid card and see how it handles it under load - the Intel IOP does not handle overtemp very gracefully at all and you put the entire array at risk if it overheats.

IIRC, the 1680ix has a small fan over the IOP heatsink. That might be enough.


It is a valid concern. I had it spaced out more before, but the last PCI-Express slot was an x4 instead of an x8, so I had to move things around. But I think I can do:

1. Areca RAID card
2. BBU
3. Intel Quad NIC
4. Areca RAID card
5. BBU

Maybe this way it will space things out a bit more.
 
You must really have a lot of money floating around. ~$40k on hardware certainly is quite a lot. Kinda out of the realm of personal use.
 
Maybe I will do some modification to the BBU bracket, drill some holes to make it mesh
 
Maybe I will do some modification to the BBU bracket, drill some holes to make it mesh
I don't think you'll need to, to be honest. I didn't with my 1680ix-24 when the BBU was in the next slot over. The card still has some holes on the rear bracket.
I am building it for my personal and my home business use :D
Hope the investment pays for itself. Otherwise all that enterprise hardware is kinda wasted. There's a reason we've been going with the consumer drives instead of enterprise ones.
 
I don't think you'll need to, to be honest. I didn't with my 1680ix-24 when the BBU was in the next slot over. The card still has some holes on the rear bracket.

Hope the investment pays for itself. Otherwise all that enterprise hardware is kinda wasted. There's a reason we've been going with the consumer drives instead of enterprise ones.


True.. I hope so.. I am expanding my business hence the upgrade with more space..
Thanks !
 
I don't think you'll need to, to be honest. I didn't with my 1680ix-24 when the BBU was in the next slot over. The card still has some holes on the rear bracket.

The BBU by itself is not the problem. It's the combination of the BBU and the large/hot raid card in the next slot and the cables draping over the top that creates a dead-air risk on the Areca IOP heatsink. He's got a lot of money invested here. Presumably the data he's dealing with is worth a lot more than that hardware (or he's wasted his time & money). Just seems to be worth a bit of effort to make sure he doesn't put it all at risk.

I don't think drilling holes in the BBU bracket will do much good. Spacing the cards out would be best (if the PCIe layout of his MB can support it). Relocating that BBU would be a good idea too - though that means making up a custom cable from the BBU to the card. Not too hard, really. Then you could use a slotted blank where the BBU used to be, which is probably much easier than drilling holes in the BBU bracket.
 
The BBU by itself is not the problem. It's the combination of the BBU and the large/hot raid card in the next slot and the cables draping over the top that creates a dead-air risk on the Areca IOP heatsink. He's got a lot of money invested here. Presumably the data he's dealing with is worth a lot more than that hardware (or he's wasted his time & money). Just seems to be worth a bit of effort to make sure he doesn't put it all at risk.

I don't think drilling holes in the BBU bracket will do much good. Spacing the cards out would be best (if the PCIe layout of his MB can support it). Relocating that BBU would be a good idea too - though that means making up a custom cable from the BBU to the card. Not too hard, really. Then you could use a slotted blank where the BBU used to be, which is probably much easier than drilling holes in the BBU bracket.

Yes, the data is worth more than the hardware, that I know for sure :p

I will space it out when it's done with the RAID setup, and I will post pictures afterwards.
 
Awesome setup. A bit out of my price range but nonetheless some fun eye candy!
 
@oxyi: Awesome setup!

A couple of observations I have from my own experience with the Chenbro 91250:

Internal fan tray: Remove the fans from their plastic housings and mount them directly to the bracket. This will increase the airflow and CONSIDERABLY reduce the amount of noise the fans make. The plastic housing Chenbro designed creates a vortex and some really bad, high-pitched harmonics around the fan blades.

Also, you mentioned that those four fans blow a lot of air over the motherboard; that is true as long as you keep the lid off the server. Just to show you what I mean, take the lid off and hold your hand at the back of the server where the exhaust holes are (above the motherboard's I/O bracket). You can feel quite a bit of air flowing. Lower the lid to close the server and you will notice a HUGE drop in airflow. This is due to a couple of things:

1) There are a total of 10 fans in the chassis: 8 x 80mm and 2 x 120mm. They push different amounts of air, and this causes a problem. I don't think Chenbro actually did any airflow studies on this chassis, as it is BADLY designed! The reason they need to be high-power fans is that the air intake is basically the small amount of space around each drive. Pulling any reasonable amount of air across the drives to cool them efficiently requires a LOT of force. Those 10 fans gobble up over 110W alone (and even more when the lid is closed and all drives are mounted)!

2) The internal fan tray does not actually move a lot of air over the motherboard, due to its location. When the lid is closed, fresh (hopefully cool) air does not actually get pulled over the drives into the chassis by the 4 internal fans; instead, the air just circulates around them. The majority of the air those 4 fans push through ends up returning to their intake via the space below the bracket! This is also partly due to the amount of airflow the remaining 6 fans at the bottom of the chassis create: the total amount of air those 6 fans move creates a reverse airflow over the motherboard (against the airflow of the 4 internal fans)!

If you disconnect the power cable to the 4 internal fans, close the chassis, and hold your hand in the same spot above the motherboard I/O at the back of the server, you will actually feel air flowing INTO the server, because the 6 fans below pull air through the path of least resistance, which in this case is the exhaust holes above the motherboard. Next to no air actually gets drawn over the drives!

To combat this problem, test my theory, and increase the airflow over the drives, I added a barrier between the back edge of the motherboard chassis and the bottom edge of the fan tray. This definitely increased the amount of air being drawn across the drives: I can now put a piece of paper in front of the drives and it will actually stay there! The amount of air moving across the motherboard is still not what it is when the fans run in free air (lid removed), since the 4 internal fans are trying their best to keep up with the 6 fans at the bottom of the chassis. To get better air movement, the 4 internal fans would have to move an amount of air equivalent to those at the bottom, which would only be the case if the bottom had just the 4 x 80mm fans without the help of the 2 x 120mm fans. My next step will be to add another barrier between the backplanes and the bottom edge of the internal fan tray, to sort of dedicate the 4 fans to the top 3 or 4 rows of drives. That way the fans don't have to work against each other!

Sorry for the long explanation, I just hope this helps a fellow 91250 user :)

I'm still working on optimizing the drive usage (read overall power consumption).

Interestingly enough, I went and bought an Intel X25-M yesterday too and put it inside the server as a boot drive :)

If you don't mind me asking, what are you going to use this server for (since you mentioned it is for your business)?
 
Wow, yep, it is a long read, but very useful, thank you.

Do you know what revision your RM91250 is? There should be a sticker inside the case that says. Mine is rev. C.

The reason I ask is that, looking at your picture, I noticed the backplane location for the SAS cable is different; yours is on the right side, while mine is in the middle. I am not sure what else has changed.

About the internal fans, do you mean removing just the blue piece that's around the fan, or the entire plastic structure, and mounting only the fan to the metal bracket?

My 4 x 80mm fans are blowing air out, while the 2 x 120mm fans are sucking air in; is that the same for you?

I needed the space for my "video" business :p
 
Oh, about the Intel X25-M, are you able to mount it to the top tray?

I just bought an Icy Dock, mounted the SSD in it, and left it there; what about you?

Besides the X25-M, I was using the new WD VelociRaptor 600GB as my OS drive.. that fits well.. but the SSD just blows it away..
 
Mine is Rev. A

I do know that Chenbro changed the backplanes a bit and moved the SFF8087 connectors to the middle of the enclosure, which makes more sense from a signal integrity point of view. Other than that, if I recall correctly, there isn't much of a difference between the revisions.

About the fans, I have pictures of the modified internal tray; I just need to get around to completing the next round of write-ups :)

Anyway, the internal fan tray consists of two brackets held together by rubber supports to isolate possible vibrations. I took the fans right out of their blue plastic shrouds, removed the entire second bracket, and mounted the fans directly to the bracket that mounts inside the chassis. You also need to remove the power connectors from the white plastic housings. They should be fairly easy to remove; just be careful you don't break the little tabs, in case you want/need to put it back into its original configuration. You will end up with some spare parts, including the blue shrouds and the second bracket with the white cages attached to it.

As I mentioned before, it made a huge difference in terms of noise level. Just to see what I mean, open the case and either pull all four fans out of the white cages or simply disconnect the power to the four internal fans, silence the alarm by pressing the mute button on the front, and close the lid. The high-pitched noise should be gone and you are left with the noise of the remaining fans. With the 4 fans mounted directly to the bracket the noise will be a bit louder, but the pitch will be the same. I found the noise the shrouds create really annoying!

All the fans in my chassis blow the air out the back. I find it a little strange that on yours the two 120mm fans suck air in. Wonder if this is either an assembly mistake or if Chenbro changed this in your revision of the chassis... Maybe I should send them an email and ask them about this...

Here is an image of one of the 120mm fans out of my chassis. Does yours have the same model fan, and is it mounted the same way?

This is a Delta AFB1212SHE-F00 120x120x38mm 4,100RPM 190.5CFM 15.0W 55.5dBA fan with TACH output signal, but unfortunately no PWM control input.



I haven't mounted my X25-M yet. I just put it inside and hooked it up to the motherboard for testing, to see if I want to use it this way. Now that I've had it running for a few days, I think I will keep it in this configuration, so I am either going to make a bracket for it or just modify a spot somewhere inside that would allow me to mount the drive. Basically just drill four holes into a piece of metal that is part of the motherboard tray and mount the drive directly to it. I was originally using one of the two drives that are connected directly to the motherboard, partitioned into two volumes: one for the OS (about 60GB) and the remaining 1.94TB for music file storage. Now that I have a dedicated OS drive in the system, I am using the full 2TB for my music files.
 
If you don't want to make a bracket or drill the sheet metal, you might just try using some Velcro tape. The SSD hardly weighs anything and good Velcro should hold it even with all the fan vibration. I've mounted SSDs this way several times and never had a problem with it.
 
Why only 4 measly gigs of RAM? If you're spending this much money, you might as well do it right with ZFS + a huge chunk of RAM.
 
Question:

How are you going to get 90+TB out of 50 x 2TB drives? The only way you could do that is if you just do something like one big volume. How are you going to back that up?

Why only 4 measly gigs of RAM? If you're spending this much money, you might as well do it right with ZFS + a huge chunk of RAM.

Also, Windows? Again, why not ZFS? It would work a lot better in so many ways.
ZFS is not the second coming like everyone seems to portray it to be. If it was so magical and perfect, it would find its way into enterprise use, Sun notwithstanding. As for getting that much space, there are plenty of ways. 5 x 10 drive RAID 3/5 arrays, 2 x 25 drive RAID 6 arrays, and so forth would be examples.
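
With 2TB drives, those example layouts pencil out roughly like this (decimal TB, ignoring formatting overhead; just the arithmetic, not figures from the post):

Code:
tb = 2                                   # per drive

raid5_layout = 5 * (10 - 1) * tb         # 5 x 10-drive RAID 3/5: one parity drive per set -> 90
raid6_layout = 2 * (25 - 2) * tb         # 2 x 25-drive RAID 6: two parity drives per set  -> 92
print(raid5_layout, raid6_layout)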
 
ZFS is not the second coming like everyone seems to portray it to be. If it was so magical and perfect, it would find its way into enterprise use, Sun notwithstanding. As for getting that much space, there are plenty of ways. 5 x 10 drive RAID 3/5 arrays, 2 x 25 drive RAID 6 arrays, and so forth would be examples.

ZFS is already in use in the enterprise, and NetApp's file system is very similar to ZFS (hence all the lawsuits back and forth between the two) and is also used in the enterprise.

It's a shame that not many things support RAID level 4, which is what I think you meant by your level 3 comment. Level 3 is byte-level parity, which doesn't map well to hard drives. Level 4 is basically RAID 5, but instead of distributed parity, it puts all the parity on one disk.
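
To make the distinction concrete, here's a tiny sketch of where the parity block lives per stripe in RAID 4 versus RAID 5 (one common left-rotation scheme; purely illustrative):

Code:
def parity_disk(level, stripe, n_disks):
    """Index of the disk holding the parity block for a given stripe."""
    if level == 4:
        return n_disks - 1                       # RAID 4: dedicated parity disk
    if level == 5:
        return (n_disks - 1 - stripe) % n_disks  # RAID 5: parity rotates across disks
    raise ValueError("level must be 4 or 5")

for stripe in range(5):
    print(stripe, parity_disk(4, stripe, 5), parity_disk(5, stripe, 5))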

For media/general storage reliability, what a lot of people actually need is a simplified RAID 4 that doesn't stripe the data, which unfortunately isn't found in many options outside of things like unRAID. I'm surprised, though, that various companies haven't yet offered it as an option, as it should be fairly easy to do.
 
I meant RAID 3. It is supported by the controller he has (unlike RAID 4), and I was using it more for demonstration than anything; like RAID 5, you only lose a single drive to parity data. I have seldom seen it used.
 