RAID 5 vs RAID 0?

s10010001

Supreme [H]ardness
Joined: Sep 17, 2002
Messages: 7,505
How is the performance of RAID 5 vs. RAID 0?

I have 3-4 80 GB drives to play with now... and I'm bored...
 
RAID 5 is a bit slower because parity has to be calculated on every write, and that takes time and resources. RAID 0 is faster because no calculations are done at all.

Of course, RAID 5 has redundancy while RAID 0 doesn't.
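
To see why RAID 0 needs no math while RAID 5 does, here's a minimal Python sketch of the XOR parity idea (the block contents are made-up example bytes, not a real on-disk layout):
Code:
# RAID 0 just splits data across the disks; nothing extra is computed.
d0 = bytes([0x12, 0x34, 0x56, 0x78])  # stripe chunk on disk 0
d1 = bytes([0x9A, 0xBC, 0xDE, 0xF0])  # stripe chunk on disk 1

# RAID 5 also computes and writes one parity chunk per stripe.
parity = bytes(a ^ b for a, b in zip(d0, d1))

# If disk 1 dies, its chunk is recoverable from the survivors:
recovered_d1 = bytes(a ^ b for a, b in zip(d0, parity))
assert recovered_d1 == d1  # XOR parity restores the lost chunk
That extra XOR (and the extra write) is where RAID 5's overhead comes from.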
 
raid 5 and 0 are not comparable.

raid 5 - slow speed, good redundancy
raid 0 - fast speed, no redundancy
 
s10010001 said:
How is the performance of RAID 5 vs. RAID 0?

I have 3-4 80 GB drives to play with now... and I'm bored...
So you tell us! Try it and see; you don't have data to put on the drives yet, so there's nothing to lose by wiping them a few times. Try "hardware" RAID 0 and 5 if it's available, try striping in software, post benchmarks!

 
RAID 5 write speeds can be worse than a single drive's due to the parity calculations. RAID 0 just doesn't have the same overhead as RAID 5. You really need to consider what it will be used for when comparing the benefits and detriments of both.
 
defakto said:
RAID 5 write speeds can be worse than a single drive's due to the parity calculations. RAID 0 just doesn't have the same overhead as RAID 5. You really need to consider what it will be used for when comparing the benefits and detriments of both.

You can get better-than-single-drive speeds if you have a good card ($$$$) that does its own parity calculations. Software RAID-5 has terrible write speeds, though.
 
Level 0 is going to give you raw transfer rate performance increases. Great for video editing and similar content creation work where transfer rate > seek performance.

Level 5 is the slowest of the RAID levels. Optimally you want a controller with a dedicated processor to perform the XOR calculations and a decent amount of cache. A 'cheat' to get better write performance is to enable the write-back function, as opposed to write-through: the system writes to the controller memory, the controller issues the write-complete signal, and then deals with the calculations and the actual writes to the array.
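
A rough Python sketch of that write-back trick - the timings are invented just to show where the latency goes, since a real controller does this in firmware, not software:
Code:
import time

DISK_WRITE_S = 0.010   # pretend each physical write takes 10 ms
XOR_CALC_S   = 0.002   # pretend the parity XOR takes 2 ms

def write_through(block):
    # Caller waits for the parity calc plus both physical writes.
    time.sleep(XOR_CALC_S)        # compute parity
    time.sleep(DISK_WRITE_S * 2)  # write data + parity to the platters
    return "complete"             # signalled only after it's all on disk

def write_back(block, cache):
    # Caller waits only for the copy into controller RAM.
    cache.append(block)           # land the block in cache
    return "complete"             # signalled immediately

def flush(cache):
    # Controller drains the cache in the background, at its leisure.
    while cache:
        block = cache.pop(0)
        time.sleep(XOR_CALC_S)
        time.sleep(DISK_WRITE_S * 2)
The catch, of course, is that anything still sitting in that cache is lost on a power cut unless the card has a battery.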
 
Volkum said:
You can get better-than-single-drive speeds if you have a good card ($$$$) that does its own parity calculations. Software RAID-5 has terrible write speeds, though.
This is mostly dependent on the bus the drives are on. Check this out:
Code:
raid5: automatically using best checksumming function: pIII_sse
   pIII_sse  :  1892.000 MB/sec
That's on a dual P3 933 - 1.9 GB/s. The array I've got is (I think) currently limited by the slow-but-supported Highpoint 1540 SATA card I'm using (it's a bridged PATA->SATA design); I get about 38 MB/s writes to the array. There are 3 disks on a PCI bus - 133 MB/s total bandwidth, so about a third of the PCI bandwidth each. Single-disk benches show about 60 MB/s, so the disks aren't the limit; apparently it's being limited to writing to one disk at a time. When I move to my faster 64/66 card (when drivers for it come out) I'll post a big thread about Linux software RAID and how awesome it is. :p
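
For comparison, here's a crude userspace version of that kernel checksum benchmark - pure Python XOR, nowhere near the SSE-optimized kernel routine, but it shows the shape of the test (buffer size is arbitrary):
Code:
import time

BUF_MB = 16
a = bytearray(BUF_MB * 1024 * 1024)
b = bytearray(BUF_MB * 1024 * 1024)

start = time.perf_counter()
# Byte-wise XOR, the core of RAID 5 parity generation.
a = bytearray(x ^ y for x, y in zip(a, b))
elapsed = time.perf_counter() - start

print(f"xor: {BUF_MB / elapsed:.1f} MB/sec")
Expect a pitiful number next to that 1892 MB/sec - interpreter overhead dominates, which is exactly why the kernel picks a hand-tuned SSE routine.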

 
unhappy_mage said:
This is mostly dependent on the bus the drives are on. Check this out:
Code:
raid5: automatically using best checksumming function: pIII_sse
   pIII_sse  :  1892.000 MB/sec
That's on a dual P3 933 - 1.9 GB/s. The array I've got is (I think) currently limited by the slow-but-supported Highpoint 1540 SATA card I'm using (it's a bridged PATA->SATA design); I get about 38 MB/s writes to the array. There are 3 disks on a PCI bus - 133 MB/s total bandwidth, so about a third of the PCI bandwidth each. Single-disk benches show about 60 MB/s, so the disks aren't the limit; apparently it's being limited to writing to one disk at a time. When I move to my faster 64/66 card (when drivers for it come out) I'll post a big thread about Linux software RAID and how awesome it is. :p

Not just the bus, but also the drives, since I've read that a RAID 5 write requires a read-modify-write cycle: read the old data and parity, compute the new parity, then write the new data and parity back.
 
I always thought hardware RAID 5 would provide faster reads/writes than single drives...
 
drizzt81 said:
Not just the bus, but also the drives, since I've read that a RAID 5 write requires a read-modify-write cycle: read the old data and parity, compute the new parity, then write the new data and parity back.
It depends on how well the RAID is implemented. If you're writing a full stripe across all 3 drives, you can just calculate parity and write - whatever's on those blocks will be overwritten anyway, so nothing needs to be read off first. However, using filesystems instead of block devices skews benchmarks in Windows' favor; it always takes *some* overhead to write to a filesystem, but that's a penalty that's almost always paid. Disk benchmarks in Windows have long bugged me - they're not testing the thing that matters, "how fast can I put files on this disk?" What they test is "how fast can I put bytes on this disk". Subtle difference, and filesystems make a lot of difference here.
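
To make that concrete, here's a minimal Python sketch of the two parity-update paths (toy 2-byte chunks and made-up values, purely for illustration):
Code:
def xor(a: bytes, b: bytes) -> bytes:
    # Byte-wise XOR of two equal-length chunks.
    return bytes(x ^ y for x, y in zip(a, b))

# Full-stripe write: every data chunk is new, so parity comes straight
# from the fresh data - nothing has to be read off the disks first.
new_d0, new_d1 = b"\x01\x02", b"\x03\x04"
full_stripe_parity = xor(new_d0, new_d1)

# Small write (read-modify-write): only d0 changes, so the old data and
# old parity must be read back before the new parity can be computed.
old_d0, old_parity = b"\x0f\x0f", b"\x0c\x0b"      # read from disk
new_parity = xor(xor(old_parity, old_d0), new_d0)  # cancel old, mix in new
Those extra reads on the small-write path are the RAID 5 write penalty everyone above is talking about.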

 
A half-decent controller will allow RAID 5 to pwn a single hard drive solution.

I ran some "real world" raw transfer speed tests by transferring 5GB files with a stopwatch on a humble 32-bit PCI bus...

Raid-0 hardware -- 65MB/sec sustained write
Promise S150 SX4-M -- 55MB/sec sustained write
RocketRAID 1820A -- 50MB/sec sustained write
Single SATA -- 45MB/sec write
Single IDE -- 35MB/sec write
Windows software RAID 5 -- 7MB/sec sustained write <== sux or what.


:D :D :D
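
For anyone who'd rather not hold an actual stopwatch, here's a rough Python sketch of the same test (the file path is a placeholder, and the fsync matters - without it you're timing the OS cache, not the array):
Code:
import os
import time

PATH = "testfile.bin"    # placeholder: put this on the array under test
SIZE_MB = 5 * 1024       # roughly the 5GB file used above; shrink to taste
CHUNK = b"\0" * (1024 * 1024)

start = time.perf_counter()
with open(PATH, "wb") as f:
    for _ in range(SIZE_MB):
        f.write(CHUNK)
    f.flush()
    os.fsync(f.fileno())  # force the data to disk before stopping the clock
elapsed = time.perf_counter() - start

print(f"sustained write: {SIZE_MB / elapsed:.1f} MB/sec")
os.remove(PATH)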
 
unhappy_mage said:
This is mostly dependent on the bus the drives are on. Check this out:
...

When I move to my faster 64/66 card (when drivers for it come out) I'll post a big thread about Linux software RAID and how awesome it is. :p

When you do, I'll be sure to stick it.
 
Yeah, I really want to see some modern Linux RAID benchmarks. All the ones I find online are like five years old.
 