Best 2TB RAID 5 disk

denkan · n00b · Joined Mar 7, 2011 · Messages: 19
Hello all! :)
I am going to buy 5 new 2TB hard drives for a RAID 5 setup. I am going to run the RAID on my motherboard (ICH10R). I have been thinking of buying the Samsung EcoGreen F4 because I have heard many people say it's good, but I don't really know, so which 2TB hard drive do you guys think is best for RAID 5?

Thanks in advance // denkan
 
I have 8 of the Samsung HD204UI in a RAID 6 (LSI 9260-8i) with no issues so far. Just make sure you update the firmware on ALL of them, unless Samsung has released the drives with updated firmware.
I can't give any input on other drives as I don't have any of the other options.
 
"Best" means what to you, specifically?

Seems to me that, if you're using software RAID on your motherboard, the choice of drive is overwhelmed when considering performance.
 
I have 8 of the Samsung HD204UI in a RAID 6 (LSI 9260-8i) with no issues so far. Just make sure you update the firmware on ALL of them, unless Samsung has released the drives with updated firmware.
I can't give any input on other drives as I don't have any of the other options.

Okay, I have actually checked with the store I am buying from, and they can confirm that the drives are updated by checking the manufacture date :) And of course I mean the HD204UI, sorry for not writing that. Then may I ask you if they are loud?
 
"Best" means what to you, specifically?

Seems to me that, if you're using software RAID on your motherboard, the choice of drive is overwhelmed when considering performance.

I am looking for a drive with no issues in RAID 5 and okay speed, but that is not the most important thing, and around the same price as the 2TB Samsung EcoGreen F4EG.
 
I have 8 of the Samsung HD204UI in a RAID 6 (LSI 9260-8i) with no issues so far. Just make sure you update the firmware on ALL of them, unless Samsung has released the drives with updated firmware.
I can't give any input on other drives as I don't have any of the other options.
I have 6 of these in a RAID 5 configuration as well with no issues. I am actually looking for more to set up a second server with the same setup.
 
Okay, I have actually checked with the store I am buying from, and they can confirm that the drives are updated by checking the manufacture date :) And of course I mean the HD204UI, sorry for not writing that. Then may I ask you if they are loud?

I would recommend you still check the firmware version just in case; you don't want it to bite you in the ass later.
Well, they shouldn't be too loud for starters, as they are lower RPM than most drives. The sound of my case fans completely covers the small sound the hard drives make. Sorry :(
 
Another vote for the 2TB Samsung F4s. I currently have 5 in RAID 5 and am getting 350 MB/s reads and writes on an Areca 1230. The price/performance ratio is great with these drives, even on an older RAID card.
 
Without having done a rebuild, it's hard to trust your opinion. Odds are the array is too large to rebuild properly.
 
I tested out the rebuild and OCE functions, as well as migrating from RAID 5 to RAID 6, to see how it all worked with my 8x HD204UI. It worked perfectly and was easy to do once you know what to do. I could post some benches later tonight, once I get off work in a couple of hours, if you would like to see how they work with my 9260-8i.
 
Six 2TB drives in a RAID5 array means that a rebuild will require a read of the whole set of data drives to compute parity. That's 12 TB of data, which is 96*10^12 bits. The spec sheet says the drive has a non-recoverable error rate of 1 in 10^15. This means the odds are considerably high that a read error is going to happen while rebuilding the array. It's not guaranteed to happen, but it's pretty close -- so it might be a good idea to think about RAID6.
 
Six 2TB drives in a RAID5 array means that a rebuild will require a read of the whole set of data drives to compute parity. That's 12 TB of data, which is 96*10^12 bits. The spec sheet says the drive has a non-recoverable error rate of 1 in 10^15. This means the odds are considerably high that a read error is going to happen while rebuilding the array. It's not guaranteed to happen, but it's pretty close -- so it might be a good idea to think about RAID6.

There is a mistake and some vague wording there.

When one drive in a six-drive RAID-5 fails, then FIVE drives must be read to rebuild the failed drive. That is 5 x 2e12 x 8 = 8e13 bits to read. Since the spec is (on average) 1 error per 1e15 bits, the probability of seeing an error during a rebuild would be approximately 8e13 / 1e15 = 0.08 or 8%. Or more accurately, if you did such a rebuild 100 times, the most likely outcome would be at least one error in 8 of the rebuilds. Alternatively, if we assume the spec sheet means to say that the bit error rate is 1e-15, then the probability of at least one error in 8e13 bits read is 1 - (1 - 1e-15) ^ 8e13 = 0.0768 = 7.7%.

Usually, 8% is not considered "high", although in this context I agree that 8% is higher than most people would like. I always say that any striped RAID with drives larger than 1TB should use RAID-6 or better.
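If anyone wants to plug in their own drive count or error rate, here is a rough sketch of that arithmetic in Python. The 2 TB size, five surviving drives, and 1e-15 spec are just the numbers from this example, not universal figures:

```python
import math

# Rough odds of hitting at least one unrecoverable read error (URE)
# while rebuilding a six-drive RAID-5 array of 2 TB disks.
drive_bytes = 2e12                               # 2 TB per drive
surviving_drives = 5                             # one of six drives has failed
bits_read = surviving_drives * drive_bytes * 8   # 8e13 bits to read

ure_per_bit = 1e-15                              # spec: 1 error per 1e15 bits read

# Simple expected number of errors during the rebuild
print(bits_read * ure_per_bit)                   # ~0.08

# Probability of at least one error, treating the spec as an independent
# per-bit error probability (log1p/expm1 avoid floating-point trouble)
p_error = -math.expm1(bits_read * math.log1p(-ure_per_bit))
print(round(p_error, 4))                         # ~0.0769, i.e. about 7.7%
```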
 
There is a mistake and some vague wording there.

When one drive in a six-drive RAID-5 fails, then FIVE drives must be read to rebuild the failed drive. That is 5 x 2e12 x 8 = 8e13 bits to read.
The replacement drive must be written as well, so there's another 8 x 2e12 = 1.6e13 bits of writes happening. The spec sheet doesn't include a write error rate, but write errors do happen, and it's not unreasonable to assume that they're about the same as the read error rate. (They're not the same, but without specific information, using the read error rate as an approximation is acceptable.)
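Folding the replacement-drive writes into the same back-of-the-envelope estimate (still just a sketch, and it assumes the write error rate really is comparable to the read rate, as suggested above):

```python
import math

drive_bytes = 2e12
bits_read = 5 * drive_bytes * 8        # read from the five surviving drives
bits_written = 1 * drive_bytes * 8     # written to the replacement drive
error_per_bit = 1e-15                  # read spec, reused for writes as an approximation

total_bits = bits_read + bits_written  # 9.6e13 bits touched in total
p_error = -math.expm1(total_bits * math.log1p(-error_per_bit))
print(round(p_error, 3))               # ~0.092, up from ~0.077 for reads alone
```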
 
BTW, this thread is yet another reason I will not run RAID on anything but ZFS. To wit: any software or hardware RAID (e.g. Linux md or others) has no knowledge of filesystem info, so rebuilding an array requires reading the entire set of N-1 drives and writing all the blocks on the replacement drive (unless there is something I am missing?). When ZFS is rebuilding an array (they call it resilvering), since the filesystem, volume manager, and RAID manager are one integrated entity, it needn't access any blocks that are not in use, so unless you routinely run with your pool/array almost full, this should drastically reduce rebuild time...
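A crude way to see the difference in data volume (not time) is below, assuming a hypothetical six-drive array of 2 TB disks that is 40% full; the 40% is only an illustrative number:

```python
# Data that must be touched to rebuild one failed drive, very roughly.
# This only models bytes moved; resilver *time* also depends on seek
# patterns, fragmentation, and load, so it can still be slow in practice.
drive_tb = 2
surviving_drives = 5            # conventional RAID reads all surviving drives
used_fraction = 0.40            # hypothetical pool utilisation

conventional_tb = surviving_drives * drive_tb                    # 10 TB read, regardless of usage
zfs_resilver_tb = surviving_drives * drive_tb * used_fraction    # ~4 TB, only allocated blocks

print(conventional_tb, zfs_resilver_tb)
```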
 
I have not bought the hard drives yet, but I am still thinking of RAID 5 or 6. One problem is that my motherboard does not support RAID 6 :(

I think it would be better and safer to have RAID 6 on a RAID card, so can you guys help me find a RAID card that is good, supports RAID 6, and does not cost too much, please?
 
it needn't access any blocks that are not in use, so unless you routinely run with your pool/array almost full, this should drastically reduce rebuild time...

Should? :D

The rebuild times for ZFS I've seen quoted are usually long, certainly longer than most other conventional striped RAID rebuild times. Sure, if your ZFS RAID is only 5% full it might be faster, but from what I've seen, if your ZFS is even half full it is probably going to be slower to rebuild than conventional RAID.
 
The replacement drive must be written as well, so there's another 8 x 2e12 = 1.6e13 bits of writes happening. The spec sheet doesn't include a write error rate, but write errors do happen, and it's not unreasonable to assume that they're about the same as the read error rate. (They're not the same, but without specific information, using the read error rate as an approximation is acceptable.)

Are you sure that is the way it works? I thought that any write errors were included in the read error rate since when a sector is written, the ECC data is also written, and then when the sector is read back, if the ECC cannot correct it to match the CRC, it is counted as URE.
 
Are you sure that is the way it works? I thought that any write errors were included in the read error rate since when a sector is written, the ECC data is also written, and then when the sector is read back, if the ECC cannot correct it to match the CRC, it is counted as URE.

I've seen drive spec sheets that give separate rates for reads and writes, and I've seen ones with only read rates. I'm not perfectly sure why some manufacturers do or don't show the rates. Spec sheets in the storage industry are pretty iffy -- it's often hard to get even power consumption information.
 
I'm running 5 of the EcoGreen F4's in RAID 5 on the 9260-8i. I'm getting terribly slow write speeds, sub-100 MB/s. I realize that for writes all the XOR calculations need to be done, so speeds aren't going to be great, but this seems really slow. What are others getting for speeds? Also, what stripe size is everyone using? My array is currently at 256k. This is running on Server 2008 R2, primarily a media server for BR-Rips and client PC backups. Thanks.
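As a very rough point of reference (the ~120 MB/s per-drive figure below is just an assumed sequential speed for a 5400-class 2 TB drive, not a measured number):

```python
# Back-of-the-envelope ceiling for sequential RAID-5 writes when the
# controller can do full-stripe writes (parity computed once per stripe,
# no read-modify-write). Real arrays land well below this, but sub-100 MB/s
# is far enough under it that cache settings and partial-stripe writes are
# worth checking.
drives = 5
per_drive_mb_s = 120                      # assumed per-drive sequential write speed
raid5_ceiling = (drives - 1) * per_drive_mb_s
print(raid5_ceiling)                      # 480 MB/s theoretical best case
```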
 
I'm running 5 of the EcoGreen F4's in RAID 5 on the 9260-8i. I'm getting terribly slow write speeds, sub-100 MB/s. I realize that for writes all the XOR calculations need to be done, so speeds aren't going to be great, but this seems really slow. What are others getting for speeds? Also, what stripe size is everyone using? My array is currently at 256k. This is running on Server 2008 R2, primarily a media server for BR-Rips and client PC backups. Thanks.

I suggest you start a new thread, and also provide a lot more information about your setup, and some benchmarks, if you want to get useful replies.
 