Stress testing new hard drive

gregster

n00b
Joined
Dec 16, 2009
Messages
24
I'm getting Seagate 2TB LPs for my new build. Given my own and others' mixed experiences with other Seagate drives, I thought it would be wise to stress test them. While I don't think it will diagnose a drive that would fail a few weeks down the road, it should weed out the ones that are flaky out of the box.

Would something like 24-36 hours of Bart's Stuff Test, followed by a scan with HD Tune be appropriate? Of course I would be verifying the SMART data to make sure there aren't any reallocated sectors.
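For what it's worth, here's a minimal sketch of how I'd snapshot the relevant SMART counters before and after a stress run (assuming Python and smartmontools 7+ for its JSON output; the device path and helper name are just examples):

import json
import subprocess

# Read the raw value of one SMART attribute via smartctl's JSON output.
def smart_raw_value(device, attr_name):
    out = subprocess.run(
        ["smartctl", "-j", "-A", device],
        capture_output=True, text=True, check=True,
    ).stdout
    for attr in json.loads(out)["ata_smart_attributes"]["table"]:
        if attr["name"] == attr_name:
            return attr["raw"]["value"]
    return None

# Counters worth comparing before and after the stress run; any increase
# in these is a red flag for an out-of-the-box dud.
for name in ("Reallocated_Sector_Ct", "Current_Pending_Sector", "Offline_Uncorrectable"):
    print(name, smart_raw_value("/dev/sda", name))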
 
I have 6 of those drives. I stress tested them for a few days with WinThrax and Bart's Stuff individually first. Then put them in an array and stress tested the whole array with the same tools for a few days as well. Then connected them back individually to check the SMART values (my HP array controller doesn't show individual drive SMART data, at least not that I know of).

All of this took place over a couple of weeks, with lots of benchmarking, array expanding/resizing tests, reboots, and starts and stops as well.

The thing I noticed about these Seagate drives (and the problematic 1.5TB 7200rpm drives as well) is that they rack up enormous "Raw Read Error Rate" and "Seek Error Rate" SMART values, whereas Samsung and WD drives show 0 for those attributes.
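A quick-and-dirty way to eyeball those attributes side by side across several drives (a rough Python sketch; the device names are examples, and it assumes smartctl is installed):

import subprocess

# Print the error-rate and reallocation attributes for each drive so the
# Seagate/Samsung/WD numbers can be compared at a glance.
for dev in ("/dev/sda", "/dev/sdb", "/dev/sdc"):
    print("===", dev)
    out = subprocess.run(["smartctl", "-A", dev],
                         capture_output=True, text=True).stdout
    for line in out.splitlines():
        if any(attr in line for attr in
               ("Raw_Read_Error_Rate", "Seek_Error_Rate", "Reallocated_Sector_Ct")):
            print(line)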
 
Did you have any reallocated sectors or failures with your drives?

The reason those counts are so high is that Seagate reports those attributes differently. I believe that if your value for those is, say, 50000, it means you haven't had any errors in 50000 reads/seeks.
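That matches the interpretation that floats around for these attributes, though it isn't something Seagate documents, so treat the bit layout below as an assumption: the 48-bit raw value is said to pack an error count in the upper 16 bits and the total operation count in the lower 32 bits. A tiny sketch in Python:

# Assumed (community, not vendor-documented) layout of Seagate's
# Raw_Read_Error_Rate / Seek_Error_Rate raw value: errors in the upper
# 16 bits, total reads/seeks in the lower 32 bits.
def decode_seagate_rate(raw_value):
    errors = raw_value >> 32
    operations = raw_value & 0xFFFFFFFF
    return errors, operations

print(decode_seagate_rate(50000))  # (0, 50000): zero errors in 50000 operations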
 
Not yet on the 2TB LP models. They started 'clicking' when I had to put 4 of them upside down because of a cabling challenge, but when I put them on their side all was well again (I don't have a case yet for my project, so I have to resort to ghetto rigging for now). No other errors or reallocated sectors.

The three 1.5TB 7200rpm Seagates I've used over the last 6 months or so (not in RAID) all started clicking after a few weeks/months, and all had 20+ reallocated sectors.
 
I would run the SeaTools diagnostics, and I usually run SpinRite 6.0 on my new hard drives, although it can take some time...
 
Good luck!

I'm in the process of migrating a RAID1 array from 7200.11s to F3s. The 7200.11s ran fine for months after purchase. Then the problems started. For whatever reason they would freeze up, and the SATA link would go down and stay down for minutes before coming back up. The result was a locked machine, and when the machine became unlocked, a disk kicked out of the RAID.

One of the disks finally locked hard with the well known BSY error. It was bricked. I RMA'd it. It came back with the new firmware. I did the firmware upgrade on the other disk. Both ran fine for a while before exhibiting the same old problems. I RMA'd one of the drives for the hell of it. The replacement worked fine for a while before exhibiting the same problems. Yesterday that same RMA replacement locked hard. It's no longer recognizable even after cold boots and many hours outside of a machine. This has got to be the biggest disk debacle in history.

The crazy thing is that there are still plenty of people seeing this problem even after the firmware "fix". Check out the Seagate forums. Meanwhile Seagate will not acknowledge that there are still problems with the 7200.11's. If you RMA you get the same exact model back with the same exact problems.

In trying to diagnose these issues I replaced SATA cables, the motherboard, and the PSU. What a pain in the ass. I'm not touching Seagate disks until they clean up their act. I need to see 3-4 generations of trouble-free disks from them before I will take the risk.

I want to reiterate that these disks worked trouble-free for the first couple of months, so good luck with the break-in testing.
 
I tested all 4 2TB drives with one run of Bart's test, and they did not accumulate any reallocated sectors, which is reassuring.

I'm ready to create the new array, but I wonder what stripe size I should use for RAID5? My uses will typically involve larger files (bare minimum of 20MB, typically much larger) without any small file use such as a database. Given 64K, 128K and 256K options, I would be inclined to go with 256KB. Does that sound logical?
 
I'm wondering the same thing re: stress testing my array. I've been running BST5 for a total of about 72 hours on the RAID5 array; the drives didn't drop out, and so far my controller (RR4320) reports 0 bad sectors found & repaired. Is 72 hours of testing enough? I've run two threads on the server while connecting to the drive from various clients running BST as well at random points.

As for the RAID5 stripe, I'm also using my RAID5 array for larger files and went with a 128K stripe size, as it seems to be the optimal (and recommended) size for my controller.
 
The higher the better for large files. If 256K is the limit then I would do 256K.
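As a rough sanity check on that, the arithmetic for a 4-drive RAID5 (3 data disks per stripe) shows even the largest stripe option is tiny next to a 20MB file, so large sequential files will hit plenty of full stripes either way; a small Python sketch with illustrative numbers:

# Back-of-the-envelope stripe math for a 4-drive RAID5 (3 data + 1 parity
# chunk per stripe); numbers here are illustrative, not controller-specific.
data_disks = 4 - 1
for stripe_kb in (64, 128, 256):
    full_stripe_kb = stripe_kb * data_disks
    print(f"{stripe_kb}K stripe -> {full_stripe_kb}K full-stripe write; "
          f"a 20MB file spans about {20 * 1024 // full_stripe_kb} full stripes")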
 