WD 1500 HLFS [Benchmarks!] [Single Drive] [RAID 0]

knob
Limp Gawd | Joined Feb 16, 2008 | Messages: 291
UPS brought me a small brown box this morning, and inside it were 2 WD 1500 HLFS VelociRaptors! I bought them a couple of days ago from 'da Egg. (Aside: dang it! They're $10 cheaper each than when I bought them like 4 days ago... :mad: :cool:)

Aren't they sharp?


Raptor, Raptor, Raptor! One of them is a touch faster than the other two... :D


And a picture of the 2.5" -> 3.5" form factor conversion, for those interested in the drive's newfound backplane compatibility.


One other note: both drives carry the suffix 01G6U0

Ok, now to set the stage... I have just installed Vista Home Premium x64 on 1 of the 2 VRaptor 150s. I will run all (single drive) benchmarks on the empty VRaptor.

After the clean install of x64, I installed the Intel ICH9R chipset drivers, the SoundMAX and Marvell Yukon Ethernet drivers from Asus, NV 175.19, and (in a moment) Service Pack 1.

Ok, in the spirit of getting and giving information: if any of you would like me to run benchmarks other than the ones I'm going to run (these 3 are basically mandatory), list 'em out (with a trustworthy download link) and I'll try to give them a shot.

For your benchmarking pleasure, I'll offer the customary HDTach, HDTune and the newest Sandra. I'll try to run them on both a single drive and the RAID 0 I'm going to set up once my single-drive testing is complete. Oh, almost forgot IOMeter.

Also, if you'd like some other info on my system, other than the signature, or my installed apps details, shout it out.

Ok, first: the Vista install took <12 minutes. Stopwatch started after I defined the disk and it began installing, and stopped when it requested my time zone information.

While I'm waiting on SP1 to download so I can install it, and to whet your appetite (and to allow pre- and post-SP1 comparisons), I'll start you off with some benchies.

*saving my work by starting the thread*

Ok, HDTach, 32MB long bench...


HDTune, 32KB block size, slider biased fully to 'accurate'...


Sandra, disk test, all options enabled, queue depth=4...
 
/\ /\ Are those benchmarks in RAID 0 or on a single drive?

Thanks for the info, I got two coming in today via FedEx @ 5pm :D, can't wait to put 'em to work after putting a hole in my wallet :D
 
IOMeter, default access pattern, queue depths from 1 to 128, stepping in powers of 2...

Code:
Queue Depth          Total IOps
1-----------------------258.9
2-----------------------252.7
4-----------------------257.1
8-----------------------264.4
16----------------------264.7
32----------------------279.8
64----------------------253.2
128---------------------241.2
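For anyone curious what that queue-depth-1 number corresponds to in plain code, here's a minimal sketch, assuming a large pre-existing file (the test.dat name, its size, and the op count are placeholders): it issues random 512-byte reads one at a time and divides the count by the elapsed time. Treat it as a rough sanity check only; buffered fstream reads plus Windows caching will flatter the result compared to IOMeter's raw unbuffered I/O.

Code:
// Rough queue-depth-1 random-read IOps estimate -- a sanity check, not an IOMeter replacement.
// Assumes "test.dat" is a large existing file on the drive being tested (placeholder name);
// OS caching and read-ahead will inflate the number somewhat, since this is plain buffered
// fstream I/O rather than raw unbuffered access.
#include <fstream>
#include <iostream>
#include <cstdlib>
#include <ctime>

using namespace std;

int main()
{
	const long long spanBytes = 1900LL * 1024 * 1024; // region of the file to seek within (kept under 2GB so a 32-bit streamoff is safe)
	const int blockSize = 512;                        // IOMeter's default access size
	const int ops = 5000;                             // number of random reads to time

	ifstream f("test.dat", ios::binary);
	if (!f) { cout << "couldn't open test.dat" << endl; return 1; }

	char buf[512];
	srand((unsigned)time(0));

	clock_t t0 = clock();
	for (int i = 0; i < ops; i++)
	{
		// pick a random 512-byte-aligned offset somewhere in the first ~1.9GB of the file
		long long block = ((long long)rand() * (RAND_MAX + 1LL) + rand()) % (spanBytes / blockSize);
		f.seekg((streamoff)(block * blockSize), ios::beg);
		f.read(buf, blockSize);
	}
	clock_t t1 = clock();

	double seconds = double(t1 - t0) / CLOCKS_PER_SEC;
	cout << ops << " random reads in " << seconds << " s = " << ops / seconds << " IOps" << endl;
	return 0;
}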
 
/\ /\ Are those benchmarks in RAID 0 or on a single drive?

Thanks for the info, I got two coming in today via FedEx @ 5pm :D, can't wait to put 'em to work after putting a hole in my wallet :D

All benches so far are single drive only. Once I've satiated all our single-drive curiosities, I'll RAID and post RAID benchies.

Also, my wallet shares in your pain :)
 
Presumably this next test should give an estimate of the sustained sequential write speed (better ideas are welcome, of course).

File copy bench: Raptor_A (C: ) to Raptor_B (F: )... 1285 files, 355 folders, 8.94GB (i.e., a World of Warcraft BC installation, fully patched)

Code:
MBytes Copied ----- Seconds ----- Rate
9166.8 MB --------- 112 sec ----- 81.8MBps

A note: the Vista file transfer box reported a rate about 10MBps quicker.

Also, there is no appreciable difference between pre- and post- Service Pack 1 disk performance.
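If anyone wants the stopwatch out of the equation, here's a minimal sketch of the same size-divided-by-seconds idea in code: it copies one large file and reports MB/s. The paths are placeholders (point them at a big file on one Raptor and a target folder on the other), and Windows write caching will still flatter the number a little, same as the Explorer copy does.

Code:
// Minimal timed file copy -- same "size / seconds" math as the WoW-folder test above.
// The source/target paths are placeholders; because of Windows write caching, the last
// chunk may not be on the platters when the clock stops, so treat the result as approximate.
#include <fstream>
#include <iostream>
#include <ctime>

using namespace std;

int main()
{
	ifstream src("C:\\bench\\bigfile.dat", ios::binary);       // placeholder: big file on Raptor A
	ofstream dst("F:\\bench\\bigfile_copy.dat", ios::binary);  // placeholder: target on Raptor B
	if (!src || !dst) { cout << "couldn't open source or target" << endl; return 1; }

	const int chunk = 1024 * 1024;   // copy in 1MB chunks
	char* buf = new char[chunk];
	long long total = 0;

	clock_t t0 = clock();
	while (src.read(buf, chunk) || src.gcount() > 0)   // the second test catches the final partial chunk
	{
		dst.write(buf, src.gcount());
		total += src.gcount();
	}
	dst.flush();
	clock_t t1 = clock();

	double seconds = double(t1 - t0) / CLOCKS_PER_SEC;
	double mbytes = total / (1024.0 * 1024.0);
	cout << mbytes << " MB in " << seconds << " s = " << mbytes / seconds << " MBps" << endl;

	delete[] buf;
	return 0;
}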
 
Ok, now please make 2 folders: one containing large files (a big video file) and one containing lots of small files (<200KB in size: pictures, system files).

Copy them one by one to the empty VR150 and time each copy. Then divide the folder size by the duration in seconds to get its real-life write performance.
After that, copy the same folder(s) onto the drive itself to get its R/W performance for small and large files.
Make a small table in Excel to show read and R/W speed for small files and large files!
RAID 0 R/W performance should be VERY interesting :)

If you have the time, do it also with your older HDD instead of the empty VR150 so we can all see how much faster the new VR is in real-life comparisons.

Thank you for all your trouble!
:)
 
Just got my two HLFS's in today from Newegg. Thank god they pack these drives separately in their own bubble wrap. So far my benches have been close to yours too :D

These Raptors are damn sexy!
Getting 2 GLFS's and 2 HLFS's has my gf wondering why she's got no spending money :D

 
Ok, now please make 2 folders: one containing large files (a big video file) and one containing lots of small files (<200KB in size: pictures, system files).

Copy them one by one to the empty VR150 and time each copy. Then divide the folder size by the duration in seconds to get its real-life write performance.
After that, copy the same folder(s) onto the drive itself to get its R/W performance for small and large files.
Make a small table in Excel to show read and R/W speed for small files and large files!
RAID 0 R/W performance should be VERY interesting :)

If you have the time, do it also with your older HDD instead of the empty VR150 so we can all see how much faster the new VR is in real-life comparisons.

Thank you for all your trouble!
:)

Ok, will do. Would you like it broken down just by large and small files? I'll be quite interested, as you are, to see how my old WD360GD compares to my new WD1500HLFS.

I also might try coding up a little "C" write benchmark myself, if I run out of other things to do :)
 
Just got my two HLFS's in today from Newegg. Thank god they pack these drives separately in their own bubble wrap. So far my benches have been close to yours too :D

These Raptors are damn sexy!
Getting 2 GLFS's and 2 HLFS's has my gf wondering why she's got no spending money :D

Glad your benches are looking like mine - means we both have functioning drives.

Also, about your lady friend not having any spending money.... sometimes sacrifices just need to be made ;):cool:
 
I'm very impressed with these performance numbers; can't wait to get my single drive on Tuesday. :cool:
 
Ok, by request... the large/small file write tests...

Side note: no more SP1

Small files: 498 jpegs in 6 folders totaling 583MB on disk (1.1MB/file)
Large file: common.mpq (main data cache for WoW) @ 3.51GB on disk

Code:
Small file transfer

Source   ----> Target   ----- time (s) ----- MBps
Raptor A ----> Raptor B -----  11 s ----- 53 MBps
Raptor B ----> Raptor B -----  15 s ----- 39 MBps
Raptor A ----> WD360GD  -----  24 s ----- 24 MBps
WD360GD  ----> WD360GD  -----  26 s ----- 22 MBps


Large file transfer

Source   ----> Target   ----- time (s) ----- MBps
Raptor A ----> Raptor B -----  48 s ----- 74.9 MBps
Raptor B ----> Raptor B -----  94 s ----- 38.2 MBps
Raptor A ----> WD360GD  -----  60 s ----- 59.9 MBps*
WD360GD  ----> WD360GD  ----- 120 s ----- 30 MBps

*I know from using that old drive that the WD360GD drives had a reliable max STR of 60MBps, which lends a lot of validity to these tests.

I've got a little max-write program coded up... just need to get the VC runtime onto the Vista install, else my executable won't run.
 
Ok, sustained write application...

I have 1 gripe with my own code: precision is only down to the whole second :( (bigger data sizes make it better, though)

I have tried it up to writing a 32GB file :cool:

Here's my code, so others can check it out...

Code:
#include <fstream>
#include <iostream>
#include <ctime>

using namespace std;

int main()
{
	int stride = 1024;

	char data[1024];

	for (int i=0;i<stride;i++)
	{
		int temp=i%6;

		switch (temp) //just switch up the char written
		{
		case (0): data[i]='a';
			break;
		case (1): data[i]='b';
			break;
		case (2): data[i]='c';
			break;
		case (3): data[i]='d';
			break;
		case (4): data[i]='e';
			break;
		case (5): data[i]='f';
			break;
		}
	}

	int passes= 1024*1024; //total data written (in MB) equals the stride size in bytes
	int counter=0;
	ofstream h;

	h.open("test.dat");

	time_t t1,t2;
	t1 = time(0);

	while (counter<passes)
	{
		h.write(data,stride);
		h.flush(); //force each stride out to the file; comment this out to let the Windows/CRT buffering scheme batch the writes
		counter++;
	}
	
	t2=time(0);

	cout<<"Wrote "<<(long)passes*stride<<" bytes to the drive."<<endl;
	cout<<"End time: "<<t2<<endl;
	cout<<"Begin time: "<<t1<<endl;
	cout<<"Net time:"<<t2-t1<<endl;
	cout<<"MBytes/sec:"<<(float)stride/(t2-t1)<<endl;
}


Results...

With Windows buffering ON, 1024 MB file written ... 146 MBps (a surprisingly effective buffering scheme is at work here)
With Windows buffering OFF, 1024 MB file written, 1024-byte stride size (i.e., two 512-byte sectors at a time).... 128 MBps (also approximately the max sustained read rate)

So, based on the other tests, and my rough write estimating application... the drive doesn't appear to favor reading much at all... a very balanced drive.

PS: An extra puzzle for those interested... what small thing did I forget to do in my code? :D
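As for the whole-second precision gripe above: one option is swapping time() for clock(). On the VC runtime CLOCKS_PER_SEC is 1000, and although the real tick granularity on Windows is more like 10-15 ms, that's still far better than whole seconds. A sketch of the same benchmark with that change (and a smaller 256MB run, since the finer timing makes up for it); the test_clock.dat name is just a placeholder:

Code:
// Same benchmark idea, timed with clock() for sub-second resolution instead of time().
// CLOCKS_PER_SEC is 1000 on the VC runtime; the actual tick granularity on Windows is
// roughly 10-15 ms, which is still a big improvement over whole seconds.
#include <fstream>
#include <iostream>
#include <ctime>

using namespace std;

int main()
{
	const int stride = 1024;          // 1KB per write, same as above
	const int passes = 256 * 1024;    // 256K passes x 1KB = 256MB total
	char data[1024];
	for (int i = 0; i < stride; i++)
		data[i] = (char)('a' + i % 6);  // same rotating a-f fill pattern as above

	ofstream h("test_clock.dat", ios::binary);

	clock_t c1 = clock();
	for (int counter = 0; counter < passes; counter++)
	{
		h.write(data, stride);
		h.flush();                    // force each stride out, as in the original
	}
	clock_t c2 = clock();

	double seconds = double(c2 - c1) / CLOCKS_PER_SEC;
	double mbytes = (double)passes * stride / (1024.0 * 1024.0);
	cout << "Wrote " << mbytes << " MB in " << seconds << " s = " << mbytes / seconds << " MBps" << endl;
	return 0;
}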
 
RAID 0 installed - SP1 installed. 64KB stripe size on the ICH9R RAID.

HDTach, 32MB long bench...


HDTune, 128KB block size (2x the stripe size), slider biased to 'accurate'...




Read-rewrite test: 8.94GB (WoW) copied and pasted on the same drive...

Copy-rate: 8.94GB / 110 s = 83.2MBps
 
Ok, will do. Would you like it broken down just by large and small files?

I think that would do. Just so we can see how fast it copies smaller ~200KB files and large 1GB+ video files.

Thanks!
:)
 
It's a fast damned drive, what more info do you need to realize this? Good lord... ;)
 
Got any new results?
I'm curious to know...
:)

Apologies for the delay... power outage, weekend trip, assorted delays (each deserving at least one hearty 'gah' :cool:).

I'll finish out your requests for file copy tests as soon as I can get to it... hopefully this evening.

In the meantime, having read this (http://www.pcper.com/article.php?aid=618&type=expert&pid=2), I'll defer my IOMeter tests to those guys, as their tests actually scaled up with 'load', where mine were static (yay for my being so impatient with IOMeter)...

Anyway, it was quite worthwhile to compare the tests I had done on my HLFS drives to their GLFS drives. I was glad to see I didn't sacrifice any speed at all in going for the half-size drive.
 
Hey knob, how's it going? Your RAID 0 speeds are faster than mine and we are running the same 128KB stripe size. I have dual 300GB models though, and ICH10R instead of 9R. How can I make my drives as fast as yours? Should I try a 64KB stripe or less?
 
128KB stripe is best on Intel Matrix RAID

Where did you pull that from? Stripe size comes into play based on what the volume is being used for (what types of writes and reads, file sizes, etc.). It has nothing to do with the controller.
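To make the "depends on the workload" point concrete, here's a toy sketch (generic RAID 0 striping arithmetic, not anything specific to Intel Matrix RAID) that counts how many stripe-sized chunks a single request spans at different stripe sizes. A 4KB request fits inside one chunk at any of these stripe sizes, a ~200KB file gets chopped into 13 pieces at 16KB but only 2 at 128KB, and an 8MB sequential transfer spans both drives regardless, which is why the typical request size on the volume matters far more than which ICH the array sits on.

Code:
// Toy RAID 0 arithmetic: how many stripe chunks (i.e., separate member-disk requests)
// does one transfer touch at a given stripe size? Illustrative only -- generic striping
// math, not the Intel Matrix RAID implementation.
#include <iostream>

using namespace std;

// number of stripe-sized chunks spanned by a request of 'bytes' starting at 'offset'
long long chunksTouched(long long offset, long long bytes, long long stripe)
{
	long long first = offset / stripe;
	long long last = (offset + bytes - 1) / stripe;
	return last - first + 1;
}

int main()
{
	const long long KB = 1024, MB = 1024 * KB;
	long long stripes[]  = { 16 * KB, 64 * KB, 128 * KB };
	long long requests[] = { 4 * KB, 200 * KB, 8 * MB };  // small file, photo-sized file, big sequential chunk

	for (int s = 0; s < 3; s++)
		for (int r = 0; r < 3; r++)
			cout << "stripe " << stripes[s] / KB << "KB, request " << requests[r] / KB
			     << "KB -> " << chunksTouched(0, requests[r], stripes[s]) << " chunk(s)" << endl;
	return 0;
}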
 