A Whole Bunch of Solid State Disk Testing

Status
Not open for further replies.

JasonSTL739 (Weaksauce) - Joined May 5, 2008 - Messages: 72
Tomorrow I'm doing some testing with the hardware below - wanted to start this thread early in case anyone in this community would like to suggest certain tests.

It is just for fun, mainly because of access to them all at once.

I'm testing scenarios with Photoshop CS4 and Lightroom under Windows 7, comparing various disk subsystem options (including running everything, OS included, in memory as a VM). However, I also plan to run ATTO, Crystal, and possibly IOMeter/SQLIO for fun.

Disk controller options:
Areca 1680xi-24 w/4GB Cache (SAS/SATA)
Areca 1220 w/256MB Cache (SATA)
Intel Onboard ICH10R (SATA)

System Options:
Overclocked Intel i7 920 @ 3.6GHz w/12GB RAM
Dell R905 Quad-core 4-way w/64GB (cannot run the Areca 1680, unfortunately)
Dell 2950 Quad-core 2-way w/16GB (can use Areca 1680)

Traditional Disk Options:
2X SATA WD Raptor 300GB 10000RPM
2X SATA WD 500GB 7200RPM
4X SATA Seagate 1.5TB 7200RPM
8X SAS 73GB 10k
Any single-disk versions of the above

Solid State Disk Options:
6X Intel X25-E 64GB
6X Intel X25-E 32GB
4X Intel X25-M 80GB
4X OCZ Vertex 30GB (firmware 1370)
1X OCZ Vertex 120GB (firmware 1370)

Tests planned include:
-ATTO
-Crystal
-IOMeter
-SQLIO

Photoshop/Lightroom testing (first guesses at some options):
-import of images into lightroom
-Creation of previews
-Some "actions" moving around inside lightroom
-Export from lightroom
-Editing an image (creating a .tif) to open an image in photoshop
-A few actions inside photoshop
-Saving the file after a set series of actions.
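For manual steps like the ones above, a simple stopwatch wrapper keeps the timing consistent between runs. This is just a sketch; the function and label names are placeholders, not anything from the actual test setup.

```python
# Minimal stopwatch sketch for timing each manual benchmark step.
# "run_import" and similar callables are placeholders for whatever
# actually drives the step (a script, a macro, a manual action).
import time

def timed(label, fn, *args):
    """Run fn(*args), print how long it took, and return its result."""
    start = time.perf_counter()
    result = fn(*args)
    elapsed = time.perf_counter() - start
    print(f"{label}: {elapsed:.2f}s")
    return result

# usage (hypothetical): timed("lightroom import", run_import, image_folder)
```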
 
Nice! Can't wait to see your info. I have capped out my Areca 1220 at 430MB/s - please get the results up soon so I can compare :D.
 

Def - I'm curious also. The Areca I've had for a long while - so the 1680 should be interesting as I expect it to not cap out until 1GB/sec. I'll find the limits of the card though - 12X X25-E's should make it cry (hopefully)

I'm also going to do a comparison with the Vertex firmware for fun - 0112/1199/1275/1370 in all the benches.

Doing that part tonight I think. Weeeeeeee. LOL.
 
I read a rumor that these SSDs are fast when the drive is empty, but if the same drive is half full to near full, the speed drops by at least half. Is that true?
 

It isn't that simple, unfortunately.

It really comes down to "free" blocks on the SSD. If the SSD is new and not very full, many blocks are "free" (never written, so they don't need to be erased first), and writes are faster.

As an SSD ages and most of its blocks have been used, write speed drops.

TRIM support will help fix this, but decent firmware and cache can address it as well, to the point where real-world performance is pretty much unaffected. Windows is small-read-and-write happy. Even today, the "cheap" SSDs like the Vertexes still rock it out hard over any of the spinning disks, based on my personal usage. Hell, I'm running 4X 30GB Vertexes in my workstation instead of the 7X 73GB 15k SAS disks that were in there....
 
Tests being run now include:
-ATTO
-CrystalMark
-HD Tach Read
-HD Tach Writes

Debating:
-IOMeter with a few custom tests for I/O

I'm testing all four Vertex firmware revisions on a 120GBer now.
 
I would be interested in seeing any results you can provide on CPU load under high IOPS conditions.
 

Can do. The "real reason" I have all these SSDs is directly related to high-IOPS testing around SQL 2008, comparing the SSDs to traditional DAS and SAN technologies.

12 or more on the 1680 should give interesting results. I'm also going to see what happens with 12 X25-E's on the 1680, 4 X25-M's on the Areca 1220, and 4 Vertex 30GBers on the ICH10R all at once, perhaps with software RAID 0. We'll see what breaks - it's just for the hell of it.
 
I am interested for the same reason. My experience to date is that I'm starting to bottleneck on the individual server OS's ability to issue enough interrupts to service so many I/Os. I am seeing this manifest as a spike in CPU kernel time once IOPS hit a certain limit on RHEL and Windows. However, most of my testing uses a software DMP layer with SSD on a SAN, connected via a Fibre Channel HBA. I am wondering whether a performance RAID controller and a more "direct attached" approach might help.
 

If you don't mind my asking, in a RDBMS application that needs the sort of scalability you seem to be trying to architect, clustering for HA is often a requirement. How are you reconciling such a need vs. putting a performance RAID controller inside a single box, or does it not apply?
 
It isn't that simple, unfortunately.

It really comes down to "free" blocks on the SSD. If the SSD is new and not very full, many blocks are "free" (never written, so they don't need to be erased first), and writes are faster.

As an SSD ages and most of its blocks have been used, write speed drops.

I don't like the sound of that. It sounds like you are saying the technology is not there yet. I don't want to lose data.

I hate hard drives because the head can crash. Now we have something faster, but data will eventually be lost? Anyhoo, why is it that so many brands advertise:

http://www.newegg.com/Product/Product.aspx?Item=N82E16820233075

MTBF 1M hr. = over 100 years
100+ Year Life Expectancy (MTBF) OEM qualified Samsung controller
 
Do some file copy tests. Make a big folder of <1MB files and copy it on the disk itself, time it, and give us the MB/s score for each setup. Do the same with a large DVD .iso or .mkv file and write down the score.

Also time the installs of all those apps... that should give us a really good look at everyday-usage benefits!!

Thanks!
 
I don't like the sound of that. It sounds like you are saying the technology is not there yet. I don't want to lose data.

I hate hard drives because the head can crash. Now we have something faster, but data will eventually be lost? Anyhoo, why is it that so many brands advertise:

http://www.newegg.com/Product/Product.aspx?Item=N82E16820233075

MTBF 1M hr. = over 100 years
100+ Year Life Expectancy (MTBF) OEM qualified Samsung controller

Data loss on an SSD is more of a graceful failure, but it does exist. You'd know before you lost all your shit - plus, even if it failed, you could still read the data from the drive, just not write to it (in theory).

That MTBF sounds like SLC... MLC, which most consumer drives use, is nowhere near that metric.
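For what it's worth, the "over 100 years" line is just a unit conversion of the MTBF figure, and MTBF is a population-average failure statistic rather than a per-drive lifetime. The conversion itself:

```python
# Convert an advertised MTBF (hours) to years of continuous operation.
# MTBF is a fleet-average failure statistic, not a per-drive lifespan.
HOURS_PER_YEAR = 24 * 365  # 8760

def mtbf_years(mtbf_hours: float) -> float:
    return mtbf_hours / HOURS_PER_YEAR

print(round(mtbf_years(1_000_000), 1))  # 1M-hour MTBF -> 114.2
```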
 
[LYL]Homer said:
In case you missed it, this article was a big hit - it explains this issue, among a lot of other things.

http://anandtech.com/storage/showdoc.aspx?i=3531

While personally I'm not put off by the gradual slowdown of SSDs, I can understand the hesitation. In the benchmarks we've looked at today, for the most part these drives perform better than the fastest hard drives even when the SSDs are well worn. But with support for TRIM hopefully arriving close to the release of Windows 7, it may be very tempting to wait. Given that the technology is still very new, the next few revisions to drives and controllers should hold tremendous improvements.

Drives will get better and although we're still looking at SSDs in their infancy, as a boot/application drive I still believe it's the single best upgrade you can do to your machine today. I've moved all of my testbeds to SSDs as well as my personal desktop. At least now we have two options to choose from: the X25-M and the Vertex.

Thank you for that article - I haven't read anything that long since I read the Fed's testimony to Congress.

The bottom line is summed up in the above article. I was thinking of an SSD because of speed, and the fact that it won't crash.

In no way would I pay more money for performance that gets slower over time. But more importantly, the technology is flawed: 10K writes? You're kidding. For a storage technology that maxes out at 10K writes, I won't use it even if you pay me.

Thank you again. You just saved me $800 this Christmas.
 

Even with the 10k write issue, an SSD should last longer than a standard hard drive in normal use. It isn't that simple - it isn't going to wear out in 6 months. There is a point it will arrive at where it is "slower" than a new drive, but it should not degrade further from there.

BTW: even a "fully worn" SSD (Vertex or X25) is faster than SAS, Raptors, etc etc drive-for-drive.
 
OMG, I am getting so sick of hearing about this, but here is my 2 pennies on the matter.

Yes, there are a few instances where SSDs are slower than hard drives. But 80% of the time, SSDs far exceed the performance of hard drives.

Why would you limit yourself to be faster 20% of the time when you could be faster 80% of the time? The argument makes no sense.

Yes, the write speed on SSDs is not as fast as the read speed. But again, that is one of those 80/20 arguments. In most systems, you spend most of your time reading, so the SSD will be faster 80% of the time.

I have been using SSDs in my 2 main systems for a couple of months now, and I will NEVER go back. The ONLY knock I have on my setups is that they are somewhat expensive. But since I have a bit of money for once in my life, that is NOT an issue for me.

My desktop systems are so fast, that I could not stand to use my work laptop any more. So I stuck a 60 gig OCZ Vertex drive in there, and will never go back. On a single core laptop, Windows loads in 10 seconds, and from the desktop I can be in Lotus Notes in less than 5 seconds. Not that I want to be in Notes, but I have to be for work.

And some of the newer models like the Intel Ms and the OCZ Vertex are even better than the previous generation of drives.

It boggles the mind.

Don
 
Jason, did all the naysayers come to your house and break your equipment? Let's see some results! :D
 

Don, you and me both. I am trying to figure out what is really going on with these people who complain about the industry. My only conclusions are that they have never tried an SSD yet complain about it, or can't afford one, so they spread rumors and attack the industry.

What kills me is when I hear the comment on price per gig. Yes, it may be high. But like any other component in a PC, you pay a premium for the newest and fastest! I have nothing against traditional drives - I use fast SSDs as main drives and load up low-price-per-gig drives for storage. Easy. People need to take their whining somewhere else, IMO.
 
I read a rumor that these SSDs are fast when the drive is empty, but if the same drive is half full to near full, the speed drops by at least half. Is that true?
Sort of...

Once the drive has been fully written (it doesn't have to be full, or close to full, for this to happen), performance will drop - I think it's around 15% for the X25-M and 10% for the X25-E. Performance will not continually degrade, however: it's 100% full speed before being fully written, and then immediately after it's -10 to 15% for the remainder of its life. There are tools and methods of completely clearing the SSDs and "resetting" them, which obviously involves losing all data on them (it's not as simple as a quick format, though).

The high transfer rates of SSDs are, or can be, very nice; however, the real benefits of SSDs are the RANDOM read/write speeds and the <1ms random latencies. This is a very true test of real-world performance. Small (4KB) random reads/writes probably account for the majority of HDD tasks and make programs open instantly. No mechanical setup in the world can even come close to the random speeds and latencies of the decent SSDs. Even five RAID 0'ed 10K VelociRaptors are literally about 10x slower, at best, than the X25-M in real-world performance. Even the slower SSDs (besides JMicron-controlled drives) are magnitudes better for real-world performance than 10K VRs.

Benchmarks meant to show real-world performance absolutely MUST include 4KB random read/write, 4KB IOPS, and random read/write latencies.
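A quick way to sanity-check figures like these is converting between throughput and IOPS at a fixed block size. The helper names below are my own, not from any of the tools mentioned:

```python
# Back-of-envelope conversion between MB/s and IOPS at a fixed block
# size -- handy for cross-checking 4KB random benchmark figures.
def iops_from_throughput(mb_per_s: float, block_kb: float = 4) -> float:
    return (mb_per_s * 1024) / block_kb

def throughput_from_iops(iops: float, block_kb: float = 4) -> float:
    return (iops * block_kb) / 1024

# 22,000 IOPS of 4KB random I/O works out to roughly 86 MB/s:
print(throughput_from_iops(22_000))  # 85.9375
```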
 
Jason, did all the naysayers come to your house and break your equipment? Let's see some results! :D

Heh - nope. Been struggling with the 1680 - not getting anything over 900MB/sec from it. Was expecting more.

I did get 30,000+ IOPS from the SSDs in IOMeter with a 70/30% split. Even 22,000 with it at 100% random 4K. :D

I'll at least post a spreadsheet with the Crystal and HDTune Random results shortly, which is all of the drives on the ICH10R.
 
Here is some data.

This is entirely on the ICH10R - pretty much any drive I could hang off of it: 1X, 2X, 3X, etc. Without question, it is limited to around 600MB/sec.

-Intel Matrix cache was always off. It DEFINITELY has a huge effect, especially on writes, but for the purposes of stable testing I didn't mess with it.

-Always a 128K stripe on RAID.
-Aligned to 128 if it was an SSD.
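The alignment note above can be checked numerically: a partition offset is aligned when it is an exact multiple of the stripe/erase-block boundary. The byte offsets below are illustrative assumptions, not values from this test rig:

```python
# Check whether a partition's starting offset lands on a given
# boundary (128KB here, matching the "aligned to 128" note above).
def is_aligned(offset_bytes: int, boundary_kb: int = 128) -> bool:
    return offset_bytes % (boundary_kb * 1024) == 0

print(is_aligned(1_048_576))  # 1MB offset -> True (a multiple of 128KB)
print(is_aligned(32_256))     # XP's old 63-sector offset -> False
```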

I killed the ICH10R on one of the two i7 boards I have before I realized the northbridge/southbridge heatpipe assembly needed additional cooling. The numbers in this test are a second run, after I actively cooled the chipset. I was getting some STRANGE numbers before!

Spreadsheet only:
http://www.sedura.com/linking/benching/ICH10Rresults.xlsx

Zip file with spreadsheet and all screenshots (8MB)
http://www.sedura.com/linking/benching/ICH10Rresults.zip
 
These kinds of benches are pretty worthless for this type of disk subsystem, but here it goes. I'll be playing extensively with IOMeter, SQLIO, and Benchmark Factory with the 1680 and the X25s in the weeks to come for show and tell.

All X25-E's (12) on the Areca 1680.

510140477_MjNMc-M.jpg


510140480_AVtqa-M.jpg



5 X25-E's on the ICH10R:
510140243_9KJ2U-M.jpg


510140253_rBfZw-M.jpg
 
For those that want to know about photoshop, here is a better test than what I have hardware for:
http://download.intel.com/design/flash/nand/extreme/Photoshop_CS4_Performance_Comparison.pdf

Too many variables on my side - I decided not to waste the time.

Photoshop and Lightroom's differences with SSD are night and day in use - I will not be going back, EVER. I'm selling off all the SATA disks except for the 1.5TB drives pronto.

Especially in a laptop! Wow what a difference in actual use.
 
Here is another interesting exercise: Vertex firmware revisions - all four of them - on a 120GBer. Firmware *matters* on SSDs, even more so than on HDs.

Look at the difference between the firmware most of the reviews out there are based on versus the newest!!!!

Original 0112:
509676146_AhCKG-M.jpg


1199:
509676159_YBz73-M.jpg


1275:
509676173_xpQco-M.jpg


1370:
509676187_9deur-M.jpg
 
Sweet. Thanks for sharing the results of all of your hard work.

I'm gonna have to go digging to see if there are any firmware updates available for my other SSDs.

Don
 

No prob. Not done yet, but I have to head out of town tomorrow, so I needed to put the one workstation I didn't kill back together for the wife to use!

Plan to test extensively on the Areca 1680 next.
 
This is fantastic information - thank you so much for collecting it.

On your 512B and 4K HD Tune tests on the Areca, hitting around 30K IOPS, did you catch what CPU load was? Also, what operating system did you use?
 
Also, in the posted spreadsheet the Areca results are listed as 8x SAS disk... confused.
 

Yeah - I left that in there. There is a column for the controller that was in use. The results for that are worthless based on additional testing in IOMeter. Just ignore it.
 
This is fantastic information - thank you so much for collecting it.

On your 512B and 4K HD Tune tests on the Areca, hitting around 30K IOPS, did you catch what CPU load was? Also, what operating system did you use?

Thanks!

Vista x64 with as much disabled as possible (indexing, fetches, etc) on an Asus P6T6 WS Revolution Motherboard.

In HD Tune, it was so fast it hardly touched CPU - always under 5%.

I did also play in IOMeter with a REALLY nasty test (70/30%, high delayed 3MB bursts, all random, etc.) against the 12 SSDs. It shows CPU; it ran for 10 minutes:
510193486_y8uw6-M.jpg


And here is another, but 100% random non-bursted 4k 70% Read 30% Write.
510193469_wfuRY-M.jpg


I can't do it until next weekend, but I'm happy to pound on either of these two with whatever IOMeter test you'd like to see in the Dell 2950 Dual Quad:
-Areca 1680 with up to 16 SAS disks
-Areca 1680 with up to 12 X25-E SSD's
 
Even with the 10k write issue, an SSD should last longer than a standard hard drive in normal use. It isn't that simple - it isn't going to wear out in 6 months. There is a point it will arrive at where it is "slower" than a new drive, but it should not degrade further from there.

BTW: even a "fully worn" SSD (Vertex or X25) is faster than SAS, Raptors, etc etc drive-for-drive.

I agree w/ your 2nd point. But what does the 10K-write limit mean in practice? I download quite a bit daily, then back up my downloads and delete them from my hard drive. How long does an SSD last if I download daily?

Of course, the 10K writes are counted per spot. But I have no way of knowing which spot has had 10K writes or not.

Another problem w/ SSDs, it seems from that article, is that you need to format the drive to really clear out the free blocks. But formatting the drive wears out the SSD even more.

Further, that patch (I forgot the name) which fixes the problem of SSDs slowing over time is available in Win 7 only. I am very happy w/ XP Pro. I have no intention of buying Win 7 to fix this problem.
 
SSDs, and flash drives in general, have wear leveling built into the controller. Basically, no block is written to a second time until all blocks have been written to once.

So you can write roughly 10,000 times the drive's original capacity in data before it will start to wear out.

So my 30 gig drives should each last for about 300 terabytes of data written, and I have 4 in the RAID.

Don
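The 300TB figure above is straight arithmetic under ideal wear leveling: capacity times the per-cell write-cycle rating. A sketch of that estimate (it deliberately ignores write amplification, which lowers real-world numbers):

```python
# Idealized endurance: total writable data = capacity x cycle rating.
# Ignores write amplification and reserved/spare area, so treat this
# as an upper bound rather than a guarantee.
def endurance_tb(capacity_gb: float, cycles: int = 10_000) -> float:
    return capacity_gb * cycles / 1000  # GB written -> TB

print(endurance_tb(30))  # 30GB drive at 10K cycles -> 300.0 TB
```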
 
That doesn't make any sense.

Say your drive is 90% full, and it's always 90% full, and you only back up and transfer data in the remaining 5 to 10%. Then surely the SSD has no choice but to write to the remaining blocks up to 10K times, since the other 90% is full.
 

Intel's method is to physically move the data on your drive from time to time. This of course increases overall writes to the disk, but it reduces wear on the individual cells.

The alternative is to just buy an SLC drive, but of course that's prohibitively expensive for most.
 
Pertaining to the 10K-write limit and the slowdown issue, is there any indication that the next generation of SSDs will fix these two problems? If so, how soon?
 