Intel S3500 300GB performance

I have purchased a few Intel S3500 300GB SSDs.

The ATTO performance on the smaller read/write transfer sizes is very low (my 10-year-old mechanical drive is about 16 times faster on 0.5 KB writes, for example). Performance doesn't pick up until about 4MB transfer size.

The SSDs have the latest firmware and write caching is enabled. I have tried on both an Intel C220 SATA AHCI controller and an LSI 2308 controller - results are similar.

I am worried about these results as I imagine the performance for smaller writes/reads is very important for general OS and database activity.

Is this normal?

 
I actually work with these in the lab, and I can tell you that the main reason you don't see noticeable performance until 4KB is that the controller is heavily tuned for 4KB. In OS usage, Windows boot times were within 1-2 seconds of just about every other SSD on the market that I've tested, including the Intel 520 and 530, the Samsung 840 Pro and 840 EVO, and the OCZ Vector and Vertex 4. The S3500 drives are very stable and consistent in their performance; it takes a long stretch of heavy usage to get them into a dirty state, and they recover quickly to a clean, steady state.

What is your intended usage for the S3500s? They were actually designed with a read-cache role in mind. You could always clone your current SSD onto one of the S3500s to compare performance, but really you shouldn't have any issues with them.
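
If you want to sanity-check the small-block behaviour yourself outside of ATTO, a raw queue-depth-1 probe is enough to see the per-write latency that dominates those low transfer sizes. Here is a minimal sketch, assuming a Linux box with a scratch disk; the device path is a placeholder, and the writes are destructive:

```python
import mmap
import os
import random
import time

# Minimal queue-depth-1 random-write probe (Linux only: O_DIRECT bypasses the
# page cache so we time the drive, not RAM). DEV is a placeholder and this
# WRITES RAW DATA to it -- point it at a scratch disk you can wipe.
DEV = "/dev/sdX"
BLOCK = 4096        # try 512 vs 4096; some 4K-sector drives reject 512B O_DIRECT I/O
COUNT = 2000
SPAN = 1 << 30      # confine the writes to the first 1 GiB of the device

buf = mmap.mmap(-1, BLOCK)        # anonymous mmap gives the alignment O_DIRECT needs
buf.write(b"\xaa" * BLOCK)

fd = os.open(DEV, os.O_WRONLY | os.O_DIRECT | os.O_SYNC)
offsets = [random.randrange(SPAN // BLOCK) * BLOCK for _ in range(COUNT)]

start = time.perf_counter()
for off in offsets:
    os.pwrite(fd, buf, off)
elapsed = time.perf_counter() - start
os.close(fd)

print(f"{COUNT / elapsed:.0f} IOPS at {BLOCK} B, {elapsed / COUNT * 1e6:.1f} us per write")
```

Comparing BLOCK = 512 against 4096 should make the drive's 4KB sweet spot visible.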
 
Thanks for your reply.

I was planning on putting four of them in a RAID 10 array on the LSI 2308 in IR mode. I know the 2308 does not have any cache, but I figured the Intel SSDs would be fast enough without it, and it avoids the BBU hassles.

They will run SBS 2011 + a Postgres database and serve about 20 users.
 
Here are the RAID 10 results:

[screenshot of ATTO results for the RAID 10 array]
Still worried that the lower speeds might reduce database performance. A single Samsung 840 Pro outperforms this array at 4KB transfer size!
 
Remember that RAID will not increase IOPS for small transfer sizes significantly.
 
@omniscence, striping RAID can increase IOPS roughly linearly as queue depth grows, and for writes it takes less queue depth to scale. Of course, desktop applications are limited in IOPS, and the SSD is much too fast for the system to really use all the interleaved channels optimally.
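
To put rough numbers on that, here is an idealized back-of-envelope model; the per-drive figure is an assumption for illustration, not anything measured from an S3500:

```python
# Idealized IOPS ceilings for a 4-drive RAID 10 as queue depth grows.
# PER_DRIVE_IOPS is an assumed round number, and real controllers
# will land below these ceilings.
PER_DRIVE_IOPS = 20_000
DRIVES = 4                      # RAID 10 = 2 mirror pairs striped together

for qd in (1, 2, 4, 8, 16, 32):
    # Reads can be serviced by either copy, so up to all 4 drives work in parallel.
    read_iops = PER_DRIVE_IOPS * min(qd, DRIVES)
    # Each logical write occupies both drives of a mirror pair, so only
    # DRIVES / 2 writes can be in flight at once -- the ceiling arrives sooner.
    write_iops = PER_DRIVE_IOPS * min(qd, DRIVES // 2)
    print(f"QD={qd:2d}: up to {read_iops:,} read IOPS, {write_iops:,} write IOPS")
```

The write ceiling arrives at half the read queue depth because each logical write occupies both drives of a mirror pair, which is why writes need less queue depth to scale.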
 
Greetings

Performance doesn't pick up until about 4MB transfer size.

Don't you mean 4KB and not 4MB?

The ATTO performance on the smaller read/write transfer sizes is very low

How is this relevant to your intended application? Because if you read further...

I actually work with these in the lab, and I can tell you that the main reason you don't see noticeable performance until 4KB is that the controller is heavily tuned for 4KB

Interesting information, good to know and much appreciated...

They will run SBS 2011 + a Postgres database and serve about 20 users.

So presumably you will format the volume as NTFS; will you let the cluster size default to 4KB, or choose a larger one? 8KB, 16KB, 32KB and 64KB are also available for NTFS. Postgres uses an 8KB block size by default, so it would make sense to increase the cluster size to at least 8KB; however, I'm not familiar with SBS, so I don't know what block size it uses or even whether it can be changed.
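
If you want to confirm the block size on your own build, the server reports its compiled-in value. A minimal sketch with psycopg2; the connection details are placeholders for your environment:

```python
# Ask a running server for its compiled-in block size (stock builds report 8192).
# Connection parameters are placeholders for your environment.
import psycopg2

conn = psycopg2.connect(host="localhost", dbname="postgres",
                        user="postgres", password="secret")
cur = conn.cursor()
cur.execute("SHOW block_size;")
print(cur.fetchone()[0], "bytes")   # expect 8192 on an unmodified build
conn.close()
```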

What kind of Postgres database will it be? If it's an OLTP one, then increasing the block size even more might not be a good idea; on the other hand, if it's a data warehouse where full table scans are the most common operation, then a larger block size might be better (with the cluster size increased to match it).

The wasted space in NTFS per file will be (cluster size / 2) on average, so unless you have lots of small files (millions of small JPGs, say), increasing the cluster size is probably a good idea in its own right anyway.
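
To put a rough figure on that, here's a toy calculation; the file count is made up purely for illustration:

```python
# Expected NTFS slack: each file wastes (cluster_size / 2) bytes on average.
# FILE_COUNT is a made-up example value, not from any real volume.
FILE_COUNT = 200_000

for cluster in (4096, 8192, 16384, 32768, 65536):
    waste_mb = FILE_COUNT * (cluster / 2) / 2**20
    print(f"{cluster // 1024:2d} KB clusters: ~{waste_mb:,.0f} MB of slack")
```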

The only downside (but only if you were considering using it) is that Windows NTFS compression stops working once the cluster size exceeds 4KB. Also, if Postgres can use large block sizes while SBS only uses small ones, then I suggest separate volumes for each, formatted with different cluster sizes as appropriate.

Cheers
 
Thanks for your reply.

Don't you mean 4KB and not 4MB?

Yes, sorry.

How is this relevant to your intended application? Because if you read further...

I am not sure how this will affect real-world speeds for my intended application; I am just trying to understand why reading/writing a 0.5KB file on an SSD from 2013 is slower than on a 10-year-old mechanical drive.

So presumably you will format the volume as NTFS; will you let the cluster size default to 4KB, or choose a larger one? 8KB, 16KB, 32KB and 64KB are also available for NTFS. Postgres uses an 8KB block size by default, so it would make sense to increase the cluster size to at least 8KB; however, I'm not familiar with SBS, so I don't know what block size it uses or even whether it can be changed.

I was under the impression cluster size had little to no impact on performance, so I left it at the default 4KB. I just did some testing myself with different sizes and the results all came out similar.

What kind of Postgres database will it be? If it's an OLTP one, then increasing the block size even more might not be a good idea; on the other hand, if it's a data warehouse where full table scans are the most common operation, then a larger block size might be better (with the cluster size increased to match it).

OLTP. Some preliminary testing showed pretty poor performance on our database. But since I am also trying to switch from PostgreSQL on Linux to PostgreSQL on Windows, this could just be down to a lack of configuration tuning, or maybe PostgreSQL simply performs poorly on Windows, so I can't blame the drives yet. Before I spend too much time on this, though, I would like to make sure the drives perform properly.
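
One way to separate the drives from the Postgres configuration is to measure raw commit latency, since the per-transaction WAL flush is usually what gates OLTP on a misbehaving volume. A minimal sketch with psycopg2; the table and connection details are placeholders:

```python
# Minimal commit-latency probe: one-row INSERT + COMMIT in a tight loop.
# Table name and connection details are placeholders; this approximates
# the per-transaction WAL flush cost that dominates OLTP write latency.
import time
import psycopg2

conn = psycopg2.connect(host="localhost", dbname="testdb",
                        user="postgres", password="secret")
cur = conn.cursor()
cur.execute("CREATE TABLE IF NOT EXISTS commit_probe (id serial PRIMARY KEY, v int)")
conn.commit()

N = 500
start = time.perf_counter()
for i in range(N):
    cur.execute("INSERT INTO commit_probe (v) VALUES (%s)", (i,))
    conn.commit()                       # forces a WAL flush per transaction
elapsed = time.perf_counter() - start
print(f"{N / elapsed:.0f} commits/s, {elapsed / N * 1e3:.2f} ms per commit")
conn.close()
```

A drive like the S3500, with its capacitor-backed cache, should acknowledge flushes quickly, so roughly sub-millisecond commits would be a healthy sign; many milliseconds per commit would point at the sync path rather than at Postgres itself.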
 