That would explain things, then, given that stripe size. You're taxing the controller with a lot of calculations, plus the drives are writing data out in 16KB chunks, so their throughput will be lower (compare drive benchmarks for 16KB writes vs. 1MB or larger writes). In this case you're being limited by spindle speed and possibly maxing out the controller's processor.
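To make the point above concrete, here is a minimal sketch (hypothetical numbers, assuming a 4-drive RAID5 with 3 data drives) of how a single 1MB write breaks down into per-drive I/Os at different stripe sizes. Each drive's write size equals the stripe (chunk) size, so a small stripe turns one large write into many small ones:

```python
# Hypothetical illustration: how one large write is split across a
# 4-drive RAID5 array (3 data drives + 1 parity) at different stripe sizes.
# Each drive writes chunks of exactly the stripe size, so a small stripe
# means many small writes, which spinning drives handle much more slowly.

def stripe_rows(write_bytes, stripe_bytes, data_drives=3):
    """Number of stripe-size writes each data drive must perform."""
    full_stripe = stripe_bytes * data_drives  # data held by one full stripe row
    return -(-write_bytes // full_stripe)     # ceiling division

one_mib = 1024 * 1024
for stripe_kib in (16, 256):
    rows = stripe_rows(one_mib, stripe_kib * 1024)
    print(f"{stripe_kib:>3} KB stripe: {rows} writes of {stripe_kib} KB per drive")
# -> 16 KB stripe: 22 writes of 16 KB per drive
# -> 256 KB stripe: 2 writes of 256 KB per drive
```

So with the 16KB stripe the drives see 22 separate 16KB writes each instead of 2 large ones, which is exactly the small-write benchmark regime mentioned above.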
I wrote to Adaptec support and that's what they've told me as well.
They recommended a stripe size of 256KB (this is what the controllers were optimized for).
They said their engineers worked hard to make the default values perform as well as possible, so I should revert the controller's settings to the defaults (I know I made some other changes as well). I will also try creating another array with a 256KB stripe size, see what my results are in terms of performance, and report back. I should probably see an increase in performance.
How does the stripe size affect the space used by small files?
For example, if I'm writing an 8KB file, will it use a whole 256KB stripe, or will other files be "squeezed" onto the same stripe as well? How does this work?
The way I understood it, it works like this:
Assuming I have a RAID5 array with 4 drives, 3 of them hold data and one holds parity (actually the data and parity stripes rotate across all 4 drives). Take a 768KB file, say: it is divided into 3 chunks of 256KB and written to the 3 data drives, and the 4th drive gets the parity (an XOR) of the 3 chunks written on the other drives. Files larger than 768KB are split into 768KB parts, and the process repeats.

However, I wonder what happens with the remainder of the data (not all files divide evenly into 768KB)? How is that data processed? Will it occupy a whole 256KB chunk, or will that chunk be "shared" with other files? If chunks aren't shared, I see no point in choosing a small stripe size for the array; you might as well create the largest one. So I assume the extra space is lost.
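For what it's worth, the "sum" on the parity drive is a bitwise XOR, and here is a minimal sketch (toy 4-byte chunks, not real 256KB stripes) showing why that works: XOR of the three data chunks gives the parity, and XOR of any three surviving chunks rebuilds the fourth:

```python
# Toy sketch of RAID5 parity: parity = XOR of the data chunks,
# so any one lost chunk can be rebuilt from the remaining three.

def xor_chunks(*chunks):
    """XOR equal-length byte chunks together (RAID5 parity math)."""
    out = bytearray(len(chunks[0]))
    for chunk in chunks:
        for i, b in enumerate(chunk):
            out[i] ^= b
    return bytes(out)

d0, d1, d2 = b"AAAA", b"BBBB", b"CCCC"   # three data "chunks"
parity = xor_chunks(d0, d1, d2)          # written to the 4th drive

# If the drive holding d1 fails, its chunk is recovered
# from the two surviving data chunks plus the parity:
rebuilt = xor_chunks(d0, d2, parity)
assert rebuilt == d1
```

This only covers the parity math, not how the filesystem allocates space inside a chunk, which is the actual question above.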
Does anyone know exactly how this works?