Ceph Disk Controller Throughput Article

Nite_Hawk

Hi Guys,

Just wanted to pass along an article I wrote that looks at how different disk controllers perform in a couple of different configurations with Ceph. The benchmark sends objects directly to the OSDs, so these tests are a bit lower level than something like FIO. I've got a ton of profiling and blktrace data to analyse and more tests to run, so hopefully I'll have some deeper articles coming out in the future. Let me know if I screwed anything up or should have done anything differently. :)

http://ceph.com/community/ceph-performance-part-1-disk-controller-write-throughput/
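
Roughly speaking, the write tests boil down to driving rados bench against a pool, something like the sketch below (just a sketch in Python shelling out to rados bench; the pool name, run length, object sizes, and concurrency are placeholders, not the exact settings from the article):

#!/usr/bin/env python3
# Rough sketch: run "rados bench" write tests across a few object sizes
# and print the summary output. Pool name, duration, sizes, and
# concurrency below are placeholders, not the article's exact settings.
import subprocess

POOL = "benchpool"                        # assumed pool name
SECONDS = 60                              # length of each write run
OBJECT_SIZES = [4096, 131072, 4194304]    # 4KB, 128KB, 4MB objects
CONCURRENT_OPS = 16                       # in-flight operations per run

for size in OBJECT_SIZES:
    cmd = ["rados", "-p", POOL, "bench", str(SECONDS), "write",
           "-b", str(size),               # object size in bytes
           "-t", str(CONCURRENT_OPS)]     # concurrent operations
    print("Running:", " ".join(cmd))
    result = subprocess.run(cmd, capture_output=True, text=True)
    # rados bench prints a summary (bandwidth, average latency, etc.)
    print(result.stdout)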

Thanks,
Mark
 
Interesting tests. Did you gather any information on latency while the RADOS instances were hitting the server? What stripe size did you use on the array for the RAID 0 tests?

Do you plan to do tests with all 36 disks in one Ceph cluster? How about benchmarking one OSD versus several using the same number of disks?
 
Hi!

I do have the latency information that RADOS bench reports; perhaps I'll do another quick article looking at that. I didn't gather per-operation latency since collecting it can have a performance impact. You can get it if you include high enough debug levels in the logs.
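
For reference, per-op timing shows up in the OSD logs once the debug levels are turned up, e.g. something along these lines in ceph.conf (these particular levels are just an example, and logging this verbosely will itself drag throughput down):

[osd]
    debug osd = 20
    debug filestore = 20
    debug journal = 20
    debug ms = 1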

RAID stripe size was 64k. Next time I should probably pay more attention to making sure that max_sectors_kb is set up to avoid read-modify-writes. At this point I don't think it matters much, though, until we get the per-OSD throughput up when using high-performance block devices behind them.
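
Something like this would make a quick sanity check (just a sketch; the device name and stripe geometry are placeholders, not my actual setup):

#!/usr/bin/env python3
# Compare a block device's max_sectors_kb (the kernel's per-request size
# cap) against the RAID set's full-stripe size, so large writes don't get
# chopped on non-stripe boundaries. Device name and geometry below are
# placeholders, not the setup from the article.

DEVICE = "sda"        # assumed device name for the RAID 0 volume
STRIPE_KB = 64        # per-disk stripe size (64k, as above)
DATA_DISKS = 8        # number of disks in the RAID 0 set (example)

full_stripe_kb = STRIPE_KB * DATA_DISKS

with open("/sys/block/%s/queue/max_sectors_kb" % DEVICE) as f:
    max_sectors_kb = int(f.read().strip())

print("max_sectors_kb = %d, full stripe = %d KB" % (max_sectors_kb, full_stripe_kb))
if max_sectors_kb < full_stripe_kb or max_sectors_kb % full_stripe_kb != 0:
    print("warning: request size cap is not a multiple of the full stripe; "
          "large writes may be split on non-stripe boundaries")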

Yes, I do plan to test a large disk configuration. It might have to be 32 disks since I accidentally only bought 4 SAS9207s (and the system isn't too happy using the onboard SAS2208 controller with 4 SAS2308s installed). I've already got some test results with 24 OSDs and 8 SSDs for journals... :)
 