coo-coo-clocker
Limp Gawd
- Joined: Sep 6, 2007
- Messages: 183
With the bandwidth of dedicated SAS/SATA controller cards and the latest generation of drives climbing up and up, I began wondering: where are the pinch points in the data path?
I guess to start, I'd need to consider the mobo chipset. These days, I'd probably want something with an Intel P45 or better to get PCIe 2.0, which gives a theoretical bandwidth of 500 MB/s per lane. Bandwidth from the northbridge to the CPU over the 1333 MT/s FSB is 10.6 GB/s - not likely to be a bottleneck anytime soon!
Then, there's the issue of the motherboard layout and how many PCIe slots it has. Theoretical maximum bandwidth for an eight-lane PCIe 2.0 slot would be 4 GB/s (8 x 500 MB/s). The nicer dedicated HDD controllers these days want PCIe x8 slots, right?
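Just to sanity-check those numbers, here's a quick back-of-envelope calc (Python; the 8b/10b encoding overhead and the 1333 MT/s FSB figure are my assumptions for a P45-class board):

```python
# Back-of-envelope PCIe 2.0 and FSB bandwidth math.
# PCIe 2.0 signals at 5 GT/s per lane with 8b/10b encoding,
# so only 8 of every 10 bits on the wire carry actual data.
LANE_RATE = 5.0e9        # transfers (bits) per second, per lane
ENCODING = 8.0 / 10.0    # 8b/10b line-code efficiency

lane_mbps = LANE_RATE * ENCODING / 8 / 1e6   # bits -> bytes -> MB/s
print(f"PCIe 2.0, one lane: {lane_mbps:.0f} MB/s")            # ~500 MB/s
print(f"PCIe 2.0, x8 slot:  {8 * lane_mbps / 1e3:.1f} GB/s")  # ~4.0 GB/s

# P45 front-side bus: 1333 MT/s, 64 bits (8 bytes) wide.
fsb_gbps = 1333e6 * 8 / 1e9
print(f"FSB to CPU:         {fsb_gbps:.1f} GB/s")  # ~10.7 GB/s (the marketed 10.6)
```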
Next is the controller card itself. The latest PCIe cards such as the Areca ARC-1280 claim sustained RAID-5 read throughput of 811 MB/s. But that was in an 'internal testing' environment, so it's probably more conservative to assume something south of that number as our pinch point, say 750 MB/s.
Down now to the drives and the choice of interface. SAS and SATA 3.0 Gb/s are the higher-bandwidth options here, right? I'm not looking at FC - I'll leave that for the enterprise storage guys! Both are looking to double their bandwidth sometime in 2009 (or so say most of the documents I'm seeing), but for now it's 3.0 Gb/s on the wire, which works out to about 300 MB/s of usable bandwidth once you account for 8b/10b encoding overhead.
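Same encoding story as PCIe - a quick sketch of the link math (the 8b/10b overhead is the key assumption here):

```python
# SATA/SAS 3.0 Gb/s link: also 8b/10b encoded, like PCIe.
line_rate = 3.0e9                              # bits/sec on the wire
usable_mbps = line_rate * (8 / 10) / 8 / 1e6   # strip encoding, bits -> bytes
print(f"Usable per 3.0 Gb/s link: {usable_mbps:.0f} MB/s")  # 300 MB/s
```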
We can have multiple drives hanging off these controllers, so it seems that if each SAS or SATA 3.0 Gb/s drive could actually hit the theoretical max of its link, it would only take three of them (3 x 300 MB/s = 900 MB/s) to max out a 750 MB/s controller.
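The drive count works out like this (a sketch; the 750 MB/s ceiling is the assumed controller figure from above, and the ~100 MB/s sustained rate is my rough guess for a current 7200 rpm drive):

```python
import math

CONTROLLER_MBPS = 750   # assumed sustained ceiling for the card
LINK_MBPS = 300         # usable bandwidth of one 3.0 Gb/s link

# Drives running flat-out at link speed needed to saturate the card:
print(math.ceil(CONTROLLER_MBPS / LINK_MBPS))    # 3

# Real platters sustain nowhere near link speed, though.
# Rough guess: ~100 MB/s sustained for a current 7200 rpm drive.
DRIVE_MBPS = 100
print(math.ceil(CONTROLLER_MBPS / DRIVE_MBPS))   # 8
```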
Is this essentially correct? Have I missed anything? Is there a good place that already explains all this?!?