Throughput: where are the pinch points?

coo-coo-clocker

Limp Gawd
Joined
Sep 6, 2007
Messages
183
With the bandwidth of dedicated SAS/SATA controller cards and the latest generation of drives climbing up and up, I began wondering: where are the pinch points in the data path?

I guess to start, I'd need to consider the mobo chipset. These days, I'd probably want something with an Intel P45 or better to support PCIe 2.0, which gives a theoretical bandwidth of 500 MB/s per lane. Bandwidth from the northbridge to the CPU is 10.6 GB/s - not likely to be a bottleneck anytime soon!

Then, there's the issue of the motherboard layout and how many PCIe slots it has. Theoretical maximum bandwidth for an 8-lane PCIe 2.0 slot would be 4 GB/s. The nicer dedicated HDD controllers these days want PCIe x8 slots, right?
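
A quick back-of-the-envelope in Python, just to keep my own numbers straight (theoretical rates only, no protocol overhead beyond the 8b/10b line coding):

# rough theoretical numbers, no protocol overhead beyond 8b/10b
PCIE2_PER_LANE_MB = 500              # PCIe 2.0: 5 GT/s with 8b/10b -> ~500 MB/s per lane

def pcie2_slot_bw_mb(lanes):
    """Theoretical one-way bandwidth of a PCIe 2.0 slot, in MB/s."""
    return lanes * PCIE2_PER_LANE_MB

fsb_gbs = 1333e6 * 8 / 1e9           # 1333 MT/s front-side bus, 8 bytes wide

print(pcie2_slot_bw_mb(8))           # x8 slot  -> 4000 MB/s (4 GB/s)
print(pcie2_slot_bw_mb(16))          # x16 slot -> 8000 MB/s (8 GB/s)
print(round(fsb_gbs, 2))             # ~10.66 GB/s between northbridge and CPU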

Next is the controller card itself. The latest PCIe cards such as the Areca ARC-1280 claim sustained RAID-5 read throughput of 811 MB/s, but that was in an 'internal testing' environment. It's probably more conservative to assume somewhere south of that number as our pinch point, say 750 MB/s.

Down now to the drives and the choice of interface. SAS and SATA 3.0 Gb/s are two of the higher-bandwidth interfaces, right? I'm not looking at FC - I'll leave that for the enterprise storage guys! :D Both are looking to double their bandwidth sometime in 2009 (or so say most of the documents I'm seeing), but for now it's 3.0 Gb/s, or about 375 MB/s raw.

We can have multiple drives connecting to these controllers, so it seems that if each SAS or SATA 3.0 Gb/s drive could reach the connection's theoretical max, it would only take two drives to max out the controller.
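
Doing that arithmetic in Python as a sanity check (the 750 MB/s controller ceiling is just my assumption from the Areca numbers above):

import math

CONTROLLER_MB = 750   # assumed sustained ceiling for the controller card
PORT_MB = 375         # raw 3.0 Gb/s per port, ignoring encoding overhead

print(math.ceil(CONTROLLER_MB / PORT_MB))   # -> 2 drives, if each port could really run flat out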

Is this essentially correct? Have I missed anything? Is there a good place that already explains all this :D ?!?
 
um, the drives themselves.
SATA 3 Gb/s works out to about 300 MB/s of usable bandwidth.
No drive can hit above 150 MB/s by itself as it is.

The drive is the bottleneck. *Doubling* the fastest drive available is gonna max out a channel (barely); nothing gets there by itself.
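
Quick math, if you want it - the 80 MB/s single-drive figure below is just a ballpark assumption, not a spec:

# 3.0 Gb/s link with 8b/10b encoding: 10 bits on the wire per byte of data
link_bps = 3.0e9
usable_mb = link_bps / 10 / 1e6      # -> 300 MB/s usable per link

assumed_drive_mb = 80                # assumed sustained rate for a fast single drive (ballpark)
print(usable_mb)                     # 300.0
print(usable_mb / assumed_drive_mb)  # link has ~3.75x more bandwidth than one drive can use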
 
So it's that easy?
A couple of specific questions then.
(1) Assuming the mobo supports it, will you have problems running a big GPU (x16) and an x8 controller together? Are there any bandwidth contention or memory issues?
(2) If you load up several fast drives in your array, does the controller bandwidth get saturated?
(3) I've read a lot lately about controller and drive firmware being oriented towards specific workloads. How much does this really matter? Are there any non-synthetic benchmarks you guys trust?

Thanks.
 
(1) PCIe is point to point. Every lane can be maxed out and they don't interfere. Bandwidth to the CPU is limited by the FSB; for current-gen Intels, that's around 10 GB/s.
(2) At the controller's chip, depending on the array type - and it depends on more than that.
(3) Depends on the drive, the workload, and how similar the workload is to the one the firmware was prepared for.

WTF are you trying to do, anyway? If it's general curiosity, read more. If it's something specific, what is it?
 
The limiting factor is the controller card. My Areca 1280ML will do 750 MB/s - you just need enough drives. With a single drive, however, the drive is going to be the limiting factor instead. If you really do want speeds like that, it is going to run you a couple thousand USD at minimum.
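
To put a rough number on 'enough drives' - assuming something like 80 MB/s sustained per drive, which is a ballpark guess and not a measurement:

import math

controller_mb = 750     # sustained read ceiling quoted for the ARC-1280ML
per_drive_mb = 80       # assumed sustained read per drive (ballpark, not measured)

print(math.ceil(controller_mb / per_drive_mb))   # ~10 drives before the card, not the disks, is the wall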
 
I will be setting up 4x 15K SCSI drives on a 320-2e tonight and can see what kind of limits I get. If it's not limited, I may pick up a few more of the 15K drives. (I have cabling room for 6 on each channel)

*Yes, the speed is overkill and I have no real need for it*
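
Rough math on the channel side first, assuming around 100 MB/s sustained per 15K drive (ballpark only):

U320_CHANNEL_MB = 320    # shared bus bandwidth per U320 channel
per_drive_mb = 100       # assumed sustained rate per 15K drive (ballpark)

for drives_on_channel in (2, 4, 6):
    demand = drives_on_channel * per_drive_mb
    print(drives_on_channel, demand, min(demand, U320_CHANNEL_MB))
# 2 -> 200 MB/s asked for, fits under the bus
# 4 -> 400 MB/s asked for, capped at 320 by the channel
# 6 -> 600 MB/s asked for, capped at 320 by the channel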
 
Your drive is the limit; with multiple drives, your controller is the limit.
 
The controller can be the limit, depending on how many drives and which drives you add to it.
 
Thank you for the useful replies.
I am trying to educate myself, and I am researching storage solutions for two new systems I am putting together: (1) a new (predominantly video editing) workstation and (2) a new home fileserver. In addition to the disk technologies, I've spent time reading about WHS, because I also am WAY overdue to get some backups going on my home network.
 
The operating system is a bottleneck.

In Windows XP:

I can copy a 4GB file from one hard drive to another in 2 minutes.

Doing two copies at the same time takes much more than twice the time (more than the 4 minutes you'd expect). Each process ends up waiting for the other.

Anytime you get near the physical transfer rate of the system, the software makes poor scheduling decisions - whenever a decision needs to be made.
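
If anyone wants to try reproducing this, here is a rough Python timing sketch - the paths are placeholders and the results will obviously depend on your drives and OS:

# rough sketch: time one copy, then two copies running at once
import shutil, threading, time

SRC_A, DST_A = r"D:\test\big_a.bin", r"E:\test\big_a.bin"   # placeholder paths
SRC_B, DST_B = r"D:\test\big_b.bin", r"E:\test\big_b.bin"   # placeholder paths

def timed_copy(src, dst, out):
    start = time.time()
    shutil.copyfile(src, dst)
    out[dst] = time.time() - start

times = {}
timed_copy(SRC_A, DST_A, times)           # single copy, nothing else running
print("single copy:", times[DST_A])

threads = [threading.Thread(target=timed_copy, args=(SRC_A, DST_A, times)),
           threading.Thread(target=timed_copy, args=(SRC_B, DST_B, times))]
start = time.time()
for t in threads: t.start()
for t in threads: t.join()
print("two copies at once:", time.time() - start)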
 
Yeah, I wish Windows had some kind of queue system for file transfers... open a little dialog box so you could reorder them, force one to start, maybe even pause one - that would be cool.
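
You could fake something like that with a script in the meantime - here is a rough Python sketch (placeholder paths) just to show the idea of copies queued up one at a time:

# toy "transfer queue": one worker drains copy jobs in order, one at a time
import queue, shutil, threading

jobs = queue.Queue()

def copy_worker():
    while True:
        src, dst = jobs.get()
        try:
            shutil.copyfile(src, dst)
        finally:
            jobs.task_done()

threading.Thread(target=copy_worker, daemon=True).start()

# queued in order; the second copy doesn't start until the first finishes
jobs.put((r"D:\video\clip1.avi", r"E:\backup\clip1.avi"))   # placeholder paths
jobs.put((r"D:\video\clip2.avi", r"E:\backup\clip2.avi"))
jobs.join()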
 