PCI-Express controller options?

AreEss

I'm looking for more PCI-Express SATA and SCSI boards to look at, preferably x4 or x1. I've evaluated the Ultra320-2E and the Areca PCIe x8 so far. It seems like there aren't any controllers out there actually using PCI-Express yet, save LSI and Areca. Anyone seen any PCIe x1 or x4 cards they could recommend? SCSI or SATA - this disk build is being done backwards.
 
You've basically called it. Those are the big two with PCIe controllers out. Right now the server market is still on PCI-X, which is where a lot of the high-end HBAs are. It's a proven technology, whereas PCIe is still relatively new. Probably a year or two more and you'll start seeing more disk-based stuff for PCIe.
 
I never looked into the SATA stuff too much, but the LSI card is about it for PCIe SCSI cards. You can find that same card sold as a Dell PERC 4e or an Intel SRCU42E. All three are still expensive but great performers. I'm still waiting to see a U320 PCIe adapter with no RAID functionality, but it looks like a long wait.
 
The 320-2E actually isn't a PCI-Express part. It's a PCI-Express slot, yes, but bridged back to 100MHz PCI-X. (Read: 320-2, take three. Not that I'm complaining.) Quick numbers below on what that bridge costs you.
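Back-of-the-envelope, assuming PCIe 1.0 per-lane rates and a 64-bit PCI-X segment - my arithmetic, not vendor specs:

# Bus ceilings, illustrative only
pcix_100 = 100e6 * 8            # 100MHz x 64-bit PCI-X: 800 MB/s, shared both directions
pcie_x8  = 8 * 250e6            # ~250 MB/s per lane, per direction, on PCIe 1.0
print(f"PCI-X 100MHz bridge ceiling: {pcix_100/1e6:.0f} MB/s, shared")
print(f"PCIe x8 slot itself:         {pcie_x8/1e6:.0f} MB/s each way")

However fast the slot is, the card tops out at the PCI-X figure.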
If it weren't for experience with the HPT370, I might not violently retch when shown a RocketWreck.

Ah well, was hoping to be able to avoid the PCI-X route, just because the K8WE has a moronic slot layout. Looks like I'm not going to get that lucky.
 
I'm interested too, though for a different reason: I'm considering a new Power Mac, and they switched to PCIe, so the PCI-X boards aren't an option anymore (I want the Quad 2.5, not the last Dual 2.7).
 
I wouldn't expect to see a native PCI-E U320 SCSI controller out there. As far as I know, nobody is working on them. You will be able to find a few bridged solutions, though, with a couple more coming out soon. All of the ones I have seen operate at x4 or x8 width, but there may be some x1 controllers out there that I haven't seen.
 
UICompE02 said:
I wouldn't expect to see a native PCI-E U320 SCSI controller out there. As far as I know, nobody is working on them. You will be able to find a few bridged solutions, though, with a couple more coming out soon. All of the ones I have seen operate at x4 or x8 width, but there may be some x1 controllers out there that I haven't seen.

Actually, LSI has some native parts that are just a few months from production right now. The problem being: see previous sentence. ;)
The bridged solutions annoy me, in part because of the dramatic increase in heat. I would love to know how they can make a simple bridge so hot; an IOP332 isn't that hot, and it's got two full PCI buses before ATUs and such. But yeah, no x1 SCSI parts - which is not much of a surprise. Doesn't take much to choke down a single lane, especially with real drives. Quick math below.
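Putting assumed numbers on "doesn't take much" - estimates, not measurements:

# Saturating a single PCIe 1.0 lane, illustrative only
lane_usable = 250 * 0.8          # ~250 MB/s raw per direction, ~20% assumed lost to packet overhead
drive_sustained = 60             # assumed sustained MB/s for a fast current drive
print(f"usable x1 bandwidth: ~{lane_usable:.0f} MB/s")
print(f"drives to fill it:   ~{lane_usable / drive_sustained:.1f}")

Three or four real drives in a stripe and the lane is the bottleneck, never mind a whole U320 channel.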

Really wish there were someone other than SillyIm and RocketWreck doing SATA at x1. I could cope with SATA-II if somebody decent made a single-lane card that could at least handle RAID 1 properly.
 
UICompE02 said:
As far as I know, nobody is working on them. You will be able to find a few bridged solutions, though, with a couple more coming out soon.
AreEss said:
Actually, LSI has some native parts that are just a few months from production right now.
Guess who works for LSI? ;)

Edit: hint - not me.

 
unhappy_mage said:
Guess who works for LSI? ;)

Edit: hint - not me.

Haha, you're quite right there. I'm sure there are no native PCI-E SCSI controllers in the pipeline.
 
AreEss said:
The bridged solutions annoy me, in part because of the dramatic increase in heat. I would love to know how they can make a simple bridge so hot; an IOP332 isn't that hot, and it's got two full PCI buses before ATUs and such.

Yeah, the Intel IOP/bridges run very hot for some reason. PLX has some nice bridge-only solutions that may be making their way into the non-RAID controllers; they hardly put off any heat.
 
UICompE02 said:
Haha, you're quite right there. I'm sure there are no native PCI-E SCSI controllers in the pipeline.

PCI-Express RAID, no. PCI-Express SCSI, yes. Nothing concrete, but more than a little noise about it. Of course, there's the catch: oh boy, SCSI on PCI-Express, but the IOP doesn't do it, so no RAID. I'm not expecting to see anything concrete till 1Q06, and most likely it'll be pushed as an "integrated SCSI" solution with a ZCR option over PCI-X initially. (I'd LOVE to know how they're gonna deal with that bus jump...)

UICompE02 said:
Yeah, the Intel IOP/bridges run very hot for some reason. PLX has some nice bridge-only solutions that may be making their way into the non-RAID controllers; they hardly put off any heat.

Well, the Intel IOP33x's run at 800MHz, so do the math there - a sketch below. Even if they're at 5V (IIRC), most of it's accounted for by clock, but moving away from the 3.3V i960's was a giant leap backwards in my opinion. (You could run those things in insane ambients without problem.) And they're not really bridges; bridging is just a function of the IOP. That dates back to the i960; an i960RD has two PCI buses before your ATU (Address Translation Unit) and such. The PCI bridging functions in them are so mature at this point that I daresay they contribute almost zero to the heat. It's all the clock that does it.
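The math in question: CMOS dynamic power scales roughly as P ~ C * V^2 * f. The clocks and core voltages below are my assumptions for illustration, not datasheet figures:

# Relative dynamic power, capacitance assumed comparable between parts
def rel_power(v_core, f_hz):
    return v_core**2 * f_hz

i960rd = rel_power(3.3, 100e6)     # assumed ~100MHz, 3.3V i960RD-class part
iop33x = rel_power(1.5, 800e6)     # assumed ~1.5V core at 800MHz for an IOP33x

print(f"clock alone:     {800e6/100e6:.0f}x")
print(f"net power ratio: ~{iop33x/i960rd:.1f}x")

The 8x clock jump dominates the numerator; how it nets out depends entirely on the real core voltage, which I'm guessing at here.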
What bugs the everloving crap out of me is that LSI's moved over to the IOP33x's. So I honestly do not know why they have a bridge. There is no doubt that what is on the 320-2E is a bridge, but LSI should be able to move from the 331 to 332/333 with native 8x PCI-Express without too much difficulty. I don't know why they didn't; maybe they felt the firmware wasn't mature enough with regard to PCI-Express, maybe they don't want to shell out for the 332/333, I really don't know. Maybe there are pin differences - I honestly haven't reviewed the technical docs on the 332/333 yet. The 332/333's do have both PCI-Express and PCI-X, so there's no question of being able to connect with the 53C1020's and 1030's, albeit at 133/64.

Getting back to products in the pipeline, there are native PCI-E RAID solutions coming based on the IOP332 and IOP333. I don't expect to see many, if any, till sometime next year. PCI-Express is a very different beast with regards to electrical signalling and such, and I suspect it's tripping some of them up severely. The other possibility is that the IOP332/IOP333 just doesn't have enough lanes, or requires substantial programming changes that they're still adjusting to. Either way, the IOPs are out there and available, and really we're all just waiting for them to put it together on the cards. What exactly we'll see is anyone's guess. I would bet money that an updated 320-2E is in the works; same card, sans bridge. I'd also bet money that everyone else is working on the same things. And I'd bet money that within a year we will see a 12x or 16x RAID controller or HBA - most likely a 4Gbit+ FC-AL part, or a quad-channel U320 at 16x. The quad would likely use PCI-Express's happy-go-everywhere lane capabilities and a PLX switch to split it on-card into 2x8 lanes. The FC-AL part would most likely be a straight 16x non-bridged/switched design. (Lane math below.)
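Sanity-checking my own quad-U320 speculation, with the same assumed PCIe 1.0 per-lane rate as above:

# Does quad-channel U320 actually need x16? Illustrative only.
quad_u320 = 4 * 320              # MB/s aggregate across four U320 channels
for width in (4, 8, 16):
    have = width * 250           # ~250 MB/s per lane, per direction, PCIe 1.0
    print(f"x{width}: {have} MB/s -> {'enough' if have >= quad_u320 else 'short'}")

On paper x8 already covers the 1280 MB/s, so a 16x quad would really be about headroom and the 2x8 switch topology rather than raw need.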

But that's mostly speculation on my part. I have to admit such a quad is unlikely, simply because nobody would want to put two IOPs on one card, for cost reasons alone. The FC-AL part, well, trying to predict QLogic is not unlike herding cats.

And no, I do not work for LSI. If I did, you can bet I'd abuse the hell out of employee discount. ;)
 
AreEss said:
And no, I do not work for LSI. If I did, you can bet I'd abuse the hell out of employee discount. ;)

I think what unhappy_mage was trying to get at is that I work for LSI Logic, specifically in the storage components area... Although I can't get into too much detail about upcoming products, I can guarantee there are no native PCI-E parallel SCSI controllers in the pipeline. Serial Attached SCSI, of course, is a completely different story. Also the RAID controllers may use a native PCI-E IOP, but they will always have to have a bridge chip to talk to the PCI-X SCSI controller. (Also, no employee discount, unfortunately... :( )

AreEss said:
but LSI should be able to move from the 331 to 332/333 with native 8x PCI-Express without too much difficulty.
Well, the problem there is they don't have a native PCI-E SCSI chip to put on there. Nor do they have a parallel SCSI U320 chip with a built-in IOP (at least not one that made it to market).

All of the development focus at the major storage companies has moved to Serial Attached SCSI, which at first will be PCI-X but will very soon be available in PCI-E. You sound like you'd especially be interested in LSI's SAS1078, which is a single-chip PCI-E x8, 8-port SAS controller with a built-in IOP, XOR engine, and DRAM controller. Broadcom should also have a similar product coming out early next year. For parallel SCSI, though, there will always need to be a bridge chip involved (whether built into the IOP or separate) to work on PCI-E.
 
UICompE02 said:
I think what unhappy_mage was trying to get at is that I work for LSI Logic, specifically in the storage components area... Although I can't get into too much detail about upcoming products, I can guarantee there are no native PCI-E parallel SCSI controllers in the pipeline. Serial Attached SCSI, of course, is a completely different story. Also the RAID controllers may use a native PCI-E IOP, but they will always have to have a bridge chip to talk to the PCI-X SCSI controller. (Also, no employee discount, unfortunately... :( )

Ah, hee. Like I said: lots of noise, nothing concrete. SAS wouldn't surprise me either. But anyways, on to the IOP...

UICompE02 said:
Well, the problem there is they don't have a native PCI-E SCSI chip to put on there. Nor do they have a parallel SCSI U320 chip with a built-in IOP (at least not one that made it to market).

UICompE02 said:
All of the development focus at the major storage companies has moved to Serial Attached SCSI, which at first will be PCI-X but will very soon be available in PCI-E. You sound like you'd especially be interested in LSI's SAS1078, which is a single-chip PCI-E x8, 8-port SAS controller with a built-in IOP, XOR engine, and DRAM controller. Broadcom should also have a similar product coming out early next year. For parallel SCSI, though, there will always need to be a bridge chip involved (whether built into the IOP or separate) to work on PCI-E.

That's why I'm scratching my head the most. A PCI-E SCSI chip is unnecessary on the 332/333, because they retain the PCI-X bus. It's possible to use the 332/333 as a bridge itself, same as is currently done. The only obvious catch I see is that the firmware now has to deal with all of PCI-X itself, which is the case with LSI's opaque bridge method anyway. So really, there should be very little that actually needs to change on the controller itself - e.g., the 320-2E could easily be revised sans bridge by moving to the 332/333.
Unfortunately, the fact is that SAS is not going to be a viable option for workstations for a while yet. SAS drives are just about as available as 4GFC - i.e., not available at all. Customers aren't asking for it, which is fine by me, because I can't reasonably offer it even with an SAS1078. U320 and SCA2 are going to be king for a good long while, so really, LSI and everyone else is leaving a pretty significant hole in the market - especially since most of my customers with arrays are committed to FC-AL or U320, and the only upgrades they plan anytime soon are to the attached systems. I honestly can't imagine it would be hard to modify the 53C1030's to PCI-Express, save the ZCR portion. And the ZCR portions should benefit from PCI-Express significantly, so it makes even less sense to me to not invest the resources to do it. (Especially when you take into account just how bad the 320-0's are; the numbers are below even single-drive IDE in many situations.)
 
AreEss said:
I honestly can't imagine it would be hard to modify the 53C1030's to PCI-Express, save the ZCR portion. And the ZCR portions should benefit from PCI-Express significantly, so it makes even less sense to me to not invest the resources to do it. (Especially when you take into account just how bad the 320-0's are; the numbers are below even single-drive IDE in many situations.)

Actually, it seems that is way more difficult than you imagine. Not only is there a great amount of design effort required to move to a new front-end bus interface, but you have to keep in mind the amount of verification, validation, characterization, and customer qualification that would have to go on for a completely new chip. In this case it makes much better financial sense to go with a bridged option -- especially since being first to market with next-generation controllers is so important in this part of the industry, all new development resources are concentrated on SAS rather than the limited market that parallel SCSI would offer. The demand for PCI-E SCSI controllers is far too small to recover the cost of a new chip design effort.
 
UICompE02 said:
Actually, it seems that is way more difficult than you imagine. Not only is there a great amount of design effort required to move to a new front-end bus interface, but you have to keep in mind the amount of verification, validation, characterization, and customer qualification that would have to go on for a completely new chip. In this case it makes much better financial sense to go with a bridged option -- especially since being first to market with next-generation controllers is so important in this part of the industry, all new development resources are concentrated on SAS rather than the limited market that parallel SCSI would offer. The demand for PCI-E SCSI controllers is far too small to recover the cost of a new chip design effort.

That's why I don't understand why LSI isn't looking at PCI-E SCSI. I do understand that there is a lot of FEB validation and EDA hell to go through, but the 53C1030's backend shouldn't need any major changes that I can foresee. And a bridged option just isn't going to work on most motherboards or integrated solutions. The integrated market is going to have a gap without a PCI-E SCSI controller, especially with the demand for cheap ZCR. Sure, you can still hang onto the PCI or PCI-X bus, but for how long? Sooner or later it's not going to be there, and there's going to be one hell of a market gap. I'd be tossing a fair amount of resources at it so it'd be ready, say, around 2H06 if not sooner. Sooner is better, just because you can boast about your special integrated solution, presuming ZCR's available.
It also has the potential to make ZCR a viable performance option with PCI-Express's new toys, which would not only be a first but also highly advantageous. At the same time, I don't see LSI going hard after the ZCR market, if only because ZCR still sucks and always will, and LSI has always tended toward the mid to high end. I'm still a little surprised at the sudden increase in 53C1030 integration, but considering AIC79xx failure rates, it's more "why did it take them this long to get a clue?"

Lemme PM you about the current 320-0 issues I'm aware of; maybe you can offer some insight on them. I have a pretty good hunch, but it'd be nice to confirm.
 
AreEss said:
That's why I don't understand why LSI isn't looking at PCI-E SCSI. I do understand that there is a lot of FEB validation and EDA hell to go through, but the 53C1030's backend shouldn't need any major changes that I can foresee.

Well, if you were intimately familiar with the internal architecture of the chip, you probably wouldn't be saying it's a fairly simple change to make.


AreEss said:
The integrated market is going to have a gap without a PCI-E SCSI controller, especially with the demand for cheap ZCR.

The issue with that is that ZCR demand relative to other RAID options or regular direct-attached controllers is very, very, very small. Therefore, it doesn't make sense to devote many resources to trying to capture a market like that.

AreEss said:
I'm still a little surprised at the sudden increase in 53C1030 integration, but considering AIC79xx failure rates, it's more "why did it take them this long to get a clue?"

I'm not quite sure I understand what you're saying here...
 
UICompE02 said:
Well, if you were intimately familiar with the internal architecture of the chip, you probably wouldn't be saying it's a fairly simple change to make.

Well, relatively simple in my eyes. That doesn't mean it's simple, period. But the 53C1030 is the gold standard in SCSI backends, and I can't see any reason to change it at all. Well, unless there are bugs hiding in there that I'm not aware of. ;)

UICompE02 said:
The issue with that is that ZCR demand relative to other RAID options or regular direct-attached controllers is very, very, very small. Therefore, it doesn't make sense to devote many resources to trying to capture a market like that.

To some extent, yes. But I see the market picking up significantly in the medium to long term, especially for HPC and blades. People are going to be looking for ultra-low-cost RAID solutions that fit into 1U, and ZCR seems to fit the bill best.

UICompE02 said:
I'm not quite sure I understand what you're saying here...

I've been building systems for well over a decade. All motherboards with Adaptec onboard have a fatal flaw: the way it has to be integrated. The AIC79xx's never go more than three years, no matter what, and usually not half that under significant load. This isn't based on hearsay, either; back in the Pentium III age, I made the mistake of building systems with the AIC79xx onboard. Every single one came back dead in less than six months.
Every recent board I've seen has been LSI 53C1030 onboard, zero Adaptec offerings. And I highly doubt it's because Adaptec just doesn't want the market.
 