How to get into SCSI without being capped by the PCI bus?

USMC2Hard4U

Whether or not I need SCSI speed is irrelevant. I am an enthusiast and want to spend a shitload of money. So please don't try and talk me out of it; this is a project I want to get into.

I would like to know how I can go about setting up a good U320 SCSI RAID array in my desktop without being limited by the 133MB/s PCI bus. Any good PCI-e 1x cards out there? Or 4x?

Thanks
 
AFAIK, no SCSI vendors have announced non-SAS PCI Express cards at this point.
The only way to get more than 133MB/s would be to get a system with 64-bit/66MHz PCI slots, or PCI-X slots with even more bandwidth.
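For anyone following along, here's a back-of-the-envelope sketch of where those bus figures come from (peak theoretical numbers only; real-world throughput is lower and the bus is shared):

[code]
# Peak theoretical bandwidth of the parallel PCI family:
# (bus width in bytes) x (clock in MHz) ~= MB/s
def bus_bandwidth_mb_s(width_bits, clock_mhz):
    return width_bits / 8 * clock_mhz

buses = {
    "PCI 32-bit/33MHz":    (32, 33.33),
    "PCI 64-bit/66MHz":    (64, 66.66),
    "PCI-X 64-bit/100MHz": (64, 100),
    "PCI-X 64-bit/133MHz": (64, 133.33),
}

for name, (width, clock) in buses.items():
    print(f"{name}: ~{bus_bandwidth_mb_s(width, clock):.0f} MB/s peak")
# Roughly 133, 533, 800 and 1067 MB/s respectively.
[/code]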
However, what drives do you plan on using in this array? Any RAID?
 
I want to get two 15K RPM drives; I don't know what brand yet. U320 SCSI. I want a RAID 0 array. Maybe even a third drive for RAID 5.
 
If you're looking at current-generation 15K drives (Cheetah 15k.4, Fujitsu MAU, Maxtor Atlas 15K II, Hitachi 15K147) or even last generation (15k.3, MAS, 15K, 15K73), in a RAID-0 array they will easily saturate 32-bit/33MHz PCI. If you're looking for any kind of fast writing at all, RAID-5 is not the way to go.
Also, are you trying to get a RAID card with a dedicated processor to take all the RAID load off your CPU? You're basically stuck with a PCI-X card and will need at least a 64-bit slot to avoid bottlenecking it either way.
 
What about this card?

http://www.intel.com/design/servers/RAID/srcu42e/index.htm

It's a high-end U320 SCSI RAID card with a PCIe 8x interface. It's about $700 retail, which is no big deal to me.

I know it's Intel, but would it work in my AMD X2 system? If I have, say, a high-end video card in my mobo via the PCIe 16x slot, could I use the 2nd PCIe 16x slot for this? Or is it only meant for SLI video cards?
 
It looks like a good card,
especially if you fit it with the max 512MB of RAM.
You're going to need 4-8 drives to saturate an 8x bus though...
though what you do with all that streaming data is beyond me.

Though using 4 15K drives for a RAID 10 array would definitely make for one heck of a boot drive ;)

You could also use the second channel externally for some large 10K 72GB drives in an external enclosure.
 
I suspect you'll be the first on your block with that card... what a monster!
 
USMC, weren't you going to get the Tyan K8WE? That's got 64-bit PCI-X slots on it, and you'll be good to go on bandwidth - even a 66MHz/64-bit slot gives you 533MB/sec, more than U320 SCSI.
 
DougLite said:
USMC, weren't you going to get the Tyan K8WE? That's got 64-bit PCI-X slots on it, and you'll be good to go on bandwidth - even a 66MHz/64-bit slot gives you 533MB/sec, more than U320 SCSI.
Unfortunately, I am no longer getting the Tyan K8WE motherboard. This is why I am asking about this issue now.

I will be getting a standard NForce 4 SLI Mobo, and an Athlon X2.

My biggest question really is: if I get this card, will it work in my secondary "graphics PCIe 16x" slot if slot 1 already has a video card in it?
 
You will need to enable SLI mode on the mobo, hope the BIOS doesn't puke out when it finds something other than a video card installed in the second X16 slot, and that Intel's drivers for the SCSI board allow it to operate on a non Intel system/motherboard. The first is a piece of cake. The second and third are big ifs.

First, I would contact your mobo maker and verify that the BIOS and chipset are capable of interacting with something other than a video card in the second X16 slot. It should work - they are merely PCI-E lanes hooked into the northbridge/MCP like they are on the Intel boards that are officially supported, but I'm not an expert here. I would also contact Intel and enquire about this. They will definitely tell you it's not supported - but you only want to know if it will work ;).
 
Are you looking for a certain capacity, or just the fastest access/throughput possible? Because if it's speed, you'd be better off looking at that solid-state card that premiered at Computex. You could RAID 0 two of those with 4GB each and spend much less than the SCSI setup with that expensive controller.
 
dualblade said:
Are you looking for a certain capacity, or just the fastest access/throughput possible? Because if it's speed, you'd be better off looking at that solid-state card that premiered at Computex. You could RAID 0 two of those with 4GB each and spend much less than the SCSI setup with that expensive controller.
Read the second sentence of his first post. It explains everything.

This could be a very bad idea... dropping $700 on a controller that may or may not work. There are HUGE ifs in doing this and wanting to go SCSI. Go or stay with a Raptor or two and be done with it.

I think you need to worry less about your e-penis and either do it right or don't do it at all. RAID 5 in a SCSI setup sounds like an ideal situation, but storing porn and games on that is perfectly stupid... so build a SATA array for that and worry about either single or dual 15K disks for your OS, nothing else. Or do a 0+1 setup.
 
Adaptec 2230SLP SCSI RAID card - $580
Fujitsu MAU3036NP 36GB 15K SCSI disks - $460 for a pair

That's a little over a grand for a very fast SCSI RAID 0 setup. The Fujitsu MAU is THE fastest hard disk around right now, and dual Raptors couldn't dream of hanging with them in a RAID 0 setup.
 
tdg said:
Adaptec 2230SLP SCSI RAID card - $580
Fujitsu MAU3036NP 36GB 15K SCSI disks - $460 for a pair

That's a little over a grand for a very fast SCSI RAID 0 setup. The Fujitsu MAU is THE fastest hard disk around right now, and dual Raptors couldn't dream of hanging with them in a RAID 0 setup.
Now wait a second. How is USMC supposed to integrate such a setup into his system, when he's not willing to accept a 32 bit PCI slot and his motherboard doesn't have PCI-X slots? The answer is of course that he's stuck with PCI Express - either the X8 SLI slot, or if he gets the DFI (NF4 SLI-DR) board, he can use the X4 slot and not have any X1 devices. Also, a pair of 36GB Fujitsu MAU drives can be had for $414 on ZZF. I would suggest understanding the OP's goals before throwing stuff out ;)

Furthermore, USMC has already taken in the debate on what drives he should buy. I suggested the Raptor was a better choice when I was a mere n00bie here and this build first came up, and I quickly learned that USMC was willing to throw down the cash to go faster than that, even if the costs do rise dramatically and returns diminish even faster once you get past the Raptor. Perhaps it would be wise to review the [thread=871327]thread[/thread] where USMC decided to go SCSI, before bashing his decision again, particularly #12:
USMC2Hard4U said:
I just want something that is the fastest. Something that is the most high-end that there is. It will be a single-user environment, and perhaps 1 other user connecting to my hard drive to access things, occasionally. That's about it. I just want the fastest hard drive possible. If 2 Raptors in RAID can outdo the SCSI drive in this situation, then that's what I will do. But I don't care about price/performance. I want the best, even if it's way too much money.
He was not even dissuaded by this in #17:
DougLite said:
But the WD740GD sits at a convergence of performance, reliability (5-year warranty) and ease of integration (cooler, quieter, and easier to set up than its SCSI counterparts) that SCSI will never reach, at any price.
 
EnderW said:
Um, I think you need to read up on the differences between RAID levels
http://www.storagereview.com/guide2000/ref/hdd/perf/raid/levels/index.html
http://www.storagereview.com/guide2000/ref/hdd/perf/raid/levels/singleLevel5.html
http://www.storagereview.com/guide2000/ref/hdd/perf/raid/levels/singleLevel5.html said:
Hard Disk Requirements: Minimum of three standard hard disks; maximum set by controller. Should be of identical size and type
I dunno, it's a lot of money. I would get a dual processor system with PCI-X slots or wait a while before doing all of this.

Hell, even having a U160 controller (29160) and one 15k drive (Cheetah 15k.3) is good enough for me... access times are where it's at, like you've said, DougLite.
 
Well, what I need and why I need it is for my own personal experimenting and stuff. I want U320 RAID 0; whether I need it or not, I don't care. This is what I want to do to expand my knowledge of SCSI and have a fast-ass hard disk system in my personal computer. My only question was how to get SCSI without it being slowed by the PCI bus. Well, there are a number of PCIe cards out there now that solve my problem. I just need to find out which motherboards will allow me to run one of these cards while I am running a PCIe graphics card.

And for whoever said it's a lot of money, it's my money to spend. I am spending no less than $6,000 on this system. Dual core, VapoChill, and all that. Why have a sweet system with some slow-ass hard drives? No thanks. I am going all out this time.
 
Perhaps I missed this a while ago, but why are you going for a single-processor machine instead of a duallie? Not trying to be a jerk, just wondering. The K8WE is a fairly awesome board, and it'll definitely do the video card/SCSI card thing. And 42 whole PCI-E lanes. And with a $6k budget, the board is well within reach.
 
That is a good point. I believe there are SLI capable server motherboards that have support for things like PCI-X as well.
 
Fuck Raptors. Of course the 16x PCI-E slots can be used for anything. And of course the Intel RAID card would work on a non-Intel system. Why would you guys even think otherwise?
 
If you can afford it, get the card and enjoy it. PCI-X is what I was running prior to this year. Throughput was crazy. No load times, no seek times, just access.

I run a 15K RPM drive, as you can see in my sig - it's quieter than I initially thought. FDBs are good.

For a card with an onboard CPU and RAM, aim for 256MB of RAM...
 
dandragonrage said:
Fuck Raptors. Of course the 16x PCI-E slots can be used for anything. And of course the Intel RAID card would work on a non-Intel system. Why would you guys even think otherwise?

Even though they can be used for anything, you usually only have 2 of them, which this user already wants to use for SLI. Without going with a server motherboard, that doesn't leave him with many options for plugging in a high-end SCSI card.
 
I will definitely get this LSI card. I am going to spend 6,000 USD on this system, so money isn't really a concern. I just want to play with SCSI RAID for my box.

I don't know if I will be able to get it to work, though. I don't know enough about PCIe lanes and SLI motherboards... I have another thread here explaining this. Maybe some of you can help?
http://www.hardforum.com/showthread.php?t=913553
 
On the Linux workstations at work, the motherboards have dual Adaptec controllers built in, meaning you can use 12 SCSI devices (the controllers occupy IDs 7 & 14). The boards have PCI-E x1 and x16 slots and dual Xeons. But they use that ECC memory crap, so that adds to the cost. I can't remember the name of the mobo company right now, but it would allow an integrated option with no cards taking up room. The board also has traditional IDE, as well as SCSI RAID.
 
USMC, I'm 90% sure that only the first x16 slot on SLI boards is a PEG (PCIe Graphics) slot and that the other slot can be used for anything you please - AFAIK, NF4 gives something like 20 lanes, right? So with 16 going to the PEG slot and the other 4 going to the SCSI card, you should be fine... you should even be able to put the RAID card in an x1 or x2 slot and it should have plenty of bandwidth for your purposes.
 
An X8 card won't fit in an X1-X4 slot unless it's open-ended. It will work in a physical X16 slot, it will just be X8 electrically and logically.

The NF4 does have 20 PCI-E lanes: 16 for graphics, and flipping the SLI selector on the board splits those into two x8s. The other four lanes are allocated to peripherals as the mobo maker wishes. They may be x1/x2 slots, PCI-E integrated peripherals, or an x4 slot. In the case of the DFI NF4 boards, you may use the x4 slot OR the x1 slots, not both. However, the x4 slot on the DFI boards, at least according to the picture on their website, is a closed one that will not accept a connector larger than x4.
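To put rough numbers on the lane math (assuming first-generation PCI Express signalling: 2.5 Gbit/s per lane with 8b/10b encoding, so about 250 MB/s of payload per lane per direction):

[code]
# Rough first-generation PCI Express bandwidth per link width,
# compared against one Ultra320 SCSI channel (320 MB/s peak bus rate).
LANE_MB_S = 2.5e9 * 8 / 10 / 8 / 1e6   # ~250 MB/s per lane per direction
U320_MB_S = 320

for lanes in (1, 4, 8, 16):
    link_mb_s = lanes * LANE_MB_S
    verdict = "covers" if link_mb_s >= U320_MB_S else "falls short of"
    print(f"x{lanes}: ~{link_mb_s:.0f} MB/s per direction, {verdict} a full U320 channel")
# An x4 link (~1000 MB/s) easily covers one U320 channel and an x8 covers two.
# An x1 link (~250 MB/s) is below the 320 MB/s bus peak, but still above what
# a pair of 15K drives actually sustains (well under 100 MB/s each).
[/code]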
 
Well, what about the PCIe slots on the K8WE? They are both fully 16x. Are they for graphics only too?
 
An X8 card won't fit in an X1-X4 slot unless it's open-ended. It will work in a physical X16 slot, it will just be X8 electrically and logically.
I know.
However, the x4 slot on the DFI boards, at least according to the picture on their website, is a closed one that will not accept a connector larger than x4.
This is what Dremels are for :)
USMC2Hard4U said:
Well what about the PCIe slots on the K8WE. They are both 16x Fully. Are they for graphics only too?
I believe only one is a PEG slot, but you might wanna check the documentation for that...
 
If you actually want speed, don't do just two fast SCSI drives.

4 or more 7200 RPM SATA drives in a good RAID-0 will easily be faster than just 2 of the best SCSI drives. I forget the vendor, but IIRC for $250 you can get an 8-port SATA card for PCIe. For PCI-X it's even cheaper.

Now, if you want decent write performance without turning on the drive's write cache, that would be a different matter.

You seem to be confused about RAID options.
 
Martin Cracauer said:
If you actually want speed, don't do just two fast SCSI drives.

4 or more 7200 RPM SATA drives in a good RAID-0 will easily be faster than just 2 of the best SCSI drives. I forget the vendor, but IIRC for $250 you can get an 8-port SATA card for PCIe. For PCI-X it's even cheaper.
In terms of burst speeds and linear reads and writes, sure. But in terms of random access, two 15K SCSIs will blow them out of the water every time. 3ms seek vs 15+ ms seek anyone?
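To put rough numbers on that, here's a quick sketch using typical spec-sheet figures (assumed, not measured):

[code]
# Average service time for one small random read: average seek plus half a
# rotation. Transfer time for a few KB is negligible next to those.
def avg_service_ms(seek_ms, rpm):
    rotational_latency_ms = 60_000 / rpm / 2   # half a revolution, in ms
    return seek_ms + rotational_latency_ms

drives = {
    "15K SCSI   (~3.5 ms seek)": (3.5, 15000),
    "10K Raptor (~4.5 ms seek)": (4.5, 10000),
    "7200 RPM   (~8.5 ms seek)": (8.5, 7200),
}

for name, (seek_ms, rpm) in drives.items():
    t = avg_service_ms(seek_ms, rpm)
    print(f"{name}: ~{t:.1f} ms per random read, ~{1000 / t:.0f} IOPS")
# ~5.5 ms (~180 IOPS) for a 15K drive vs ~12.7 ms (~79 IOPS) at 7200 RPM --
# striping more 7200 RPM spindles together does nothing for a single small
# random read.
[/code]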
 
Martin Cracauer said:
If you actually want speed, don't do just two fast SCSI drives.

4 or more 7200 RPM SATA drives in a good RAID-0 will easily be faster than just 2 of the best SCSI drives. I forget the vendor, but IIRC for $250 you can get an 8-port SATA card for PCIe. For PCI-X it's even cheaper.

Now, if you want decent write performance without turning on the drive's write cache, that would be a different matter.

You seem to be confused about RAID options.
RAID-0 is not the answer on the desktop. It is designed to increase sustained linear transfer performance, not the localized seek performance that is the dominating factor in desktop hard drive performance. There are two ways to increase this performance: faster seeks, and a better buffer strategy to increase the chance that a request will be in the buffer and make a seek unnecessary. Also, RAID-0's performance woes on the desktop only become more acute as you add drives to the array, due to the increased impact of rotational latency as you add spindles and tie them all together to each request. It is really not a very wise idea.

Unfortunately, this is a myth that continues to be propagated in enthusiast circles, but it won't be here - not without a fight. If you're doing workstation work and need a fast scratch disk, RAID-0 makes sense. If you need killer disk-to-disk-to-tape backup performance, RAID-0 makes sense. However, it is NOT the answer on the desktop, and a single Fujitsu MAU3367 will blow away any ATA RAID-0 setup on the desktop. Why? Because transfer rate is insignificant when compared to other factors in application-level desktop hard drive performance, and RAID-0 does _nothing_ to improve the other facets. Yes, it gives modest gains in XP bootup if you allow the XP boot optimizations to proceed. Yes, RAID-0 will shovel pr0n around faster than SLED. But it won't bring up Firefox faster, it won't load a level in FarCry any faster, and you accept the occasional modest gains in exchange for doubling, or in your example quadrupling, cost.

USMC may be willing to throw down serious cash, but at the same time he's not stupid (and this remark is extraordinarily significant - here you have an ex-Soldier acknowledging intelligence in a Marine ;)). USMC has been taking in information on what storage setup to buy for some time now, and I think he's on the right track. RAID-0 is probably not necessary for him, but a 15K SCSI drive definitely is, and no amount of ATA drives will touch it in pure performance, at any price. If you consider cost, ease of integration, or capacity, then SCSI falters, but USMC doesn't care about any of those things.
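Here's a quick Monte Carlo sketch of the "more spindles tied to each request" effect, under the simplifying assumption that every request spans the whole stripe (7200 RPM figures, purely illustrative):

[code]
import random

# When a request spans all N drives, it finishes only after the slowest
# spindle has rotated into position, so the expected rotational wait climbs
# from half a revolution (one drive) toward a full revolution as N grows.
REV_MS = 60_000 / 7200      # one revolution at 7200 RPM, ~8.3 ms
TRIALS = 100_000

for n_drives in (1, 2, 4, 8):
    total = sum(max(random.uniform(0, REV_MS) for _ in range(n_drives))
                for _ in range(TRIALS))
    print(f"{n_drives} drive(s): average rotational wait ~{total / TRIALS:.2f} ms")
# Roughly 4.2, 5.6, 6.7 and 7.4 ms -- every spindle you add to the stripe
# makes the average request wait a little longer on rotation alone.
[/code]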
 
RAID-0 can improve seek times a lot if you carefully choose the interleave factor. Remember, you have more heads over more platters, which increases the statistical chance that a head is near where you need it. Close moves will still be worse, though, and if your random accesses are each big enough to blow the chunk sizes of the individual drives, the advantage goes down to nil. Still, if you carefully analyse what you do and benchmark it with different chunk sizes, you can see a large speedup.

Of course you shouldn't do 4-disk RAID-0 on your valuable, un-backed-up data, but I was assuming the properties of RAID are known to the original poster.
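A tiny sketch of the chunk-size point above, using a hypothetical 64 KB chunk across four drives (the numbers are illustrative, not a recommendation):

[code]
# Which member disks does one request touch in a RAID-0 stripe?
# A request that fits inside a single chunk keeps the other spindles free;
# a request that spans several chunks ties all of those drives to it.
def drives_touched(offset_kb, length_kb, chunk_kb, n_drives):
    first_chunk = offset_kb // chunk_kb
    last_chunk = (offset_kb + length_kb - 1) // chunk_kb
    return {chunk % n_drives for chunk in range(first_chunk, last_chunk + 1)}

CHUNK_KB, N_DRIVES = 64, 4
for length_kb in (4, 64, 256, 1024):
    busy = drives_touched(0, length_kb, CHUNK_KB, N_DRIVES)
    print(f"{length_kb:>5} KB request -> {len(busy)} of {N_DRIVES} drives busy")
# 4 KB and 64 KB requests land on one drive; 256 KB and larger requests
# occupy the whole array, which is where the statistical "a head is already
# nearby" advantage disappears.
[/code]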
 
DougLite said:
RAID-0 is not the answer on the desktop...

That was one of the best posts I have read regarding RAID-0. Excellent work, Mr. Moderator.
 
RAID is largely incapable of improving read service time. The only RAID levels that can improve it at all are mirrored or hybrid mirrored levels, and only with multiple simultaneous requests. In such cases, if the controller's BIOS is well designed, the controller will let each mirror service every other read request, allowing two read requests to be serviced simultaneously. Even then, that does nothing to mask the mechanical latency of the drives - it merely allows two read requests to be completed at the same time instead of waiting in line.

With RAID-0, all of the drives are tied to each request in a linear fashion. If the controller BIOS is smart and the drives are command queuing aware, multiple requests can be serviced in the order that is optimum for each drive. However, high queue depths on the desktop are rare, and the array will most likely end up servicing requests one at a time, and you are back to square one. Once again, there are two ways to reduce the impact of seek time and rotational latency on the desktop - reduce those times, or improve buffer strategy so that the drive is smart enough to have requests cached before they come in. RAID-0 does neither of these things - buying a faster or smarter drive does. This is why a 73GB Raptor will school 7200RPM RAID-0 setups on the desktop, despite the RAID-0 setup having a much higher sequential transfer rate.
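A toy illustration of the command-queuing point, with made-up track numbers and a simple nearest-first reordering (real drive firmware is far more sophisticated):

[code]
# With several requests outstanding, a queuing-aware drive can reorder them
# to cut head travel; with a queue depth of 1 there is nothing to reorder.
def head_travel(track_order, start=0):
    pos, total = start, 0
    for track in track_order:
        total += abs(track - pos)
        pos = track
    return total

def nearest_first(tracks, start=0):
    pending, pos, ordered = list(tracks), start, []
    while pending:
        nxt = min(pending, key=lambda t: abs(t - pos))
        pending.remove(nxt)
        ordered.append(nxt)
        pos = nxt
    return ordered

queue = [8000, 120, 7900, 150, 8100]   # hypothetical queued track numbers
print("Arrival-order head travel:", head_travel(queue))
print("Reordered head travel:    ", head_travel(nearest_first(queue)))
# Reordering cuts head travel dramatically -- but only when the queue
# actually holds several requests, which is rare in desktop use.
[/code]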

Just as an experiment, I popped open UT2004 in a window, had Task Manager in the background to monitor CPU use, and had the Windows Performance Monitor open to watch average disk queue length. When loading AS_RobotFactory, the drive sat idle for a full 10 seconds once, and for over 5 seconds twice. During these time frames where the disk was idle, CPU usage was running at over 70% for UT2004.exe, as the system cached data in memory, rendered textures, decided what needed to be loaded next, etc., etc. The point? The game couldn't care less about sustained transfer rate; it just needs the drive to find any of a myriad of small files, often smaller than a RAID-0 block size, that must be decompressed and cached in memory before the level begins. Between each of those small files, one of two things happens: either the drive must seek to it (usually only a few tracks), or the drive has it in the buffer and delivers it faster than any RAID-0 setup could ever hope to find it. Once again, limited by seek time and buffer strategy, not STR. This is your answer, folks. This is why RAID-0 falters in game level loading. Why the Raptor rules. Why SCSI drives with low seek times and high transfer rates but poor buffer strategies get schooled by clunky 7200RPM ATA drives on the desktop. Why specs on a datasheet, or results in a synthetic benchmark, mean little to nothing in desktop hard drive performance.
 
aug1516 said:
As long as it will function like a normal LSI card and not have issues with non-Dell computers, it should be great.
There will be no problems with that. It's exactly the same as the non-Dell version of the board, but has some Dell-specific options in the boot BIOS (as in, the things Dell wants the users to be able to configure, not things that are specific to Dell machines).
 
lithium726 said:
I believe only one is a PEG slot, but you might wanna check the documentation for that...

So here's what I've been trying to figure out: what is the difference between an x16 slot and an x16 PEG slot? In the PCI-E 1.0a spec, there is nothing mentioned about graphics-only slots.
 
I hadn't heard of PEG either; the manual for the 2895 doesn't mention the term. If it does exist, it seems like a ridiculous idea. Why take a nice, fast, general-use bus and require a specific peripheral to be used in that slot? What about the people who want 2 PCI-E RAID cards in their system for huge amounts of storage?
 