PCI-E versus standard RAM

keithallenlaw

I was tossing around the idea of getting a PCI-E based memory
set-up. I want to run my OS and my games from it. But I've
read about slow booting issues and Windows compatibility problems.
Is the trouble worth the results? Or am I better off getting 24GB
of RAM and loading my games from that?

What about getting a RAID controller and running about a half dozen
SSDs in RAID 0? The only problem with that is going back through the
southbridge, which the previous option bypasses.

Any thoughts? Thanks!
 
You mean PCIe-based SSDs, not RAM, I hope. I was going to say there's not much you can fit on a 4 or 8 GB drive after the OS install.
 
I honestly don't know what you could be doing professionally to warrant the need for that quick of a response from your OS.
 
Buy two of the pci-e cards and send one to me, since you are apparently floating in cash.... :)
 
If you run a bunch of SSDs in RAID 0, you will run into heat issues if you're not using a discrete RAID card. You can burn up your board with the throughput.

PCI-E SSD seems like the best option, but I'm not sure about compatibility. I'm sure it depends on the BIOS and OS.

Another alternative is two X25-E drives in RAID 0. I have this setup and get similar performance to that PCI-E card, but at higher cost and with less storage space. The advantage would be compatibility. I would probably give the PCI-E cards a shot, though.
 
If you run a bunch of SSDs in RAID 0, you will run into heat issues if you're not using a discrete RAID card. You can burn up your board with the throughput.

I find that hard to believe. SSD traffic on the northbridge is not as much as video.
 
The OCZ RevoDrive is about as fast as storage gets, but it's pricey. As for overheating and burning up boards by running SSDs in RAID 0, that is the dumbest thing I've heard. You realize it's just data passing over the ports, and no noticeable heat increase occurs at the board level from using a SATA port?

Most dedicated RAID cards use a RAID controller similar to, or the same as, what many boards use.
 
If you run a bunch of SSDs in RAID 0, you will run into heat issues if you're not using a discrete RAID card. You can burn up your board with the throughput.

PCI-E SSD seems like the best option, but I'm not sure about compatibility. I'm sure it depends on the BIOS and OS.

Another alternative is two X25-E drives in RAID 0. I have this setup and get similar performance to that PCI-E card, but at higher cost and with less storage space. The advantage would be compatibility. I would probably give the PCI-E cards a shot, though.

That's not true at all. Explain how running RAID 0 on a motherboard does ANYTHING to burn up the board? We're talking data transfer here on a non-moving part that doesn't process any information. MAYBE you could burn up a SATA controller by doing something dumb, but you are very misinformed if you think RAID 0 can damage a motherboard.
 
We have caught a board on fire with as few as 4 enterprise class drives in RAID 0 running continuous IOPS testing here in the lab at Intel. With active cooling and depending on the board you may not have issues, but I wouldn't risk it.
 
Do you believe a raid controller can hold up better?

We have caught a board on fire with as few as 4 enterprise class drives in RAID 0 running continuous IOPS testing here in the lab at Intel. With active cooling and depending on the board you may not have issues, but I wouldn't risk it.
 
We have caught a board on fire with as few as 4 enterprise class drives in RAID 0 running continuous IOPS testing here in the lab at Intel. With active cooling and depending on the board you may not have issues, but I wouldn't risk it.

Which Intel office are you at? I can stop by and watch this happen, because I am calling bullshit.
 
I was tossing around the idea of getting a PCI-E based memory
set-up. I want to run my OS and my games from it.

I don't think there's much performance difference for general OS usage or games when moving from a good-quality SATA SSD to a PCI-E based storage solution. There's definitely a difference for some server applications, though. If it's just for games and the OS, I wouldn't bother with anything much more expensive than a quality SATA SSD.
 
We have caught a board on fire with as few as 4 enterprise class drives in RAID 0 running continuous IOPS testing here in the lab at Intel. With active cooling and depending on the board you may not have issues, but I wouldn't risk it.

Is this an Intel employee telling us not to try to use 4 SSDs on the Intel ICH controllers? :confused:
 
There's some anecdotal info over on the OCZ forums about the ICH controllers, especially the ICH9 and ICH10, getting REAL hot (95-100C and up) when you stuck more than 4 Vertex drives in RAID 0 running IOMeter. I don't recall anyone having them blow up, though...

If memory serves, all of the reports were from users with no controller heatsink, or some tiny little one, and the problems went away if there was any airflow, so it's really more of a system-builder issue than an Intel one (unless Intel's design specs tell people that a heatsink is not required).
 
I honestly don't know what you could be doing professionally to warrant the need for that quick of a response from your OS.
Just depends on who you are. Unless you're already rich, there are probably people who get paid more per second than you can make per hour.
 
I was tossing around the idea of getting a PCI-E based memory
set-up. I want to run my OS and my games from it. But I've
read about slow booting issues and Windows compatibility problems.
Is the trouble worth the results? Or am I better off getting 24GB
of RAM and loading my games from that?

What about getting a RAID controller and running about a half dozen
SSDs in RAID 0? The only problem with that is going back through the
southbridge, which the previous option bypasses.

Any thoughts? Thanks!

Why not look at an SSD-based SAN?

Use 10GbE dual-pathed iSCSI to the SAN, and then you can add SSDs to your heart's content.
Dell has the EqualLogic units, which are pretty good. There are also other SSD SAN units out there; just look around.

We have caught a board on fire with as few as 4 enterprise class drives in RAID 0 running continuous IOPS testing here in the lab at Intel. With active cooling and depending on the board you may not have issues, but I wouldn't risk it.
oOoOoOoOo, enterprise-class! Were they SAS or Fibre Channel?
 
oOoOoOoOo, enterprise-class! Were they SAS or Fibre Channel?

Cute... I've been wondering about that, though. Why aren't there more SAS SSDs? I haven't come across any that are available through retail channels, only a few ridiculously priced ones made for specific OEMs.
 
Links please?

Why not look at an SSD-based SAN?

Use 10GbE dual-pathed iSCSI to the SAN, and then you can add SSDs to your heart's content.
Dell has the EqualLogic units, which are pretty good. There are also other SSD SAN units out there; just look around.


oOoOoOoOo, enterprise-class! Were they SAS or Fibre Channel?
 
Thanks, but this looks like server and industrial components. I couldn't find how these hook up.
PCI Express is the fastest direct avenue to the mobo. If these things hook up any other way, then
you're going through other controllers on the mobo, i.e. bottlenecks.

http://www.equallogic.com/products/default.aspx?id=9503&WT.svl=hplink3

Dell will sell you the dual-port 10GbE CNAs.

Just call and talk to a sales guy; they would be happy to give you a quote.

TMS sells the RAMSAN
http://www.serversupply.com/products/part_search/pid_lookup.asp?pid=118224

etc.

Don't get me wrong... these will be RIDICULOUSLY EXPENSIVE.... but they scale better than tacking PCI-E SSDs onto a desktop.
 
Keith, don't believe the marketing hype regarding OCZ's PCI-E SSD products. These products are essentially two or four typical MLC NAND flash SSDs in a RAID0 array. The SSD controllers and the RAID controller talk SATA to each other. The RAID controller in these products is actually designed for PCI-X, which is then converted to PCI-E with a bridge chip. I'm not sure how OCZ believes they can market any sort of "reduced latency" claims against any other setup that uses discrete RAID hardware, because they are essentially the same thing if not worse. Again: the RevoDrive does not use native PCI-E controllers. The path is Flash -> SSD controller -> SATA -> RAID controller -> PCI-X -> bridge chip -> PCI-E, which is not what I would call "the fastest avenue to the mobo".

Furthermore, I'm not sure where you got the idea that these SSDs use RAM. They use Flash, which is much much slower in every way, with the added benefit of retaining data when the power is off.

edit: The RevoDrive is not a bad product, really. It provides a huge amount of performance for (relative to other exotic solutions) low cost. But it doesn't have any significant advantage over buying regular SATA SSDs and using integrated RAID. I wouldn't be surprised if sometime in the future we see more native PCI-E SSD controllers, but at the moment there aren't any available for the consumer market.

edit2: If you are actually planning on simply buying 24GB of system memory, keep in mind that RAM loses its contents when the power turns off. If you'd like to load your games from RAM, you'll need additional software (such as RamDisk) that copies your files to a "virtual" volume every time you boot.
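
For what it's worth, here's a minimal sketch of what that boot-time copy could look like, assuming the RAM disk software has already mounted an empty volume. The R:\ drive letter and the game paths below are just placeholders, not anything from a specific product:

```python
import shutil
from pathlib import Path

# Hypothetical paths -- adjust for your own setup.
GAME_DIR = Path(r"C:\Games\MyGame")     # the install on the regular SSD/HDD
RAMDISK_DIR = Path(r"R:\Games\MyGame")  # mount point created by the RAM disk software

def copy_to_ramdisk():
    """Copy the game folder onto the RAM disk.

    The RAM disk is empty after every power cycle, so this needs to run
    at every boot (e.g. via Task Scheduler) before the game is launched.
    """
    if RAMDISK_DIR.exists():
        shutil.rmtree(RAMDISK_DIR)          # start from a clean copy
    shutil.copytree(GAME_DIR, RAMDISK_DIR)  # copytree creates the parent dirs as needed

if __name__ == "__main__":
    copy_to_ramdisk()
```

You'd still point your shortcuts at the RAM-disk copy, and anything written there (saves, configs) would need to be copied back to the SSD/HDD before shutdown.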
 
I've always fancied one of these...

http://techreport.com/articles.x/16255

RAM-based, with the added advantage of battery backup and push-button backup to a card

That is what I consider a RAM drive. The big problem with this is that the cost of RAM is too high, and even at 32GB the capacity is limited.

I would like to see more hybrid options that involve batteries. A 4 to 8 GB DDR battery-backed read/write cache on an SSD would be awesome.

OS support for read/write cache devices would also solve this problem. I believe ZFS has the ability to add a read cache device. I am not sure if there is any write cache support.
 
Hear, hear. I concur.

PCI express SSD still looks sweet, that is if a rich relative kicks the bucket. :p

That is what I consider a RAM drive. The big problem with this is that the cost of RAM is too high, and even at 32GB the capacity is limited.

I would like to see more hybrid options that involve batteries. A 4 to 8 GB DDR battery-backed read/write cache on an SSD would be awesome.

OS support for read/write cache devices would also solve this problem. I believe ZFS has the ability to add a read cache device. I am not sure if there is any write cache support.
 
I am not sure if there is any [ZFS] write cache support.

ZFS has two caching mechanisms:
1. The ZFS Intent Log (ZIL) for write intent caching
2. The L2ARC cache for read caching

The ZIL absorbs synchronous writes so they can be acknowledged quickly and committed to the main pool later. You want your high-IOPS drives on the ZIL and your high-throughput drives on the L2ARC cache.
If you put Intel X25-Es on the ZIL cache, be sure to disable their volatile onboard cache... otherwise you may contemplate suicide at a later date.
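
For anyone curious, attaching those devices to a pool is just `zpool add <pool> log <device>` and `zpool add <pool> cache <device>`. Here's a rough sketch of that in script form; the pool and device names are made up, so substitute your own:

```python
import subprocess

# Hypothetical pool and device names -- substitute your own.
POOL = "tank"
SLOG_DEVICE = "/dev/dsk/c2t0d0"   # low-latency, high-IOPS SSD for the ZIL (separate log device)
L2ARC_DEVICE = "/dev/dsk/c2t1d0"  # high-throughput SSD for the L2ARC (cache device)

def add_zfs_caches():
    """Attach a separate log device (ZIL) and a cache device (L2ARC) to a pool.

    Equivalent shell commands:
        zpool add tank log /dev/dsk/c2t0d0
        zpool add tank cache /dev/dsk/c2t1d0
    """
    subprocess.run(["zpool", "add", POOL, "log", SLOG_DEVICE], check=True)
    subprocess.run(["zpool", "add", POOL, "cache", L2ARC_DEVICE], check=True)

if __name__ == "__main__":
    add_zfs_caches()
```

As noted above, the log device wants low latency and high IOPS more than capacity, while the L2ARC device is where the high-throughput drive goes.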
 