Intel Unveils 10 Gigabit NIC at Interop

Terry Olaes

At Interop in Las Vegas this week, Intel unveiled a 10Gb network interface card supporting the 10GBASE-T standard. The new device, which is backwards-compatible with 1Gb, can transmit up to 10Gb/s over copper using Cat6a cables at lengths up to 328 feet (100 meters). Can’t wait for this to make its way onto enthusiast motherboards.

In addition, the device contains Intel's virtual machine device queues technology -- or VMDq -- which sorts incoming data among the virtual machines running on the host. By offloading that sorting to the NIC, rather than the virtual machine monitor, data can move much faster, Schultz said.
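The idea behind VMDq can be sketched in a few lines: the NIC keeps one receive queue per VM and steers each incoming frame to the right queue in hardware, so the hypervisor never has to demultiplex traffic in software. This is a toy illustration only; the class and the steer-by-destination-MAC scheme are assumptions for the sketch, not Intel's actual implementation.

```python
# Toy sketch of the VMDq concept: frames are sorted into per-VM receive
# queues by destination MAC, instead of by the virtual machine monitor.
# All names here are hypothetical illustrations.
from collections import defaultdict

class ToyVmdqNic:
    def __init__(self):
        # One receive queue per registered VM, keyed by its MAC address.
        self.queues = defaultdict(list)

    def register_vm(self, mac):
        self.queues[mac]  # creates an empty queue for this VM

    def receive(self, frame):
        # frame is a (dest_mac, payload) tuple; the "hardware" steers it
        # straight into the owning VM's queue.
        dest_mac, payload = frame
        self.queues[dest_mac].append(payload)

nic = ToyVmdqNic()
nic.register_vm("aa:bb:cc:00:00:01")
nic.register_vm("aa:bb:cc:00:00:02")
nic.receive(("aa:bb:cc:00:00:01", "packet for VM 1"))
nic.receive(("aa:bb:cc:00:00:02", "packet for VM 2"))
print(nic.queues["aa:bb:cc:00:00:01"])  # ['packet for VM 1']
```

The win is that each VM's queue can be handed to it directly, rather than every packet taking a detour through the hypervisor's software switch.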
 
10Gbps... the file server where I keep my "linux isos" doesn't come close to saturating my GigE connection at home, or even at LANs, since my drives are either non-RAIDed SATA/IDE or plain old USB2.
 
10Gbps... the file server where I keep my "linux isos" doesn't come close to saturating my GigE connection at home, or even at LANs, since my drives are either non-RAIDed SATA/IDE or plain old USB2.

Exactly what I was thinking, I'm maxed out on my gigE network at home. Maybe it will mean something in another ten years if our mass storage is on something significantly faster than current hard drives. Something like SSD, I dunno.
 
Yeppers... gigE is much faster than my server can cope with. Although, it is a nice step up from 100mbps.
 
:drool: 10 gigabits/s

I finally bought a gigabit switch today for home. :cool:
 
Ooh perhaps when I get around to finally upgrading my network, these will be affordable! :p

But seriously, anyone know when this is going to be put in an expansion card, and if so, what interface? I'd assume PCIe x8, but the article makes no mention. I want to make my fileserver future-proof :p
 
Guess we'll have to wait for HD tech to catch up on read/write speeds since even those really expensive SSDs only top out in the GigE range. Maybe you could use a 10Gig if you had a large array of SSDs but that would be prohibitively expensive.... :p
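The gap the comment describes is easy to put in numbers: a link's Gb/s rating divided by 8 gives its ceiling in MB/s, which can then be compared with drive throughput. The drive figures below are rough ballpark assumptions for drives of this era, not benchmarks.

```python
# Convert link speeds to MB/s (ignoring protocol overhead) and compare
# against rough, assumed sustained drive throughputs of the period.
def link_mb_per_s(gbps):
    return gbps * 1000 / 8  # 1 Gb/s = 125 MB/s

gige = link_mb_per_s(1)       # 125.0 MB/s
ten_gige = link_mb_per_s(10)  # 1250.0 MB/s

drives = {
    "single SATA HDD": 70,     # assumed ~70 MB/s sustained
    "USB 2.0 bus limit": 60,   # ~480 Mb/s minus overhead
    "early SSD": 100,          # assumed ~100 MB/s
}
for name, mb_s in drives.items():
    print(f"{name}: {mb_s} MB/s -> {mb_s / ten_gige:.0%} of 10GigE")
```

Even the fastest single drive here uses well under a tenth of a 10GigE link, which is why the thread keeps coming back to RAID arrays as the only way to feed it.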
 
Wow....10Gb is very much overkill for any home network. The only application I see this being useful is in the data center/internet.
 
Wow....10Gb is very much overkill for any home network. The only application I see this being useful is in the data center/internet.

That's what they said about gigabit 10 years ago.

These days, gigabit leaves a bit to be desired. Especially when talking switch to switch interconnects.
 
That's what they said about gigabit 10 years ago.

These days, gigabit leaves a bit to be desired. Especially when talking switch to switch interconnects.

At home man, at home. If you need 10Gb at home, you either need to move the "business" out of the house, or just go outside and enjoy life.

The only thing I've seen get close to maxing Gb is multiple transfers of very large files. You can use 10/100 for web servers with hundreds of sites on them. So yeah...10Gb at home is nuts and way more than anyone needs.
 
Any serious RAID array with a few PCs on the network could saturate a 1GbE connection. Don't ask me how I know...
 
SMC did this almost a year ago :) Move along...
 
Guess we'll have to wait for HD tech to catch up on read/write speeds since even those really expensive SSDs only top out in the GigE range. Maybe you could use a 10Gig if you had a large array of SSDs but that would be prohibitively expensive.... :p

What, like this? :D
 
I think quite a few of you are missing the point - they probably aren't even aiming at the enthusiast/home market at the moment. The main use of this technology in the near future will be in enterprise and business environments for servers and back-haul links that currently need fiber to achieve those kinds of speeds.
 
I think quite a few of you are missing the point - they probably aren't even aiming at the enthusiast/home market at the moment. The main use of this technology in the near future will be in enterprise and business environments for servers and back-haul links that currently need fiber to achieve those kinds of speeds.

Actually, these cards would have been nice for me about a year ago. A network I worked on had some Cat6 segments, and getting cards onto them, and making sure the PCs on those segments were using the correct NIC, was a pita since the prior dude didn't mark them properly. Since this new card has backwards compatibility it wouldn't really have mattered, and it's also bringing the price of the equipment itself down drastically.

What, like this?

Requires 2500 watts? Haha, holy shit!
 
I think quite a few of you are missing the point - they probably aren't even aiming at the enthusiast/home market at the moment. The main use of this technology in the near future will be in enterprise and business environments for servers and back-haul links that currently need fiber to achieve those kinds of speeds.

I can definitely see this on a larger network infrastructure at work, but at home it would be pointless on some of our slower machines. ;) I can't wait until it hits mainstream though.
 
Guys -

A 16-drive RAID6 with a Tekram SATA RAID controller can easily push through 800MB/s sustained. Building one of these is not expensive, and that rate can be pushed over a 10G channel.

This of course requires that Linux be the server OS. You'd be lucky to sustain 200MB/s on the same hardware with anything Microsoft puts out, on either the 10G link or the controller.
 
Guys -

A 16-drive RAID6 with a Tekram SATA RAID controller can easily push through 800MB/s sustained. Building one of these is not expensive, and that rate can be pushed over a 10G channel.

This of course requires that Linux be the server OS. You'd be lucky to sustain 200MB/s on the same hardware with anything Microsoft puts out, on either the 10G link or the controller.

You are being sarcastic, right? There are like maybe 20 people on this forum that have that at home (Ockie, PS-Rage, and a couple others come to mind).

Not expensive? 16 drives... if you used 250GB drives you are looking at $1,040 (using an average price of $65 each). Not to mention the rest of the damn system.

Like I said...it is overkill for this to be implemented at home.
 
Guys -

A 16-drive RAID6 with a Tekram SATA RAID controller can easily push through 800MB/s sustained. Building one of these is not expensive, and that rate can be pushed over a 10G channel.

You'd need an array at each end, and 800MB/s will not saturate 10Gbit... not even close. What would you transfer? You'd do one transfer, or perhaps two, and you'd be out of ideas. A 16-drive array is puny in comparison to the 10G card; you are best off with a regular good ole gig card.


This of course requires that Linux be the server OS. You'd be lucky to sustain 200MB/s on the same hardware with anything Microsoft puts out, on either the 10G link or the controller.

LMAO. You are kidding me right?
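The back-and-forth over the 16-drive array comes down to arithmetic, which is easy to check: 800 MB/s works out to 6.4 Gb/s on the wire, and the drive-cost figure from earlier in the thread follows the same way.

```python
# Check the thread's numbers: does 800 MB/s sustained fill a 10 Gb/s link,
# and what do 16 drives cost at $65 each?
array_mb_s = 800
link_gb_s = 10

array_gb_s = array_mb_s * 8 / 1000      # 800 MB/s = 6.4 Gb/s
utilization = array_gb_s / link_gb_s    # fraction of the link used

print(f"{array_mb_s} MB/s = {array_gb_s} Gb/s "
      f"({utilization:.0%} of a {link_gb_s} Gb/s link)")
print(f"16 drives x $65 = ${16 * 65}")  # $1040
```

So the claimed array would use roughly two-thirds of a 10 Gb link, while comfortably exceeding what a single GigE port can carry.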
 