Load Balancing

nitrobass24

I have several network adapters in a makeshift "storage server" running XP Pro. Are there settings or registry entries that can be adjusted to achieve "load balancing", or is there third-party software for it?
 
Not easily with random NICs on XP. Intel server NICs allow for teaming, but I'm not sure what else does. How many concurrent users are on this server? What's the I/O throughput of the disks? RAID? How many spindles? Most likely you're not limited by network throughput (the situation where teaming would help).

A gigabit NIC is likely to saturate your PCI bus before it saturates the network. Also, you need multiple disks to really saturate a gigabit connection.

If there are multiple users, you could assign each NIC a different IP and have clients access the server via different IPs. But again, unless you have multiple sustained transfers from different disks, you'll likely be bottlenecked elsewhere.
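
If you go that route, here's a rough Python sketch of the idea -- one listener bound to each NIC's address, with clients pointed at different IPs. The addresses and port below are placeholders; a real FTP server would simply be configured to bind per-address instead.

```python
import socket
import threading

# Placeholder per-NIC addresses on the server -- substitute whatever
# each adapter is actually configured with.
NIC_ADDRESSES = ["192.168.1.10", "192.168.1.11"]
PORT = 2121  # arbitrary example port, not the real FTP port

def serve(bind_ip):
    """Listen on one specific NIC address and answer each client briefly."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((bind_ip, PORT))   # bound to one NIC's IP, not 0.0.0.0
    srv.listen(5)
    while True:
        conn, _peer = srv.accept()
        conn.sendall(f"served via {bind_ip}\n".encode())
        conn.close()

# One listener per NIC; clients split the load by connecting to different IPs.
for ip in NIC_ADDRESSES:
    threading.Thread(target=serve, args=(ip,), daemon=True).start()

threading.Event().wait()  # keep the main thread alive
```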
 
Well, I have 100 Mb/s Ethernet cards, not gigabit; one is Intel and the other is Linksys. I'm running an FTP server and that's it. I don't have users connect through a domain or VPN. I have a 20 GB IDE boot drive and two 400 GB drives set up in a RAID 0 stripe to host all the files and data.
 
nitrobass24 said:
Well, I have 100 Mb/s Ethernet cards, not gigabit; one is Intel and the other is Linksys. I'm running an FTP server and that's it. I don't have users connect through a domain or VPN. I have a 20 GB IDE boot drive and two 400 GB drives set up in a RAID 0 stripe to host all the files and data.

Who are the ftp clients? Internet users or others on your home network?
 
The best solution is probably going to be a GbE NIC and a GbE switch -- esp. if you have any local network users with GbE NICs built-in already.

Decent consumer GbE switches can be as low as $35, and Intel GbE NICs as cheap as $21. The D-Link DGS-1005D is inexpensive and has a 1 MB buffer per device.

NIC teaming is an advanced topic. A server / multi-client use case is appropriate for dual NICs, but single-user scenarios are not -- each conversation only reliably goes through one NIC. Windows doesn't have teaming built-in (yet). Windows network load balancing is something else. Linux has bonding support built in.
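
(If you do go the Linux bonding route, the kernel reports the bond's state under /proc/net/bonding. A quick Python sketch to read it, assuming a bond named bond0 is already configured:)

```python
# Minimal sketch: report the mode and slave NICs of an existing Linux bond.
# Assumes the bonding driver is loaded and a bond named "bond0" is configured.
from pathlib import Path

def bond_summary(bond="bond0"):
    status = Path(f"/proc/net/bonding/{bond}")
    if not status.exists():
        return f"{bond}: no such bond (is the bonding driver loaded?)"
    lines = status.read_text().splitlines()
    mode = next((l.split(":", 1)[1].strip()
                 for l in lines if l.startswith("Bonding Mode")), "unknown")
    slaves = [l.split(":", 1)[1].strip()
              for l in lines if l.startswith("Slave Interface")]
    return f"{bond}: mode={mode}, slaves={', '.join(slaves) or 'none'}"

print(bond_summary())
```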

Intel has teaming software that supports some of their NICs. I'm not sure how you get it. I think it's included with their supported NICs; perhaps just the server ones.

SysKonnect/Marvell also has teaming -- for their own NICs only. Downloadable.

Broadcom also has this -- for their own and Intel NICs; it will also support other NICs, with warnings. You get this software with Broadcom NICs.

Among others...

In general, you also need a fancy switch to support NIC teaming properly.

A single GbE NIC is going to be better overall than multiple 100 Mb/s NICs; it's easier to set up and gives better performance and "future friendliness".
 
Ever think about teaming the NICs?
Lol, just read the above talking about teaming. I would go with his ideas: getting Gigabyte and teaming the cards.
 
Teaming/gigabit will offer no performance benefit for internet users (if it's a home connection, you'd be lucky to get half a megabit of upload). For LAN users, you'd probably notice the biggest increase with gigabit NICs/switches. Even then the increase might not be huge because you're only pulling from two spindles. Teaming would most likely be pointless.

-(Xyphox)- said:
Ever think about teaming the NICs?
Lol, just read the above talking about teaming. I would go with his ideas: getting Gigabyte and teaming the cards.

Not to be too much of an ass, but you've used "gigabyte" instead of "gigabit" in five threads now. It's an easy slip, but there is a difference.
 
da sponge said:
Teaming/gigabit will offer no performance benefit for internet users (if it's a home connection, you'd be lucky to get half a megabit of upload). For LAN users, you'd probably notice the biggest increase with gigabit NICs/switches. Even then the increase might not be huge because you're only pulling from two spindles. Teaming would most likely be pointless.



Not to be too much of an ass, but you've used "gigabyte" instead of "gigabit" in five threads now. It's an easy slip, but there is a difference.

lol, thanks.. today it's the heat, I can't type at all. 105 here and it's hot even with A/C...
 
While on the gigabit topic, don't forget about the advantages of jumbo frames.

Gigabit with standard 1500-byte frames will only net you about 350 Mbps, give or take, in most common situations.

Gigabit with jumbo frames can usually saturate the connection!

But in single-HD configurations with large files you can't read or write that fast anyhow!
 
longblock454 said:
While on the gigabit topic, don't forget about the advantages of jumbo frames.

Gigabit with standard 1500-byte frames will only net you about 350 Mbps, give or take, in most common situations.

Gigabit with jumbo frames can usually saturate the connection!

But in single-HD configurations with large files you can't read or write that fast anyhow!

I get 98 MB/s sustained transfers without jumbo frames to a large RAID array (iSCSI) over GigE. I'm not arguing that jumbo frames can't improve performance, but you can still get good speeds without them.
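
If you work out the raw framing overhead, standard frames don't cost nearly as much as people assume, so any big gap tends to come from per-packet CPU load rather than the wire format. A back-of-the-envelope Python sketch (usual Ethernet/IP/TCP header sizes, TCP options ignored):

```python
# Theoretical TCP payload throughput of gigabit Ethernet at different MTUs.
# Per frame on the wire: 18 B Ethernet header + FCS, plus 20 B preamble/SFD
# and inter-frame gap; inside the MTU: 20 B IP + 20 B TCP headers (no options).
LINE_RATE_MBPS = 1000
WIRE_OVERHEAD = 18 + 20
IP_TCP_HEADERS = 20 + 20

for mtu in (1500, 9000):  # standard vs. a common jumbo frame size
    payload = mtu - IP_TCP_HEADERS
    on_wire = mtu + WIRE_OVERHEAD
    eff = payload / on_wire
    print(f"MTU {mtu}: {eff:.1%} payload -> ~{LINE_RATE_MBPS * eff:.0f} Mbps max")
```

Either way the wire-level ceiling is well above 900 Mbps; jumbo frames mostly help by cutting the number of packets the hosts have to process.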
 
longblock454 said:
While on the gigabit topic, don't forget about the advantages of jumbo frames.

Gigabit with standard 1500-byte frames will only net you about 350 Mbps, give or take, in most common situations.

Gigabit with jumbo frames can usually saturate the connection!
Here at work I can get the gig network up to 800Mbps with standard packets. Nothing crazy in use here, just an HP layer 2 switch connecting a few servers.
 
I am surprised! That is about 780Mbps!

Raptor 150s can't even read that fast! What is your hardware setup?
 
longblock454 said:
I am surprised! That is about 780Mbps!

Raptor 150s can't even read that fast! What is your hardware setup?
Not sure who that was directed to, but the 800 was with a software bandwidth tester. I can reach 400-500 when reading from the SAN, which has 10 SCSI drives in a RAID 50 configuration.
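
If anyone wants to run that kind of check themselves, a throwaway TCP blaster is only a few lines. A rough Python sketch (port, chunk size, and duration are arbitrary placeholders, and this isn't the exact tool mentioned above):

```python
# Crude TCP throughput tester: run "server" on one box, "client <ip>" on another.
import socket
import sys
import time

PORT = 5001            # arbitrary placeholder port
CHUNK = b"\0" * 65536  # 64 KiB send buffer
DURATION = 10          # seconds the client keeps sending

def server():
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind(("", PORT))
    s.listen(1)
    conn, _peer = s.accept()
    total, start = 0, time.time()
    while True:
        data = conn.recv(65536)
        if not data:
            break
        total += len(data)
    secs = time.time() - start
    print(f"received {total / 1e6:.1f} MB in {secs:.1f} s "
          f"-> {total * 8 / secs / 1e6:.0f} Mbps")

def client(host):
    s = socket.create_connection((host, PORT))
    end = time.time() + DURATION
    while time.time() < end:
        s.sendall(CHUNK)
    s.close()

if __name__ == "__main__":
    client(sys.argv[2]) if sys.argv[1] == "client" else server()
```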
 
Vette5885 said:
Not sure who that was directed to, but the 800 was with a software bandwidth tester. I can reach 400-500 when reading from the SAN, which has 10 SCSI drives in a RAID 50 configuration.

You're running gigabit to the SAN with an MTU of 1500?
 
This is a bit of a derail isn't it? "Load Balancing" is a perfectly good topic in itself. The jumbo frame myths shouldn't have to be dealt with here.

And BTW, I don't use jumbo frames, and have hit over 90 MB/s actual file transfers and over 115 MB/s TCP/IP throughput with consumer gear.
 
Madwand said:
This is a bit of a derail isn't it? "Load Balancing" is a perfectly good topic in itself. The jumbo frame myths shouldn't have to be dealt with here.

And BTW, I don't use jumbo frames, and have hit over 90 MB/s actual file transfers and over 115 MB/s TCP/IP throughput with consumer gear.

Frame size directly affects load. But you're kinda right; even though it affects load, this is probably better for a spin-off thread.

Consumer gear, define that? 90 or 115 MB/s requires striped RAID on each end to be sustained! Asus makes a ~$100 gigabit switch with jumbo frame support and a 100BT uplink.
 
I have a Linux file server with three network cards. My home network is small, so I keep everything on the same subnet, but the three NICs give me a direct line to each major network segment: gigabit, wireless, and 100 Mbps.

With one NIC (gigabit on a gigabit switch), the file server would saturate the link to the main switch (BayStack 350) and prevent other machines on the gigabit segment from communicating with the 100TX segment. Now I don't saturate any of the choke points and life is good.
 
longblock454 said:
You're running gigabit to the SAN with an MTU of 1500?
We need to use 1500 because we only have one gigabit switch with 15 100 Mb switches uplinked to it --> fragmentation = bad
 