Colo network design

ltickett ([H]ard|Gawd, joined Jul 27, 2000, 1,125 messages)
I've just broken the bank on hardware to start running some hosted services in a colo and need a few pointers on the network design.

I feel relatively confident but have never actually needed to put VLANs into practice before.

Here's roughly what I have in mind...

A 100Mbit WAN connection will come into an EdgeRouter Lite and then feed a D-Link 1210-24 (24-port 1GbE), which in turn will feed a Netgear ProSafe XS708E (8-port 10GbE).

- a SAN VLAN / subnet with a single 10GbE connection to the NAS and both ESX servers (if possible I will use one of the 1GbE ports for failover)
- a Data VLAN / subnet with a single 10GbE connection to the NAS and both ESX servers (if possible I will use one of the 1GbE ports for failover)
- a Management VLAN / subnet with a single 1GbE connection to each ESX server
- an IPMI VLAN / subnet with a single 1GbE connection to each ESX server

Does that sound about right?
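For illustration, the router side of a plan like the one above could be sketched on the EdgeRouter Lite (EdgeOS / Vyatta-style CLI) as one 802.1Q sub-interface per VLAN. The VLAN IDs, subnets, and interface names here are placeholders, not anything from the original post:

```shell
# Hypothetical EdgeOS sketch: eth1 faces the switches and carries all four
# VLANs as tagged sub-interfaces (vifs). IDs and subnets are made up.
configure
set interfaces ethernet eth1 vif 10 description "SAN"
set interfaces ethernet eth1 vif 10 address 10.0.10.1/24
set interfaces ethernet eth1 vif 20 description "Data"
set interfaces ethernet eth1 vif 20 address 10.0.20.1/24
set interfaces ethernet eth1 vif 30 description "Management"
set interfaces ethernet eth1 vif 30 address 10.0.30.1/24
set interfaces ethernet eth1 vif 40 description "IPMI"
set interfaces ethernet eth1 vif 40 address 10.0.40.1/24
commit
save
```

This only works if every switch between the router and the hosts passes the 802.1Q tags, which is exactly the question raised later in the thread.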
 
It's not a SAN if it's connected to the same equipment; it's just a storage VLAN in that case.

In that case, why limit yourself for no reason? Bundle the two links and pass all traffic over both (VLAN-separated), with failover.

The rest sounds fine.
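As a sketch of what "all traffic over bundled links, VLAN-separated" could look like on the ESXi side: both 10GbE uplinks go on one vSwitch, and traffic types are split by VLAN-tagged port groups instead of dedicated physical links. The vSwitch, vmnic, port group names, and VLAN IDs below are placeholders:

```shell
# Hypothetical ESXi sketch: one vSwitch with two 10GbE uplinks, traffic
# separated by VLAN-tagged port groups rather than separate physical NICs.
esxcli network vswitch standard add --vswitch-name=vSwitch1
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic2
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic3
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name=Storage
esxcli network vswitch standard portgroup set --portgroup-name=Storage --vlan-id=10
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name=VM-Data
esxcli network vswitch standard portgroup set --portgroup-name=VM-Data --vlan-id=20
```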
 
Do you think using VLANs is an unnecessary complication and I should just stick to using different subnets? Or is there some benefit to using VLANs?

I like your idea about "bundling the two links" as it provides better failover. Could you recommend the best way to do this? I know there are different methods, but I'm never sure how each of them works, or whether any are intelligent enough to work well in this scenario (failover will work fine, but I'm not sure if some/all of the methods will correctly determine that one port is busier than the other and use it accordingly).
 
The XS708E is an unmanaged switch, right? It might be difficult to impossible to set up a useful NIC team with ESXi on an unmanaged switch. I'm not even sure you can configure VLANs on the XS708E. The features page says 802.1Q, but if it's truly unmanaged I don't know how you'd set that up...

Research :)
 

You are correct. It is an unmanaged switch, so no VLANs for you.

You need to invest in switches with a better feature set.
 

NIC teaming in VMware or Hyper-V in switch-independent mode does not need any configuration on the switch side.
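For example, a switch-independent team on an ESXi standard vSwitch is just a host-side load-balancing policy; no LACP or other switch configuration is involved. A sketch, with placeholder vSwitch and vmnic names:

```shell
# Hypothetical sketch: "route based on originating port ID" teaming, which is
# switch-independent. Each VM/vmkernel port sticks to one uplink and fails
# over to the other if its link goes down.
esxcli network vswitch standard policy failover set \
    --vswitch-name=vSwitch1 \
    --active-uplinks=vmnic2,vmnic3 \
    --load-balancing=portid
```

Note that the `iphash` load-balancing value, by contrast, does require a static EtherChannel/LACP group on the switch, so it wouldn't be an option on an unmanaged switch.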
 
http://www.decryptedtech.com/storag...fe-xs708e-8-port-10gbe-switch-review/Page-4-1

Looks like it can be managed?

But maybe I will keep things simple and just use different subnets.

And similarly, if it doesn't support some form of link aggregation, maybe I'll just use the two connections for failover only. 10GbE should be enough for "all of the traffic".

Budget wouldn't have stretched to a more feature-rich switch, I don't think.
 
Oh yes, and I intend to use NFS, after running some benchmarks to confirm that it is faster than iSCSI (as I did previously for 1GbE; http://tickett.wordpress.com/2014/04/25/esx-iscsi-vs-nfs-vs-local-single-ssd-vs-raid0-ssd/).

There wasn't much in it in most cases, apart from the sequential writes:

[Chart: NFS vs iSCSI vs local-disk benchmark results (screen-shot-2014-04-25-at-16-26-58.png)]
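For anyone wanting to reproduce a sequential-write comparison like this, one common approach is to run fio inside a test VM whose disk sits on each datastore type in turn (NFS, iSCSI, local). The filename and size below are placeholders, and this obviously needs fio installed and real storage behind it:

```shell
# Hypothetical fio run: 1 GiB sequential write with direct I/O, so the page
# cache doesn't inflate the numbers. Repeat per datastore and compare MB/s.
fio --name=seqwrite --rw=write --bs=1M --size=1g \
    --direct=1 --ioengine=libaio --filename=/mnt/test/fio.dat
```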
 
I don't understand your last graph. Is that the expected IO performance for your setup? That's somehow acceptable?
 
No. That's actual performance (MB/s) on a 1Gbe network (shown alongside local disk). I will attempt the same benchmark on the 10Gbe network once I get the remaining gear.
 
Um, your RAID0 SSD is slower than a single SSD? How the hell?
 
Create an LACP trunk to each device. Create a VLAN for storage, a VLAN for VM traffic, and a VLAN for your management (which can probably include IPMI as well).
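On a managed switch that supports LACP (not the XS708E discussed above), that suggestion might be sketched roughly like this; the IOS-style syntax, port numbers, and VLAN IDs are placeholders for whatever the chosen switch actually uses:

```shell
# Hypothetical managed-switch sketch (Cisco-IOS-style): bond two ports toward
# one ESX host with LACP, then trunk the three VLANs over the bundle.
interface range GigabitEthernet0/1-2
 channel-group 1 mode active
interface Port-channel1
 switchport mode trunk
 switchport trunk allowed vlan 10,20,30
```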
 