ESXi 4.1U1: Passthrough for the Intel chipset on X8DT3

TheLastBoyscout

Hi,

I inherited an X8DT3-based system with 2x 15k 36 GB SAS drives. I want to use the system as an all-in-one server: the first VM to start [#1, a Solaris clone] provides the storage [1] for all the other 6+ Win XP/7 VMs. To get decent performance for all the VMs, I plan on running the storage served by VM #1 on a RAID10 of 600 GB VelociRaptors (+L2ARC).
Now, in order to make use of the two SAS drives, they need to be connected to the LSI1068e controller, so I cannot pass that controller through to the storage VM. Could I instead pass the Intel ICH10R through to VM #1 and connect the Raptors to it?
I was planning on booting ESXi 4.1U1 from a USB stick, but if the USB ports hang off the ICH10R, that would prevent me from passing the ICH10R to VM #1, right? I could still install ESXi on the 2x SAS drive mirror along with VM #1.

Ideally, I'd pass the LSI1068e through, but then I couldn't use the two SAS drives and that seems like a waste....

Am I off base here? Is this feasible?

TLB
 
Try VMFS performance before you decide you have to pass anything through. ZFS and the like will work just fine on a VMDK, and unless you know something I don't, I haven't seen anything that says performance suffers significantly on VMFS (if at all).
 
I have a similar Supermicro board. Neither of the RAID controllers is supported natively in ESXi; they both show up as plain non-RAID controllers. Looking at my ESXi passthrough menu, the Intel controller is listed with no dependencies, so you should have no problem passing it through; the USB controllers are all separate.
 
For best performance and ZFS pool and error control, you should pass through
your 1068e controller to your Solaris-based NAS and share the pool via NFS to ESXi.

I would not boot ESXi from USB but from disks (much faster and more reliable,
but Raptors are not needed; I use cheap 24/7 160 GB 2.5" drives from WD and keep the Raptors for backup)
and use the remaining space as a local datastore for VM #1 = the Solaris NAS/SAN.

If you need RAID-1 for your ESXi and/or Solaris install, you can use a driver-independent hardware RAID-1
enclosure like a Raidsonic SR2760-2S-S2B or similar.

You may also pass through SATA, but I would boot ESXi from disk.
See my config at http://www.napp-it.org/doc/downloads/all-in-one.pdf

Gea
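
To make the above more concrete, here is a minimal sketch (not from Gea's post) of what the storage VM might run once the 1068e and its disks are passed through. The pool and dataset names ("tank", "tank/vmstore") and the c2t*d0 device ids are placeholders; check the real device names in the guest first.

Code:
import subprocess

# Hypothetical device ids -- run `format` in the Solaris guest to find the
# real names of the disks behind the passed-through 1068e.
DISKS = ["c2t0d0", "c2t1d0", "c2t2d0", "c2t3d0"]

def run(cmd):
    print("+ " + " ".join(cmd))
    subprocess.check_call(cmd)

# Two mirrored pairs = RAID10, matching the VelociRaptor plan above.
run(["zpool", "create", "tank",
     "mirror", DISKS[0], DISKS[1],
     "mirror", DISKS[2], DISKS[3]])

# Carve out a filesystem for VM storage and export it over NFS so ESXi
# can mount it as a datastore.
run(["zfs", "create", "tank/vmstore"])
run(["zfs", "set", "sharenfs=on", "tank/vmstore"])

On the ESXi side the export is then added as an NFS datastore pointing at the storage VM's IP, and the remaining VMs live on that datastore.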
 
I'm still waiting for proof of the performance claims. Anyone have anything to back that up? I'm really curious what people are seeing, since there shouldn't be a significant difference at all.

Speed of boot media doesn't matter for ESXi, unless you're logging to local storage or running VMs on the same media you boot from. The bootbanks decompress to memory, so there is very little actual access to the boot media once you're up.
 
I agree on this one. I have had my ESX servers running off 2GB USB keys for a couple of years already. Unless you boot them regularly the speed should not be an issue. Since I will need a local disk for my NAS VM, I might install to the same local disks this time just because they are there. But having several spare USBs laying around for DR if one of the boot ones fails somehow is a very cheap redundancy option.
 
I think there is some confusion here. Based on other posts, I don't think Gea is talking about boot performance but about general I/O performance. I see about a 50% performance hit going from native ZFS to creating a large virtual disk on each of the N drives and assigning those to the VM. I've read quite a few articles claiming that PCI passthrough only carries a small penalty.
 
I agree on this one. I have had my ESX servers running off 2GB USB keys for a couple of years already. Unless you boot them regularly the speed should not be an issue. Since I will need a local disk for my NAS VM, I might install to the same local disks this time just because they are there. But having several spare USBs laying around for DR if one of the boot ones fails somehow is a very cheap redundancy option.

Be careful - if you have multiple boot partitions presented with ESXi installed, then you'll get a soft panic and it won't boot. The FAT16 signature embedded in the partition is statically generated by your install media - it will always make the same signature, so ESX can't tell one boot device from another.

I think there is some confusion here. Based on other posts, I don't think Gea is talking about boot performance but about general I/O performance. I see about a 50% performance hit going from native ZFS to creating a large virtual disk on each of the N drives and assigning those to the VM. I've read quite a few articles claiming that PCI passthrough only carries a small penalty.

Ok, what are you doing for the testing that makes you think there's a 50% performance hit from something else?

Not doubting, just reproducing, because if that's true, then we've got something we need to fix. :)
 
Take, say, a 5-disk raidz. I boot from a live CD of, say, ZFSguru and run a disk benchmark; a simple, naive one, say writing a 16GB file to a ZFS folder. Then I boot ESXi, create a datastore on each of the 5 physical disks, create a single large virtual disk using almost all of each datastore, and add each of those 5 disks to the FreeBSD VM I then create. I boot that VM from the ZFSguru live CD, create a pool on the 5 "disks" it sees, then do the exact same 16GB (or whatever) write. It's about half as fast. It isn't a matter of my "thinking" there is a slowdown; I can clearly see and measure it. Are you saying this kind of setup should have a negligible slowdown compared to a physical test on the real hardware? If so, what do you say it should be, and based on what data?
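
For reference, a minimal sketch (not from the thread) of that kind of naive sequential-write test; the target path is a placeholder and should point at a directory on whichever pool is under test.

Code:
import os
import time

TARGET = "/tank/bench/testfile"   # placeholder: a file on the pool under test
SIZE_GB = 16                      # same ballpark as the test described above
CHUNK = 1024 * 1024               # write in 1 MiB chunks

buf = os.urandom(CHUNK)           # incompressible data, so compression can't flatter the result
total_bytes = SIZE_GB * 1024 ** 3

start = time.time()
with open(TARGET, "wb") as f:
    written = 0
    while written < total_bytes:
        f.write(buf)
        written += CHUNK
    f.flush()
    os.fsync(f.fileno())          # make sure the data actually hit the disks
elapsed = time.time() - start

print("wrote %d GiB in %.1f s -> %.1f MB/s"
      % (SIZE_GB, elapsed, total_bytes / elapsed / 1e6))

Running the same script once on bare metal and once inside the VM against the VMDK-backed pool gives the two numbers being compared here.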
 
Ok, give me a few to dissect this and test. If I'm following you right, you're not actually booting the FreeBSD VM into an installed OS, just booting it off the live ISO to run the tests, correct? LSI Parallel virtual card?

What's the physical hardware (controller card) you're running? (Never mind, I see it above.)

I'll run some tests on my hardware to see what I get.
 
Well, I've seen this running the ZFSguru live CD but also installed to a virtual HD. The SATA controller is an Nvidia chipset (AMD Opteron mobo). Possibly ESXi isn't doing well with the Nvidia chipset? Yes, the virtual controller is LSI Parallel...
 
Potentially. I'll check on that. I'm going to recreate as best I can - I've got multiple storage devices, but installing natively is a bit harder, so I may have to dig to find a physical system I can hook up to the storage fabrics (and then remember how to install), or local disk. It may be a local storage issue as well, so I'll try to test both that and SAN storage.

Let me see what I can find. :)
 
Ok, the first set of results is coming in (testing against a Clariion volume just to compare ZFS pools against a known quantity) and I'm seeing some odd results. I'm starting to think this may be a driver or ZFS issue with FreeBSD. I'm getting significantly less performance out of this pool than I do out of a Linux LVM device on the same volume.

Gonna do more testing.
 
I know what performance I should get off of a certain volume. Running tests against that volume with ZFSguru, I'm seeing ~seriously~ different performance numbers. So I'm busy installing FreeBSD so I can tweak the drivers more than I could on the live CD, to see if maybe it's a configuration issue :)

Basically, I'm trying to set a baseline against known values, and I'm already seeing a problem getting there. At first glance it would appear that ZFS pools are running at half the speed (or worse) of a Linux LVM with ext3. But I'm not sold on that, so I'm testing :)
 
Wow, so I wasn't hallucinating :) Since you are testing ZFSguru/FreeBSD, I'm going to try OpenIndiana on physical hardware and in a VM and post the results.
 
OK, getting better numbers so far with an installed FreeBSD with VMware Tools on a zpool (raidz). Will start some comparisons against local storage shortly.
 
Be careful - if you have multiple boot partitions presented with ESXi installed, then you'll get a soft panic and it won't boot. The FAT16 signature embedded in the partition is statically generated by your install media - it will always make the same signature, so ESX can't tell one boot device from another.

I'm not sure I am following you, or I may not have been clear. I meant, I might load ESXi to the disk, then use the rest of the disk as a VMFS and install my NAS to boot from a VMDK on that VMFS volume.

I am very interested in seeing some of the results of the tests you are running. I hope to have my NAS gear within 2 weeks, and unless performance is almost on par I think I would rather run the NAS natively. It would be great to make use of the server for other small VMs by doing all-in-one, but since I am building it to support my other 2 ESX servers, I need it to perform as well as I can manage. I am also still on the fence about which OS to run for the NAS, so I will be looking at those metrics too as I get closer to loading an OS.
 