Running nested ESXi on HP ML115 G5 – EVC Mode Gotcha

So for a couple of days I have been re-building my lab. This has been overdue for a long time and I finally had some time to get it done. The main purpose of re-building my lab is to make sure it follows the base reference architecture for vCloud as documented in the VMware vCloud Architecture Toolkit. Now you may notice the list of kit that I have in my lab below and wonder how I could do this with only two physical hosts. The simple answer is nested ESXi.

The hardware in my lab consists of:

  • 1 HP ML115 G5 (AMD Opteron 1354 and 8GB RAM)
  • 1 HP Microserver (AMD Athlon II Neo N36L and 8GB RAM)

William Lam has written an excellent article on how to run nested ESXi within vSphere 5.0. It explains how to enable the hidden Guest OS settings. To read this article click here.
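The gist of his approach on ESXi 5.0, going from memory (so do check his post for the authoritative steps), is a single host-level setting that allows hardware-assisted virtualisation to be passed through to guests. Run on each physical host, it is something along these lines:

  # Append the nested-HV option to the host config (ESXi 5.0; from memory)
  echo 'vhv.allow = "TRUE"' >> /etc/vmware/config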

So I began by re-building my physical hosts with ESXi 5.0.  They had previously been running ESXi 4.1 Update 2.

Simon Seagrave wrote a nice article on how to use an HP ML115 G5 and an HP Microserver together in an HA/DRS cluster using EVC Mode. To read this article click here.

Simon says (:-)) that by configuring EVC Mode for AMD Opteron Generation 1 you can use both the HP ML115 G5 and the Microserver together in an HA/DRS cluster. This works brilliantly: you can vMotion VMs across the hosts, and DRS does its job perfectly. This is where the problems started when building nested ESXi.
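As an aside, if you would rather script the EVC configuration than click through the vSphere Client, newer PowerCLI builds expose an -EVCMode parameter on Set-Cluster. Something like the following should do it, although the cluster name is made up and I have not double-checked the exact identifier PowerCLI expects for Opteron Generation 1:

  # Hypothetical cluster name; verify the EVC mode key before running
  Connect-VIServer -Server vcenter.lab.local
  Set-Cluster -Cluster "Lab-Cluster" -EVCMode "amd-rev-e" -Confirm:$false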

If you create a new VM by following William's guide (discussed above), you will have a configuration that looks like this:
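Going from memory, the interesting lines in the resulting .vmx are roughly the following; the sizings are purely illustrative, so adjust to taste:

  guestOS = "vmkernel5"        # the hidden "VMware ESXi 5.x" guest OS type
  virtualHW.version = "8"
  memsize = "3072"             # illustrative sizing only
  numvcpus = "2"               # illustrative sizing only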

Everything looks great. You power on the VM and start to install VMware vSphere 5.0. During the installation process, however, you will be prompted with a warning that hardware virtualization is not a feature of the CPU or is not enabled in the BIOS.

Now I knew that Secure Virtual Machine Mode was enabled in the BIOS on the HP ML115 G5, and I also knew that was the only setting I could configure for this. So what was the problem? I was not sure, but as the installer gave me the option to continue I pressed Enter. Everything went well, the host re-booted, and I now had my first nested ESXi host running. I built a second nested ESXi host, then created a cluster in vCenter and added both hosts. By this point I thought the error was not affecting anything, and as this was a lab I could just forget about it. However, I was wrong! Once I had my cluster fully configured I created a test 64-bit RHEL 6 VM, and when powering it on I was prompted with a long mode error warning that 64-bit operation was not possible.

You could still power on the VM, but it would only run in 32-bit mode. Now I knew my friend and ex-colleague Simon Gallagher had 64-bit VMs running under nested ESXi on an HP ML115 G5 because of his vTARDIS project. You can read more about this by clicking here.

We spent quite a bit of time comparing configurations, trying various re-builds and various changes in the .vmx file, but nothing would work. We even got to the point where he was going to export one of his vESXi VMs and send it over for me to try.
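For anyone hitting the same wall, one quick sanity check is to SSH to the nested ESXi host and see whether it believes hardware virtualisation is available to it at all. If I remember the output correctly, a value of 3 means HV is present and enabled, and anything else means the nested host cannot see it:

  # Run in the nested ESXi host's shell; 3 = HV present and enabled (from memory)
  esxcfg-info | grep "HV Support"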

I then remembered the EVC Mode configuration and how it masks the CPUID responses that hosts present to their VMs. Going back to William's article, he talks a little about CPUIDs and hardware version 4 or 7, and I instantly realised what the problem was: with the cluster baselined at Opteron Generation 1, the nested ESXi VMs never see the AMD-V (SVM) capability they need in order to run 64-bit guests.

I disabled EVC Mode on the cluster and powered on the nested ESXi hosts. Going back to that cluster, I was then able to successfully power on a 64-bit VM running under my nested vESXi.
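The same change can be made from PowerCLI if you prefer, again assuming a recent build and with a made-up cluster name:

  # Disable EVC on the cluster and confirm it is off (hypothetical names)
  Set-Cluster -Cluster "Lab-Cluster" -EVCMode $null -Confirm:$false
  Get-Cluster -Name "Lab-Cluster" | Select-Object Name, EVCMode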

Unfortunately, due to turning off EVC Mode, DRS no longer works across my two different physical hosts, but that's OK: I can manually distribute the load, and even without EVC Mode HA still protects those VMs.

 
