Generation 2 Virtual Machines on Windows 8.1 and Server 2012 R2 plus other nice new features

DDD North 2013 was a fantastic community conference but sadly I didn’t get the chance to deliver my grok talk on Generation 2 virtual machines. A few people came up to me beforehand to say they were interested in the topic, and a few more spoke to me afterwards to ask if I would blog. I had planned to write a post anyway, but when you know it’s something people want to read you get a bit more of a push.

This post will cover two areas of Hyper-V in Windows 8.1 and Server 2012 R2: Generation 2 virtual machines, which are completely new, and a number of changes that should apply to all VMs, be they gen 1 or gen 2. What I’m not going to cover, as it’s a post all of its own, is the new and improved software-defined networking in Hyper-V.

Generation Next

As you can see in the screenshot below, when creating a virtual machine in Windows 8.1 and Server 2012 R2 you are asked which generation of VM you want. The screen gives a brief and reasonable summary of what the differences are… to a point.

image

Generation 1 virtual machines are a mix of synthetic and emulated hardware. This goes all the way back to previous virtualisation solutions where the virtual machine was usually a software emulation of the good old faithful Intel 440BX motherboard.

  • The emulated hardware delivered a high level of compatibility across a range of operating systems. Old versions of DOS, Windows NT, Netware etc would all fairly happily boot and run on the 440BX hardware. You didn’t get all the cleverness of a guest that knew it was inside a VM but it worked.
  • PXE (network) boot was not possible with Hyper-V’s synthetic network adapter, which meant you had to use the emulated (legacy) NIC if you wanted to network boot.
  • Virtual hard disks could be added to the virtual SCSI controller whilst the machine was running, but not the IDE controller. You couldn’t boot from a SCSI device, however, so many machines ended up with drives on both.
  • Emulated keyboard controllers and other system devices were also implemented for compatibility.

Generation 2 virtual machines get rid of all that legacy, emulated hardware. From what I’ve read and heard, all the devices in a generation 2 VM are synthetic, software generated. This makes the VM leaner and more efficient in how it uses resources, and potentially faster as gen 2 VMs are much closer to the kind of hardware found in a modern PC.

There are three key changes in Gen 2 as far as most users are concerned:

  • SCSI disks are now bootable. There is no IDE controller at all; all drives (VHD or virtual optical drive) are now on the SCSI controller. This is far simpler than before.
  • Synthetic network adapters support PXE boot. Gone is the old legacy network adapter.
  • The system uses UEFI rather than BIOS. That means you can implement secure boot on a VM. Whilst this might sound unnecessary it could be of great interest to organisations where security is key.

The drawback of gen 2 is that, right now, only Windows 8, Server 2012 and their respective updated versions (Windows 8.1 and Server 2012 R2) can run as guests in a gen 2 VM. I’m not sure that list will grow in terms of Microsoft operating systems, but I do expect a number of Linux distributions to join the club eventually. I have done a good deal of experimentation here with a large range of Linux distributions: pretty much across the board I could get the installation media to boot, but the install failed because the hardware was unknown. That suggests that when Microsoft releases new versions of the Hyper-V integration components for the Linux kernel we should see support expand.
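If you prefer PowerShell to the wizard, the Hyper-V module in Windows 8.1/Server 2012 R2 can create a generation 2 VM directly. This is only a sketch – the VM name, paths, sizes and switch name below are made up, so substitute your own:

# Create a generation 2 VM with a new VHDX attached to the SCSI controller
New-VM -Name "Gen2Test" -Generation 2 -MemoryStartupBytes 2GB -NewVHDPath "D:\VMs\Gen2Test.vhdx" -NewVHDSizeBytes 60GB -SwitchName "External"
# Secure boot is on by default for gen 2; it can be toggled through the VM firmware settings
Set-VMFirmware -VMName "Gen2Test" -EnableSecureBoot Off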

The screenshot below shows the new hardware configuration screen for a generation 2 virtual machine. Note the much shorter list of devices in the left-hand column:

image

Useful changes across generations

There have been some other changes that, in theory, span generations. More on that in a bit.

Drives

When Server 2012/Windows 8 arrived, Microsoft added bandwidth management for virtual network adapters. That’s useful for IT pros who want to manage what resources servers can consume, but it’s also jolly handy for developers who would like to try low bandwidth connections during testing. We can’t do anything about latency with this approach, but it’s nice to be able to dial a connection down to 1Mbps to see what the impact is.
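For reference, the same cap can be applied from PowerShell with Set-VMNetworkAdapter. A rough sketch with a made-up VM name – note that the value is given in bits per second rather than the Mbps shown in the GUI:

# Limit the VM's virtual NIC to roughly 1Mbps to simulate a slow connection
Set-VMNetworkAdapter -VMName "TestVM" -MaximumBandwidth 1000000
# Setting the value back to 0 should remove the cap again
Set-VMNetworkAdapter -VMName "TestVM" -MaximumBandwidth 0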

Server 2012 R2/Windows 8.1 add a similar option for the virtual hard drive. We can now specify QoS for virtual hard disks, in IOPS, and the system allows you to set both a minimum and a maximum. It’s important to remember that this depends on the physical tin beneath your VM. I run two SSDs in my laptops now, but before that my VMs ran on a 5400rpm drive, and trying to set a high value for minimum IOPS wouldn’t have got me very far there. What is more useful, however, is being able to set the maximum value so we can start to simulate slow drives for testing.

As with network bandwidth management, I think this is also a great feature for IT pros who need to manage contention between VMs and focus resource on key machines.
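PowerShell can do the same job via Set-VMHardDiskDrive. A hedged sketch, assuming a VM called TestVM with its disk at SCSI controller 0, location 0 (the IOPS figures are, as I understand it, normalised to 8KB operations):

# Cap the virtual disk at 200 IOPS to simulate a slow spindle
Set-VMHardDiskDrive -VMName "TestVM" -ControllerType SCSI -ControllerNumber 0 -ControllerLocation 0 -MaximumIOPS 200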

The screenshot below shows the disk options screen with QoS and more.

image

Also new is the ability to resize a VHD that is attached to a running machine. This is only possible with disks attached to the SCSI controller, so gen 2 VMs get more benefit here. Additionally, VHDs can now be shared between VMs. Again, this is SCSI only, but it is a really useful change because it means we can build clusters with shared storage hosted on VHDs rather than directly attached, iSCSI or Fibre Channel storage. The end result is to make more options available to the little guys who don’t have the resources for expensive tin. It’s also great for building test environments that need to mirror those of a customer – we do that all the time and it’s going to give us lots of options.
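By way of illustration, both can be driven from PowerShell. The paths and sizes below are made up; as far as I can tell the online resize needs the newer VHDX format on a SCSI controller, and I believe the -SupportPersistentReservations switch is what marks a disk as shared when you attach it to each node of a guest cluster:

# Grow a VHDX that is attached to a running VM's SCSI controller
Resize-VHD -Path "C:\VMs\TestVM\Data.vhdx" -SizeBytes 120GB
# Attach a shared VHDX to a clustered VM (repeat for each node in the guest cluster)
Add-VMHardDiskDrive -VMName "Node1" -ControllerType SCSI -Path "C:\ClusterStorage\Volume1\Shared.vhdx" -SupportPersistentReservations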

Networks

I already said that I’m not going to dive into the new software-defined networking here. If terms like NVGRE get you excited then there are people with far more knowledge of comms than I have writing on the subject. Suffice to say it looks really useful for IT pros but not really for developers, I don’t think.

Also not much use for developers but incredibly useful for IT pros is the new Protected Network functionality. The concept is really simple and so, so useful:

Imagine you have a two-node cluster. Each node has a network connection for VMs, not shared by the host OS, and one for the OS itself that the cluster uses. Node 1 suddenly loses connectivity on the VM connection. What happens? Absolutely nothing with Server 2012, because the VMs are still running and nothing knows that they no longer have connectivity. With Server 2012 R2/Windows 8.1 you can enable protected network for the virtual adapter. Now the system checks connectivity for the VMs and, in our scenario, all the VMs on node 1 will merrily fail over to node 2, which still has a connection.

I know we will find this new feature useful on our clustered, production VM hosts. Again, this really helps smaller organisations get better resilience from simpler hardware solutions.

The screenshot below shows the advanced options for a network adapter with network protection enabled.

image

Enhanced session mode

I said that, in theory, many of the new changes are pan-generation (and pan-guest OS). According to the documentation, enhanced session mode should work with more than just Windows 8.1 or Server 2012 R2 guest operating systems. In practice, I have not found this to be the case, even after updating the VM additions on my machines to the latest version.

It is useful, however. When you enable enhanced session mode then, provided you have enabled remote desktop on the guest, that will be used to connect to the VM – even if the guest has no network connection to the host OS, or even a network adapter!
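The host-side switch for this can also be flicked from PowerShell – a one-liner, assuming you run it on the Hyper-V host itself:

# Allow enhanced session mode connections on this host
Set-VMHost -EnableEnhancedSessionMode $true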

The screenshot below shows the option for enhanced session mode. This is enabled by default in Windows 8.1 and disabled by default in Server 2012 R2.

image

When you have the option enabled you will see a new button on the right of the toolbar, as shown in the image below.

image

That little PC with a plus symbol toggles the VM connection between old-style and the new, RDP-based connection. The end result is that you get more screen resolution choices, you can copy and paste properly between your host and the VM (no more pasting as simulated keystrokes, and you can copy files and documents!) and USB device pass-through from the host works too.

For developers working inside a VM this is great – no more needing network connections to be able to RDP into a box. That means you can run sensitive VMs, or multiple copies of a VM on multiple machines, much more easily than before. If you enable the new connection mode on a VM and restart it, the connection starts in the old way as the VM begins to boot, but as soon as the RDP service on the guest is detected you get a dialog asking you for the new resolution and it switches to the RDP-style connection. It’s great.

I’m hoping that there will either be updates for older Microsoft OS versions, or updated VM additions, that will give the consistent result that I have not so far experienced. In theory, updates to the Linux kernel additions could also add this new connection type but, again, my experience so far is that it doesn’t work right now.

Summary

To sum up then:

  • Generation 2 VMs – leaner, meaner and simpler all round, but limited to the latest Microsoft desktop and server OSes. I can’t see a reason not to use them for the latest OS versions.
  • Disk QoS – should be really useful for dev/test when you need to simulate a slow drive. Great for IT pros to manage environments with a mix of critical and non-critical VMs.
  • Online VHD resizing. There are so many times I’ve needed this on dev/test in the last few months alone. Shame it’s SCSI only so you can’t grow the OS disk on a gen 1 VM but you can’t have everything.
  • Shared VHD. Another useful new option that will help building dev/test environments and will also be useful for smaller organisations who want to build things like virtualised clustered file servers using a cluster shared volume (CSV).
  • Network protection. Great for IT pros running host clusters. Can’t see a use for devs.
  • Enhanced session mode. Useful all round, especially for devs who want to work on a VM easily. Useful for IT pros who need to copy stuff onto running VMs, but so far my experience is mixed as it only works with Windows 8.1 and Server 2012 R2 guests.

Windows 8.1 is already on MSDN and TechNet so if you’re a dev or IT Pro with the right subscriptions, why aren’t you trying this stuff already? For everybody else, the 18th of this month sees general availability and I expect evaluation media will be available for you to play with.

IT Camp Leeds Roundup


Yesterday was great fun and I was really pleased to see so many Black Marble event regulars at the IT Camp. It was great to hear so many requests for more events like it in Leeds. We’re all keen to run more, but we need people to attend and give us feedback in order to be able to do that.

I hope those of you who were there took away useful knowledge from the event. Andy and Simon were very keen that it should not be a day of PowerPoint and canned demos, and we certainly delivered that. Did we have technical issues that meant we had to change plans on the fly? Sure! Certainly nobody we spoke to seemed to mind. All of us from Black Marble thought the concept for the day – one of interaction, audience participation and trying to build systems on the fly – would be fun, and we think it was.

I believe that the TechNet UK folks were tweeting and posting links to some of the things we talked about yesterday but I thought it would do no harm to round some of them up here.

  • When we were talking about configuring remote management of Hyper-V servers I mentioned HVRemote. This is a script written by John Howard that has been really useful for Andy and myself in the past. John’s blog has lots of useful information about Hyper-V and its management, although he’s not posted for a while now.
  • Also a great source of information on Hyper-V and virtualisation is Ben Armstrong (VPC-Guy).
  • The Virtualisation Team Blog is a good place for product info, announcements and knowledge.
  • Richard Fennell posts regularly on Lab Manager, which builds on Hyper-V and SCVMM to deliver great things for your dev and test teams. I thought either he or Andy had blogged about how we got Lab Manager 2010 working with a hyper-v cluster but it appears not. We’ll see if we can get something written up in that space.
  • Core Configurator (currently at version 2.0) was shown as a handy tool to control some of the settings on your Hyper-V server.
  • The Microsoft iSCSI Target is a free download. The Virtualisation Team blogged on its release.
  • For those of you who played with the Surface, we have some videos on YouTube of the Retail, Concierge and O7 game that were filmed at NRF 2012.

I’m really looking forward to other camps. Andy and Simon want to keep the hands-on approach so you can look forward to playing with an installed SCVMM solution in the follow-up virtualisation camp, and the consumerisation of IT camp should be wild as we try to cover how IT pros can deal with the variety of devices that our staff (and our bosses) want to use!

As always, the page of details about the events is here!

Server Core, Hyper-V and VLANs: An Odyssey

A sensible plan

This is a torrid tale of frustration and annoyance, tempered by the fun of digging through system commands and registry entries to try and get things working.

We’ve been restructuring our network at Black Marble. The old single subnet was creaking and we were short of addresses, so we decided to split the network into subnets for physical servers, virtual internal servers, virtual development servers, desktops, wifi etc. We don’t have a huge amount of network equipment, and we needed to put virtual servers hosted on Hyper-V on separate networks, so we decided to use VLANs.

Our new infrastructure has one clever switch that can generate all the VLANs we need, link those VLANs to IP subnets and provide all the routing between them. By doing it this way we can present any subnet to any port on any switch with careful configuration and use of the 802.1Q VLAN standard. Hyper-V servers can have a single physical interface with traffic from multiple VLANs flowing across it to the virtual switch, with individual VMs assigned to specific VLANs.

We did the heavy lifting of the network move without touching our Hyper-V cluster, placing all the NICs of all the servers on the VLAN corresponding to our old IP subnet. We then tested VLANs over the virtual switch in Hyper-V using a separate server and made sure we knew how to configure the switch and Hyper-V to make it all work.

Then we came to the cluster. Running Windows 2008 R2 Server Core.

Since we built the cluster, Andy and I have come to the conclusion that if we ever rebuild it, server core will not be used. It’s just too darn hard to configure when you really need to, and this is one of those times.

A tricky situation

Before we began to muck around with the VLAN settings, we needed to change the default gateway that the servers used. The old default gateway was the address of our ISA (now a shiny TMG) server. That box is still there, but now we have the router at the heart of the network, whose address is the new default gateway.

To change the default gateway on server core we need a command line tool. Enter Netsh, stage left.

We first need to list the interfaces so we know what we’re doing. IPConfig will list the interfaces and their IP settings. Old lags will no doubt abbreviate the netsh commands that we need next but I’ll write them out in full so they make sense.

Give me a list of the physical network adapters and their connection status: netsh interface show interface

Show me the IPV4 interfaces: netsh interface ipv4 show interface

To change the default gateway we must issue a set command with all the IP settings – just entering the gateway will not work as all the current settings get wiped first:
netsh interface ipv4 set address name="<name>" source=static address=x.x.x.x mask=255.255.255.0 gateway=x.x.x.x
Where <name> is the name shown in the IPV4 interface list, which will match the one shown in the ipconfig output that you want to change the gateway for. We’re using a class C subnet structure – your network mask may vary.
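Putting that together, a worked example with a made-up interface name and addresses (substitute your own) looks like this:

netsh interface ipv4 set address name="Local Area Connection 2" source=static address=192.168.10.15 mask=255.255.255.0 gateway=192.168.10.1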

It’s worth pointing out that we stopped the cluster service on the server whilst we did this (changing servers one by one so we kept our services running).

We had two interfaces to change. One corresponded to the NIC used to manage the server and the other to the one used by the virtual switch for Hyper-V. That accounted for two of the four NICs on our Sun X2200 M2s, with the SAN iSCSI network taking a third. The SAN used one Broadcom, the spare was the other Broadcom, and the management and virtual switch connections each used one of the two nVidia NICs on the Sun (that will become important shortly).

A sudden problem

Having sorted the IP networking, our next step was to sort out the VLAN configuration. To do that we changed the switch port that the NIC hosting the Hyper-V virtual switch was connected to from being an untagged member of only our server subnet VLAN to being a tagged member of that VLAN and of the new VLAN corresponding to our subnet for virtual internal servers.

The next step was to set the VLAN ID for a test VM (we could ignore the host as it doesn’t share the virtual switch – it has its own dedicated NIC).

The snag was, the checkbox to enable VLAN IDs was disabled when we looked in Hyper-V Manager, both for the virtual switch and for the NIC in the VM.

Some investigation and checking of our test server showed that the physical network driver had a setting, Priority and VLAN, that needed to be set to enable priority and VLAN tagging of traffic, and that the default state was priority only. On a full server that’s a checkbox in the driver settings. On server core…?

So, first of all we tried to find the device itself. Sadly, the server decided that remote management of devices from another server wasn’t going to be allowed, despite reporting that it should be. So we searched for command line tools.

To query the machine so it lists visible hardware devices: sc query type= driver (note the space before ‘driver’)

That will give you a list, allowing you to find the network device name.

For our server, that came back with nvenetfd – the nVidia NIC.

To use that name and find the file responsible: sc qc <device name> (in our case nvenetfd )

That returned the nvm62x64.sys driver file. Nothing about settings, but it allowed us to check the driver versions. Hold that thought, I’ll come back to it shortly.

Meanwhile

We’d also been poking at the test server, looking at the NIC settings. Logic suggested that the settings should be in the registry – all we had to do was find them.

I’ll save you the hunt:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Class\{4D36E972-E325-11CE-BFC1-08002BE10318}

That key holds all the network adapters. There are keys beneath it that are numbered (0000, 0001, etc.), and the contents of those keys enabled us to figure out which key matched which adapter. Looking at the test server and comparing it with the server core Hyper-V box, we found a string value called *PriorityVlanTag which had a value of 3 on the test server (priority and VLAN enabled) and 1 on the Hyper-V box. We set the Hyper-V box to 3. Nothing. No change. We rebooted. Still nothing.
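As an aside, if you need to make that change on server core without remote registry tools, reg.exe will do it from the command line. The adapter’s instance key (0007 below) is just an example – yours will differ, so query each numbered key first to find the right adapter; on the machines I’ve looked at the value is stored as a string:

reg query "HKLM\SYSTEM\CurrentControlSet\Control\Class\{4D36E972-E325-11CE-BFC1-08002BE10318}\0007"
reg add "HKLM\SYSTEM\CurrentControlSet\Control\Class\{4D36E972-E325-11CE-BFC1-08002BE10318}\0007" /v *PriorityVlanTag /t REG_SZ /d 3 /f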

Then we noticed that in the key for the NIC there was a subkey: \Ndi\params. In there was a key called *PriorityVlanTag, which listed the options that are displayed in the GUI settings dialog, along with the value each option sets. For the nVidia, the value for priority and VLAN was 2, not 3. We duly changed the adapter’s value to 2 and tried again. Nothing.

So we decided to update the drivers. This brings us back to where I left us earlier with the sc command.

To update a driver on server core, you need to unpack the driver files into a folder and then run the following:
pnputil -i -a <explicit path to driver inf file>

After failing to get any other drivers to install it looked like we had the latest version and the system was not letting go. So we did some more research on the internet (what did we ever do before the internet?).

It transpires, for those of you with Sun servers, that the nVidia cards appear not to support VLAN ids on traffic, despite having all the settings to suggest that they do.

Darn.

A way forward

Fortunately we have a spare Broadcom NIC in each of our Hyper-V host servers, so we are now switching the virtual switch binding from the nVidia to the Broadcom on each of them. We didn’t even have to hack around with registry settings; once we did that, the VLAN ID settings in Hyper-V simply sprang into life.

The moral of this story is that if you want to use VLAN IDs with Hyper-V and your server has nVidia network adapters (and certainly if it’s a Sun X2200 M2) then stop now, before you lose your hair. You need to use another NIC if you have one, or install one if you don’t. Hopefully, however, the command line tools and registry keys above will help other travellers who find themselves in a similar situation.

Creating a new Virtual PC using the Virtual Windows XP Base Disk

One of the most useful elements of the Virtual Windows XP feature in Windows 7 is that the VPC is easily replicated and you can have multiple virtual machines all publishing applications which run in their own sandboxes.

  1. Create a new Virtual Machine
  2. Create a Differencing Hard Disk from the Virtual Windows XP Base
  3. Start the VM and run through the setup wizard:
    1. Accept the Licence Agreement
      image
    2. Set the keyboard and locale to your needs
      image 
    3. Give the PC a name and administrator password
      image
    4. Set the time zone
      image
    5. Wait while it configures networking…
      image
    6. … and runs through the final steps, followed by a reboot.
      image
  4. Configure the VPC for updates and user accounts:
    1. On restart, choose an option for automatic updates
      image
    2. You should now be logged in as administrator
      image 
    3. Open up Computer Management and enable the ‘User’ account, then reset the account password to something you know.
      image
      image
    4. Enable Integration Features from the VPC Tools Menu
      image
    5. Set the login account to the user account you just enabled.
    6. Accept the logon message to disconnect Administrator
      image
  5. Configure the applications on the VPC:
    1. Once you’re logged on as User, create a new shortcut in c:\documents and settings\all users\start menu and wait a few minutes.
      image
      You should see your start menu update with the new application shortcut
      image
      Each virtual machine gets a folder in your start menu beneath Windows Virtual PC and the applications on each PC appear in there.
    2. Once you’ve finished configuring your applications, log off your session on the virtual PC (don’t close the PC or shut it down)
      image
    3. Then close the VPC down from the Action menu and choose Hibernate
      image

If you now start any of the applications that have appeared in your main computer’s Start menu, the VPC will fire up in the background and your application will appear on your desktop. This is a great way to create multiple VPCs with applications that might conflict with each other.

There is a catch, however. Windows Virtual PC requires hardware virtualisation support to work. In my opinion this is a mistake. Since the virtual machines use emulated hardware rather than accessing the machine hardware like Hyper-V VMs do, I can’t see the reasoning here. Virtual PC 2007 used the hardware virtualisation if it was available but didn’t force it on you, which was the correct approach. Lots of businesses will find this technology useful, but will discover that the majority of their computers won’t be able to use it. At that point, the solution may as well not exist, and I for one hope that Microsoft change their mind about hardware virtualisation support before Windows Virtual PC ships.

Tech Ed EMEA IT: Day 3 – Microsoft Enterprise Desktop Virtualisation (MED-V)

OK, MED-V is cool! Sadly, cool though it is, it’s not something we’ll use at BM, but in my previous lives doing large organisation IT, MED-V would have been a killer.

In a nutshell, it is this: create a Virtual PC image with your legacy OS and legacy app, then deploy that VPC to your users’ desktops so they can run the legacy app – but let them run it without needing to start the VPC and juggle two desktops.

That’s right – MED-V apps appear in the host OS Start Menu and fire up windows which, although using the appearance of the guest OS, are hosted straight on the desktop. Not only that, but they get task bar entries, and even tray icons!

It’s really well thought out – admins create the VPCs, publish them into a server infrastructure and publish the images and apps to users. The system takes care of versioning for the images and pushes them out to users which reduces the amount of data transferred.

You can allow roaming users to work remotely as well, but do clever things like setting a time limit, after which the virtual apps won’t work because the user needs to connect to the main system to get updates to the guest OS.

It’s great. It’s also not out yet. Beta 1 is expected Q1 2009, although they are looking for early access users. Release is projected for H1 2009. If you’re a big organisation and migration to Vista is a pain, MED-V may be for you, although it’s only available to SA customers, as far as I can tell.

The snags (there are always some, right?): Host OS is Vista Sp1 or XP SP2/3 32-bit only. Guest OS is Windows XP or Windows 2000 only.

It was a great session, and you definitely want to find out more about this.

Netware 6.5 on Hyper-V

As part of a customer project I needed to create a Netware environment for testing. It’s been a little while since I did any Netware management and I quite enjoyed it. I did, however, encounter a couple of gotchas which I thought I’d write up for the greater good.

Netware OS

Installing the Netware OS was actually pretty straightforward. There are no integration services offered for Netware so from the outset I knew that I would need to use legacy hardware options in the virtual machine.

I created a nice big dynamic virtual hard disk for the server because I will need to install GroupWise and a whole bunch of other services later. I attached this to the virtual IDE controller, gave the machine a single processor core as anything more needs integration services, and (critically!) added a legacy network adapter. Netware isn’t a huge memory hog, so I added 1GB of RAM and off we went.

I hit a snag at the point where the server tried to identify network drivers – it couldn’t find any, and I couldn’t see any in the list to load manually which matched the emulated hardware.

The solution turned out to be really simple: as you step through the installation screens there is an option to allow unsupported drivers. By default that is set to no. If you change it to yes, the installation recognises the network adapter as an old DEC and loads a driver which works.

Apart from that, I have experienced no difficulties with the server whatsoever, other than I have to run the machine connection window full screen to be able to switch between console screens.

Windows Client

I will admit, this one drove me crazy for a while before my final epiphany. There is a Novell client for Windows Vista now available, but why build a Vista VPC when an XP one would need less horsepower?

I dutifully grabbed an old Virtual PC VHD of our XP base install and fired it up.

Problem number one: In order to install the integration services I need to first uninstall Virtual Server Additions. No sweat, thinks I, clicking the uninstall button. Nope – you get a nice message saying that setup can only run inside a virtual machine!
Slightly surreal, I must say. I had to fire up the machine under Virtual PC and remove the additions, then copy the VHD back onto the Hyper-V server and start the system so I could install the integration services.

Problem number two: Once I’d installed the Novell client I couldn’t get it to see the Netware server. Nothing I did would work – I strapped down every setting I could on the client to point it at the Netware machine but it refused to connect, although I could ping between the two quite happily.
The solution, when I finally figured it out (and I must admit it was pure chance that I thought to try it) was to remove the shiny new virtual network adapter and replace it with a legacy adapter. As soon as I did that, the Novell client could communicate quite happily with the server!

The situation would appear to be that the Novell client stack can’t communicate properly through the new virtualised driver provided by Hyper-V. Exactly why this should be, I have no idea, but it drove me wild for a good couple of hours today.