BM-Bloggers

The blogs of Black Marble staff

Define Once, Deploy Everywhere (Sort of...)

Using Lability, DSC and ARM to define and deploy multi-VM environments

Configuration as code crops up a lot in conversation these days. We are searching for that DevOps Nirvana of a single definition of our environment that we can deploy anywhere.

The solution adopted at Black Marble by myself and my colleagues is not quite that, but it comes close enough to satisfy our needs. This document details the technologies and techniques we adopted to achieve our goal, which sounds simple, right?

I want to be able to deploy a collection of virtual machines to my own computer using Hyper-V, to Dev/Test Labs in Azure, and to Azure Stack, using the same description of those virtual machines and their configuration.

Defining Our Platforms

Right now, we use Lab Manager (part of Team Foundation Server) at Black Marble to manage multi-VM environments for testing, hosted on a number of servers managed by System Center Virtual Machine Manager. Those labs are composed of virtual machines that can also be deployed to a developer’s workstation.

The issue is that those environments are pre-built – the machines are configured and the environment saved as a whole. They must be patched whenever a new lab is created from the stored ‘template’ VMs, and adding a new machine to the lab is a pain.

Lab Manager itself is now end-of-life, so we are looking at alternatives (including Azure Stack – see below).

Microsoft Azure

We already use Azure to host virtual machines. However, even with the lower cost Dev/Test subscription type, running lots of machines in the public cloud can get very expensive.

Azure Dev/Test Labs helps to mitigate this cost issue somewhat by providing a governance wrapper. I can create a Lab and apply rules, such as what types of virtual machine can be created, and automatically shut down running VMs at a set time to limit costs.

Within Azure we use Azure Resource Templates, which are JSON declarations of the services we require, to deploy our virtual machines. Once running, we have extensions that can be injected into a VM and used to execute scripts to configure them. With Windows servers, that means using the Desired State Configuration (DSC) extension.

Dev/Test labs allows me to connect to a Git repository of artefacts. Those artefacts could be items I wish to install into a VM, but they can also be ARM templates to deploy complex environments of multiple VMs. Those ARM templates can then apply DSC configuration definitions to the VMs themselves.

Microsoft Azure Stack

Stack is coming soon. Right now, you can download a Technical Preview that runs on a single machine. Stack is aimed at organisations that have stuff they cannot put in the public cloud, for whatever reason, but want a consistent approach to their development that can span private and public cloud. The final form of Stack is expected to be similar to the current Cloud Platform Solution (CPS), which is way out of my budget. However, the POC runs on a server very close in specification and price point to my existing Lab Manager-controlled servers.

Stack aims to deliver parity with its public cloud older brother. That means that I can use the same ARM templates I use in Azure to deploy my IaaS services on Stack. I have the same DSC extension to inject my configuration, too.

What I don’t have right now on Stack (and it’s unclear what the final product will bring, so I won’t speculate) are the base operating system images that are provided by Microsoft in Azure. I can, however, create my own images and upload them to the internal Stack equivalent of the Azure Marketplace.

Hyper-V

On our desktops, laptops, and servers we use Hyper-V, Microsoft’s virtualisation technology. This offers some parity with Azure – it uses the same VHD disk file format, for example. I don’t get the same complex software-defined-networking but I still get virtual switches to which I can connect machines, and they can be private, internal, or external.

Private switches do what they say on the tin: They are a bubble within which my VMs can communicate with each other but not with the outside world. I can, therefore, have multiple identical bubbles all using the same IP address ranges without issue.

External switches are connected directly to a network adapter on the host. That’s really useful if I need to host servers that deliver services to my organisation, as I need to communicate with them directly. This is great on servers, and is useful on developer workstations with physical NICs. On laptops, however, it gets tricky if you’re using a WiFi network. Those were never designed with VMs in mind, and the way Windows connects an external switch to a wireless adapter is, quite frankly, a horrible kludge and I’ve always found it terribly unreliable.

Internal switches create a new virtual NIC on the host so it can communicate directly with VMs on the network. In Windows 10, we can use an internal switch alongside a NetNat, which allows Windows 10 to provide network address translation for the virtual network. This gives us a setup like your home internet connection – VMs can communicate out but no direct inbound connections are allowed (yes, I know you can create NAT publishing rules too, but that’s not a topic for here).

One cool thing about a NetNat is that if you carefully define your IP address ranges, a single NetNat can pass traffic into the networks generated by multiple virtual switches. This allows me to have multiple environments that can coexist on separate subnets.
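
To make that concrete, here is a minimal sketch of the idea (the switch and NAT names are my own; the address ranges match the convention described later in this post):

# Two internal switches, each with a /24 host-side address inside the NAT prefix
New-VMSwitch -Name 'Lab-Switch-254' -SwitchType Internal | Out-Null
New-NetIPAddress -IPAddress 192.168.254.1 -PrefixLength 24 -InterfaceAlias 'vEthernet (Lab-Switch-254)'

New-VMSwitch -Name 'Lab-Switch-253' -SwitchType Internal | Out-Null
New-NetIPAddress -IPAddress 192.168.253.1 -PrefixLength 24 -InterfaceAlias 'vEthernet (Lab-Switch-253)'

# One NetNat whose /19 prefix covers both /24 subnets, so a single NAT serves both labs
New-NetNat -Name 'Lab-NAT' -InternalIPInterfaceAddressPrefix '192.168.224.0/19'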

Lability

I’ve saved this until last because it’s sort of the secret sauce in what we’ve been working on. I stumbled on Lability totally by chance, through random internet searching. It’s an open source solution to defining and deploying VMs on Windows using DSC to declare both the configuration of the environment (the VMs and their settings) and the VMs themselves (the guest OS configuration).

Lability was created by a chap called Iain Brighton and he deserves a great deal of credit for what he’s built.

With Lability, I can use the same DSC configurations that I created for my Azure deployments. I can use the same base VHD images that I need for my Azure Stack Deployments. Lability uses a DSC PowerShell file (.ps1), which can include configurations for multiple nodes – each of the VMs in our environment. It then uses a PowerShell Data file (.psd1) to declare the configuration of the VMs themselves (CPU, RAM, virtual switch etc) as well as pass in configuration details to the DSC file.

If you look at the Lability repo on GitHub you will find links to some excellent articles by people who have used Lability, which take you through setting up your Lability host (your computer) and your first environment.

Identifying Differences

Applying DSC

Lability and the Azure DSC extension work in subtly but importantly different ways. When you create a DSC configuration, you write a PowerShell configuration that imports the DSC resources which will do the actual configuration work, and you call those resources with values that declare the state of the configuration you want. Within that PowerShell file you can put functions that figure out some of those values.

When you execute the PowerShell configuration, it runs through that script and generates a MOF file. That file is submitted to the DSC engine on the machine that you are configuring and used to pass parameters into the DSC Resources that are going to execute commands to apply your configuration.

When you use the DSC extension in Azure, it installs the necessary DSC resources on the VM and executes the PowerShell file on that machine, generating the MOF which is then applied.

When you use Lability, the PowerShell file is executed on the host machine and outputs the MOF files – you do this manually before executing a Lability command to create a new lab. Lability then takes care of injecting the MOF and the required DSC resources into the virtual machine, where the configuration is applied.

This is a critical difference! If you look at the examples in the Azure Quickstart Repo, all the DSC is written assuming that it is executed on the host, and uses PowerShell functions to do things like finding the network adapter, or the host IP address etc. If you look at the examples used in Lability labs, the data file provides many of those pieces of information. If you run the PowerShell from an Azure QuickStart template you’ll have some crazy failures, because all those functions execute on the host and therefore get totally incorrect information to pass to the configuration code.

Additionally, none of the Azure examples use a data file to provide configuration data. You might think this is because the data file is not supported. However, this is not true – you can pass a data file in using the DSC extension. Lability makes heavy use of that data file to define our environment.
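
To make the Lability flow concrete, generating the MOFs on the host looks something like the sketch below (the paths and the DomainController configuration mirror the examples later in this article, and assume the required DSC resources are already installed on the host):

# Dot-source the VM configuration, then compile it on the HOST against the environment data file
. .\VMs\DomainController\DomainController.ps1

DomainController -ConfigurationData .\Environments\MyEnv1\MyEnv1.psd1 `
                 -EnvPrefix 'Lab' `
                 -Credential (Get-Credential) `
                 -OutputPath 'C:\Virtualisation\Configuration'

# Lability then injects the resulting MOF files, and the DSC resources, into each VM it creates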

Networking

In Azure, you cannot set a static IP address from within the VM itself. The networking fabric hands the machine its IP address via DHCP. You can set that IP to be static through the Azure fabric, but not through the VM. That might mean that we don’t know the IP address of a machine before we deploy it.

With Lability, we declare the IP address of the VM in the DSC data file. We could use a DHCP server running on the host, and I do just that myself, but it’s more stuff to install and manage, and for our approach to labs right now we’ve stuck to declaring addresses in the DSC data file.

We also have additional stuff to think about in Azure – public IP addresses, Network Security Groups and possibly User Defined Routing that controls how (and if) we allow inbound traffic from the internet onto our network, what can talk to what and on which ports within our network, and whether we want to push all traffic through appliances for security.

Azure API Versions

When you write an ARM template to define and deploy your services, each of the resources in that template is defined against a versioned API. You specify which API version you are using in the template, and different resource providers have different versions.

Azure Stack does not have all the same versions of the various APIs that are in Azure. Ironically, whilst I have had to make few changes to existing ARM templates in terms of their content in order to successfully use them on Stack, I’ve had to change almost every API version referenced in them. Having said that, I am finding that the API versions I reference for Stack by and large work unchanged if I throw the template at Azure.
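
If you need to check which API versions a resource provider actually supports on a given cloud (Azure or Stack), you can query the provider from PowerShell. A quick sketch, assuming the AzureRM cmdlets of the time and that you are already logged in to the right environment:

# List the API versions the Compute provider supports for virtual machines
$provider = Get-AzureRmResourceProvider -ProviderNamespace Microsoft.Compute
($provider.ResourceTypes | Where-Object { $_.ResourceTypeName -eq 'virtualMachines' }).ApiVersions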

Declaring Specific Goals

We’ve discussed our target platforms and talked about how those differ in terms of our deployment configurations. Let’s talk about what our aims were as we embarked on our project to manage VM labs:

  1. All labs should deploy from greenfield. One of our biggest pain points with our old approach was that our labs were built as a collection of VMs. We couldn’t change the name of the AD domain; changing IP address was complex; adding new VMs was painful; patching a ‘new’ environment could take hours.
    We were very clear that we wanted to create all new labs from base media which we would try to keep current for patches (at least within a few months) and would allow us to create any number of machines and environments.
  2. There should be one configuration for each guest VM, which would be used everywhere. We were very clear that we would create one DSC configuration for each role that we needed (for example, a Domain Controller or an ADFS server) and that configuration would be used whether we were creating a lab on a local machine, in Azure or Azure Stack.
  3. Maintain a distinction between a virtual machine configuration and an environment configuration. We are building a collection of virtual Lego with our VM configurations. Our teams can combine those Lego bricks into environments that may be project specific. There should be a configuration for those environments. We should never alter an existing configuration for a new environment – we should create a new configuration using the existing one as a base (for example, if we need additional roles on our DC for some reason).
  4. Take a common approach with Lability and Azure, whilst accepting we have to maintain two sets of resources.
    Our approach to Azure environments is already modular. We have templates for VMs that are combined into environments through Nested Deployments. This would not change. Our VM definitions would encompass a DSC configuration and an ARM template. Our environments would include both a DSC data file and an ARM template.
  5. Manage and automate the creation of base media. We would need a variety of base VHD files, analogous to the existing marketplace images in Azure: Windows Server (numerous versions), SQL Server, SharePoint, etc. Each of these must be created using scripts so they could be periodically rebuilt to achieve our goal of avoiding time-consuming patching of new environments. In short, we would need an Image Factory.
  6. Setup and use should be straightforward. We need our developers to be able to install all the tooling and get a new lab up and running quickly. We need easy integration with Azure Dev/Test Labs, etc. This would need some process automation around the build and release of the VM configurations and anything else we would create as part of the project.

Things You Will Need

If you want to build the same Lab solution as we did you’re going to need a few things:

  1. Git Repository. All the code and configurations we create are ultimately stored in a central Git Repo. We are using Visual Studio Team Services, as it’s our chosen source control platform.
    Why Git? Two reasons: First of all, it allows us to easily deploy our solution to a developer workstation by simply cloning the repo. Second, Azure DevTest Labs needs a Git Repo to store Artifacts (our ARM templates) for deployment of environments.
  2. Build/Release automation. When we commit to our shared repo, our Build server executes some PowerShell to create deployment artifacts for Azure. It creates Zip archives from our configurations to be used with the DSC extension. It makes no sense to create these by hand and waste space in our repo. Our Release pipeline then automatically pushes our artifacts to an Azure storage account that can be accessed by our developers as a single, central store for VM configurations.
  3. Private PowerShell Repository. We use ProGet to provide a local Nuget/PowerShell/NPM etc repository. We had this in place before we started this project, but it has proved invaluable. The simple reason is that we want to publish DSC Resources for easy consumption and installation by our team. You’d be surprised at how many times we’ve hit a bug in a DSC resource which has been fixed in the source code repo but a new version has not yet been published. Maintaining our own repository allows us to publish our own versions of DSC resources (and in some cases our own bespoke resources).
  4. A server to host your Image Factory. I’m not going to spend time documenting this part of our solution. Far cleverer people than I have written about this and we followed their guidance. You need somewhere to host your images and run the scripts on a schedule to build new ones. Our builds run overnight and we place images on a Windows fileshare.
  5. An Azure subscription. If you want to use the same configuration for on-prem and cloud, saying that you need an Azure sub seems a little obvious. However, we are using nested deployments. These use resources that must be accessible to the Azure fabric at deploy time, and the easiest way to do that is to use Azure Storage. You’ll also need a subscription to host your DevTest lab if that’s your preferred approach. Note that you could have multiple subscriptions – our devs can use their MSDN Azure Benefit to host environments within their own DevTest lab, whilst the artefact store is on a corporate subscription and the artefact repo is in our VSTS.
  6. A code editor that understands PowerShell, DSC and ARM. I prefer Visual Studio and the Azure SDK, but Visual Studio Code is an equally powerful tool for creating and managing the files we are going to use.

Managing our VMs and Environments

After much thought, we came up with a standard folder structure and approach to our VM and environment configurations and the supporting scripts needed to deploy them.

In our code repo we have the following folder structure:

\Environments

This folder contains a series of folders, one per environment.

This folder is specified as the one containing environment templates when the shared repo is connected to an Azure DevTest Lab.

\Environments\MyEnv1

An environment folder contains three files:

\Environments\MyEnv1\MyEnv1.psd1

The psd1 data file must share the same name as the folder. This contains all the configuration settings for all VMs in our environment and is used by Lability and the VM DSC configs

\Environments\MyEnv1\azuredeploy.json

For DevTest labs, the environment template used in Azure must be named azuredeploy.json. This template calls a series of other templates to deploy the virtual network and VMs to Azure

\Environments\MyEnv1\metadata.json

This file is read by DevTest labs and provides a name and description for our environment

\VMs

This folder contains subfolders for each of our component Virtual Machines.

\VMs\MyVM1

A VM folder contains at least two files:

\VMs\MyVM1\MyVM1.ps1

The ps1 configuration file must share the same name as the folder. It contains the DSC PowerShell to apply the configuration to the guest VM

\VMs\MyVM1\MyVM1.json

The json file shares the folder name for consistency. It is called by the azuredeploy.json environment template to create the VM in Azure and Azure Stack

\Modules

The Modules folder contains shared code of various types

\Modules\Scripts

The scripts folder contains PowerShell scripts to install and configure our standard Lability deploy, wrap the Lability create and remove commands, and perform build and release tasks.

\Modules\Template

The template folder holds common ARM templates that create standard elements shared between environments and called by the azuredeploy.json

\Modules\DSC

This folder is used during the build process. All the DSC resources needed in an environment are downloaded to this folder. A script parses the VM DSC configurations called by an environment and creates Zip files to be uploaded into Azure storage that contain the correct DSC resources and DSC PowerShell for an environment
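
The build script itself is not shown here, but the essence of what it does might look something like this much-simplified sketch (the paths, the ‘ProGet’ repository name and the blanket copy of every VM config are simplifications; the real script only packages what an environment actually uses):

# Package an environment's DSC configurations and resources for the Azure DSC extension
$envName = 'MyEnv1'
$envData = Import-PowerShellDataFile ".\Environments\$envName\$envName.psd1"
$staging = ".\Modules\DSC\$envName"
New-Item -ItemType Directory -Path $staging -Force | Out-Null

# Download the DSC resources the environment declares (Lability lists them in the data file)
foreach ($resource in $envData.NonNodeData.Lability.DSCResource) {
    Save-Module -Name $resource.Name -RequiredVersion $resource.RequiredVersion -Path $staging -Repository 'ProGet'
}

# Add the DSC PowerShell for the VMs and zip the lot, ready for upload to Azure storage
Copy-Item '.\VMs\*\*.ps1' -Destination $staging
Compress-Archive -Path "$staging\*" -DestinationPath ".\Modules\DSC\$envName.zip" -Force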

Wrapper Scripts for Lability

Lability is great but is built to work in a certain way. We have three scripts that perform key functions for our deployment.

Install Script

Our installation script performs the following functions (a simplified sketch of the Lability setup follows the list):

  1. Creates the C:\Virtualisation base folder we use to store VMs and the Lability working files.
  2. Sets the default Hyper-V locations for Virtual Machines and Virtual Hard disks to c:\Virtualisation
  3. Creates a new Internal Virtual Switch (named in accordance with our convention) and sets the IP address on the NIC created on the host to the required one. Our first switch creates a network of 192.168.254.0/24 and the host gets 192.168.254.1 as its IP address.
  4. Creates a new NetNat with an internal address prefix of 192.168.224.0/19. This single NAT will pass traffic into and out of any /24 subnet from 192.168.224.0/24 up to 192.168.255.0/24 (thirty-two in total). We decided to work from the top down when creating new networks.
  5. Makes sure that the Nuget package provider is installed and registers our ProGet server as a new PowerShell repository. We then remove the default PowerShellGallery registration and make sure our repo is trusted.
  6. Checks to see if Lability is installed and, if not, installs it using Install-Module.
  7. Sets the following Lability defaults using the Set-LabHostDefault command:
    ConfigurationPath: c:\Virtualisation\Configuration
    IsoPath: c:\Virtualisation\ISOs
    ParentVhdPath: c:\Virtualisation\MasterVirtualHardDisks
    DifferencingVhdPath: c:\Virtualisation\VMVirtualHardDisks
    ModuleCachePath: c:\Virtualisation\Modules
    ResourcePath: c:\Virtualisation\Resources
    HotfixPath: c:\Virtualisation\Hotfix
    RepositoryUri: <the URI of our ProGet Server, e.g. https://proget.mycorp.com/nuget/PowerShell/package>
  8. Sets the default virtual switch for Lability environments to our newly created one using the Set-LabVMDefault command.
  9. Registers our VHD base media by calling another script which loads a standard configuration data file. This is separate so we can perform this action independently.
  10. Sets the Lability default media to our Windows Server 2012 R2 standard VHD using the Set-LabVMDefault command.
  11. Initialises Lability using our configuration with the Start-LabHostConfiguration command.
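
A much-simplified sketch of the core Lability setup follows. The ProGet URLs are the examples from above, and the Set-LabHostDefault and Set-LabVMDefault parameter names mirror the defaults listed in step 7, so treat this as illustrative rather than definitive:

# Register our internal PowerShell repository and drop the public gallery registration
Register-PSRepository -Name 'ProGet' -SourceLocation 'https://proget.mycorp.com/nuget/PowerShell/' -InstallationPolicy Trusted
Unregister-PSRepository -Name 'PSGallery'

# Install Lability if it is not already present
if (-not (Get-Module -ListAvailable -Name Lability)) { Install-Module -Name Lability -Repository 'ProGet' }

# Point Lability at our folder structure and feed, set the default switch and media, then initialise the host
$base = 'C:\Virtualisation'
Set-LabHostDefault -ConfigurationPath "$base\Configuration" -IsoPath "$base\ISOs" `
    -ParentVhdPath "$base\MasterVirtualHardDisks" -DifferencingVhdPath "$base\VMVirtualHardDisks" `
    -ModuleCachePath "$base\Modules" -ResourcePath "$base\Resources" -HotfixPath "$base\Hotfix" `
    -RepositoryUri 'https://proget.mycorp.com/nuget/PowerShell/package'
Set-LabVMDefault -SwitchName 'Lab-Switch-254' -Media 'BM_Server_2012_R2_Standard_x64'
Start-LabHostConfiguration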

Once the install script has completed we have a fully configured host ready to deploy Lability labs.

Deploy-LocalLab script

Lability has a Start-LabConfiguration command which reads the psd1 configuration data file for an environment and creates the VMs. Before running that, however, you need to execute the PowerShell DSC scripts to generate the MOF files for each VM. Lability injects those, and the DSC resources, into the VMs. A second command, Start-Lab, boots the VMs themselves, respecting the boot order and delays that can be declared in the config file.

This is great unless you have a complex lab and need lots of DSC resources to make it work. Our wrapper script does the following, taking an environment name as a parameter (a simplified sketch follows the list):

  1. Reads the psd1 data file for our environment from the correct folder to identify the DSC resources we need (they are listed for Lability). It installs these resources so we can execute the PowerShell configuration scripts and generate the MOFs.
  2. Reads the psd1 data file to identify the VMs we are deploying. Based on the Role information in that file it will execute each of the configuration ps1 files from the VMs folder hierarchy, passing in the psd1 data file. The resultant MOFs get saved in the Lability configuration folder (c:\Virtualisation\Lability).
  3. Executes the Start-LabConfiguration command, passing in the configuration data file.
  4. If we specify a -Start switch, the script starts the lab with the Start-Lab command.
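
A much-simplified sketch of Deploy-LocalLab, assuming the VM folders are named after each node’s Role and ignoring error handling and credential prompts:

param([string]$EnvironmentName, [switch]$Start)

$dataFile = ".\Environments\$EnvironmentName\$EnvironmentName.psd1"
$envData  = Import-PowerShellDataFile $dataFile

# 1. Install the DSC resources Lability lists so the configurations will compile on the host
foreach ($resource in $envData.NonNodeData.Lability.DSCResource) {
    Install-Module -Name $resource.Name -RequiredVersion $resource.RequiredVersion -Repository 'ProGet'
}

# 2. Compile a MOF for each node by running its configuration against the shared data file
foreach ($node in $envData.AllNodes) {
    . ".\VMs\$($node.Role)\$($node.Role).ps1"
    & $node.Role -ConfigurationData $dataFile -OutputPath 'C:\Virtualisation\Configuration'
}

# 3. Create the lab VMs, and 4. optionally boot them
Start-LabConfiguration -ConfigurationData $dataFile
if ($Start) { Start-Lab -ConfigurationData $dataFile }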

Remove-LocalLab script

Our remove script takes the name of our environment as a parameter. It does the following (a short sketch follows the list):

  1. Identifies the VMs in the lab using the Get-LabVM command, passing in the psd1 data file. Checks to see if any are running and, if they are, calls the Stop-Lab command.
  2. Executes the Remove-LabConfiguration command, passing in the psd1 data file for the environment.
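
And a sketch of Remove-LocalLab along the same lines (what Get-LabVM returns, and therefore how you test for running VMs, may differ; treat the state check as illustrative):

param([string]$EnvironmentName)

$dataFile = ".\Environments\$EnvironmentName\$EnvironmentName.psd1"

# Stop the lab if any of its VMs are still running, then remove the environment
$labVMs = Get-LabVM -ConfigurationData $dataFile
if ($labVMs | Where-Object { $_.State -eq 'Running' }) {
    Stop-Lab -ConfigurationData $dataFile
}
Remove-LabConfiguration -ConfigurationData $dataFile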

Virtual Machine Configuration

We’ve challenged ourselves to only use Desired State Configuration for our VMs. This has been a big change from our previous approach to Azure VMs, which mixed DSC with custom PowerShell scripts deployed with a separate Azure VM extension. This has raised four issues we had to solve:

  1. The list of DSC Resources is growing but not all-encompassing. There are many areas where no DSC modules exist. To overcome this, we have used a mix of SetScript code contained within a DSC configuration (which has some limitations) and bespoke DSC modules hosted in our ProGet repository.
  2. Existing Published DSC resources may contain bugs. In many cases code fixing those bugs has been supplied as pull requests but may be undergoing review, and sometimes no new release of the resource has been created. We now have our own separate code repository for DSC resources (including our own) where we keep these and we publish versions to our own repository. When a new official version including the fixes is released it will supersede our own.
  3. There are some good DSC resources out there on GitHub that aren’t published to the PowerShell gallery. We publish these into our own repository for access.
  4. Azure executes the DSC on the target VM to generate the MOF. Lability executes it on the host machine. That and other differences means that we have wrapper code to switch the config sections, mostly based on an input parameter named IsAzure. When called from the Azure DSC extension we specify that parameter and on a Lability host we don’t. I realise that purists will argue that this means we don’t really have a single configuration. I would counter that I have a single configuration file and therefore one thing to maintain. I don’t see any issue with logic inside that config deciding what happens.

Sample Configuration

Let’s illustrate our approach with an extract from a configuration. The code below is part of our DomainController config.

The config accepts some parameters. EnvPrefix is used to generate names within the environment. In Azure we use it to prefix our Azure resources. Within the environment it’s used to create things like the AD domain name. IsAzure tells the config whether it is being executed on the host or on the target VM inside Azure.

You’ll notice that we specify the DSC module versions. There are a few reasons why we do this – because some of the DSC resources are unofficial we want to make sure they come from our repository, and the way Lability downloads DSC resources from our ProGet Server means we need to specify a version number. Either way, we benefit from increased consistency – there have been some breaking changes between versions with the official DSC resources in the PowerShell Gallery!

If we’re in Azure we do things like find the network adapter through code and we don’t specify network addresses. We use the IsAzure parameter to wrapper this stuff in If blocks.

The configuration values come from the psd1 data file, regardless of whether we deploy to Azure or locally. We do this to enforce consistency. Even though we probably could have the Azure config self-contained in the script, we don’t.

 

Configuration DomainController {

    param(
        [ValidateNotNull()]
        [System.Management.Automation.PSCredential]$Credential,

        [string]$EnvPrefix,

        [bool]$IsAzure = $false,

        [Int]$RetryCount = 20,
        [Int]$RetryIntervalSec = 30
    )

    Import-DscResource -ModuleName @{ModuleName="xNetworking";ModuleVersion="3.2.0.0"}
    Import-DscResource -ModuleName @{ModuleName="xPSDesiredStateConfiguration";ModuleVersion="6.0.0.0"}
    Import-DscResource -ModuleName @{ModuleName="xActiveDirectory";ModuleVersion="2.16.0.0"}
    Import-DscResource -ModuleName @{ModuleName="xAdcsDeployment";ModuleVersion="1.1.0.0"}
    Import-DscResource -ModuleName @{ModuleName="xComputerManagement";ModuleVersion="1.9.0.0"}

    $DomainName = $EnvPrefix + ".local"

    Write-Verbose "Processing Configuration DomainController"

    Write-Verbose "Processing configuration: Node DomainController"
    node $AllNodes.where({$_.Role -eq 'DomainController'}).NodeName {
        Write-Verbose "Processing Node: $($node.NodeName)"

        if ($IsAzure -eq $true) {
            #Find the first network adapter
            $Interface = Get-NetAdapter | Where-Object Name -Like "Ethernet*" | Select-Object -First 1
            $InterfaceAlias = $($Interface.Name)
        }
        
        LocalConfigurationManager {
            RebootNodeIfNeeded = $true;
            AllowModuleOverwrite = $true;
            ConfigurationMode = 'ApplyOnly'
            CertificateID = $node.Thumbprint;
            DebugMode = 'All';
        }

        # Skipped if we are running in Azure
        if ($IsAzure -eq $false) {
            # Set a fixed IP address if the config specifies one
            if ($node.IPaddress) {
                xIPAddress PrimaryIPAddress {
                    IPAddress = $node.IPAddress;
                    InterfaceAlias = $node.InterfaceAlias;
                    PrefixLength = $node.PrefixLength;
                    AddressFamily = $node.AddressFamily;
                }
            }
        }


        # Skipped if we are running in Azure
        if ($IsAzure -eq $false) {
            # Set a default gateway if the config specifies one
            if ($node.DefaultGateway){
                xDefaultGatewayAddress DefaultGateway {
                    InterfaceAlias = $node.InterfaceAlias;
                    Address = $node.DefaultGateway;
                    AddressFamily = $node.AddressFamily;
                }
            }
        }

        # Set the DNS server if the config specifies one
        if ($IsAzure -eq $true) {
            if ($node.DnsAddress){
                xDNSServerAddress DNSaddress {
                    Address = $node.DnsAddress;
                    InterfaceAlias = $InterfaceAlias;
                    AddressFamily = $node.AddressFamily;
                }
            }
        } 
        else {
            if ($node.DnsAddress){
                xDNSServerAddress DNSaddress {
                    Address = $node.DnsAddress;
                    InterfaceAlias = $node.InterfaceAlias;
                    AddressFamily = $node.AddressFamily;
                }
            }
        }
            
    }

#End configuration DomainController
}

Sample Data File

Below is a sample data file for an environment containing a Domain Controller and single domain-joined server. Note that the data file contains a mix of data to be processed by the DSC configuration and Lability-specific information that defines the environment, including VM settings and the required DSC resources. When we deploy the lab locally, Lability processes the file to create the Virtual Machines and their hard disks (and create new virtual switches if we declare them). When we deploy in Azure this information is ignored – we can safely use the same data file in both situations.

# Single Domain Controller Lab

@{
    AllNodes = @(
        @{
            # DomainController
            NodeName = "DC";
            Role = 'DomainController';
            DSdrive = 'C:';
            
            #Prevent credential error messages
            PSDscAllowPlainTextPassword = $true;
            PSDscAllowDomainUser = $true;


            # Networking
            IPAddress = '192.168.254.2';
            DnsAddress = '127.0.0.1';
            DefaultGateway = '192.168.254.1';
            PrefixLength = 24;
            AddressFamily = 'IPv4';
            DnsConnectionSuffix = 'lab.local';
            InterfaceAlias = 'Ethernet';


            # Lability extras
            Lability_Media = 'BM_Server_2012_R2_Standard_x64';
            Lability_ProcessorCount = 2;
            Lability_StartupMemory = 2GB;
            Lability_MinimumMemory = 1GB;
            Lability_MaximumMemory = 3GB;
            Lability_BootOrder = 0;
            Lability_BootDelay = 600;
        };
        @{
            # MemberServer
            NodeName = "SR01";
            Role = 'MemberServer';
            DSdrive = 'C:';
            
            #Prevent credential error messages
            PSDscAllowPlainTextPassword = $true;
            PSDscAllowDomainUser = $true;


            # Networking
            IPAddress = '192.168.254.3';
            DnsAddress = '192.168.254.2';
            DefaultGateway = '192.168.254.1';
            PrefixLength = 24;
            AddressFamily = 'IPv4';
            DnsConnectionSuffix = 'lab.local';
            InterfaceAlias = 'Ethernet';


            # Lability extras
            Lability_Media = 'BM_Server_2012_R2_Standard_x64';
            Lability_ProcessorCount = 2;
            Lability_StartupMemory = 2GB;
            Lability_MinimumMemory = 1GB;
            Lability_MaximumMemory = 3GB;
            Lability_BootOrder = 1;
        };

    );

    NonNodeData = @{
        OrganisationName = 'Lab';

        Lability = @{
            EnvironmentPrefix = 'Lab-';

            DSCResource = @(
                @{ Name = 'xNetworking'; RequiredVersion = '3.2.0.0';}
                @{ Name = 'xPSDesiredStateConfiguration'; RequiredVersion = '6.0.0.0';}
                @{ Name = 'xActiveDirectory'; RequiredVersion = '2.16.0.0';}
                @{ Name = 'xAdcsDeployment'; RequiredVersion = '1.1.0.0';}
                @{ Name = 'xComputerManagement'; RequiredVersion = '1.9.0.0';}
            );
        }

    };
};

Azure DSC Extension

Our Azure deployment uses the configuration and data file to configure the VM. The JSON for the DSC extension is shown below. Notice the following:

1. The modulesUrl setting specifies a Zip file that contains the DSC resources and configuration ps1 file. We create these zip files as part of our build process and upload them to an Azure storage account.

2. The configurationFunction setting specifies the name of the ps1 file to execute and the configuration within that we want to apply (a single file can contain more than one configuration, although ours don’t).

3. We pass in the EnvPrefix variable and set the IsAzure value to 1 so our configuration executes the right code.

4. The dataBlobUri within protectedSettings is our psd1 data file. The extension treats this as containing sensitive information – things held in this section are not displayed in any output from Azure Resource Manager.

In fairness, whilst at the moment we create JSON specific to each VM, I plan to refactor this to be common code that takes parameters rather than having an ARM template for each VM’s DSC.

      {
        "name": "[concat(parameters('envPrefix'),parameters('vmName'),'/',parameters('envPrefix'),parameters('vmName'),'dsc')]",
        "type": "Microsoft.Compute/virtualMachines/extensions",
        "location": "[parameters('VirtualNetwork').Location]",
        "apiVersion": "[parameters('ApiVersion').VirtualMachine]",
        "dependsOn": [
        ],
        "tags": {
          "displayName": "DomainController"
        },
        "properties": {
          "publisher": "Microsoft.Powershell",
          "type": "DSC",
          "typeHandlerVersion": "2.1",
          "autoUpgradeMinorVersion": true,
          "settings": {
            "modulesUrl": "[concat(parameters('artifactsLocation'), '/Environments/', parameters('envConfig'),'/',parameters('envConfig'),'.zip', parameters('artifactsSasToken'))]",
            "configurationFunction": "DomainController.ps1\\DomainController",
            "properties": {
              "EnvPrefix": "[parameters('EnvPrefix')]",
              "Credential": {
                "userName": "[parameters('adminUsername')]",
                "password": "PrivateSettingsRef:adminPassword"
              },
              "IsAzure": 1
            }
          },
          "protectedSettings": {
            "dataBlobUri": "[concat(parameters('artifactsLocation'), '/Environments/', parameters('envConfig'), '/', parameters('envConfig'),'.psd1', parameters('artifactsSasToken'))]",
            "Items": {
              "adminPassword": "[parameters('adminPassword')]"
            }
          }
        }
      }

We don’t include the DSC extension within the ARM template that deploys the VM itself; keeping it separate allows us to sequence the deployment of configuration to deal with dependencies between servers.

Azure ARM Templates

The approach we take to deploying VMs in Azure has been consistent for some time now. My ResourceTemplates Repo in GitHub uses nested templates to deploy a three-server environment and we use exactly the same approach here. Our ‘master template’ is stored in the environment folder and it calls nested deploys for each VM, VM DSC extension and supporting stuff such as virtual networks. The VM and DSC templates are stored in the VM folder with the DSC config, and the supporting templates are in our Modules\Templates folder since they are shared.

Conclusion

This has been a very long article without a great deal of code in it. I hope this explains how we approach our environment definition and deployment. I plan to do more posts that document more specific elements of a configuration or an environment.

Ultimately, I’m not sure that the goal of a single definition that covers multiple platforms and both host and guest configurations exists. However, I think we’ve got pretty close with our solution and it has minimal rework involved, particularly once you have built up a good library of VM configs that you can combine into an environment.

I should also point out that we are not installing apps – we are deploying a platform onto which our developers and testers can then install the applications they develop. This means that we keep the environments quite generic. Deployment of apps is still scripted (and probably uses VSTS Release Management) but is not included in the configurations we build. Having said that, there is nothing stopping a team extending the DSC to deploy their applications and thus build a more bespoke definition.

I’ve spoken to quite a few people about what we’ve done over the past few weeks and, certainly within the Microsoft space, many people want to do what we have done, but few were aware that tooling such as Lability and DSC was available to get it done. I hope this goes some way to plugging that gap.

Notes from the field: Using Hyper-V Nat Switch in Windows 10

The new NAT virtual switch that can be created on Windows 10 for Hyper-V virtual machines is a wonderful thing if you're an on-the-go evangelist like myself. For more information on how to create one, see Thomas Maurer's post on the subject.

This post is not about creating a new NAT switch. It is, however, about recreating one and the pitfalls that occur, and how I now run my virtual environment with some hack PowerShell and a useful DHCP server utility.

Problems Creating Nat Switch? Check Assigned IP Addresses

I spent a frustrating amount of time this week trying to recreate a NAT switch after deleting it. Try as I might, every time I executed the command to create the new switch it would die. After trial and error I found that the issue was down to the address range I was using. If I created a new switch with a new address range everything worked, but only that one time: If I deleted the switch and tried again, any address range that I'd used would fail.

This got me digging.

I created a new switch with a new address range. The first thing I noticed was that I had a very long routing table. Get-NetRoute showed routes for all the address ranges I had previously created. That led me to look at the network adapter created by the virtual switch. When you create a new NAT switch the resulting adapter gets the first IP address in the range bound to it (so 192.168.1.0/24 will result in an IP of 192.168.1.1). My adapter had an IP address for every single address range I'd created and then deleted.

Obviously, when the switch is removed the IP configuration is being stored by Windows somewhere. When a new switch is created all that old binding information is reapplied to the new switch. I'm not certain whether this is related to the interface index, name or something else, since when I remove and re-add the switch on my machine it always seems to get the same interface index.

A quick bit of PowerShell allowed me to rip all the IP addresses from the adapter at once. The commands below are straightforward. The first allows me to find the adapter by name (shown in the Network Connections section of control panel) - replace the relevant text with the name of your adapter. From that I can find the interface index, and the second command gets all the IPv4 addresses (only IPv4 seems to have the problem here) and removes them from the interface - again, swap your interface index in here. I can then use PowerShell to remove the VMswitch and associated NetNat object.

Get-NetAdapter -Name "vEthernet (NATSwitch)"
Get-NetIPAddress -InterfaceIndex 13 -AddressFamily IPv4 | Remove-NetIPAddress
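
For completeness, the removal mentioned above is just a couple more commands (the switch name is from my machine, so substitute your own; the second line simply removes any NetNat objects present, which on my machine is only ever the one):

Remove-VMSwitch -Name "NATSwitch" -Force
Get-NetNat | Remove-NetNat -Confirm:$false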

Once that's done I can happily create new virtual switches using NAT and an address range I've previously had.

Using DHCP on a NAT switch for ease

My next quest was for a solution to the IP addressing conundrum we all have when running VMs. I could assign each VM a static address, but then I have to keep track of them. I also have a number of VMs in different environments that I want to run and I need external DNS to work. DHCP is the answer, but Windows 10 doesn't have a DHCP server and I don't want to build a VM just to do that.

I was really pleased to find that somebody has already written what I need: DHCP Server for Windows. This is a great utility that can run as a service or as a tray app. It uses an ini file for configuration and by editing the ini file you can manage things like address reservations. Importantly, you can choose which interface the service binds to, which means it can be run only against the virtual network and not cause issues elsewhere.

There's only one thing missing: DNS. Whilst the DHCP server can run its own DNS if you like, it still has a static configuration for the forwarder address. In a perfect world I'd like to be able to tell it to hand my PC's primary DNS address to clients requesting an IP.

Enter PowerShell, stage left...

Using my best Google-fu I tracked down a great post by Lee Holmes from a long time ago about using PowerShell to edit ini files through the old faithful Windows API calls for PrivateProfileString. I much prefer letting Windows deal with my config file than write some complex PowerShell parser.

I took Lee's code and created a single PowerShell module with three functions as per his post which I called Update-Inifiles.psm1. I then wrote another script that used those functions to edit the ini file for DHCPserver.

It's dirty and not tested on anything but my machine, but here it is:

import-module C:\src\Update-IniFiles.psm1

$dnsaddr = (Get-DnsClientServerAddress -InterfaceIndex (get-netroute -DestinationPrefix 0.0.0.0/0)[0].ifIndex -AddressFamily IPv4).ServerAddresses[0]

if ($dnsaddr.Length -gt 0)
{
    Set-PrivateProfileString "C:\Program Files\DHCPSrv\dhcpsrv.ini" GENERAL DNS_0 $dnsaddr
}
else
{
    Set-PrivateProfileString "C:\Program Files\DHCPSrv\dhcpsrv.ini" GENERAL DNS_0 8.8.8.8
}

The second line is the one that may catch you out. It gets the DNS server information for the interface that is linked to the default IPv4 route. On my machine there are multiple entries returned by the get-netroute command, so I grab the first one from the array. Similarly, there are multiple DNS servers returned and I only want the first one of those, too. I should really expand the code and check what's returned, but this is only for my PC - edit as you need!

Just in case I get nothing back I have a failsafe which is to set the value to the Google public DNS server on 8.8.8.8.

Now I run that script first, then start my DHCP server and all my VMs get valid IP information and can talk on whatever network I am connected to, be it physical or wireless.


Generation 2 Virtual Machines on Windows 8.1 and Server 2012 R2 plus other nice new features

DDD North 2013 was a fantastic community conference but sadly I didn’t get chance to deliver my grok talk on Generation 2 virtual machines. A few people came up to me beforehand to say they were interested in the topic, and a few more spoke to me afterwards to ask if I would blog. I had planned to write a post anyway, but when you know it’s something people want to read you get a bit more of a push.

This post will cover two areas of Hyper-V in Windows 8.1 and Server 2012 R2: Generation 2 virtual machines, which are completely new, and a number of changes that should apply to all VMs, be they gen 1 or gen 2. What I’m not going to cover, as it’s a post all of its own, is the new and improved software-defined networking in Hyper-V.

Generation Next

As you can see in the screenshot below, when creating a virtual machine in Windows 8.1 and Server 2012 R2 you are asked which generation of VM you want. The screen gives a brief and reasonable summary of what the differences are… to a point.

[Screenshot: the New Virtual Machine wizard asking which generation of VM to create]

Generation 1 virtual machines are a mix of synthetic and emulated hardware. This goes all the way back to previous virtualisation solutions where the virtual machine was usually a software emulation of the good old faithful Intel 440BX motherboard.

  • The emulated hardware delivered a high level of compatibility across a range of operating systems. Old versions of DOS, Windows NT, Netware etc would all fairly happily boot and run on the 440BX hardware. You didn’t get all the cleverness of a guest that knew it was inside a VM but it worked.
  • PXE (network) boot was not possible on the implementation of the synthetic network adapter in Hyper-V. That meant that you had to use the emulated NIC if you wanted to do this.
  • Virtual hard disks could be added to the virtual SCSI adapter whilst the machine was running, but not the IDE adapter. You couldn’t boot from a SCSI device, however, so many machines had to have drives on both devices.
  • Emulated keyboard controllers and other system devices were also implemented for compatibility.

Generation 2 virtual machines get rid of all that legacy, emulated hardware. From what I’ve read and heard, all the devices in a generation 2 VM are synthetic, software generated. This makes the VM leaner and more efficient in how it uses resources, and potentially faster as gen 2 VMs are much closer to the kind of hardware found in a modern PC.

There are three key changes in Gen 2 as far as most users are concerned:

  • SCSI disks are now bootable. There is no IDE channel at all; all drives (VHD or virtual optical drive) are now on the SCSI channel. This is far simpler than before.
  • Synthetic network adapters support PXE boot. Gone is the old legacy network adapter.
  • The system uses UEFI rather than BIOS. That means you can implement secure boot on a VM. Whilst this might sound unnecessary it could be of great interest to organisations where security is key.

The drawback of gen 2 is that, right now, only Windows 8, Server 2012 and their respective new updated versions can be run as a guest in a gen 2 VM. I’m not sure that this will change in terms of Microsoft operating systems, but I do expect a number of Linux systems to be able to join the club eventually. I have done a good deal of experimentation here, with a large range of Linux distributions. Pretty much across the board I could get the installation media to boot, but install failed because the hardware was unknown. What this means is that when Microsoft release new versions of the Hyper-V kernel additions for Linux we should see support expand in this regard.
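
As an aside, you do not need the wizard for any of this; a generation 2 VM can be created from PowerShell with the Hyper-V module. A quick sketch (the name, memory, paths and switch are just examples):

# Create a generation 2 VM with a new VHDX attached to its SCSI controller
New-VM -Name 'Gen2Demo' -Generation 2 -MemoryStartupBytes 2GB `
       -NewVHDPath 'C:\VMs\Gen2Demo.vhdx' -NewVHDSizeBytes 60GB -SwitchName 'External'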

The screenshot below shows the new hardware configuration screen for a generation 2 virtual machine. Note the much shorter list of devices in the left-hand column:

[Screenshot: the hardware configuration screen for a generation 2 virtual machine]

Useful changes across generations

There have been some other changes that, in theory, span generations. More on that in a bit.

Drives

When Server 2012/Windows 8 arrived, Microsoft added bandwidth management for VMs. That’s useful for IT pros who want to manage what resources servers can consume, but it’s also jolly handy for developers who would like to try low bandwidth connections during testing. We can’t do anything about latency with this approach, but it’s nice to be able to dial a connection down to 1Mb to see what the impact is.

Server 2012 R2/Windows 8.1 add a similar option for the virtual hard drive. We can now specify QoS for the virtual hard disks, in IoPs. The system allows you to set a minimum and maximum. It’s important to remember here that this does depend on the physical tin beneath your VM. I run two SSDs in my laptops now, but before that my VMs ran on a 5400rpm drive. Trying to set a high value for minimum IoPs wouldn’t get me very far here. What is more useful, however, is being able to set the maximum value so we can start to simulate slow drives for testing.

As with network bandwidth management, I think this is also a great feature for IT pros who need to manage contention between VMs and focus resource on key machines.
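
The disk QoS values can also be set from PowerShell rather than the GUI. A rough sketch, assuming a VM named TestVM with a disk on the first SCSI controller:

# Cap the disk at 200 IOPS to simulate a slow drive (the values and locations are examples)
Set-VMHardDiskDrive -VMName 'TestVM' -ControllerType SCSI -ControllerNumber 0 -ControllerLocation 0 -MaximumIOPS 200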

The screenshot below shows the disk options screen with QoS and more.

[Screenshot: the virtual hard disk settings screen, including the new QoS options]

Also new is the ability to resize a VHD that is attached to a running machine. This is only possible with disks attached to SCSI channels, so gen 2 VMs may get more benefit here. Additionally, VHDs can now be shared between VMs. Again, this is SCSI only, but it is a really useful change because it means we can build clusters with shared storage hosted on VHDs rather than direct-attached iSCSI or Fibre Channel. The end result is to make more options available to the little guys who don’t have the resources for expensive tin. It’s also great for building test environments that need to mirror those of a customer – we do that all the time and it’s going to give us lots of options.
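
The online resize is equally scriptable. A minimal sketch, assuming a VHDX attached to a running VM via a SCSI controller (the path is an example):

# Grow a data disk to 100GB while the VM is running
Resize-VHD -Path 'C:\VMs\TestVM-Data.vhdx' -SizeBytes 100GB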

Networks

I already said that I’m not going to dive into the new software-defined-networking here. If terms like NVGRE get you excited then there are people with more knowledge of comms than I have writing on the subject. Suffice to say it looks really useful for IT pros but not really for developers, I don’t think.

Also not much use for developers but incredibly useful for IT pros is the new Protected Network functionality. The concept of this is really simple and so, so useful:

Imagine you have a two node cluster. Each node has a network connection for VMs, not shared by the host OS, and one for the OS itself that the cluster uses. Node 1 suddenly loses connectivity on the VM connection. What happens? Absolutely nothing with Server 2012, because the VMs are still running and nothing knows that the VMs no longer have connectivity. With Server 2012 R2/Windows 8.1 you can enable protected network for the virtual adapter. Now, the system is checking connectivity to the VM and in our scenario all the VMs on node 1 will fail merrily over to node 2, which still has a connection.

I know we will find this new feature useful on our clustered, production VM hosts. Again, this really helps smaller organisations get better resilience from simpler hardware solutions.

The screenshot below shows the advanced options for a network adapter with network protection enabled.

[Screenshot: advanced network adapter options with protected network enabled]

Enhanced session mode

I said that, in theory, many of the new changes are pan-generation (and pan-guest OS). According to the documentation, enhanced session mode should work on more than just Windows 8.1 or Server 2012 R2 guest operating systems. In practice, I have not found this to be the case, even after updating the VM additions on my machines to the latest version.

It is useful, however. When you enable enhanced session mode then, providing you have enabled remote desktop on the guest, this will be used to connect to the VM, even if the guest has no network connection to the host OS, or even a network adapter!

The screenshot below shows the option for enhanced session mode. This is enabled by default in Windows 8.1 and disabled by default in Server 2012 R2.
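
If you prefer PowerShell to the GUI, the host-level policy can be flipped with a single command (run on the Hyper-V host):

# Allow enhanced session mode connections on this host
Set-VMHost -EnableEnhancedSessionMode $true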

[Screenshot: the enhanced session mode policy setting]

When you have the option enabled you will see a new button on the right of the toolbar, as shown in the image below.

[Screenshot: the VM connection toolbar showing the enhanced session toggle button]

That little PC with a plus symbol toggles the VM connection between old-style and the new, RDP-based connection. The end result is that you get more screen resolution choices, you can copy and paste properly between your host and the VM (no more paste keystrokes and you can copy files and documents!) and all the USB device pass-through from the host works too.

For developers working inside a VM this is great – no more needing network connections to be able to RDP into a box. That means that you can run sensitive VMs, or multiple copies of a VM on multiple machines, much more easily than before. If you enable the new connection mode on a VM, and restart it, when the VM begins to boot it connects in the old way, but as soon as it detects the RDP service on the guest you get a dialog asking you for the new resolution and it switches to the RDP-style connection. It’s great.

I’m hoping that there will either be updates for older Microsoft OS versions, or updated VM additions, that will give the consistent result that I have not so far experienced. In theory, updates to the Linux kernel additions could also add this new connection type, but again, so far my experience is that it doesn’t work right now.

Summary

To sum up then:

  • Generation 2 VMs – leaner, meaner and simpler all round but limited to the latest Microsoft desktop and server OSes. I can’t see a reason not to use them for the latest OS version.
  • Disk QoS – should be really useful for dev/test when you need to simulate a slow drive. Great for IT pros to manage environments with a mix of critical and non-critical VMs.
  • Online VHD resizing. There are so many times I’ve needed this on dev/test in the last few months alone. Shame it’s SCSI only so you can’t grow the OS disk on a gen 1 VM but you can’t have everything.
  • Shared VHD. Another useful new option that will help building dev/test environments and will also be useful for smaller organisations who want to build things like virtualised clustered file servers using a cluster shared volume (CSV).
  • Network protection. Great for IT pros running host clusters. Can’t see a use for devs.
  • Enhanced session mode. Useful all round, especially for devs who want to easily work on a VM. Useful for IT pros who need to copy stuff on to running VMs, but so far my experience is mixed as it only works on Windows 8.1 and Server 2012 R2 guests.

Windows 8.1 is already on MSDN and TechNet so if you’re a dev or IT Pro with the right subscriptions, why aren’t you trying this stuff already? For everybody else, the 18th of this month sees general availability and I expect evaluation media will be available for you to play with.

Fixing Lab Manager environments with brute force

As you’ve probably seen, our Lab Manager/SCVMM 2008 R2 upgrade to SCVMM 2012 SP1 was not the smoothest in the world. The end result was a clean lab manager and SCVMM install, but a raft of virtual machines that had previously been part of environments.

In tidying up, Richard and I learned a few things about picking apart VMs that were once part of an environment such that a new environment could be built from the wreckage.

There are two approaches to getting what you need: firstly, you could simply compose the existing virtual machines into a new environment without storing them in, and deploying from, SCVMM. Secondly, you could pull the VMs back into SCVMM so that you could build a new environment.

Don’t forget to fix the networks

If you want to use the running VMs you will need to make sure that you have recreated any private network generated by Lab Manager. These are all helpfully listed in the XML configuration file of the VMs. They are normally named Lab_<GUID>_NI so are easy to find in the file. On the Hyper-V host, using Hyper-V Manager, you will need to create a new private virtual network with the name you just found. You should then attach the synthetic network adapter of your VMs (not the legacy network adapter) to this private network. If you have a DC, and you told Lab Manager it was a DC, then you are likely to need to hook its legacy adapter to the private network as well.
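
A rough PowerShell sketch of recreating the private network and reattaching the synthetic adapter (the switch name comes from the VM's XML file and the VM name is an example):

# Recreate the private switch Lab Manager expects, then connect the synthetic (non-legacy) adapter to it
$switchName = 'Lab_<GUID>_NI'
New-VMSwitch -Name $switchName -SwitchType Private | Out-Null

$nic = Get-VMNetworkAdapter -VMName 'MyLabVM' | Where-Object { -not $_.IsLegacy }
Connect-VMNetworkAdapter -VMNetworkAdapter $nic -SwitchName $switchName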

Scenario 1: Pull existing machines into an environment

The big problem you are likely to find here is that whilst you have imported the VMs onto your hyper-v server and SCVMM can see the machines just fine, Lab Manager refuses to show them to you.

The reason for this is that Lab Manager believes the VMs are currently part of an environment, just not one it currently has. It therefore hides the VMs from you. It turns out that this is pretty straightforward to fix. In the notes field of the running VM settings you will see a block of XML. That is read by Lab Manager to identify the VMs in environments. Simply delete that XML and the machine will show up in Lab Manager as being available to compose into an environment.
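
Clearing the notes field can be done by hand in Hyper-V Manager, or with a one-liner (the VM name is an example):

# Blank the notes field so Lab Manager no longer thinks the VM belongs to an old environment
Set-VM -Name 'MyLabVM' -Notes ''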

Scenario 2: Get the VMs back into SCVMM to build a new environment and deploy it.

This is a trickier situation and one which needs to follow the steps I talked about in my previous post about building VMs for Lab Manager.

The problem here is not just the XML, but that Lab Manager has probably mangled the hardware settings of the VM as well. You will need to tidy each VM before storing it in SCVMM ready for Lab Manager:

  • Remove the XML from the notes field.
  • Remove the legacy network adapter.
  • Configure the network adapter within windows to use an IP address and DNS handed to it from DHCP.
  • Delete any snapshots.
  • Make sure you cleanly shut down the VM – don’t save it!

If you follow those steps you can store the VMs back into SCVMM then build a new environment from the stored VMs. If this still gives you trouble then you should export the VMs from hyper-v, reimport them as a copy to get a new unique ID and then push those into SCVMM.

So far this has worked just fine for us with Richard working his magic in Lab Manager whilst I fix up VMs in hyper-v and SCVMM.

Things to remember when building virtual machines for a lab manager environment

As you will have read on both mine and Richard’s blogs, we have recently upgraded our Lab environment and it wasn’t the smoothest of processes.

However, as always it has been a learning experience and this post is all about building VM environments that can be sucked into Lab Manager and turned into a lab environment that can be pushed out multiple times.

Note:  This article is all about virtual machines running on Windows Server 2012 that may have been built on Windows 8 and are managed by SCVMM 2012 SP1 and Lab Manager/TFS 2012 CU1. Whilst the things I have found in terms of prepping VMs for Lab Manager are likely to be common to older versions, your mileage may vary.

Approaches to building environments

There are a number of approaches to building multi-machine environments that developers can effectively self-serve as required:

  • The ALM Rangers have a VM Factory project on Codeplex which aims to deliver scripted build-from-scratch on demand.
  • SCVMM has templates for machines that are part-built and stored after running sysprep. Orchestrator can then be used to deploy templates and run scripts to wire them together.
  • Lab Manager allows you to take running VMs and group them together into an environment. It stores all the VMs in SCVMM and when requested, generates new VMs by copying the ones from the library.

Trouble at ‘mill

There are also a number of problems in this space that must balance the needs of IT pros with the needs of developers:

  • Developers are an impatient bunch. They will request the environment at the last minute and need it deployed as quickly as possible. This doesn’t necessarily work well with complete bare-metal scripted approaches.
  • Developers would also prefer some consistency – if they have to remember one set of credentials it’s probably too much. Use different accounts, passwords and machine names for all your environments and it can get tricky.
  • Developers love to use the Lab Manager and Test Manager tooling. This delivers great integration with the Team Project in Team Foundation Server.
  • IT Pros need to deal with issues caused by multiple machines with the same identities sharing a network. This is especially true of domain controllers.
  • IT pros would like to keep the number of snapshots (SCVMM checkpoints) to a minimum, especially when memory images are in play as well.
  • IT pros would prefer the environments used by the developers to match the way things are installed in the real world. This is less critical for the actual development environment but really important when it comes to testing. This tends to lead to requirements for additional DNS entries and multiple user accounts. This is especially true if you are building SharePoint farms properly.

How IT pros would do it…

Let’s use one of our environments as an example. We have a four server set:

  1. The Domain Controller is acting as DNS and also runs SQL Server. It doesn’t have to do the latter, but we were trying to avoid an additional machine. Reporting services and analysis services are installed and reporting services is listening on a host header with a DNS CNAME entry for it.
  2. An IIS server allows for deployment of custom web apps.
  3. A CRM 2011 server is using the SQL instance on the DC for its database and reporting services functions. The CRM system itself is published on another host header.
  4. A SharePoint 2010 server is using the SQL instance as well. It has separate web applications for intranet and mysites and each is published on a separate host header.

If we were building this without lab manager then we would give the machines two NICs. One would be on our network and the other on a private network. On the DC we unbind the nasty windows protocols from our network. Remote desktop is enabled on all machines for the devs to access it.

Lab Manager complicates matters however. It is clever enough to understand that we might need to keep DC traffic away from our network and has a mechanism to deliver this, called Network Isolation. How it actually goes about that is somewhat problematic, however.

Basically, Lab Manager wants to control all the networking in the new environment. To do that it adds new network adapters to the VMs and it uses those new adapters to connect to the main network. It expects a single adapter to be in the original VM, which it connects to a new private network that it creates.

Did I mention that IT pros hate GUIDs? Lab Manager loves them. Whilst I can appreciate that it’s the best way to generate unique names for networks and VMs it’s a complete pain to manage.

Anyway, it’s really, really easy to confuse Lab Manager. Sadly, if the IT pro builds what they consider to be a sensible rig, that will confuse Lab Manager right away. The answer is that we need to build our environment the right way and then trim it in readiness for the Lab Manager bit.

Building carefully

I would build my environment on my Windows 8 box. I create a private network and use that as a backbone for the environment. I assign fixed IP addresses to each server on that network. Each server uses the DC as its DNS. That way I can ensure everything works during build. I also add a second NIC to each box that is connected to my main network. I carefully set the protocols that are bound to that NIC. Both of those network adapters are what lab manager calls ‘synthetic’ – they are the native virtualised adapter hyper-v uses, not the emulated legacy adapter.

I carefully make sure that all host header-required DNS entries are created as CNAMEs that point to the host record for the server I need. This is important because all the IP addresses will change when Lab Manager takes over.
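
If your DC is running Windows Server 2012 the DnsServer PowerShell module will create those CNAMEs for you (dnscmd does the same job on older servers). The zone and host names below are just examples:

Add-DnsServerResourceRecordCName -ZoneName "lab.local" -Name "crm" -HostNameAlias "labapp01.lab.local"
Add-DnsServerResourceRecordCName -ZoneName "lab.local" -Name "intranet" -HostNameAlias "labsp01.lab.local"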

I may make snapshots as I build so I can move back in time if something goes wrong.

When built, I will probably store my working rig so I can come back to it later. I will then change the rig, effectively breaking it, in order to work with Lab Manager.

The Lab Manager readiness checklist

  • Lab Manager will fail if there is more than a single network adapter. It must be a synthetic adapter, not a legacy one. The adapter should be set to use DHCP for all its configuration – address and DNS.
  • Install, but do not configure, the Visual Studio Test Agent before you shut the machines down. We’ve seen Lab fail to install this many times, but if it’s already there it normally configures it just fine.
  • Delete all the snapshots for the virtual machine. Whilst Lab Manager can cope with snapshots, both it and SCVMM get confused when machines are imported with different configurations in the snapshots from the final configuration. It will stop Lab Manager in its tracks.
  • Make sure there is nothing in the notes field of the VM settings. Both Lab Manager and SCVMM shove crap in there to track the VM. If anybody from either team is listening, this is really annoying and gets in the way of putting notes about the rigs in there. Lab Manager shoves XML in there to describe the environment.
  • Make sure there are no saved states. Your machines need to be shut down properly when you finish, before importing into SCVMM. The machines need to boot clean or they will get very confused and Lab Manager may struggle to make the hardware changes.
  • Make sure you export the machines – don’t just copy the folder structure, even though it’s much easier to do.
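
From PowerShell on a Server 2012 host the export is a one-liner (the VM name and path are only examples). The VM has to be shut down first, which fits nicely with the point above about saved states:

Export-VM -Name "LabCRM01" -Path "D:\LabExports"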

Next, get it into SCVMM

There is a good reason to export the VMs. It turns out that SCVMM latches on to the unique identifier of the VM (logical, if you think about it). The snag with this is that you can end up with VMs ‘hiding’. If I copy my set of four VMs to an SCVMM library share I can’t have a copy running as well. Unless you do everything through SCVMM (and for many, many reasons I’m just not going to!) you can end up with confusion. This gets really irritating when you have multiple library shares because if you have copies of a VM in more than one library, one will not appear in the lists in SCVMM. There are good reasons why I might want to store those multiple copies.

Back to the plot. SCVMM won’t let us import a VM. We can construct a new one from a VHD but I have yet to find a way to import a VM (why on earth not? If I’ve missed something please tell me!). So, we need to import our VMs onto a server managed by SCVMM. We have a small box for just this purpose – it’s not managed by Lab Manager but is managed by our SCVMM so I can pull machines from it into the library.

Import the VMs onto your host using Hyper-V Manager. Make sure you create sensible folder structures and names for them all. Once they are imported make sure you close Hyper-V Manager. I have seen SCVMM fail to delete VM folders correctly because Hyper-V Manager seems to have the VHD open for some reason.
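
If you would rather script the import than click through Hyper-V Manager, the Server 2012 module has Import-VM, and the -Copy and -GenerateNewId switches are handy if you hit the unique identifier problem described above. The paths here are illustrative:

Import-VM -Path "D:\LabExports\LabCRM01\Virtual Machines\<GUID>.xml" -Copy -GenerateNewId -VirtualMachinePath "D:\Hyper-V\LabCRM01" -VhdDestinationPath "D:\Hyper-V\LabCRM01"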

In SCVMM, refresh the host you’ve just imported the VMs to. You should see them in the VM list. I tend to refresh the VMs too, but that’s just me. Start the VMs and let SCVMM get all the information from them like host name etc. I usually leave them for a few minutes, then shut them down cleanly from the SCVMM console.

Now we know SCVMM is happy with them, we can store the VMs in the SCVMM library that Lab Manager uses. You should see them wink out of existence on the VM host once the store is complete.

Create the Lab environment

At this point the IT guys can hand over to the people managing labs. In our case that’s Richard. He can now compose a new environment within Lab Manager and pull the VMs I have just stored into his lab. He tells the lab that it needs to run with network isolation and identifies the DC.

What Lab Manager will then do is deploy a new VM through SCVMM using the ones I built as a source. It will then modify the hardware configuration of the VMs, adding a legacy network adapter. It also configures the MAC address of the existing synthetic adapter to be static.

A new private virtual network is created on the target VM host. It’s really hard to manage these through SCVMM so if Lab ever leaves them hanging around I delete them using hyper-v manager. The synthetic adapters in the VMs are connected to the private network while the legacy adapters are connected to the main network.

Exactly why they do it this way I’m not sure. Other than needing legacy adapters for PXE boot (which this isn’t doing) I can’t see why we’re using legacy adapters. I am assuming the visual studio team selected them for a good reason, probably around issuing commands to the VMs, but I don’t know why.

When the environment is started, Lab will assign static IP addresses to the NICs attached to the private network. All ours seem to be 192.168.23.x addresses. It will also set the DNS address to be that which has been assigned to the DC in the lab. The legacy adapters will be set to DHCP for all settings. The end result is a DC that is only connected to the private network and all other machines connected to both private and main networks.

Once the environment is up, Lab Manager should configure the test agent and you’re off. The new lab environment can then be stored in such a way as to allow multiple copies to be deployed as required by the devs.

Notes from the field on our SCVMM/Lab Manager environment upgrade

Richard has posted a group effort article on his blog about our System Center 2008 R2/Lab Manager upgrade to System Center 2012 SP1/Lab Manager. All did not go swimmingly…

I have more helpful notes that I am writing up myself and will post over the next few days around the steps to fix virtual machines that are part of an environment and tips on building complex multi-machine rigs for lab manager.

A Virtual Ice Cream Sandwich: Android 4 x86 in a Hyper-V VM

More and more of our projects include a stipulation from the client that any web sites must work on the tablet devices of senior management. Up until recently that was exclusively iPads, but we are now seeing more Android devices out there. I wanted to find a straightforward way for us to test on such devices, preferably without needing to build up a collection of expensive physical kit.

I read with interest Ben Armstrong’s post about running Android 2.2 (Froyo) in a VM using a build from the Android x86 project. I started my journey by replicating his steps, so I won’t document any of that here, other than to note that the generic x86 build you need is now a deprecated one, so I had to hunt a little to find what I needed.

Creating the VM was a doddle. However, once I’d got things up and running I hit a snag: The sites I needed to test were hosted on SharePoint and required authentication. The web browser on the Android 2.2 build steadfastly refused to present a logon dialog for any sites. I could rework my test sites with anonymous access or forms-authentication but that didn’t fill me with enthusiasm. I wondered, then, if a later Android version might be my salvation.

That in itself led to a long time spent digging around the corners of the internet: The Android x86 project has a number of Ice Cream Sandwich builds, but all are targeted at various types of hardware device and whilst all had support for wifi, none had support for ethernet. Since I can’t present a wifi device within the Hyper-V VM I had to look elsewhere.

The build I finally used was one I found at tabletsx86.org – an Android 4 build with experimental ethernet support.

I ran through a number of installations as I edged my way through the different options each time I found that a choice I’d made prevented me from making some essential tweak. To save you all the effort, I’ve documented the steps here. Since I was a complete Android novice I’ve taken the approach of showing screenshots of every step for other novices like myself.

Step 1: Getting things installed

The Virtual Machine we need to create doesn’t have to be powerful. However, we are running an OS that is not Hyper-V aware, so we can’t just go with the defaults.

I created a machine with 512Mb of RAM and a single processor. I started with a 16Gb virtual disk as the hard drive but after a few passes I increased that to 32 to give me some headroom should I want to install apps later. The important step, however, is that you need to add a Legacy Network Adapter and remove the standard virtual adapter that Hyper-V will add.
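
As an aside, if your host is Windows 8 or Server 2012 you can build the same VM from PowerShell. A rough sketch only; the names, paths and switch are all made up, so adjust to taste:

New-VM -Name "Android4" -MemoryStartupBytes 512MB -NewVHDPath "D:\VMs\Android4.vhdx" -NewVHDSizeBytes 32GB
Remove-VMNetworkAdapter -VMName "Android4" -Name "Network Adapter"               # remove the standard synthetic adapter
Add-VMNetworkAdapter -VMName "Android4" -IsLegacy:$true -SwitchName "External"   # Android x86 needs the legacy adapter
Add-VMDvdDrive -VMName "Android4" -Path "D:\ISOs\android-x86-4.0.iso"            # attach the install ISO (use Set-VMDvdDrive if the VM already has a DVD drive)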

hyper-v settings

Once you’ve got your VM built, insert the ISO for the Android 4 build into the DVD drive and boot the machine.

Select the option to install Android to the hard disk of the machine.

screen1

On the next screen choose Create/Modify partitions

screen2

In the partition editor, left and right cursor keys will move between the menu items; enter will select. Choose New to create a new partition.

screen3

You want to create a new primary partition

screen4

The utility defaults to the full size of the disk. Simply hit enter to confirm that.

screen5

Now we have our partition we need to mark it as bootable.

screen6

And finally we need to write the changes out to disk.

screen7

Now we have our partition we can exit the utility to continue the installation.

screen8

The installer will now show our new partition and allow us to select it as the target for the installation.

screen9

We then need to choose what format to use for the installation. I used ext3. I did try NTFS once, thinking that I could easily transfer files onto the system, but when I attached the VHD Windows failed to recognise the file system, so I went back to ext3, figuring I’d simply transfer stuff over the network.

screen10

Unsurprisingly, the installer asks for confirmation of the format.

screen11

Then it shows progress as it formats.

screen12

Next you need to install the Grub bootloader. Honestly, I’ve not tried without this, but I modify the bootloader options later so unless you want to plough your own furrow, install Grub.

screen13

The default option at the next step is to install the system directory as read only. I discovered very quickly that some of the things I might need to fiddle with are in that system directory so I’ve chosen to make it writable.

screen14

Now the installation occurs.

screen15

Once the installation is complete you should choose to create a fake SD card. I learned the hard way that if you don’t, saving stuff in your Android web browser won’t work.

screen16

Sadly the largest size we can create is 2Gb, which conveniently is the default.

screen17

Once again we get a progress bar whilst the SD card image is created.

screen18

Now we’re all done and we get the option to reboot. Note that you can’t eject the installation media yet – it’s locked, so you’ll have to reboot.

screen19

When the VM reboots you’ll be back at the first screen, allowing you to choose to install or run the live CD. Turn the VM off so you can eject the media.

At this point the installation is done. You have a shiny new Android VM running Ice Cream Sandwich.

Step 2: The Android wizard

This isn’t difficult at all, other than you need to remember that when you click on the VM to capture the mouse, it’s really emulating your finger. That means that you need to click and drag in drop down menus. I also discovered that the right mouse button seems to act as the hardware back button. Clicking the mouse is equivalent to tapping with your finger.

I set the language to UK English as my first step.

screen21

Then the wizard will burble for a little while.

screen22

I chose to automatically set the time. I think grey outlines of check boxes are hard to see when they are on a black background!

screen23

The next step allows you to use your Google account to keep settings and stuff. I’m building a VM that will be generic and used by lots of people so I skip this one.

screen24

I am happy to use location services though – we want to use this thing for testing, after all.

screen25

Again, because this is a build for lots of users I’ve put the company in as the owner name. Note that even though we chose United Kingdom as the location, the keyboard setting is for a US keyboard.

screen26

Next we get an obligatory screen where we agree to stuff…

screen27

…and we’re done.

screen28

The system helps you through how to use it. The important bits are the icons at the bottom. The upward pointing outline of an arrow in the middle brings you back to the home screen.

screen29

A handy tip

This thing feels a lot like Linux to me. Conveniently, pressing alt+f1 will switch to a console screen. Alt+left arrow and alt+right arrow will switch between consoles and the graphical UI.

Inside the console you can use familiar tools like ping and nslookup. It’s not a full-fat linux box, mind you. The two commands I find myself using most in the console are reboot and halt. Odd that there’s no way to cleanly shutdown – no shutdown command or even an old school init 0!

A couple of minor hiccups

Having got my VM up and running and gone through the startup wizard in Android there were a few things not quite right. First of all the screen resolution was too low at only 800x600. Step forward my very rusty Linux experience and my much less rusty internet research expertise!

More worryingly, when I boot the machine it doesn’t always pick up the correct DNS settings. That turned out to be much more interesting. Strangely, things worked at home but not in the office, and it came down to the DHCP responses being different on the two networks: the office network was not responding to the request for DHCP option 119 – domain suffix search order. Fixing that solved the problem (but that’s another can of worms and I’ll write up a separate post about that one!).

Step 3: Setting the screen resolution

This one turned out to be quite easy, although it involves using Vi, which is a text editor whose arcane commands I have very limited knowledge of.

The first thing we need to do is find information about what display modes are available. To do this we boot the VM and use the options available to modify the boot parameters. Be aware that when you boot the VM the Grub screen only shows for a few seconds before the first option is booted automatically. When you see the screen below, hit the ‘a’ key to easily append options to the boot command.

screen30

When you hit ‘a’ you will be presented with the boot command to edit. Options on the command line are separated by spaces. Add a new one: vga=ask

screen31

Hit the enter key and the OS will boot. You will see a black screen with a number of options on it. Hit enter again at this screen in order to view the display modes available to us.

screen32

From the list of available modes, choose the one you want to use. The system is waiting for you to type in the three character hex code for the mode you wish to use. For 1024x768 at 32 bit, for example, enter 318

screen33

Assuming all works correctly  you will see Android running in your chosen resolution. Sadly, it’s not permanent yet. I’ve also become paranoid enough that before I edit the bootloader options permanently I like to try what I’m going to do first.

Reboot the system and hit ‘a’ to append boot options. This time we want to specify the display option we want to use. Just to bend your head a little, the boot option needs the decimal equivalent of the hex value that the display modes screen showed us. For our 1024x768x32, the hex was 318. The decimal is 792, so we append vga=792 to the boot options.
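
(If you want to check the conversion yourself: 0x318 = 3×256 + 1×16 + 8 = 792.)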

screen34

When Android boots, you should see it in 1024x768 once more:

screen35

Now we need to make the change permanent. To do that we need to edit the configuration file that the Grub bootloader uses.

To do that we need to reboot the system in debug mode.

Boot the system and use the cursor keys to select the second option on the boot menu.

screen36

The system will boot to a command shell:

screen37

Once you’re in the command prompt, typing clear will clear the screen and get rid of the boot messages. Then you need to enter the following commands:

cd /

mount -o remount,rw /mnt

cd /mnt/grub

vi menu.lst

What does that lot do? The part of the filesystem that stores the bootloader is attached as read-only. The mount command effectively detaches and reattaches that part of the filesystem so we can modify it. The files we want are in the grub folder within mnt. Finally, we open the text editor Vi to change the file.

Vi is a bit arcane, although extremely powerful. For help with the commands look at online tutorials, like the one hosted by Washington University.

Once we’re in the config file we are going to add the vga=792 option to the end of the default boot command. I’ll tell you what Vi commands I use to get the job done – note that they are not necessarily the best ones, they just work for me. I know about half a dozen Vi commands and they allow me to get by. If I want to do something clever I have to look it up!

screen38

In Vi, the cursor keys allow you to move around the file. Pressing escape tells Vi to listen for commands. Move down to the start of the first line of the first boot section (the first occurrence of ‘kernel’). Press Esc then ‘o’. That should give you a new line after kernel.

screen39

Now use the cursor keys to navigate to the end of that first ‘kernel…’ line and you should be able to type ‘ vga=792’

screen40

Now we want to get rid of that extra line. Move the cursor to the start of it and hit Esc then dd (escape then hit ‘d’ twice).

Finally we save the file. Esc+:wq is the command to write out the file and quit.

screen41

You should now find yourself back at the command prompt. Type reboot -f to reboot the system.

You should now find that by default your Android VM boots into your chosen resolution.

A quick side note

If you don’t have control over your own DHCP server you can use the following command to poke the DNS settings into life:

setprop net.dns1 x.x.x.x

Where x.x.x.x is the IP address of your DNS server. You can also add a second with net.dns2.

You can also give the VM more memory with no issues – mine now runs with 1024Mb. I’ve also added a second CPU core as an experiment which works but I’m not sure it’s any quicker.

IT Camp Leeds Roundup

Yesterday was great fun and I was really pleased to see so many Black Marble event regulars at the IT Camp. It was great to hear so many requests for more events like it in Leeds. We’re all keen to run more, but we need people to attend and give us feedback in order to be able to do that.

I hope those of you who were there took away useful knowledge from the event. Andy and Simon were very keen that it should not be a day of PowerPoint and canned demos and we certainly delivered that. Did we have technical issues that meant we had to change plans on the fly? Sure! Certainly nobody we spoke to seemed to mind. All of us from Black Marble thought the concept for the day – one of interaction, audience participation and trying to build systems on the fly – should be fun and we thought it was.

I believe that the TechNet UK folks were tweeting and posting links to some of the things we talked about yesterday but I thought it would do no harm to round some of them up here.

  • When we were talking about configuring remote management of hyper-v servers I mentioned HVRemote. This is a script written by John Howard that has been really useful for Andy and myself in the past. John’s blog has lots of really useful information about hyper-v and management, although he’s not posted for a while now.
  • Also a great source of information on hyper-v and virtualisation is Ben Armstrong (VPC-Guy).
  • The Virtualisation Team Blog is a good place for product info, announcements and knowledge.
  • Richard Fennell posts regularly on Lab Manager, which builds on Hyper-V and SCVMM to deliver great things for your dev and test teams. I thought either he or Andy had blogged about how we got Lab Manager 2010 working with a hyper-v cluster but it appears not. We’ll see if we can get something written up in that space.
  • Core Configurator  (currently at version 2.0) was something that was shown as a handy tool to control some of the settings on your Hyper-V server.
  • The Microsoft iSCSI Target is a free download. The Virtualisation Team blogged on its release.
  • For those of you who played with the Surface, we have some videos on YouTube of the Retail, Concierge and O7 game that were filmed at NRF 2012.

I’m really looking forward to other camps. Andy and Simon want to keep the hands-on approach so you can look forward to playing with an installed SCVMM solution in the follow-up virtualisation camp, and the consumerisation of IT camp should be wild as we try to cover how IT pros can deal with the variety of devices that our staff (and our bosses) want to use!

As always, the page of details about the events is here!

Server Core, Hyper-V and VLANs: An Odyssey

A sensible plan

This is a torrid tale of frustration and annoyance, tempered by the fun of digging through system commands and registry entries to try and get things working.

We’ve been restructuring our network at Black Marble. The old single subnet was creaking and we were short of addresses, so we decided to subnet, with separate subnets for physical servers, virtual internal and virtual development servers, desktops, wifi etc. We don’t have a huge amount of network equipment, and we needed to put virtual servers hosted on Hyper-V on separate networks, so we decided to use VLANs.

Our new infrastructure has one clever switch that can generate all the VLANs we need, link those VLANs to IP subnets and provide all the routing between them. By doing it this way we can present any subnet to any port on any switch with careful configuration and use of the 802.1Q VLAN standard. Hyper-V servers can have a single physical interface with traffic from multiple VLANs flowing across it to the virtual switch, with individual VMs assigned to specific VLANs.

We did the heavy lifting of the network move without touching our Hyper-V cluster, placing all the NICs of all the servers on the VLAN corresponding to our old IP subnet. We then tested VLANs over the virtual switch in Hyper-V using a separate server and made sure we knew how to configure the switch and Hyper-V to make it all work.

Then we came to the cluster. Running Windows 2008 R2 Server Core.

Since we built the cluster Andy and I have come to decide that if we ever rebuild it, server core will not be used. It’s just too darn hard to configure when you really need to, and this is one of those times.

A tricky situation

Before we began to muck around with the VLAN settings, we needed to change the default gateway that the servers used. The old default gateway was the address of our ISA (now a shiny TMG) server. That box is still there, but now we have the router at the heart of the network, whose address is the new default gateway.

To change the default gateway on server core we need a command line tool. Enter Netsh, stage left.

We first need to list the interfaces so we know what we’re doing. IPConfig will list the interfaces and their IP settings. Old lags will no doubt abbreviate the netsh commands that we need next but I’ll write them out in full so they make sense.

Give me a list of the physical network adapters and their connection status: netsh interface show interface

Show me the IPV4 interfaces: netsh interface ipv4 show interface

To change the default gateway we must issue a set command with all the IP settings – just entering the gateway will not work as all the current settings get wiped first:
netsh interface ipv4 set address name="<name>" source=static address=x.x.x.x mask=255.255.255.0 gateway=x.x.x.x
Where <name> is the name shown in the IPV4 interface list, which will match the one shown in the ipconfig output that you want to change the gateway for. We’re using a class C subnet structure – your network mask may vary.
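
As a worked example with made-up addresses, changing the gateway on an interface named "Local Area Connection 2" would look something like this:

netsh interface ipv4 set address name="Local Area Connection 2" source=static address=192.168.1.20 mask=255.255.255.0 gateway=192.168.1.254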

It’s worth pointing out that we stopped the cluster service on the server whilst we did this (changing servers one by one so we kept our services running).

We had two interfaces to change. One corresponded to the NIC used to manage the server and the other corresponded to the one used by the virtual switch for Hyper-V. That accounted for two of the four NICs on our Sun X2200-M2s, with the SAN iSCSI network taking a third. The SAN used a Broadcom, the spare was the other Broadcom and the others used each of the two nVidia NICs on the Sun (that will become important shortly).

A sudden problem

Having sorted the IP networking our next step was to sort out the VLAN configuration. To do that we changed the switch port that the NIC hosting the hyper-V virtual switch was connected to from being an untagged member of only our server subnet VLAN to being a tagged member of that VLAN and a tagged member of the new VLAN corresponding to our subnet for virtual internal servers.

The next step was to set the VLAN id for a test VM (we could ignore the host as it doesn’t share the virtual switch – it has its own dedicated NIC).

The snag was, the checkbox to enable VLAN ids was disabled when we looked in Hyper-V Manager, both for the virtual switch and for the NIC in the VM.

Some investigation and checking of our test server showed that the physical network driver had a setting, Priority and VLAN, that needed to be set to enable priority and VLAN tagging of traffic, and that the default state was priority only. On a full server install that’s a checkbox in the driver settings. On server core…?

So, first of all we tried to find the device itself. Sadly, the server decided that remote management of devices from another server wasn’t going to be allowed, despite reporting that it should be. So we searched for command line tools.

To query the machine so it lists visible hardware devices: sc query type= driver (note the space before 'driver')

That will give you a list, allowing you to find the network device name.

For our server, that came back with nvenetfd – the nVidia NIC.

To use that name and find the file responsible: sc qc <device name> (in our case nvenetfd )

That returned the nvm62x64.sys driver file. Nothing about settings, but it allowed us to check the driver versions. Hold that thought, I’ll come back to it shortly.

Meanwhile

We’d also been poking at the test server, looking at the NIC settings. Logic suggested that the settings should be in the registry – all we had to do was find them.

I’ll save you the hunt:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Class\{4D36E972-E325-11CE-BFC1-08002BE10318}

That key holds all the Network Adapters. There are keys beneath that are numbered (0000, 0001, etc). The contents of those keys enabled us to figure out which key matched which adapter. Looking at the test server and comparing it to the server core Hyper-V box, we found a string value called *PriorityVlanTag which had a value of 3 on the test server (priority and vlan enabled) and 1 on the Hyper-V box. We set the Hyper-V box to 3. Nothing. No change. We rebooted. Still nothing.
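
For reference, on Server Core the easiest way to read and set that value is reg.exe. A sketch, where the 0007 subkey is only an example and you need to substitute the numbered key that matches your adapter:

reg query "HKLM\SYSTEM\CurrentControlSet\Control\Class\{4D36E972-E325-11CE-BFC1-08002BE10318}\0007" /v *PriorityVlanTag
reg add "HKLM\SYSTEM\CurrentControlSet\Control\Class\{4D36E972-E325-11CE-BFC1-08002BE10318}\0007" /v *PriorityVlanTag /t REG_SZ /d 3 /f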

Then we noticed that in the key for the NIC there was a subkey: \Ndi\params. In there was a key called *PriorityVlanTag. In that key were settings listing the options that were displayed in the GUI settings dialog, along with the value that got set. For the nVidia the value was 2, not 3. We duly changed the value and tried again. Nothing.

So we decided to update the drivers. This brings us back to where I left us earlier with the sc command.

To update a driver on server core, you need to unpack the driver files into a folder and then run the following:
pnputil -i -a <explicit path to driver inf file>
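
For example, with the extracted driver files in a local folder (the path and inf name are purely illustrative):

pnputil -i -a C:\Drivers\nvidia\nvm62x64.inf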

After failing to get any other drivers to install it looked like we had the latest version and the system was not letting go. So we did some more research on the internet (what did we ever do before the internet?).

It transpires, for those of you with Sun servers, that the nVidia cards appear not to support VLAN ids on traffic, despite having all the settings to suggest that they do.

Darn.

A way forward

Fortunately we have a spare Broadcom on each of our Hyper-V host servers. We are now switching the virtual switch binding from the nVidia to the Broadcom on each of our servers. We didn’t even have to hack around with registry settings – once we did that, the VLAN id settings in Hyper-V simply sprang into life.

The moral of this story is that if you want to use VLAN ids with Hyper-V and your server has nVidia network adapters (and certainly if it’s a Sun X2200-M2) then stop now before you lose your hair. You need to use another NIC, if you have one, or install one if you don’t. Hopefully, however, the command line tools and registry keys above will help other travellers who find themselves in a similar situation to ourselves.

Unable to remote control Hyper-V VM after installing SharePoint 2010 on Windows 7

True to form, you only discover something isn’t working when you’re in a desperate hurry. We use lots of Hyper-V VMs here at Black Marble and they are mostly running on our four node cluster. I use Failover Cluster Manager and this morning I couldn’t connect remotely to any of the Hyper-V VMs. I kept getting an error:

Virtual Machine Connection:
A connection will not be made because credentials may not be sent to the remote computer. For assistance, contact your system administrator.
Would you like to try connecting again?

A quick search suggested that the credssp settings on the host servers were broken. A quick test showed that they weren’t – the problem was local to my machine.

The only thing I had changed recently (try yesterday!) was to install SharePoint 2010 on my workstation. OK, I’ll be fair – that means a whole load of pre-requisites, so it’s not that simple!

I decided to check my machine and look at the settings which had been suggested as being wrong on the hyper-v servers. Sure enough, my workstation now had the credssp elements and sure enough, they didn’t match the example I’d found.

So if you get the same problem, copy the text below into a .reg file and import it into your registry. It should fix the problem.

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa\Credssp\PolicyDefaults]
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa\Credssp\PolicyDefaults\AllowDefaultCredentials]
"Hyper-V"="Microsoft Virtual Console Service/*"
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa\Credssp\PolicyDefaults\AllowDefaultCredentialsDomain]
"Hyper-V"="Microsoft Virtual Console Service/*"

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa\Credssp\PolicyDefaults\AllowFreshCredentials]
"Hyper-V"="Microsoft Virtual Console Service/*"

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa\Credssp\PolicyDefaults\AllowFreshCredentialsDomain]
"Hyper-V"="Microsoft Virtual Console Service/*"

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa\Credssp\PolicyDefaults\AllowFreshCredentialsWhenNTLMOnly]
"Hyper-V"="Microsoft Virtual Console Service/*"

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa\Credssp\PolicyDefaults\AllowFreshCredentialsWhenNTLMOnlyDomain]
"Hyper-V"="Microsoft Virtual Console Service/*"

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa\Credssp\PolicyDefaults\AllowSavedCredentials]
"Hyper-V"="Microsoft Virtual Console Service/*"

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa\Credssp\PolicyDefaults\AllowSavedCredentialsDomain]
"Hyper-V"="Microsoft Virtual Console Service/*"

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa\Credssp\PolicyDefaults\AllowSavedCredentialsWhenNTLMOnly]
"Hyper-V"="Microsoft Virtual Console Service/*"