When software attacks!

Thoughts and musings on anything that comes to mind

Notes from the field: Using Hyper-V NAT Switch in Windows 10

The new NAT virtual switch that can be created on Windows 10 for Hyper-V virtual machines is a wonderful thing if you're an on-the-go evangelist like myself. For more information on how to create one, see Thomas Maurer's post on the subject.

This post is not about creating a new NAT switch. It is, however, about recreating one and the pitfalls that occur, and how I now run my virtual environment with some hacky PowerShell and a useful DHCP server utility.

Problems Creating a NAT Switch? Check Assigned IP Addresses

I spent a frustrating amount of time this week trying to recreate a NAT switch after deleting it. Try as I might, every time I executed the command to create the new switch it would die. After trial and error I found that the issue was down to the address range I was using. If I created a new switch with a new address range everything worked, but only that one time: If I deleted the switch and tried again, any address range that I'd used would fail.

This got me digging.

I created a new switch with a new address range. The first thing I noticed was that I had a very long routing table: Get-NetRoute showed routes for all the address ranges I had previously created. That led me to look at the network adapter created by the virtual switch. When you create a new NAT switch the resulting adapter gets the first IP address in the range bound to it (so 192.168.1.0/24 will result in an IP of 192.168.1.1). My adapter had an IP address for every single address range I'd created and then deleted.

Obviously, when the switch is removed the IP configuration is being stored by Windows somewhere. When a new switch is created, all that old binding information is reapplied to the new switch. I'm not certain whether this is related to the interface index, the name, or something else, since when I remove and re-add the switch on my machine it always seems to get the same interface index.

A quick bit of PowerShell allowed me to rip all the IP addresses from the adapter at once. The commands below are straightforward. The first finds the adapter by name (shown in the Network Connections section of Control Panel) - replace the relevant text with the name of your adapter. From that I can find the interface index, and the second command gets all the IPv4 addresses (only IPv4 seems to have the problem here) and removes them from the interface - again, swap in your own interface index. I can then use PowerShell to remove the VMSwitch and associated NetNat object.

# Find the adapter and note its ifIndex
Get-NetAdapter -Name "vEthernet (NATSwitch)"
# Strip every IPv4 address bound to that interface (use your own interface index)
Get-NetIPAddress -InterfaceIndex 13 -AddressFamily IPv4 | Remove-NetIPAddress
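
Removing the switch and its NAT object is then straightforward (a sketch, assuming your switch is called NATSwitch; -Force and -Confirm:$false just suppress the prompts):

# Remove the virtual switch and then any NAT objects it left behind
Remove-VMSwitch -Name "NATSwitch" -Force
Get-NetNat | Remove-NetNat -Confirm:$false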

Once that's done I can happily create new virtual switches using NAT and an address range I've previously had.
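
For completeness, recreating the switch on current Windows 10 builds goes roughly like this (the names and addresses are examples - Thomas Maurer's post has the full walkthrough):

# Create an internal switch; Windows adds a matching vEthernet adapter
New-VMSwitch -Name "NATSwitch" -SwitchType Internal
# Bind the gateway address (the first address in the range) to that adapter
New-NetIPAddress -IPAddress 192.168.1.1 -PrefixLength 24 -InterfaceAlias "vEthernet (NATSwitch)"
# Create the NAT object covering the range
New-NetNat -Name "NATNetwork" -InternalIPInterfaceAddressPrefix 192.168.1.0/24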

Using DHCP on a NAT switch for ease

My next quest was for a solution to the conundrum we all have when running VMs: IP addresses. I could assign each VM a static address, but then I have to keep track of them. I also have a number of VMs in different environments that I want to run and I need external DNS to work. DHCP is the answer, but Windows 10 doesn't have a DHCP server and I don't want to build a VM just to do that.

I was really pleased to find that somebody has already written what I need: DHCP Server for Windows. This is a great utility that can run as a service or as a tray app. It uses an ini file for configuration, and by editing that ini file you can manage things like address reservations. Importantly, you can choose which interface the service binds to, which means it can be run only against the virtual network and not cause issues elsewhere.

There's only one thing missing: DNS. Whilst the DHCP server can run its own DNS if you like, it still has a static configuration for the forwarder address. In a perfect world I'd like to be able to tell it to hand my PC's primary DNS address to clients requesting an IP.
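
For illustration, the setting in question lives in the GENERAL section of dhcpsrv.ini, so what I want to automate is keeping a line like this current (the address here is just an example):

[GENERAL]
DNS_0=192.168.0.2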

Enter PowerShell, stage left...

Using my best Google-fu I tracked down a great post by Lee Holmes from a long time ago about using PowerShell to edit ini files through the old faithful Windows API calls for PrivateProfileString. I much prefer letting Windows deal with my config file to writing some complex PowerShell parser.

I took Lee's code and created a single PowerShell module with three functions as per his post, which I called Update-IniFiles.psm1. I then wrote another script that used those functions to edit the ini file for DHCP Server.
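
The heart of Lee's approach is a P/Invoke wrapper around the Windows profile-string APIs. The sketch below is my paraphrase of just the write side, not Lee's exact code - see his post for the full three functions:

# Minimal sketch of the write half of Update-IniFiles.psm1
$signature = @'
[DllImport("kernel32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
public static extern bool WritePrivateProfileString(
    string section, string key, string value, string filePath);
'@
Add-Type -MemberDefinition $signature -Name ProfileApi -Namespace Win32

function Set-PrivateProfileString
{
    param([string] $Path, [string] $Section, [string] $Key, [string] $Value)

    # Let the Windows API handle sections, quoting and file writes
    [Win32.ProfileApi]::WritePrivateProfileString($Section, $Key, $Value, $Path) | Out-Null
}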

It's dirty and not tested on anything but my machine, but here it is:

Import-Module C:\src\Update-IniFiles.psm1

# Get the first DNS server bound to the interface that holds the default IPv4 route
$dnsaddr = (Get-DnsClientServerAddress -InterfaceIndex (Get-NetRoute -DestinationPrefix 0.0.0.0/0)[0].ifIndex -AddressFamily IPv4).ServerAddresses[0]

if ($dnsaddr.Length -gt 0)
{
    Set-PrivateProfileString "C:\Program Files\DHCPSrv\dhcpsrv.ini" GENERAL DNS_0 $dnsaddr
}
else
{
    # Failsafe: fall back to Google's public DNS
    Set-PrivateProfileString "C:\Program Files\DHCPSrv\dhcpsrv.ini" GENERAL DNS_0 8.8.8.8
}

The second line is the one that may catch you out. It gets the DNS server information for the interface that is linked to the default IPv4 route. On my machine there are multiple entries returned by the Get-NetRoute command, so I grab the first one from the array. Similarly, there are multiple DNS servers returned and I only want the first of those, too. I should really expand the code and check what's returned, but this is only for my PC - edit as you need!

Just in case I get nothing back I have a failsafe which is to set the value to the Google public DNS server on 8.8.8.8.

Now I run that script first, then start my DHCP server and all my VMs get valid IP information and can talk on whatever network I am connected to, be it physical or wireless.


Unblocking a stuck Lab Manager Environment (the hard way)

This is a post so I don’t forget how I fixed access to one of our environments yesterday, and hopefully it will be useful to some of you.

We have a good many pretty complex environments deployed to our lab Hyper-V servers, controlled by Lab Manager. Operations such as starting, stopping or repairing those environments can take a long, long time, but this time we had one that was quite definitely stuck. The lab view showed the many servers in the lab with green progress bars about halfway across, but after many hours we saw no progress. The trouble is, at this point you can’t issue any other commands to the environment from within the Lab Manager console – it’s impossible to cancel the operation and regain access to the environment.

Normally in these situations, stepping from Lab Manager to the SCVMM console can help. Stopping and restarting the VMs through SCVMM can often give lab manager the kick it needs to wake up. However, this time that had no effect. We then tried restarting the TFS servers to see if they’d got stuck, but that didn’t help either.

At this point we had no choice but to roll up our sleeves and look in the TFS database. You’d be surprised (or perhaps not) at how often we need to do that…

First of all we looked in the LabEnvironment table. That showed us our environment, and the State column contained a value of Repairing.

Next up, we looked in the LabOperation table. Searching for rows where the DataspaceId column value matched that of our environment in the LabEnvironment table showed a RepairVirtualEnvironment operation.

In the tbl_JobSchedule table we found an entry where the JobId column matched the JobGuid column from the LabOperation table. The interval on that was set to 15, from which we inferred that the repair job was being retried every fifteen minutes by the system. We found another entry for the same JobId in the tbl_JobDefinition table.

Starting to join the dots up, we finally looked in the LabObject table. Searching for all the rows with the same DataspaceId as earlier returned all the lab hosts, environments and machines that were associated with the Team Project containing the lab. In this table, our environment row had a PendingOperationId which matched that of the row in the LabOperation table we found earlier.

We took the decision to attempt to revive our stuck environment by removing the stuck job. That would mean carefully working through all the tables we’d explored and deleting the rows, hopefully in the correct order. As the first part of that, we decided to change the value of the State column in the LabEnvironment table to Started, hoping to avoid crashing TFS should it try to parse all the information about the repair job we were about to slowly remove.
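
For reference, the change we made was equivalent to the following (a sketch only - the server, database and DataspaceId are placeholders for your own values, and editing the TFS database is entirely at your own risk):

# Mark the stuck environment as Started so TFS stops treating it as mid-repair
Invoke-Sqlcmd -ServerInstance "YourTfsSqlServer" -Database "Tfs_YourCollection" -Query @"
UPDATE LabEnvironment
SET State = 'Started'
WHERE DataspaceId = 42
"@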

Imagine our surprise, then, when having made that one change, TFS itself cleaned up the database, removed all the table entries referring to the repair environment job and we were immediately able to issue commands to the environment again!

Net Writer: A great UWP blog editor

I came across Net Writer some months ago, when its creator, Ed Anderson, blogged about how he'd taken the newly-released Open Live Writer code and used it in his just-started Universal Windows Platform (UWP) app for Windows 10. In January it only supported Blogger accounts, which meant that I was unable to use it. However, I checked again this weekend and discovered that it now supports a wide range of blog software, including the BlogEngine.NET that powers blogs.blackmarble.co.uk.

I'm writing this post using the app. It's great for quick posts (there's no plugin support so posting code snippets is tricky) and most importantly, it works on my phone! That's the big win as far as I'm concerned. I've been hankering for the ability to easily manage my blog from my phone for a long time and now I can.

You can find Net Writer in the Windows Store and learn more about it at Ed's blog.

My Resource Templates from demos are now on GitHub

I’ve had a number of people ask me if I can share the templates I use in my Resource Template sessions at conferences. It’s taken me a while to find the time, but I have created a repo on GitHub and there is a new Visual Studio solution and deployment project with my code.

One very nice feature that this has enabled me to provide is the same ‘Deploy to Azure’ button as you’ll find in the Azure Quickstart Templates. This meant a few changes to the templates – it turns out that GitHub is case sensitive for file requests, for example, whilst Azure Storage isn’t. The end result is that you can try out my templates in your own subscription directly from GitHub!
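
If you want the same button in your own repo's readme, it's markdown along these lines (a sketch - the URL-encoded link to your raw azuredeploy.json is the part to substitute):

[![Deploy to Azure](http://azuredeploy.net/deploybutton.png)](https://portal.azure.com/#create/Microsoft.Template/uri/<url-encoded-link-to-your-azuredeploy.json>)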

Installing Windows 10 RSAT Tools on EN-GB Media-Installed Systems

This post is an aide-memoire so I don’t have to suffer the same annoyance and frustration at what should be an easy task.

I’ve now switched to my Surface Pro 3 as my only system, thanks to the lovely new Pro 4 Type Cover and Surface Dock. That meant that I needed the Remote Server Administration Tools installing. Doing that turned out to be much more of an odyssey than it should have been and I’m writing this in the hope that it will allow others to quickly find the information I struggled to.

The RSAT tools download is, as before, a Windows Update that adds the necessary Windows Features to your installation. The trouble is, that download is EN-US only (really, Microsoft?!). If, like me, you used the EN-GB media to install, you’re in a pickle.

Running the installer appeared to work – it proceeded with no errors, albeit rather quickly – but the RSAT features were unavailable. I already had a US keyboard in my config (my pro keyboard is US), but that was obviously not enough. I added the US language, but still couldn’t get the installer to work.

I got more information on the problem by following the steps described in a TechNet article on using DISM to install Windows Updates. That led me to a pair of articles on the SysadminTips site about the installation problem, and how to fully add the US language pack to solve it.
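
For reference, the DISM route amounts to expanding the .msu and pointing Add-Package at the .cab inside - DISM then reports a far more useful error than the update installer does. Roughly (the paths are examples, and the file names assume the standard Windows 10 RSAT download):

md C:\Temp\RSAT
expand -f:* C:\Temp\WindowsTH-KB2693643-x64.msu C:\Temp\RSAT
Dism /Online /Add-Package /PackagePath:C:\Temp\RSAT\WindowsTH-KB2693643-x64.cab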

It turns out that the EN-GB media doesn’t install the full EN-US language pack files, so when you add the US language it doesn’t add enough into the OS to allow the RSAT tools to install. Frankly, that’s a mess and I hope Microsoft deals with the issue by releasing multi-language RSAT tools.

Optimising IaaS deployments in Azure Resource Templates

Unlike most of my recent posts this one won’t have code in it. Instead I want to talk about concepts and how you should look long and hard at your templates to optimise deployment.

In my previous articles I’ve talked about how nested deployments can help apply sensible structure to your deployments. I’ve also talked about things I’ve learned around what will successfully deploy and what will give errors. Nested deployments are still key, but the continuous cycle of improvements in Azure means I can now revise my guidance somewhat on what works well and what is likely to fail. Importantly, that change allows us to drastically improve our deployment time if we have lots of virtual machines.

I’d previously found that unless I nested the extensions for a VM within the JSON of the virtual machine itself, I got lots of random deployment errors. I am happy to report that situation has improved. The result of that improvement is that we can now separate the extensions deployed to a virtual machine from the machine itself. That separates the configuration of the VM, which for complex environments almost certainly has a prescribed sequence, from the deployment of the VM, which almost certainly doesn’t.

To give you a tacit example, in the latest work at Black Marble we are deploying a multi-server environment (DC, ADFS, WAP, SQL, BizTalk, Service Bus and two IIS servers) where we deploy the VMs and configure them. With my original approach, hard-fought to achieve a reliable deploy, each VM was pushed and fully configured in the necessary sequence, domain controller first.

With our new approach we can deploy all eight VMs in that environment simultaneously. We have moved our DSC and Custom Script extensions into separate resource templates and that has allowed some clever sequencing to drastically shorten the time to deploy the environment (currently around fifty minutes!).

We did this by carefully looking at what each step was doing and really focusing on the dependencies:

  • The domain controller deployment created a new virtual machine. The DSC extension then installed domain services and certificate services and created the domain. The custom script then created some certificates.
  • The ADFS deployment created a new virtual machine. The DSC extension then joined that server to the domain. The custom script then copied the certificate from the DC and configured ADFS.
  • The WAP deployment created a new virtual machine. The DSC extension then joined that server to the domain. The custom script then copied the certificate from the DC and configured the proxy for the configured ADFS service.

Hopefully you can see what we saw: Each machine had three phases of configuration and the dependencies were different, giving us three separate sequences:

  1. The VM creations are completely independent. We could do those in parallel to save time.
  2. The DSC configuration for the DC has to be done first, to create the domain. However, the ADFS and WAP servers have DSC configurations that are independent, so we could do those in parallel too.
  3. The custom script configurations have a definite sequence (DC – ADFS – WAP) and the DC script depends on the DC having run its DSC configuration first so we have our certificate services.

Once we’ve identified our work streams it’s a simple matter of declaring the dependencies in our JSON.
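
To make that concrete, here is the shape of the nested deployment resource that runs the ADFS custom script in our main template (the deployment and file names are invented for this sketch; yours will differ):

{
    "name": "AdfsCustomScript",
    "type": "Microsoft.Resources/deployments",
    "apiVersion": "2015-01-01",
    "dependsOn": [
        "AdfsDsc",
        "DcCustomScript"
    ],
    "properties": {
        "mode": "Incremental",
        "templateLink": {
            "uri": "[concat(parameters('_artifactsLocation'), '/AdfsCustomScript.json', parameters('_artifactsLocationSasToken'))]",
            "contentVersion": "1.0.0.0"
        }
    }
}

The virtual machine resources themselves carry no dependencies on each other, so all eight VMs build in parallel; only the DSC and custom script deployments are sequenced.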

Top tip: It’s a good idea to list all the dependencies for each resource. Even though the Azure Resource Manager will infer the dependency chain when it parses the template, it’s much easier for humans to look at a full list in each resource to figure out what’s going on.

The end result of this tinkering? We cut our deployment time in half. The really cool bit is that adding more VMs doesn’t add much time to our deploy as it’s the creation of the virtual machines that tends to take longest.

Convert new VM’s dynamic IP address to static with Azure Resource Templates

Over the past few posts on this blog I’ve been documenting the templates I have been working on for Black Marble. In a previous sequence I showed how you can use nested deployments to keep your templates simple and still push out complex environments. The problem with those examples is that they are very fixed in what they do. The templates create a number of virtual machines on a virtual network, with static IP addresses for each machine.

This works well for that deployment, where I have complete control. However, one of my aims is to create a series of templates for virtual machines that my developers can combine themselves to create environments that may be very different in makeup to my original. For example, what if the dev needs more servers? What if they only realise after pushing out four web servers that they need a domain controller? If I can’t guarantee the number or sequence of my servers I can’t use static addresses on creation.

The answer to this problem is actually really simple and uses the same approach as I described previously when reconfiguring a virtual network to alter the DNS address. I deploy a new virtual machine where the virtual nic for that machine requests a dynamic IP address. I then use a nested deployment to reconfigure that same nic, setting the address type to static and specifying the IP address that it was just given as the intended address. This means that I no longer care what order the Azure fabric creates the virtual machines in. That one key change over my previous template approach has halved the deployment time as I can now create all machines in parallel (the bit that takes the most time) and then configure in sequence as needed.

The markup to do this is very straightforward. First we create our nic:

{
    "name": "[concat(parameters('envPrefix'),parameters('vmName'),'nic')]",
    "type": "Microsoft.Network/networkInterfaces",
    "location": "[parameters('VirtualNetwork').Location]",
    "apiVersion": "2015-06-15",
    "dependsOn": [
    ],
    "tags": {
        "displayName": "DomainControllerNic"
    },
    "properties": {
        "ipConfigurations": [
            {
                "name": "ipconfig1",
                "properties": {
                    "privateIPAllocationMethod": "Dynamic",
                    "subnet": {
                        "id": "[concat(parameters('VirtualNetworkId'),'/subnets/',parameters('VirtualNetwork').Subnet1Name)]"
                    }
                }
            }
        ]
    }
}
You can see that I have set privateIPAllocationMethod to Dynamic.

Then we call a nested deployment from our template, passing the IP address of the nic as a parameter. That template will redefine the settings of the nic, so it’s important we pass in all the information we need: if I miss something, that setting will be removed from the nic, so be careful here. Notice that I use the reference keyword to access the privateIPAddress property of the nic.

{
    "name": "SetStaticIP",
    "type": "Microsoft.Resources/deployments",
    "apiVersion": "2015-01-01",
    "dependsOn": [
        "[concat(parameters('envPrefix'),parameters('vmName'),'nic')]",
        "[concat(parameters('envPrefix'),parameters('vmName'))]",
        "Microsoft.Insights.VMDiagnosticsSettings"
    ],
    "properties": {
        "mode": "Incremental",
        "templateLink": {
            "uri": "[concat(parameters('_artifactsLocation'), '/SetStaticIP.json', parameters('_artifactsLocationSasToken'))]",
            "contentVersion": "1.0.0.0"
        },
        "parameters": {
            "VirtualNetwork": {
                "value": "[parameters('VirtualNetwork')]"
            },
            "VirtualNetworkId": {
                "value": "[parameters('VirtualNetworkId')]"
            },
            "nicName": {
                "value": "[concat(parameters('envPrefix'),parameters('vmName'),'nic')]"
            },
            "ipAddress": {
                "value": "[reference(concat(parameters('envPrefix'),parameters('vmName'),'nic')).ipConfigurations[0].properties.privateIPAddress]"
            }
        }
    }
}

Within the template called by my nested deployment object I use the incoming parameters to reconfigure the nic. I need to change the privateIPAllocationMethod setting to Static and pass in the IP address from my parameters.

{
  "name": "[parameters('nicName')]",
  "type": "Microsoft.Network/networkInterfaces",
  "location": "[parameters('VirtualNetwork').Location]",
  "apiVersion": "2015-05-01-preview",
  "dependsOn": [
  ],
  "tags": {
    "displayName": "DomainControllerNic"
  },
  "properties": {
    "ipConfigurations": [
      {
        "name": "ipconfig1",
        "properties": {
          "privateIPAllocationMethod": "Static",
          "privateIPAddress": "[parameters('ipAddress')]",
          "subnet": {
            "id": "[concat(parameters('VirtualNetworkId'),'/subnets/',parameters('VirtualNetwork').Subnet1Name)]"
          }
        }
      }
    ]
  }
}

Finally, in my virtual machine template I pass the IP address back up the chain as an output so I can use it in other templates if needed (for example, to reconfigure the vNet DNS property with the IP address of my domain controller).

"outputs": {
  "ipAddress": {
    "value": "[reference(concat(parameters('envPrefix'),parameters('vmName'),'nic')).ipConfigurations[0].properties.privateIPAddress]",
    "type": "string"
  }
}

Using References and Outputs in Azure Resource Templates

As you work more with Azure Resource Templates you will find that you need to pass information from one resource you have created into another. This is fine if you had the information to begin with, in your variables and parameters, but what if it’s something you cannot know before deployment, such as the dynamic IP address of your new VM, or the FQDN of your new public IP address for your service?

The answer is to use References to access properties of other resources within your template. However, if you need to get information between templates then you also need to look at outputs.

A crucial tool in this process is the Azure Resource Explorer (also now available within the Azure Portal – click Browse and look for Resource Explorer) because most often you will need to look at the JSON for your provisioned resource in order to find the specific property you seek.

In the JSON below I am passing the value of the current IP address of the NIC attached to a virtual machine into a nested template as a parameter.

"ipAddress": {
    "value": "[reference(concat(parameters('envPrefix'),parameters('vmName'),'nic')).ipConfigurations[0].properties.privateIPAddress]"
}

The markup looks complex but isn’t really. The concat bit is building the name of the resource, which I do based on parameters within the resource template. Basically, you specify reference in the same way as you would a variable or parameter. You then need to provide the name of the resource you want to reference (the concat markup here, but it could just be ‘mynic’) and then the property you want, using dot notation to work your way down the object tree.

I’ve used the example above for a reason because it covers all the bases you might hit:

  1. When you look at the JSON for the deployed resource you will see a properties section (just as you do in your template). You don’t need to include this in your reference (i.e. mynic.<the property I want>, not mynic.properties.<the property I want>).
  2. My nic can have multiple IP assignments – ipConfigurations is an array – so I am using [0] to look in the first item in that array.
  3. Within the ipConfiguration is another properties object. This time I need to include it in the markup.
  4. Within the properties of the ipConfiguration is an attribute called privateIPAddress, so I specify this.

It is important to remember that I can only use reference to access resources defined within my current template.

So what if I want to pass a value back out of my current template to the one I called it with? That’s what the Outputs section of my template is for, and by and large everything in there will be a reference to a property of a resource the current template has deployed. In the code below I am passing the same IP address back out of my template:

"outputs": {
    "ipAddress": {
        "value": "[reference(concat(parameters('envPrefix'),parameters('vmName'),'nic')).ipConfigurations[0].properties.privateIPAddress]",
        "type": "string"
    }
}

Within my parent template I access that output by using the reference keyword again, this time referencing an output from the template resource. In the example below I am passing the IP address from my domain controller template into another nested deployment that will reconfigure my virtual network.

"parameters": {
    "VirtualNetwork": {
        "value": "[variables('VirtualNetwork')]"
    },
    "DNSaddress": {
        "value": "[reference('DomainController').outputs.ipAddress.value]"
    }
}
Note that this markup requires me to specify .value on the end of the reference to pass the information correctly.

References and outputs are important because they allow you to pass information between resources and nested deployments. They allow you to keep your variable count low and understandable, and your templates small and well defined with nested deployments for complex environments.

Using Objects in Azure Resource Templates

Over the past few weeks I’ve been refactoring and improving the templates that I have been creating for Black Marble to deploy environments in Azure. This is the first post of a few talking about some of the more advanced stuff I’m now doing.

You will remember from my previous posts that within an Azure Resource Template you can define parameters and variables, then use those for the configuration values within your resources. I was finding after a while that the sheer number of parameters and variables I had made the templates hard to read and understand. This was particularly true when my colleagues started to work with these templates.

The solution I decided on was to collect individual parameters and variables into objects. These allow structures of information to be passed into and within a template. Importantly for me, this approach significantly reduces the number of items listed within the variables and parameters sections of my template, making them easier to read and understand.

Creating objects within the JSON is easy. You can simply declare variables within a hierarchy in your JSON. This is similar to using arrays, but each property can be individually referenced. Below is a sample from the variables section of my current deployment template:

"VirtualNetwork": {
   "Name": "[concat(parameters('envPrefix'), 'network')]",
   "Location": "[parameters('envLocation')]",
   "Prefix": "192.168.0.0/16",
   "Subnet1Name": "Subnet-1",
   "Subnet1Prefix": "192.168.1.0/24"
},

When passing this into a nested deployment I can simply push the entire object via the parameters block of the nested deployment JSON:
"parameters": {
    "VirtualNetwork": {
        "value": "[variables('VirtualNetwork')]"
    },
    "StorageAccount": {
        "value": "[variables('StorageAccount')]"
    }
}

Within the target template I declare the parameter to be of type Object:

"VirtualNetwork": {
  "type": "object",
  "metadata": {
    "description": "object containing virtual network params"
  }
}

Then to reference an individual property I specify it after the parameter itself using dot notation for the hierarchy of properties:

"subnets": [
  {
    "name": "[parameters('VirtualNetwork').Subnet1Name]",
    "properties": {
      "addressPrefix": "[parameters('VirtualNetwork').Subnet1Prefix]"
    }
  }
]
The end result is a much better structure to my templates, where I am passing blocks of related information around. It’s easier to read, understand and debug.

Useful links from The ART of Modern Azure Deployments

Within a few days of each other I spoke about Azure Resource Templates at both DDDNorth 2015 and Integration Mondays run by the Integration User Group. I’d like to thank all of you who attended both and have been very kind in your feedback afterwards.

As promised, this post contains the useful links from my final slide.

I’ve already written posts on much of the content covered in my talk. However, since I’m currently sat on a transatlantic flight you can expect a series of posts to follow this on topics such as objects in templates, outputs and references.

If you missed my Integration Monday session, the organisers recorded it and you can watch it online.

Azure PowerShell 1.0 Preview
https://azure.Microsoft.com/en-us/blog/azps-1-0-pre/

Azure QuickStart Templates
https://github.com/Azure/azure-quickstart-templates
http://azure.microsoft.com/en-us/documentation/templates/

ARM Template Documentation
https://msdn.microsoft.com/en-us/library/azure/dn835138.aspx

Azure Resource Explorer
https://resources.azure.com/

“Azure Resource Manager DevOps Jumpstart”
https://www.microsoftvirtualacademy.com/en-US/training-courses/azure-resource-manager-devops-jump-start-8413