Having automated builds is essential to any good development process. Irrespective of the build engine in use (VSTS, Jenkins, etc.), you need a means to create the VMs that run the builds.
You can of course do this by hand, but in many ways that just extends the old ‘it works on my PC’ problem – the developer can only build it on their own PC – because it is hard to be sure which versions of tools are in use. This is made worse by the fact that it is too tempting for someone to remote onto the build VM and update some SDK or tool without anyone else’s knowledge.
To address this problem we need a means to create our build VMs in a consistent, standardised manner i.e. a configuration-as-code model.
At Black Marble we have been using Lability to build our lab environments, and there is no reason we could not use the same system to create our VSTS build agent VMs:
- Create base VHD disk images with patched copies of Windows installed (which we update on a regular basis)
- Use Lability to provision all the required tools, including handling all the reboots these installers require. Note that rebooting and restarting at the correct point, for non-DSC based resources, is not Lability’s strongest feature; you have to do all the work in custom code
However, there is an alternative. Microsoft have made their Packer-based method of creating the VSTS Azure-hosted agents available on GitHub. Hence, it made sense to me to base our build agent creation system on this standardised image, allowing easier migration of builds between private and hosted build agent pools, whether in the cloud or on premises, because they have the same tools installed.
The Basic Process
To enable this way of working I forked the Microsoft repo and modified the Packer JSON configuration file to build Hyper-V based images as opposed to Azure ones. I aimed to make as few changes as possible, to ease the process of keeping my forked repo in sync with future changes to the Microsoft standard build agent. In effect I replaced the builders section of the Packer configuration and left the provisioners unaltered.
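As an illustration (not the exact content of my fork), the replaced builders section might look something like the following hyperv-iso sketch; the ISO path, checksum, sizing and credentials are placeholder assumptions, and some option names vary between Packer versions:

```json
{
  "builders": [
    {
      "type": "hyperv-iso",
      "iso_url": "./iso/WindowsServer2016.iso",
      "iso_checksum_type": "sha256",
      "iso_checksum": "REPLACE-WITH-YOUR-ISO-CHECKSUM",
      "secondary_iso_images": ["./iso/answer.iso"],
      "generation": 2,
      "cpus": 4,
      "memory": 8192,
      "disk_size": 130048,
      "switch_name": "Default Switch",
      "communicator": "winrm",
      "winrm_username": "Administrator",
      "winrm_password": "PasswordSetInAnswersFile",
      "winrm_timeout": "10h",
      "shutdown_command": "C:\\Windows\\System32\\Sysprep\\sysprep.exe /generalize /oobe /shutdown /quiet"
    }
  ]
}
```

The provisioners section from the Microsoft repo is left as-is, so the same scripts install the same tools as on the hosted agents.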
So, in doing this I learnt a few things.
Which ISO to use?
Make sure you use a current Operating System ISO. First, it saves time as it is already patched; but more importantly the provisioner scripts in the Microsoft configuration assume certain Windows features are available for installation (specifically Containers with Docker support) that were not present on the 2016 RTM ISO.
Building an Answer.ISO
In the sample I found for the Packer hyperv-iso builder, the AutoUnattended.XML answers file is provided on an ISO (as opposed to a virtual floppy, as floppies are not supported on Gen2 Hyper-V VMs). This means that when you edit the answers file you need to rebuild the ISO before running Packer.
The sample script to do this had lines to ‘Enable UEFI and disable Non-UEFI’; I found that if these lines of PowerShell were run, the answers file on the ISO was ignored, so I had to comment them out. It seems an AutoUnattended.XML answers file edited in VSCode has the correct encoding by default.
I also found that if I ran the PowerShell script to create the ISO from within VSCode’s integrated terminal, the ISO builder mkisofs.exe failed with an internal error. However, it worked fine from a default PowerShell window.
Installing the .NET 3.5 Feature
When a provisioner tried to install the .NET 3.5 feature using the command

Install-WindowsFeature -Name NET-Framework-Features -IncludeAllSubFeature

it failed. This seems to be a bug in Windows Server 2016, and the workaround is to specify the -Source location on the install media:

Install-WindowsFeature -Name NET-Framework-Features -IncludeAllSubFeature -Source "D:\sources\sxs"

Once the script was modified in this manner it ran without error.
Well how long does it take?
The Packer process is slow; Microsoft say that for an Azure VM it can take over 8 hours, and a Hyper-V VM is no faster.
I also found the process a bit brittle. I had to restart the process a good few times because:
- I ran out of disk space (not surprisingly, this broke the process)
- The new VM did not get a DHCP-assigned IP address when connected to the network via the Hyper-V Default Switch. A reboot of my Hyper-V host PC fixed this.
- Packer decided the VM had rebooted when it had not – usually due to a slow install of some feature or network issues
- My laptop went to sleep and caused one of the above problems
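Given how often a run failed, it is worth knowing Packer’s error-handling options; for example, -on-error=ask pauses on a failure so you can inspect the VM rather than losing hours of work (the template filename here is a placeholder):

```
packer build -on-error=ask vsts-agent-hyperv.json
```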
So I have a SysPrep’d VHD, what do I do with it now?
At this point I have options for what to do with this new exported Hyper-V image. I could manually create build agent VM instances.
However, it appeals to me to use this new VHD as a base image for Lability, replacing our default ‘empty patched Operating System’ image creation system, so I have a nice consistent way to provision VMs onto our Hyper-V servers.
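For example, Lability lets you register custom media that lab configurations can then reference; a hedged sketch, assuming Lability’s Register-LabMedia cmdlet, with a hypothetical Id, path and filename:

```powershell
# Hypothetical example: register the Packer-built, SysPrep'd VHD as Lability custom media
Register-LabMedia -Id 'VSTSBuildAgent2016' `
                  -MediaType VHD `
                  -Architecture x64 `
                  -Uri 'file://C:/LabMedia/VSTSBuildAgent2016.vhdx' `
                  -Filename 'VSTSBuildAgent2016.vhdx' `
                  -Description 'Packer-built VSTS build agent base image'

# Nodes in the lab configuration data can then reference Media = 'VSTSBuildAgent2016'
```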