Using VSTS Gates to help improve my deployment pipeline of VSTS Extensions to the Visual Studio Marketplace

My existing VSTS CI/CD process has a problem: the deployment of a VSTS extension, from the moment it is uploaded to when its tasks are available to a build agent, is not instantaneous. The process can potentially take a few minutes to roll out. The problem this delay causes is a perfect candidate for using VSTS Release Gates; using a gate to make sure the expected version of a task is available to an agent before running the next stage of the CD pipeline, e.g. waiting after deploying a private build of an extension before trying to run functional tests.

The problem is how to achieve this with the current VSTS gate options?

What did not work

My first thought was to use the Invoke HTTP REST API gate, calling the VSTS API https://<your vsts instance name>.visualstudio.com/_apis/distributedtask/tasks/<GUID of Task>. This API call returns a block of JSON containing details about the deployed task visible to the specified VSTS instance. In theory you can parse this data with a JSONPATH query in the gate's success criteria parameter to make sure the correct version of the task is deployed e.g. eq($.value[?(@.name == "BuildRetensionTask")].contributionVersion, "1.2.3")

However, there is a problem. At this time the Invoke HTTP REST API gate does not support the == equality operator in its success criteria field. I understand this will be addressed in the future, but the fact it is currently missing is a blocker for my current needs.

Next I thought I could write a custom VSTS gate. These are basically ‘run on server’ tasks with a suitably crafted JSON manifest. The problem here is that this type of task does not allow any code (Node.js or PowerShell) to be run; they only have a limited capability to invoke HTTP APIs or write messages to a service bus. So I could not implement the code I needed to process the API response. Another dead end.

What did work

The answer, after a suggestion from the VSTS Release Management team at Microsoft, was to try the Azure Function gate.

To do this I created a new Azure Function. I did this using the Azure Portal, picking the consumption billing model, C# and securing the function with a function key, basically the default options.

I then added the C# function code (stored in GitHub) to my newly created Azure Function. This function code takes

  • The name of the VSTS instance
  • A personal access token (PAT) to access the VSTS instance
  • The GUID of the task to check for
  • And the version to check for

It then returns a JSON block with true or false based on whether the required task version can be found. If any of the parameters are invalid an API error is returned.

By passing in this set of arguments my idea was that a single Azure Function could be used to check for the deployment of all my tasks.
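
The real implementation is the C# stored in GitHub, but stripped right back the check it performs amounts to something like the PowerShell sketch below. Treat this as an illustration only: the field names (value, contributionVersion) are assumed from the API response used in the JSONPATH example earlier, not copied from the function itself.

# Illustrative PowerShell sketch of the check the Azure Function performs (the real code is C#)
param(
    [string]$instance,   # VSTS instance name
    [string]$pat,        # personal access token
    [string]$taskguid,   # GUID of the task to check for
    [string]$version     # version to check for
)
# Basic auth header built from the PAT (empty username)
$headers = @{ Authorization = ("Basic {0}" -f [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(":$pat"))) }
$uri = "https://$instance.visualstudio.com/_apis/distributedtask/tasks/$taskguid"
$response = Invoke-RestMethod -Uri $uri -Headers $headers -Method Get
# 'Deployed' is true if any returned entry matches the requested version
$deployed = @($response.value | Where-Object { $_.contributionVersion -eq $version }).Count -gt 0
@{ Deployed = $deployed } | ConvertTo-Json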

Note: Now I do realise I could also create a release pipeline for the Azure Function, but I chose to just create it via the Azure Portal. I know this is not best practice, but this was just a proof of concept. As usual the danger here is that this proof of concept might be one of those that is too useful and lives forever!

To use the Azure Function

Using the Azure function is simple

    • Added an Azure Function gate to a VSTS release
    • Set the URL parameter for the Azure Function. This value can be found from the Azure Portal. Note that you don’t need the Function Code query parameter in the URL as this is provided with the next gate parameter. I chose to use a variable group variable for this parameter so it was easy to reuse between many CD pipelines
    • Set the Function Key parameter for the Azure Function, again you get this from the Azure Portal. This time I used a secure variable group variable
    • Set the Method parameter to POST
    • Set the Header content type as JSON
{
      "Content-Type": "application/json"
}
    • Set the Body to contain the details of the VSTS instance and Task to check. This time I used a mixture of variable group variables, release specific variables (the GUID) and environment build/release variables. The key here is I got the version from the primary release artifact $(BUILD.BUILDNUMBER) so the correct version of the tasks is tested for automatically
{
     "instance": "$(instance)",
     "pat": "$(pat)",
     "taskguid": "$(taskGuid)",
     "version": "$(BUILD.BUILDNUMBER)"
}
    • Finally set the Advanced/Completion Event to ApiResponse with the success criteria of
    eq(root['Deployed'], 'true')

Once this was done I was able to use the Azure function as a VSTS gate as required

image

Summary

So I now have a gate that makes sure that for a given VSTS instance a task of a given version has been deployed.

If you need this functionality all you need to do is create your own Azure Function instance, drop in my code and configure the VSTS gate appropriately.

When the == equality operator becomes available for JSONPATH in the REST API gate I might consider swapping back to a basic REST call, as it is less complex to set up, but we shall see. The Azure Function model does appear to work well.

Fixing a ‘git-lfs filter-process: git-lfs: command not found’ error in Visual Studio 2017

I am currently looking at the best way to migrate a large legacy codebase from TFVC to Git. There are a number of ways I could do this, as I have posted about before. Obviously, I have ruled out anything that tries to migrate history as ‘that way hell lies’; if people need to see history they will be able to look at the archived TFVC instance. TFVC and Git are just too different in the way they work to make history migrations worth the effort in my opinion.

So as part of this migration and re-structuring I am looking at using Git Submodules and Git Large File Storage (LFS) to help divide the monolithic code base into front-end, back-end and shared service modules; using LFS to manage large media files used in integration test cases.

From the PowerShell command prompt, using Git 2.16.2, all my trials were successful; I could achieve what I wanted. However, when I tried accessing my trial repos using Visual Studio 2017 I saw issues.

Submodules

Firstly there are known limitations with Git submodules in Visual Studio Team Explorer. At this time you can clone a repo that has submodules, but you cannot manage the relationships between repos or commit to a submodule from inside Visual Studio.

This is unlike the Git command line, which allows actions to span a parent and child repo with a single command; Git just works it out if you pass the right parameters.

There is a request on UserVoice to add these functions to Visual Studio, vote for it if you think it is important, I have.

Large File Storage

The big problem I had was with LFS, which has been supported in Visual Studio since 2015.2.

Again, from the command line operations were seamless: I just installed Git 2.16.2 via Chocolatey and got LFS support without installing anything else. So I was able to enable LFS support on a repo

git lfs install
git lfs track '*.bin'
git add .gitattributes

and manage standard and large (.bin) files without any problems

However, when I tried to make use of this cloned LFS-enabled repo from inside Visual Studio by staging a new large .bin file I got an error ‘git-lfs filter-process: git-lfs: command not found’

image

Reading around this error suggested that the separate git-lfs package needed to be installed. I did this, making sure that the path to git-lfs.exe (C:\Program Files\Git LFS) was in my PATH, but I still had the problem.

This is where I got stuck and hence needed to get some help from the Microsoft Visual Studio support team.

After a good deal of tracing they spotted the problem. The path to git-lfs.exe was at the end of my rather long PATH list. It seems Visual Studio was truncating this list of paths, so, as the error suggested, Visual Studio could not find git-lfs.exe.

It is unclear to me whether the command prompt just did not suffer this PATH length issue, or was using a different means to resolve the LFS features. It should be noted that from the command line the LFS commands were available as soon as I installed Git 2.16.2; I did not have to add the separate Git LFS package.

So the fix was simple: move the entry for ‘C:\Program Files\Git LFS’ to the start of my PATH list, and everything worked in Visual Studio.
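
If you want to do the same reshuffle without clicking through the System Properties dialogs, a quick PowerShell sketch of the idea is shown below (run it from an elevated prompt; the Git LFS path is the default install location on my machine, so adjust to suit, and check your own PATH before rewriting it).

# Move the Git LFS folder to the front of the machine-level PATH (sketch only; run elevated)
$lfs = 'C:\Program Files\Git LFS'
$path = [Environment]::GetEnvironmentVariable('Path', 'Machine')
# Drop any existing Git LFS entry, then put it back at the front of the list
$rest = ($path -split ';' | Where-Object { $_ -and $_ -ne $lfs }) -join ';'
[Environment]::SetEnvironmentVariable('Path', "$lfs;$rest", 'Machine')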

It should be noted I really need to look at whether I need everything in my somewhat long PATH list. It has been too long since I re-paved my laptop; there are a lot of strange bits installed.

Thanks again to the Visual Studio Support team for getting me unblocked on this.

Building private VSTS build agents using the Microsoft Packer based agent image creation model

Background

Having automated builds is essential to any good development process. Irrespective of the build engine in use (VSTS, Jenkins etc.) you need to have a means to create the VMs that run the builds.

You can of course do this by hand, but in many ways you are just extending the old ‘it works on my PC’ problem, where the developer can build it only on their own PC, i.e. it is hard to be sure what versions of tools are in use. This is made worse by the fact it is too tempting for someone to remote onto the build VM to update some SDK or tool without anyone else’s knowledge.

In an endeavour to address this problem we need a means to create our build VMs in a consistent, standardised manner, i.e. a configuration-as-code model.

At Black Marble we have been using Lability to build our lab environments and there is no reason we could not use the same system to create our VSTS build agent VMs:

  • Creating base VHD disk images with patched copies of Windows installed (which we update on a regular basis)
  • Using Lability to provision all the required tools, including all the associated reboots these installers would require. Note that rebooting and restarting at the correct place, for non-DSC based resources, is not Lability’s strongest feature, i.e. you have to do all the work in custom code

However, there is an alternative. Microsoft have made their Packer based method of creating the VSTS Azure hosted agents available on GitHub. Hence, it made sense to me to base our build agent creation system on this standardised image, thus allowing easier migration of builds between private and hosted build agent pools, whether in the cloud or on premises, because they have the same tools installed.

The Basic Process

To enable this way of working I forked the Microsoft repo and modified the Packer JSON configuration file to build Hyper-V based images as opposed to Azure ones. I aimed to make as few changes as possible to ease the process of keeping my forked repo in sync with future changes to the Microsoft standard build agent; in effect replacing the builder section of the Packer configuration and leaving the provisioners unaltered.
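
Once the builder section is swapped over, producing an image is just a normal Packer run against the modified template. A minimal sketch of the command is below; the template file name and builder name are placeholders rather than the actual names in my fork.

# Run only the Hyper-V builder defined in the forked template (names are placeholders)
& packer.exe build -only=hyperv-iso .\vsts-agent-hyperv.json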

So, in doing this I learnt a few things

Which ISO to use?

Make sure you use a current Operating System ISO. First it saves time as it is already patched; but more importantly the provisioner scripts in the Microsoft configuration assume certain Windows features are available for installation (specifically Containers with Docker support) that were not present on the 2016 RTM ISO.

Building an Answer.ISO

In the sample I found for the Packer hyperv-iso builder the AutoUnattended.XML answers file is provided on an ISO (as opposed to a virtual floppy, as floppies are not supported on Gen 2 Hyper-V VMs). This means when you edit the answers file you need to rebuild the ISO prior to running Packer.

The sample script to do this has lines to ‘Enable UEFI and disable Non EUFI’; I found that if these lines of PowerShell were run the answers file on the ISO was ignored, so I had to comment them out. It seems an AutoUnattended.XML answers file edited in VSCode has the correct encoding by default.

I also found that if I ran the PowerShell script to create the ISO from within VSCode’s integrated terminal the ISO builder mkisofs.exe failed with an internal error. However, it worked fine from a default PowerShell window.
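
For reference, once those UEFI lines are commented out the ISO creation boils down to little more than a single mkisofs call, something along the lines of the sketch below (the staging folder and volume label are just my choices, and you may need to adjust the options to match the sample script you are using).

# Build the answer ISO from a staging folder containing AutoUnattended.XML (sketch; paths and label are placeholders)
& .\mkisofs.exe -J -R -V 'ANSWER' -o .\answer.iso .\iso-content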

Installing the .NET 3.5 Feature

When a provisioner tried to install the .NET 3.5 feature using the command

Install-WindowsFeature -Name NET-Framework-Features -IncludeAllSubFeature

it failed.

It seems this is a bug in Windows Server 2016 and the workaround is to specify the -Source location on the install media

Install-WindowsFeature -Name NET-Framework-Features -IncludeAllSubFeature -Source "D:\sources\sxs"

Once the script was modified in this manner it ran without error

Well how long does it take?

The Packer process is slow; Microsoft say that for an Azure VM it can take over 8 hours. A Hyper-V VM is no faster.

I also found the process a bit brittle. I had to restart the process a good few times as….

  • I ran out of disk space (not surprisingly this broke the process)
  • The new VM did not get a DHCP-assigned IP address when connected to the network via the Hyper-V Default Switch. A reboot of my Hyper-V host PC fixed this.
  • Packer decided the VM had rebooted when it had not, usually due to a slow install of some feature or network issues
  • My laptop went to sleep and caused one of the above problems

So I have a SysPrep’d VHD, now what do I do with it?

At this point I have options for what to do with this newly exported Hyper-V image. I could manually create build agent VM instances.

However, it appeals to me to use this new VHD as a base image for Lability, replacing our default ‘empty patched Operating System’ image creation system, so I have a nice consistent way to provision VMs onto our Hyper-V servers.

Versioning your ARM templates within a VSTS CI/CD pipeline with Semantic Versioning

I wrote a post recently, Versioning your ARM templates within a VSTS CI/CD pipeline. I have realised since writing it that it does not address the issue of versioning your ARM templates using Semantic Versioning. The JSON versioning task I used did not support the option of taking a version number as-is, rather than extracting a numeric version number (e.g. 1.2.3.4) from a VSTS build number. I have modified my Version JSON file task to address this limitation.

This change to my task allows it to be used with the GitVersion VSTS task to manage the semantic versioning. For more details on GitVersion see the project documentation.

Hence, I am now able to generate a version number using GitVersion and pass it into the versioning task directly using a build variable.

  • Add the GitVersion task at the start of the build, with its default parameters
  • Add my JSON versioning task with default parameters apart from
    • Version Number set to $(GitVersion.SemVer)
    • Use Version Number without Processing (Advanced) checked
    • Filename Pattern (Advanced) set to azuredeploy.json
    • Field to update (Advanced) set to contentVersion

image

In the logs you see output similar to the following

Source Directory: E:\Build2\_work\361\s
Filename Pattern: azuredeploy.json
Version Number/Build Number: 0.1.0-unstable.843
Use Build Number Directly: true
Version Filter to extract build number: \d+\.\d+\.\d+\.\d+
Version Format for JSON File: {1}.{2}.{3}
Field to update (all if empty): contentVersion
Output: Version Number Parameter Name: OutputedVersion
Using the provided build number without any further processing
JSON Version Name will be: 0.1.0-unstable.843
Will apply 0.1.0-unstable.843 to 12 files.
Updating the field 'contentVersion' version
Existing Tag: contentVersion": "1.0.0.0"
Replacement Tag: contentVersion": "0.1.0-unstable.843"
…

Creating test data for my Generate Release Notes Extension for use in CI/CD process

As part of the continued improvement to my CI/CD process I needed to provide a means so that whenever I test my Generate Release Notes task, within its CI/CD process, new commits and work item associations are made. This is required because the task only picks up new commits and work items since the last successful run of a given build. So if the last release of the task extension was successful then the next set of tests would have no associations to go in the release notes, not exactly exercising all the code paths!

In the past I added this test data by hand, a new manual commit to the repo prior to a release; but why have a dog and bark yourself? Better to automate the process.

This can be done using a PowerShell file, run inline or stored in the build's source repo and run within a VSTS build. The code is shown below; you can pass in the required parameters, but I set sensible defaults for my purposes.
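
I won't reproduce my full script here, but its core amounts to the sketch below: make a trivial change, commit it with a #<work item ID> mention so the commit gets associated with an existing work item when pushed, and push it back using the agent's OAuth token. The file name and work item ID are placeholders, and the push relies on the 'Allow scripts to access OAuth token' option being enabled on the build.

# Sketch of the test data generation script (placeholders throughout)
param(
    [string]$repoFolder = $env:BUILD_SOURCESDIRECTORY,
    [string]$workItemId = "1234"   # an existing work item to associate (placeholder)
)
Set-Location $repoFolder
git config user.email "buildagent@example.com"
git config user.name "Build Agent"
# Make a trivial change so there is something new to commit
Add-Content -Path "dummydata.txt" -Value (Get-Date)
git add dummydata.txt
# The #<id> mention associates the commit with the work item once pushed to VSTS
git commit -m "Adding test data for release notes testing #$workItemId"
# Push back using the agent's OAuth token
git -c http.extraheader="AUTHORIZATION: bearer $env:SYSTEM_ACCESSTOKEN" push origin HEAD:master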

For this PowerShell code to work you do need to make some security changes to allow the build agent service user to write to the Git repo. This is documented by Microsoft.

The PowerShell task to run this code is placed in a build as the only task

image

This build is then triggered as part of the release process

image

Note that the triggering of this build has to be such that it runs on a non-blocking build agent as discussed in my previous posts. In my case I trigger the build to add the extra commits and work items just before triggering the validation build on my private Azure hosted agent.

Now, there is no reason you can’t just run the PowerShell directly within the release if you want to. I chose to use a build so that the build could be reused between different VSTS extension CI/CD pipelines; remember I have two Generate Release Notes extensions, PowerShell and Node.js based.

So another step to fully automating the whole release process.

How I fixed my problem that my VSTS Build Extension was too big to upload to the Marketplace

Whilst adding a couple of new tasks to my VSTS Manifest Versioning Extension I hit the problem that the VSIX package became too big to upload to the Marketplace.

The error I saw in my CI/CD VSTS pipeline was

##vso[task.logissue type=error;]error: 
Failed Request: Bad Request(400) - 
The extension package size '23255292 bytes' exceeds the 
maximum package size '20971520 bytes'

This extension now contains eleven tasks, four of which are now Node.js based as opposed to PowerShell. The issue here is that whereas PowerShell tasks are usually a file or two of code and maybe a PSM1 module, Node.js based ones, as well as my logic, always have a node_modules folder full of NPM modules needed for production use. This had caused a good deal of bloat in the VSIX package.

The solution was to address my poor management of NPM modules. As many of the versioning tasks are similar in logical structure i.e.

  1. They get a list of files
  2. Extract a version number from the build number
  3. Then apply this to one or more files in a product/task specific manner

there has been some cut and paste coding. This means that I had NPM modules in a task's package.json file that were not needed for that task. I could address this manually, but there is an NPM module to help: DepCheck.

First install the DepCheck module

npm install depcheck -g

then run depcheck from the command line whilst within your task’s folder. This returns a list of modules listed in the package.json that are not referenced in the code files. These can then be removed from the package.json. e.g. I saw

Unused dependencies
* @types/node
* @types/q
* Buffer
* fs
* request
* tsd
Unused devDependencies
* @types/chai
* @types/mocha
* @types/node
* mocha-junit-reporter
* ts-loader
* ts-node
* typings

The important ones to focus on are the first block (non-development references), as these are the ones that are packaged with the production code in the VSIX; I was already pruning the node_modules folder of devDependencies prior to creating the VSIX using the command

npm prune --production

I did find some of the listed modules strange, as I knew they really were needed, and a quick test of removing them did show the code failed if they were missing. These are what depcheck’s documentation calls false alerts.

I found I could remove the @types/xxx and tsd references, which were the big ones and are only needed in development when working in TypeScript. Once these were removed for all four of my Node.js based tasks my VSIX dropped in size from 22MB to 7MB. So problem solved.

Announcing a new VSTS Extension for Starting and Stopping Azure DevTest Labs VMs

Background

I have recently been posting on using Azure to host private VSTS build/release agents to avoid agent queue deadlocking issues with more complex release pipelines.

One of the areas discussed is reducing cost of running a private agent in Azure by only running the private agent within a limited time range, when you guess it might be needed. I have done this using DevTest Labs Auto Start and Auto Stop features. This works, but is it not better to only start the agent VM when it is actually really needed, not when you guess it might be? I need this private agent only when working on my VSTS extensions, not something I do everyday. Why waste CPU cycles that are never used?

New VSTS Extension

I had expected there would already be a VSTS extension to Start and Stop DevTest Labs VMs, but the Microsoft provided extension for DevTest Labs only provides tasks for the creation and deletion of VMs within a lab.

So I am pleased to announce the release of my new DevTest Labs VSTS Extension to fill this gap, adding tasks to start and stop a DevTest Lab VM on demand from within a build or a release.

My Usage

I have been able to use the tasks in this extension to start my private Azure hosted agent only when I need it for functional tests within a release.

However, they could equally be used for a variety of different testing scenarios where any form of pre-built/configured VM needs to be started or stopped, as opposed to the slower process of creating/deploying a new DevTest Labs VM.

In my case I added an extra agent phase to my release pipeline to start the VM prior to it being needed.

image

I could also have used another agent phase to stop the VM once the tests were completed. However, I made the call to leave the VM running and let DevTest Labs’ Auto Stop shut it down at the end of the day. The reason for this is that VM start up and shutdown are still fairly slow, a minute or two, and I often find I need to run a set of functional tests a few times during my development cycle; so it is a bit more efficient to leave the VM running until the end of the day, only taking the start-up cost once.

You may of course have different needs, hence my providing both the Start and Stop tasks.

Development

This new extension aims to act as a supplement to the Microsoft provided Azure DevTest Lab Extension. Hence to make development and adoption easier, it uses exactly the same source code structure and task parameters as the Microsoft provided extension. The task parameters being:

  • Azure RM Subscription – Azure Resource Manager subscription to configure before running.
  • Source Lab VM ID – Resource ID of the source lab VM. The source lab VM must be in the selected lab, as the custom image will be created using its VHD file. You can use any variable such as $(labVMId), the output of calling Create Azure DevTest Labs VM, that contains a value in the form /subscriptions/{subId}/resourceGroups/{rgName}/providers/Microsoft.DevTestLab/labs/{labName}/virtualMachines/{vmName}.

The issue I had was that the DevTest Labs PowerShell API did not provide a command to start or stop a VM in a lab. I needed to load the Azure PowerShell library to use the Invoke-AzureRmResourceAction command. This requires you to first call Login-AzureRmAccount to authenticate prior to calling Invoke-AzureRmResourceAction itself, which meant a bit of extra code to get and reuse the AzureRM endpoint to find the authentication details.

# Get the parameters
$ConnectedServiceName = Get-VstsInput -Name "ConnectedServiceName"
# Get the end point from the name passed as a parameter
$Endpoint = Get-VstsEndpoint -Name $ConnectedServiceName -Require
# Get the authentication details
$clientID = $Endpoint.Auth.parameters.serviceprincipalid
$key = $Endpoint.Auth.parameters.serviceprincipalkey
$tenantId = $Endpoint.Auth.parameters.tenantid
$SecurePassword = $key | ConvertTo-SecureString -AsPlainText -Force
$cred = new-object -typename System.Management.Automation.PSCredential -argumentlist $clientID, $SecurePassword
# Authenticate
Login-AzureRmAccount -Credential $cred -TenantId $tenantId -ServicePrincipal
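
With the login done, the actual start (or stop) is then just a single resource action call along the lines of the sketch below, assuming a $labVMId variable holding the resource ID in the form shown above; the API version is an assumption, so check the one your lab accepts.

# Invoke the 'start' action on the lab VM ('stop' for the Stop task); API version is an assumption
Invoke-AzureRmResourceAction -ResourceId $labVMId -Action "start" -ApiVersion "2016-05-15" -Force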

It is important to note that for this code to work you have to set the task’s task.json to run PowerShell3 and package the PowerShell VSTS API module in with the task.

"execution": {
  "PowerShell3": {
     "target": "$(currentDirectory)\\StartVM.ps1",
     "argumentFormat": "",
     "workingDirectory": "$(currentDirectory)"
    }
  }

If the folder structure is correct, changing to PowerShell3 will automatically load the required module from the task's ps_modules folder.

In Summary

I have certainly found this extension useful, and I have learnt more than I had expected I would about VSTS endpoints and Azure authentication.

Hope it is useful to you too.

Creating a VSTS build agent on an Azure DevLabs Windows Server VM with no GUI – Using Artifacts

In my last post I discussed creating a private VSTS build agent within an Azure DevTest Lab on a VM with no GUI. It was pointed out to me today, by Rik Hepworth, that I had overlooked an obvious alternative way to get the VSTS agent onto the VM i.e. not having to use a series of commands at an RDP connected command prompt.

The alternative I missed is to use a DevTest Lab Artifact; in fact there is such an artifact available within the standard set in DevTest Labs. You just provide a few parameters and you are good to go.

image

Well you should be good to go, but there is an issue.

The PowerShell used to extract the downloaded Build Agent ZIP file does not work on a non-UI based Windows VM. The basic issue here is discussed in this post by my fellow ALM MVP Ricci Gian Maria. Luckily the fix is simple; I just used the same code to do the extraction of the ZIP file that I used in my previous post.

I have submitted this fix as a Pull Request to the DevTest Lab Team so hopefully the standard repository will have the fix soon and you won’t need to do a fork to create a private artifacts repo as I have.

Update 1st December 2017: The Pull Request to the DevTest Lab Team with the fixed code has been accepted and the fix is now in the master branch of the public artifact repo, so it is automatically available to all.

Creating a VSTS build agent on an Azure DevLabs Windows Server VM with no GUI

Updates: see my later post ‘Creating a VSTS build agent on an Azure DevLabs Windows Server VM with no GUI – Using Artifacts’ for a simpler way to get the agent onto the VM.


As I posted recently, I have been trying to add more functional tests to the VSTS based release CI/CD pipeline for my VSTS extensions, and as I noted, depending on how you want to run your tests (e.g. triggering sub-builds), you can end up with scheduling deadlocks where a single build agent is both scheduling the release and trying to run a new build. The answer is to use a second build agent in a different agent pool, e.g. if the release is running on the hosted build agent use a private build agent for the sub-build, or of course just pay for more hosted build instances.

The problem with a private build agent is where to run it. As my extensions are a personal project I don’t have a corporate Hyper-V server to run any extra private agents on, as I would have for company projects. My MVP MSDN Azure benefits are the obvious answer, but I want any agents to be cheap to run, so I don’t burn through all my MSDN credits for a single build agent.

To this end I created a Windows Server 2016 VM in DevLabs (I prefer to create my VMs in DevLabs as it makes tidying up my Azure account easier) using an A0 sized VM. This is tiny, so cheap; I don’t intend to ever do a build on this agent, just schedule releases, so it needs few if any tools installed and the size should not be an issue. To further reduce costs I used the auto start and stop features on the VM so it is only running during the hours I might be working. So I get an admittedly slow and limited private build agent, but for less than $10 a month.

As the VM is small it makes sense to not run a GUI. This means when you RDP to the new VM you just get a command prompt. So how do you get the agent onto the VM and setup? You can’t just open a browser to VSTS or cut and paste a file via RDP, and I wanted to avoid the complexity of having to open up PowerShell remoting on the VM.

The process I used was as follows:

  1. In VSTS I created a new Agent Pool for my Azure hosted build agents
  2. In the Azure portal, DevLabs I created a new Windows Server 2016 (1709) VM
  3. I then RDP’d to my new Azure VM and, in the open Command Prompt, ran PowerShell
    powershell
  4. As I was in my user's home directory, I cd'd into the Downloads folder
    cd downloads
  5. I then ran the following PowerShell command to download the agent (you can get the current URI for the agent from your VSTS Agent Pool ‘Download Agent’ feature, but an old version will do as it will auto-update)
    invoke-webrequest -UseBasicParsing -uri https://github.com/Microsoft/vsts-agent/releases/download/v2.124.0/vsts-agent-win7-x64-2.124.0.zip -OutFile vsts-agent-win7-x64-2.124.0.zip
  6. You can then follow the standard agent setup instructions from the VSTS Agent Pool ‘Download Agent’ feature
    mkdir \agent ; cd \agent
    Add-Type -AssemblyName System.IO.Compression.FileSystem ; [System.IO.Compression.ZipFile]::ExtractToDirectory("$HOME\Downloads\vsts-agent-win7-x64-2.124.0.zip", "$PWD")
  7. I then configured the agent to run as a service. I exited back to the command prompt to do this, so the commands were (an unattended alternative is sketched after this list)
    exit
    config.cmd
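
By default config.cmd asks its questions interactively, but you can also pass everything on the command line if you prefer, along the lines of the sketch below; the URL, pool, agent name and PAT are all placeholders for your own values.

    config.cmd --unattended --url https://yourinstance.visualstudio.com --auth pat --token <your PAT> --pool AzurePrivateAgents --agent AzureAgent01 --runAsService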

I now had another build agent pool to use in my CI/CD pipelines at a reasonable cost, and the performance was not too bad either.