Migrating a GUI based build to YAML in Azure DevOps Pipelines

Introduction

I use Azure DevOps Pipelines for the build and release of my Azure DevOps Pipelines extensions; I previously detailed my process here.

For a good few months now YAML builds have been available. These provide the key advantage that the build is defined in a YAML text file that is stored with your product’s source code, thus allowing you to more easily track build changes. Also bulk editing becomes easier as a simple text editor can be used.
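For anyone who has not seen one, a minimal azure-pipelines YAML file looks something like this (the pool image and the single step are purely illustrative):

trigger:
  - master

pool:
  vmImage: 'vs2017-win2016'

steps:
  - script: echo Building the extension...
    displayName: 'Placeholder build step'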

I have been putting off moving my current GUI based builds as there is a bit of work involved; this post documents the steps.

Process

Getting the old build content

First, I created a new branch in my local copy of the GitHub repo that stores the source for my extensions.

I then created an empty file, azure-pipelines-build.yaml, in the root folder of the extension whose build I was replacing. I did this because the current ‘create new build’ UI lets you either pick an existing file or have one created for you, but if you let it create one you get no control over where it is placed or how it is named.

In your existing build I then clicked the pipeline-level ‘View YAML’ link.

image 

Note: Initially I found this link disabled, but if you click around the UI, into the task details, variables etc., it eventually becomes enabled. I have no idea why.

I copied this YAML into my newly created azure-pipelines-build.yaml file, committed the change and pushed it to GitHub on the new branch.

Creating the YAML build

I then created a new YAML based build, picking, in my case, GitHub as the source host, then the correct branch and file.

This YAML contains the core of what is needed, but the build was missing some items such as triggers, the build number format and variables.

I added

  • the name (build number)
  • the PR triggers to the YAML

to the YAML file, but decided to declare my variables within the build definition in Azure DevOps, as they contained secrets.
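The additions to the YAML were along these lines (the build number format and branch name are illustrative, not my exact values):

name: 1.2$(Rev:.r)

pr:
  branches:
    include:
      - master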

The final YAML file can be viewed here.

What I fixed in passing

In the past I used to package up my extensions twice, once packaged as private (for testing) and once as public. This was due to limitations of the Azure DevOps Marketplace and the release tasks I was using at the time. Whilst passing, I took the chance to change to building only the public VSIX package, and updated my release pipeline to dynamically inject the settings for private testing. This was done using the newer Azure DevOps Extension Tasks.

As a side note, I had to upgrade to these newer release tasks anyway, as the older ones had ceased to work due to their use of old API calls.

Swapping the new build into the release process

To replace the old GUI build with the new YAML build I did the following:

  • Renamed my old GUI build and disabled it (disabling it is vital, else it continues to be triggered by GitHub PRs, even if the triggers are removed in the build definition)
  • Renamed my new YAML build to the old GUI build name (not vital, but it felt neater)
  • Updated my release pipeline to pick the new YAML build as opposed to the old GUI build. Even though the names were the same, their internal IDs are not, so this needs to be swapped. I made sure my ‘source alias’ did not change, so I did not have to make other changes to my release pipeline. 

Once this was done I triggered a new GitHub PR and everything worked as expected.

What Next

I have kept the old build around just in case there is a problem I have not spotted, but I intend to delete it soon.

I now need to make the same changes for all my other builds. The only difference from this process will be for builds that make use of Task Groups, such as all those for my Node based extensions. Task Groups cannot be exported as YAML at this time, so I will have to manually rebuild these steps in a text editor. This is more prone to human error, but I think it needs to be done.

So a nice back-burner project; I will probably update them as I release new versions of the extensions.

Azure DevOps Services & Server Alerts DSL – an alternative to TFS Aggregator?

Whilst listening to a recent Radio TFS episode, I heard it mentioned that TFS Aggregator uses the C# SOAP based Azure DevOps APIs, and hence needs a major re-write as these APIs are being deprecated.

Did you know that there was a REST API alternative to TFS Aggregator?

My Azure DevOps Services & Server Alerts DSL is out there, and has been for a while, but I don’t think it is used by many people. It aims to do the same as TFS Aggregator, but is based around Python scripting.

However, I do have to say it is more limited in flexibility, as it has only been developed for my (and a few of my clients’) needs, but it is an alternative that is based on the REST APIs.

Scripts are of the following form; this one changes the state of a parent work item if all its children are done:

import sys
# Expect 2 args: the event type and the unique ID of the work item
if sys.argv[0] == "workitem.updated":
    wi = GetWorkItem(int(sys.argv[1]))
    parentwi = GetParentWorkItem(wi)
    if parentwi is None:
        LogInfoMessage("Work item '" + str(wi.id) + "' has no parent")
    else:
        LogInfoMessage("Work item '" + str(wi.id) + "' has parent '" + str(parentwi.id) + "'")

        # Find any child work items that are not yet 'Done'
        results = [c for c in GetChildWorkItems(parentwi) if c["fields"]["System.State"] != "Done"]
        if len(results) == 0:
            LogInfoMessage("All child work items are 'Done'")
            parentwi["fields"]["System.State"] = "Done"
            UpdateWorkItem(parentwi)
            msg = "Work item '" + str(parentwi.id) + "' has been set as 'Done' as all its child work items are done"
            SendEmail("richard@blackmarble.co.uk", "Work item '" + str(parentwi.id) + "' has been updated", msg)
            LogInfoMessage(msg)
        else:
            LogInfoMessage("Not all child work items are 'Done'")
else:
    LogErrorMessage("Was not expecting to get here")
    LogErrorMessage(sys.argv)

I have recently done a fairly major update to the project. The key changes are:

  • Rename of project, repo, and namespaces to reflect Azure DevOps (the namespace change is a breaking change for existing users)
  • The scripts that are run can now be
    • A fixed file name for the web instance running the service
    • Based on the event type sent to the service
    • Selected using the subscription ID, thus allowing many different scripts (new)
  • A single instance of the web site running the events processor can now handle calls from many Azure DevOps instances.
  • Improved installation process on Azure (well at least tried to make the documentation clearer and sort out a couple of MSDeploy issues)

Full details of the project can be seen on the solution’s WIKI; maybe you will find it of use. Let me know if the documentation is good enough.

Using Paths in PR Triggers on Azure DevOps Pipelines Builds

When I started creating OSS extensions for Azure DevOps Pipelines (starting on TFSPreview, then VSO, then VSTS and now named Azure DevOps) I made the mistake of putting all my extensions in a single GitHub repo. I thought this would make life easier; I was wrong, it should have been a repo per extension.

I have considered splitting the GitHub repo, but as a number of people have forked it, over 100 at the last count, I did not want to start a chain of chaos for loads of people.

This initial choice has meant that until very recently I could not use the Pull Request triggers in Azure DevOps Pipelines against my GitHub repo. This was because all builds associated with the repo triggered on any extension PR. So, I had to trigger builds manually, providing the branch name by hand. A bit of a pain, and prone to error.

I am pleased to say that with the roll out of Sprint 140 we now get the option to add a path filter to PR triggers on builds linked to a GitHub repo; something we have had for Azure DevOps hosted Git repos since Sprint 126.

So now my release process is improved. If I add a path filter as shown below, my build and hence release process trigger on a PR just as I need.

image
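In my case the filter is set on the build definition in the UI, but for reference the equivalent in a YAML pipeline would be something like this (the branch and path are illustrative):

pr:
  branches:
    include:
      - master
  paths:
    include:
      - Extensions/MyExtension/*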

It is just a shame that the GitHub PR only checks the build, not the whole release, before saying all is OK. I hope we see linking to complete Azure DevOps Pipelines in the future.

Making sure when you use VSTS build numbers to version Android Packages they can be uploaded to the Google Play Store

Background

I have a VSTS build extension that can apply a VSTS generated build number to Android APK packages. This takes a VSTS build number and generates, and applies, the Version Name (a string) and Version Code (an integer) to the APK file manifest.

The default parameters mean that the behaviour of this task is to assume (using a regular expression) the VSTS build number has at least three fields major.minor.patch e.g. 1.2.3, and uses the 1.2 as the Version Name and the 3 as the Version Code.

Now, it is important to note that the Version Code must be an integer between 1 and 2100000000, and for the Google Play Store it must increase between versions.

So maybe these default parameter values for this task are not the best options?

The problem with the way we use the task

When we use the Android Manifest Versioning task for our tuServ Android packages we use different parameter values, but we recently found these values still cause a problem.

Our VSTS build generates build numbers with four parts $(Major).$(Minor).$(Year:yy)$(DayOfYear).$(rev:r)

  • $(Major) – set as a VSTS variable e.g. 1
  • $(Minor) – set as a VSTS variable e.g. 2
  • $(Year:yy)$(DayOfYear) – the two digit year and the day of the year e.g. 18101
  • $(rev:r) – the build count for the build definition for the day e.g. 1

So we end up with build numbers in the form 1.2.18101.1
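For reference, if this build number format were declared in a YAML pipeline rather than the classic build editor, it would be the single line below (assuming Major and Minor are defined as variables):

name: $(Major).$(Minor).$(Year:yy)$(DayOfYear).$(Rev:r)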

The Android version task is set in the build to make

  • the Version Name {1}.{2}.{3}.{4}  – 1.2.18101.1
  • the Version Code {1}{2}{3}{4} – 12181011

The problem is if we do more than 9 builds in a day, which is likely due to our continuous integration process, and release one of the later builds to the Google Play store, then the next day any build with a lower revision than 9 cannot be released to the store as its Version Code is lower than the previously published one e.g.

  • day 1 the published build is 1.2.18101.11 so the Version Code is 121810111
  • day 2 the published build is 1.2.18102.1 so the Version Code is 12181021

So the second Version Code is 10x smaller, hence the package cannot be published.

The Solution

The answer in the end was straightforward and was found by one of our engineers, Peter (@sarkimedes). It was to change the final block of the VSTS build number to $(rev:rrr), as detailed in the VSTS documentation, thus zero padding the revision from .1 to .001. This allows up to 1000 builds per day before the Version Code order of magnitude problem occurs again. Obviously, if you think you might do more internal builds than that in a day you could zero pad with as many digits as you want.

So using the new build version number

  • day 1 the published build is 1.2.18101.011 so the Version Code is 1218101011
  • day 2 the published build is 1.2.18102.001 so the Version Code is 1218102001
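Purely as an illustration of the ordering (this is not code from the task itself), stripping the separators out of the build numbers, much as the {1}{2}{3}{4} Version Code format effectively does, shows the problem and the fix:

# Old style: day 1 revision 11, day 2 revision 1 - the Version Code goes down
[long]("1.2.18101.11" -replace '\.', '')    # 121810111
[long]("1.2.18102.1" -replace '\.', '')     # 12181021 (smaller, so rejected by the store)

# New zero padded style: the Version Code now always increases
[long]("1.2.18101.011" -replace '\.', '')   # 1218101011
[long]("1.2.18102.001" -replace '\.', '')   # 1218102001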

So a nice fix without any need to alter the Android Manifest Versioning task’s code. However, changing the default Version Code parameter to {1}{2}{3} is probably advisable.

    Building private VSTS build agents using the Microsoft Packer based agent image creation model

    Background

    Having automated builds is essential to any good development process. Irrespective of the build engine in use (VSTS, Jenkins etc.), you need a means to create the VMs that run the builds.

    You can of course do this by hand, but in many ways you are just extending the old ‘it works on my PC – the developer can build it only on their own PC’ problem i.e. it is hard to be sure what version of tools are in use. This is made worse by the fact it is too tempting for someone to remote onto the build VM to update some SDK or tool without anyone else’s knowledge.

    In an endeavour to address this problem we need a means to create our build VMs in a consistent, standardised manner i.e. a configuration as code model.

    At Black Marble we have been using Lability to build our lab environments and there is no reason we could not use the same system to create our VSTS build agent VMs:

    • Creating base VHDs disk images with patched copies of Windows installed (which we update on a regular basis)
    • Use Lability to provision all the required tools – this would need to include all the associated reboots these installers would require. Noting that rebooting and restarting at the correct place, for non DSC based resources, is not Lability’s strongest feature i.e. you have to do all the work in custom code

    However, there is an alternative. Microsoft have made their Packer based method of creating VSTS Azure hosted agents available on GitHub. Hence, it made sense to me to base our build agent creation system on this standardised image; thus allowing easier migration of builds between private and hosted build agent pools whether in the cloud or on premises, due to the fact they had the same tools installed.

    The Basic Process

    To enable this way of working I forked the Microsoft repo and modified the Packer JSON configuration file to build Hyper-V based images as opposed to Azure ones. I aimed to make as few changes as possible, to ease the process of keeping my forked repo in sync with future changes to the Microsoft standard build agent; in effect replacing the builder section of the Packer configuration and leaving the provisioners unaltered.
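    A heavily trimmed sketch of what such a hyperv-iso builder section can look like is shown below; the values are illustrative and some field names vary between Packer versions, so treat it as a sketch and check the hyperv-iso builder documentation for the version you are using (the unaltered provisioners section from the Microsoft repo is omitted):

    {
      "builders": [
        {
          "type": "hyperv-iso",
          "iso_url": "./iso/WindowsServer2016.iso",
          "iso_checksum_type": "none",
          "generation": 2,
          "secondary_iso_images": ["./iso/answer.iso"],
          "communicator": "winrm",
          "winrm_username": "packer",
          "winrm_password": "packer",
          "winrm_timeout": "4h",
          "disk_size": 130048,
          "ram_size": 8192,
          "switch_name": "Default Switch",
          "shutdown_command": "shutdown /s /t 10 /f"
        }
      ]
    }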

    So, in doing this I learnt a few things

    Which ISO to use?

    Make sure you use a current Operating System ISO. First, it saves time as it is already patched; but more importantly the provisioner scripts in the Microsoft configuration assume certain Windows features are available for installation (specifically Containers with Docker support) that were not present on the Windows Server 2016 RTM ISO.

    Building an Answer.ISO

    In the sample I found for the Packer hyperv-iso builder, the AutoUnattended.XML answers file is provided on an ISO (as opposed to a virtual floppy, as floppies are not supported on Gen2 Hyper-V VMs). This means that when you edit the answers file you need to rebuild the ISO prior to running Packer.

    The sample script to do this has lines to ‘Enable UEFI and disable Non UEFI’; I found that if these lines of PowerShell were run, the answers file on the ISO was ignored, so I had to comment them out. It seems an AutoUnattended.XML answers file edited in VSCode has the correct encoding by default.

    I also found that if I ran the PowerShell script to create the ISO from within VSCode’s integrated terminal, the ISO builder mkisofs.exe failed with an internal error. However, it worked fine from a default PowerShell window.

    Installing the .NET 3.5 Feature

    When a provisioner tried to install the .NET 3.5 feature using the command

    Install-WindowsFeature -Name NET-Framework-Features -IncludeAllSubFeature

    it failed.

    It seems this is a bug in Windows Server 2016 and the workaround is to specify the -Source location on the install media:

    Install-WindowsFeature -Name NET-Framework-Features -IncludeAllSubFeature -Source "D:\sources\sxs"

    Once the script was modified in this manner it ran without error

    Well how long does it take?

    The Packer process is slow; Microsoft say that for an Azure VM it can take over 8 hours. A Hyper-V VM is no faster.

    I also found the process a bit brittle. I had to restart the process a good few times as….

    • I ran out of disk space (not surprisingly, this broke the process)
    • The new VM did not get a DHCP assigned IP address when connected to the network via the HyperV Default Switch. A reboot of my HyperV host PC fixed this.
    • Packer decided the VM had rebooted when it had not – usually due to a slow install of some feature or network issues
    • My Laptop went to sleep and caused one of the above problems

    So I have a SysPrep’d VHD, what do I do with it now?

    At this point I have options of what to do with this new exported HyperV image. I could manually create build agent VM instances.

    However, it appeals to me to use this new VHD as a base image for Lability, replacing our default ‘empty patched Operating System’ image creation system, so I have a nice consistent way to provision VMs onto our Hyper-V servers.

    Versioning your ARM templates within a VSTS CI/CD pipeline with Semantic Versioning

    I wrote a post recently, Versioning your ARM templates within a VSTS CI/CD pipeline. I realised since writing it that it does not address the issue of versioning your ARM Templates using Semantic Versioning. The Version JSON file task I used did not support the option of using a provided version number directly; it always extracted a numeric version number e.g. 1.2.3.4 from the VSTS build number. I have modified the task to address this limitation.

    This change to my task allows it to be used with the GitVersion VSTS task to manage the semantic versioning. For more details on GitVersion see the project documentation.

    Hence, I am now able to generate a version number using GitVersion and pass it into the versioning task directly using a build variable.

    • Add the GitVersion task at the start of the build, with its default parameters
    • Add my JSON versioning task with default parameters apart from
      • Version Number set to $(GitVersion.SemVer)
      • Use Version Number without Processing (Advanced) checked
      • Filename Pattern (Advanced) set to azuredeploy.json
      • Field to update (Advanced) set to contentVersion

    image

    In the logs you see output similar to the following

    Source Directory: E:\Build2\_work\361\s
    Filename Pattern: azuredeploy.json
    Version Number/Build Number: 0.1.0-unstable.843
    Use Build Number Directly: true
    Version Filter to extract build number: \d+\.\d+\.\d+\.\d+
    Version Format for JSON File: {1}.{2}.{3}
    Field to update (all if empty): contentVersion
    Output: Version Number Parameter Name: OutputedVersion
    Using the provided build number without any further processing
    JSON Version Name will be: 0.1.0-unstable.843
    Will apply 0.1.0-unstable.843 to 12 files.
    Updating the field 'contentVersion' version
    Existing Tag: contentVersion": "1.0.0.0"
    Replacement Tag: contentVersion": "0.1.0-unstable.843"
    …
    

    Creating test data for my Generate Release Notes Extension for use in CI/CD process

    As part of the continued improvement to my CI/CD process I needed to provide a means so that whenever I test my Generate Release Notes Task, within its CI/CD process, new commits and work item associations are made. This is required because the task only picks up new commits and work items since the last successful running of a given build. So if the last release of the task extension was successful then the next set of tests have no associations to go in the release notes, not exactly exercising all the code paths!

    In the past I added this test data by hand, a new manual commit to the repo prior to a release; but why have a dog and bark yourself? Better to automate the process.

    This can be done using a PowerShell script, run inline or stored in the build’s source repo, and run within a VSTS build. The code is sketched below; you can pass in the required parameters, but I set sensible defaults for my purposes.
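    A minimal sketch of the kind of script involved is shown below; the work item type, file name and API version are illustrative, it is a simplified sketch rather than my exact script, and it assumes the build has ‘Allow scripts to access the OAuth token’ enabled:

    param(
        $collectionUrl = $env:SYSTEM_TEAMFOUNDATIONCOLLECTIONURI,
        $teamProject = $env:SYSTEM_TEAMPROJECT,
        $repoFolder = $env:BUILD_SOURCESDIRECTORY
    )

    # Create a test work item using the VSTS REST API, authenticating with the build's OAuth token
    # (SYSTEM_TEAMFOUNDATIONCOLLECTIONURI already ends with a trailing slash)
    $uri = "$collectionUrl$teamProject/_apis/wit/workitems/`$Task?api-version=1.0"
    $body = '[{"op": "add", "path": "/fields/System.Title", "value": "Test work item for release notes"}]'
    $headers = @{ Authorization = "Bearer $env:SYSTEM_ACCESSTOKEN" }
    $wi = Invoke-RestMethod -Uri $uri -Method Patch -Body $body -ContentType "application/json-patch+json" -Headers $headers

    # Make a trivial commit that mentions the new work item's ID, so the commit and
    # work item get associated with the next build, then push it back to the repo
    Set-Location $repoFolder
    git config user.email "buildagent@example.com"
    git config user.name "Build Agent"
    Add-Content -Path "testdata.txt" -Value (Get-Date)
    git add testdata.txt
    git commit -m "Adding test data for release notes #$($wi.id)"
    git push origin HEAD:master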

    For this PowerShell code to work you do need to make some security changes to allow the build agent service user to write to the Git repo. This is documented by Microsoft.

    The PowerShell task to run this code is placed in a build as the only task

    image

    This build is then triggered as part of the release process

    image

    Note that the triggering of this build has to be such that it runs on a non-blocking build agent as discussed in my previous posts. In my case I trigger the build to add the extra commits and work items just before triggering the validation build on my private Azure hosted agent.

    Now, there is no reason you can’t just run the PowerShell directly within the release if you wanted to. I chose to use a build so that the build could be reused between different VSTS extension CI/CD pipelines; remember I have two Generate Release Note Extensions, PowerShell and NodeJS Based.

    So another step to fully automating the whole release process.

    How I fixed my problem that my VSTS Build Extension was too big to upload to the Marketplace

    Whilst adding a couple of new tasks to my VSTS Manifest Versioning Extension I hit the problem that the VSIX package became too big to upload to the Marketplace.

    The error I saw in my CI/CD VSTS pipeline was

    ##vso[task.logissue type=error;]error: 
    Failed Request: Bad Request(400) - 
    The extension package size '23255292 bytes' exceeds the 
    maximum package size '20971520 bytes'

    This extension now contains eleven tasks, four of which are now NodeJS based as opposed to PowerShell. The issue here is that whereas PowerShell tasks are usually a file or two of code and maybe a PSM1 module, NodeJS based ones, as well as my logic, always ship a node_modules folder full of the NPM modules needed for production use. This had caused a good deal of bloat in the VSIX package.

    The solution was to address my poor management of NPM modules. As many of the versioning tasks are similar in logical structure i.e.

    1. They get a list of files
    2. Extract a version number from the build number
    3. Then apply this to one or more files in a product/task specific manner

    there has been some cut and paste coding. This means that I had NPM modules listed in a task’s package.json file that were not needed by that task. I could manually address this, but there is an NPM module to help, DepCheck.

    First install the DepCheck module

    npm install depcheck -g

    then run depcheck from the command line whilst within your task’s folder. This returns a list of modules listed in the package.json that are not referenced in the code files. These can then be removed from the package.json, e.g. I saw

    Unused dependencies
    * @types/node
    * @types/q
    * Buffer
    * fs
    * request
    * tsd
    Unused devDependencies
    * @types/chai
    * @types/mocha
    * @types/node
    * mocha-junit-reporter
    * ts-loader
    * ts-node
    * typings

    The important ones to focus on are the first block (non-development references), as these are the ones that are packaged with the production code in the VSIX; I was already pruning the node_modules folder of development dependencies (devDependencies) prior to creating the VSIX using the command

    npm prune --production

    I did find some of the listed modules strange, as I knew they really were needed, and a quick test of removing them did show the code failed if they were missing. These are what depcheck’s documentation calls false alerts.

    I found I could remove the @type/xxx and tsd references, which were the big ones, that are only needed in development when working in TypeScript. Once these were removed for all four of my NodeJS based tasks my VSIX dropped in size from 22Mb to 7Mb. So problem solved.

    Added a new JSON version task to my VSTS Version Extension

    In response to requests on the VSTS Marketplace I have added a pair of tasks to add/edit entries in JSON format files.

    The first is for adding a version to a file like a package.json file e.g.

    {
      "name": "myapp",
      "version": "1.0.0",
      "license": "MIT"
    }

    The second is designed for an Angular environment.ts file e.g.

    export const environment = {
      production: true,
      version: '1.0.0.0'
    };

    But I bet people find other uses, they always do.

    You can find the extension in the Marketplace; you need 1.31.x or later to see the new versioner tasks.

    Creating a VSTS build agent on an Azure DevLabs Windows Server VM with no GUI – Using Artifacts

    In my last post I discussed creating a private VSTS build agent within an Azure DevTest Lab on a VM with no GUI. It was pointed out to me today, by Rik Hepworth, that I had overlooked an obvious alternative way to get the VSTS agent onto the VM i.e. not having to use a series of commands at an RDP connected command prompt.

    The alternative I missed is to use a DevTest Lab Artifact; in fact there is such an artifact available within the standard set in DevTest Labs. You just provide a few parameters and you are good to go.

    image

    Well you should be good to go, but there is an issue.

    The PowerShell used to extract the downloaded Build Agent ZIP file does not work on a non-UI based Windows VM. The basic issue here is discussed in this post by my fellow ALM MVP Ricci Gian Maria. Luckily the fix is simple; I just used the same code to do the extraction of the ZIP file that I used in my previous post.
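    For reference, extraction code along these lines works on a GUI-less Windows VM, as it avoids the Shell COM object that is not available on Server Core (the paths are illustrative, and this is not necessarily the exact code in the pull request):

    # Use the .NET ZipFile class rather than the Shell.Application COM object
    Add-Type -AssemblyName System.IO.Compression.FileSystem
    [System.IO.Compression.ZipFile]::ExtractToDirectory("$env:TEMP\vsts-agent.zip", "C:\agent")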

    I have submitted this fix as a Pull Request to the DevTest Lab Team so hopefully the standard repository will have the fix soon and you won’t need to do a fork to create a private artifacts repo as I have.

    Update 1st December 2017: The Pull Request to the DevTest Lab Team with the fixed code has been accepted and the fix is now in the master branch of the public artifact repo, so it is automatically available to all.