A fix for Lability ‘Datafile not found’ error


I have been busy automating the provisioning of our private Azure DevOps agents using Packer and Lability; a more detailed blog post is on the way. All has been going OK on my test rig, but when I came to run the automation pipeline on our main Hyper-V build host I got an error

> Get-VmcConfigurationData : Datafile Environments\BuildAgent-VS2017\BuildAgent-VS2017.psd1 NOT FOUND. Exiting

But the file was there!

I checked the default Lability paths, but these all looked OK, and none pointed to my environment location on C: anyway

> Get-LabHostDefault

ConfigurationPath : D:\Virtualisation\Configuration
DifferencingVhdPath : D:\Virtualisation\VMVirtualHardDisks
HotfixPath : D:\Virtualisation\Hotfix
IsoPath : D:\Virtualisation\ISOs
ModuleCachePath : D:\Virtualisation\Modules
ParentVhdPath : D:\Virtualisation\MasterVirtualHardDisks
RepositoryUri :
ResourcePath : D:\Virtualisation\Resources
ResourceShareName : Resources
DisableLocalFileCaching : False
DisableSwitchEnvironmentName : True
EnableCallStackLogging : False
DismPath : C:\Windows\System32\WindowsPowerShell\v1.0\Modules\Dism\Microsoft.Dism.PowerShell.dll


After a bit of digging in the Lability module files I found the problem was the call

> Get-PSFConfigValue -FullName "VMConfig.VMConfigsPath"

This returned nothing. A check on my development system showed it should return C:\VmConfigs, so I had a broken Lability install.

So I tried the obvious fix, which was to set the missing value

> Set-PSFConfig -FullName "VMConfig.VMConfigsPath" -Value C:\VmConfigs

And it worked; my Lability installs ran without a problem.

Major enhancements to my Azure DevOps Cross Platform Release Notes Extension

Over the past few days I have published two major enhancements to my Azure DevOps Cross Platform Release Notes Extension.

Added Support for Builds

Prior to version 2.17.x this extension could only be used in Releases. This was because it used Release-specific calls in the Microsoft API to work out the work items and changesets/commits associated with a Release. This is unlike my older PowerShell-based Release Notes Extension, which was initially developed for Builds and only later enhanced to work in Releases, achieving this using my own logic to iterate across the Builds associated with a Release to work out the associations.

With the advent of YAML multistage Pipelines the difference between a Build and a Release is blurring, so I thought it high time to add Build support to my Cross Platform Release Notes Extension. Which it now does.

Adding Tag Filters

In the Cross Platform Release Notes Extension you have been able to filter the work items returned in a generated document for a good while, but the filter was limited to a logical AND

i.e. if the filter was

> @@WILOOP:TAG1:TAG2@@

all matched work items would have to have both TAG1 and TAG2 set

Since 2.18.x there is now the option of a logical AND or an OR.

  • @@WILOOP:TAG1:TAG2@@ matches work items that have all tags (legacy behaviour for backward compatibility)
  • @@WILOOP[ALL]:TAG1:TAG2@@ matches work items that have all tags
  • @@WILOOP[ANY]:TAG1:TAG2@@ matches work items that have any of the tags
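For illustration, a filtered loop in a release notes template might look something like the following sketch. Note that the body line and the use of a plain @@WILOOP@@ tag to close the block are my assumptions based on the legacy template format, not taken from the extension's documentation:

```
@@WILOOP[ANY]:TAG1:TAG2@@
- a template line repeated for each work item carrying TAG1 or TAG2
@@WILOOP@@
```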

Update 5th Dec

In 2.19.x there is also the option to filter on any field in a work item as well as tags

  • @@WILOOP[ALL]:System.Title=This is a title:TAG 1@@

For more details see the extension’s WIKI page


My plan is at some point to deprecate my PowerShell-based Release Notes Extension. I have updated the documentation for this older extension to state as much and to recommend the use of the newer Cross Platform Release Notes Extension.

At this time there is little that the older extension can do that cannot be done by my newer Cross Platform Release Notes Extension. Moving to it, I think, makes sense for everyone, for the…

  • Cross platform support
  • Use of the same means as the Microsoft UI to find the associated items, avoiding confusion
  • Enhanced work item filtering

Let's see if the new features and updated advisory documentation affect the two extensions' relative download statistics

Cannot queue a new build on Azure DevOps Server 2019.1 due to the way a SQL cluster was setup

I have recently been doing a TFS 2015 to Azure DevOps Server 2019.1 upgrade for a client; the first for a while, as I have mostly been working with Azure DevOps Services of late. Anyway, I saw an issue I had never seen before with any version of TFS, and I could find no information on the Internet.

The Problem

The error occurred when I tried to queue a new build after the upgrade; the build instantly failed with the error

‘The module being executed is not trusted. Either the owner of the database of the module needs to be granted authenticate permission, or the module needs to be digitally signed. Warning: Null value is eliminated by an aggregate or other SET operation. The statement has been terminated.’

The Solution

It turns out the issue was that the client was using an enterprise-wide SQL cluster to host the tfs_ databases. After the Azure DevOps upgrade the DBAs had enabled a trigger-based logging system to monitor the databases, and this was causing the error.

As soon as this logging was switched off everything worked as expected.

I would not recommend using such a logging tool on any ‘out of the box’ database for a product such as TFS/Azure DevOps Server, where the DBA team don’t own the database schema changes; these databases’ schemas will only change when the product is upgraded.

Release of my video on ‘Introduction to GitHub Actions’

I recently posted on my initial experiences with GitHub Actions. I had hoped to deliver a session on this subject at DDD 14 in Reading; I even got so far as to propose a session.

However, life happened and I found I could not make it to the event. So I decided to do the next best thing and recorded a video of the session. I even went as far as to try to get the ‘DDD event feel’ by recording in front of a ‘live audience’ at Black Marble’s offices.

A first look at GitHub Action – converting my Azure DevOps tasks to GitHub Actions


GitHub Actions open up an interesting new way to provide CI/CD automation for your GitHub projects, other than the historic options of Jenkins, Bamboo, Team City, Azure DevOps Pipelines etc. No longer do you have to leave the realm of GitHub to create a powerful CI/CD process or provision a separate system.

For people familiar with Azure DevOps YAML-based Pipelines you will notice some common concepts in GitHub Actions. However, GitHub Actions’ YAML syntax is different, and Actions are not Tasks. You can’t just re-use your old Azure DevOps tasks.

So my mind quickly went to the question ‘how much work is involved to allow me to re-use my Azure DevOps Pipeline Tasks?’. I know I probably won’t be moving them to Docker, but surely I can reuse my Node based ones somehow?

The Migration Process

The first thing to consider is ‘are they of any use?’.

Any task that used the Azure DevOps API was going to need loads of work, if it was even relevant on GitHub. But my Versioning tasks seemed good candidates. They still needed some edits, such as removing the logic to extract a version number from the build number. This is because GitHub Actions have no concept of build numbers (it is recommended that versioning is done using SemVer and branching).

Given all this, I picked one for migration: my JSONVersioner.

The first step was to create a new empty GitHub repo for my new Action. I did this using the JavaScript template and followed the Getting Started instructions. This allowed me to make sure I had a working starting point.

I then copied my JSON file versioner task into the repo bit by bit:

  • Renamed my entry file ApplyVersionToJSONFile.ts to main.ts to keep in line with the template standard
  • Copied over the AppyVersionToJSONFuncitons.js
  • I removed any Azure DevOps specific code that was not needed.
  • In both files, swapped the references to “vsts-task-lib/task” to “@actions/core” and updated the related function calls to use
    • core.getInput()
    • core.debug()
    • core.warning()
    • core.setFailed()
  • Altered my handling of input variable defaults to use GitHub Actions variables as opposed to Azure DevOps Pipeline variables (to find the current working folder)
  • Migrated the manifest from the task.json to the action.yaml
  • Updated the readme.md with suitable usage details.
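Stripped of either platform’s SDK, the heart of the migrated task is a small piece of pure logic. As a minimal sketch (the function name and signature are illustrative, not the actual extension source), applying a version to a JSON file’s content amounts to:

```typescript
// Hypothetical sketch of the versioner's core logic: parse the file
// content, overwrite the named field, and return the updated JSON text.
function applyVersionToJson(
  content: string,   // raw text of a .json file
  field: string,     // field to overwrite, e.g. "version"
  version: string    // version string to apply, e.g. "1.2.3"
): string {
  const json = JSON.parse(content);
  json[field] = version;
  return JSON.stringify(json, null, 2);
}
```

In the Action itself the field name and version would be read with core.getInput() rather than hard-coded, and the result written back to the file.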

And that was basically it; the Action just worked, and I could call it from a test workflow in another GitHub repo.

However, I did decide to do a bit more work:

  • I moved my Mocha/Chai-based tests over to Jest, again to keep in line with the template example. This was actually the main area of effort for me: Jest runs its tests async, and this caused problems with my temporary file handling, which had to be reworked. I also took the chance to improve the tests’ handling of the JSON comparison, making it more resilient for cross-platform testing.
  • I also added TSLint to the npm build process, something I do for all my TypeScript-based projects to keep up code quality.


So the basic act of migration from Azure DevOps Pipeline Extension to GitHub Actions is not that hard if you take it step by step.

The difficulty will be with what your tasks do: are they even relevant to GitHub Actions? And are any APIs you need available?

So migration of Azure DevOps Extension Tasks to GitHub Actions is not an impossible task. Have a look at my source in JSONFileVersioner, or at the actual Action in the GitHub Marketplace, with the usage:



jobs:
  build:
    runs-on: ubuntu-latest

    strategy:
      matrix:
        node-version: [12.x]

    steps:
    - uses: actions/checkout@v1

    - uses: rfennell/JSONFileVersioner@v1
      with:
        Path: ''
        Field: 'version'
        FilenamePattern: '.json'
        Recursion: 'true'

    - name: Use Node.js ${{ matrix.node-version }}
      uses: actions/setup-node@v1
      with:
        node-version: ${{ matrix.node-version }}

    - name: npm install, build, and test
      run: |
        npm install
        npm run build --if-present
        npm test
      env:
        CI: true

There is a nice series of posts on Actions from Microsoft’s Abel Wang – Github Actions 2.0 Is Here!!!

Strange issue with multiple calls to the same REST WebClient in PowerShell

Hit a strange problem today trying to do a simple Work Item update via the Azure DevOps REST API.

To do a WI update you need to call the REST API

  • Using the verb PATCH
  • With the Header “Content-Type” set to “application/json-patch+json”
  • Include in the Body the current WI update revision (to make sure you are updating the current version)

So the first step is to get the current WI values to find the current revision.

So my update logic was along the lines of

  1. Create new WebClient with the Header “Content-Type” set to “application/json-patch+json”
  2. Do a Get call to API to get the current work item
  3. Build the update payload with my updated fields and the current revision.
  4. Do a PATCH call to API using the client created in step 1 to update the current work item

The problem was that at Step 4 I got a 400 error. A general error, not too helpful.

After much debugging I spotted the issue: after Step 2 my WebClient’s headers had changed, and I had lost the content type. No idea why.

It all started to work if I recreated my WebClient after Step 2, so something like this (in PowerShell):

Function Get-WebClient
{
    param
    (
        [string]$pat,
        [string]$ContentType = "application/json"
    )

    $wc = New-Object System.Net.WebClient
    $pair = ":${pat}"
    $bytes = [System.Text.Encoding]::ASCII.GetBytes($pair)
    $base64 = [System.Convert]::ToBase64String($bytes)
    $wc.Headers.Add("Authorization", "Basic $base64")
    $wc.Headers["Content-Type"] = $ContentType
    $wc
}

function Update-WorkItemTitle {

    param
    (
        $baseUri,
        $teamproject,
        $workItemID,
        $pat,
        $title
    )

    $wc = Get-WebClient -pat $pat -ContentType "application/json-patch+json"
    $uri = "$($baseUri)/$teamproject/_apis/wit/workitems/$($workItemID)?api-version=5.1"

    # you can only update a work item if you also pass in the rev, this makes sure you are updating the latest version
    $jsondata = $wc.DownloadString($uri) | ConvertFrom-Json

    # recreate the client, as the GET call above loses the Content-Type header (the issue this post describes)
    $wc = Get-WebClient -pat $pat -ContentType "application/json-patch+json"

    $data = @(
        @{
            op    = "test";
            path  = "/rev";
            value = $jsondata.Rev
        },
        @{
            op    = "add";
            path  = "/fields/System.Title";
            value = $title
        }
    ) | ConvertTo-Json

    $jsondata = $wc.UploadString($uri, "PATCH", $data) | ConvertFrom-Json
    $jsondata
}

Authentication loops swapping organisations in Azure DevOps

I have recently been getting a problem swapping between different organisations in Azure DevOps. It happens when I swap between Black Marble ones and customer ones, where each is backed by a different Azure Active Directory (AAD) but I am using the same credentials, because I am either a member of that AAD or a guest.

The problem is I get into an authentication loop. It happens to me in Chrome, but you might find the same problem in other browsers.

It seems to be a recent issue, maybe related to MFA changes in AAD?

I used to be re-prompted for my ID when I swapped organisations in a browser tab, but not asked for further authentication.

However, now the following happens

  • I login to an organisation without a problem e.g https://dev.azure.com/someorg using ID, password and MFA
  • In the same browser window, when I connect to another organisation e.g. https://dev.azure.com/someotherorg 
  • I am asked to pick an account, then there is the MFA challenge, but then I go back to the login
  • …. and repeat.

The fix is to go, in the same browser tab, to https://dev.azure.com. As you are already authenticated you will be able to sign out; then all is OK and you can login again.

The other option is to make even more use of Chrome People: one ‘person’ per customer, as opposed to my current usage of one ‘person’ per ID.

You can’t use Azure DevOps Pipeline Gates to check services behind a firewall

I have recently been working on a release pipeline that deploys to a server behind a corporate firewall. This is done using an Azure DevOps private build agent and works fine.

As the service is a basic REST service and takes a bit of time to start up, I thought a gate was a perfect way to pause the release pipeline until the service was ready for the automated tests.

However, I hit a problem, the gates always failed as the internal server could not be resolved.

After a bit of thought I realised why. Gates are in fact agentless tasks: they don’t run on the agent but on the server, so are outside the firewall. They could never connect to the private service without ports being opened, which was never going to happen.

So at this point in time I can’t use gates on this pipeline. Any similar logic to do the same job would have to be developed as scripts I can run on an agent.

Review of ‘Azure DevOps Server 2019 Cookbook’ – well worth getting

It always amazes me that people find time to write tech books whilst having a full-time job. So, given the effort I know it will have been, it is great to see an update to Tarun Arora and Utkarsh Sigihalli’s book ‘Azure DevOps Server 2019 Cookbook’.

Azure DevOps Server 2019 Cookbook - Second Edition

I do like their format of ‘recipes’ that walk through common requirements. I find it particularly interesting that for virtually every recipe there is an associated Azure DevOps extension that enhances the experience. It speaks well of the research the authors have done, and of the richness and variety of the 3rd party extensions in the Azure DevOps Marketplace.

I think because of this format there is something in this book for everyone, whether new to Azure DevOps Server 2019 or someone who has been around the product since the days of TFS 2005.

In my opinion, it is well worth having a copy on your shelf, whether physical or virtual.

Azure DevOps Repos branch build policies not triggering when expected in PRs – Solved

I recently hit a problem with builds triggered by branch policies in Azure DevOps Repos. With the help of Microsoft I found out the cause, and I thought it worth writing up in case others hit the issue.



Assume you have a Git repo with source for the UI, backend Services and common code in sub folders

/ [root]

Branch Policies

On the Master branch there are policies running:

  • one build for anything in the UI folder/project or common folder/project
  • and a different build for anything in the Services folder/project or common folder/project

These builds were filtered by path using the filters:

/UX; /Common
/Services; /Common

The Issue

I discovered the problem by doing the following

  • Create a PR for some work that affects the UI project
  • As expected the UI build triggers
  • Update the PR with a second commit for the Services code
  • The Service build is not triggered

The Solution

The fix, it turns out, was simple: remove the spaces from the filter paths so they become

/UX;/Common
/Services;/Common


Once this was done the builds triggered as expected.

Thanks again to the Azure DevOps Product Group for the help