Major enhancements to my Azure DevOps Cross Platform Release Notes Extension

Over the past few days I have published two major enhancements to my Azure DevOps Cross Platform Release Notes Extension.

Added Support for Builds

Prior to version 2.17.x this extension could only be used in Releases, because it used the Release-specific calls in the Microsoft API to work out the work items and changesets/commits associated with a Release. This is unlike my older PowerShell based Release Notes Extension, which was initially developed for Builds and only later enhanced to work in Releases; that extension used my own logic to iterate across the Builds associated with a Release to work out the associations.

With the advent of YAML multistage Pipelines the difference between a Build and a Release is blurring, so I thought it high time to add Build support to my Cross Platform Release Notes Extension, which it now does.

Adding Tag Filters

In the Cross Platform Release Notes Extension you have been able to filter the work items returned in a generated document for a good while, but the filter was limited to a logical AND;

i.e. if the filter was

@@WILOOP:TAG1:TAG2@@

All work items matched would have to have both TAG1 and TAG2 set.

Since 2.18.x there is now the option of a logical AND or an OR:

  • @@WILOOP:TAG1:TAG2@@ matches work items that have all tags (legacy behaviour for backward compatibility)
  • @@WILOOP[ALL]:TAG1:TAG2@@ matches work items that have all tags
  • @@WILOOP[ANY]:TAG1:TAG2@@ matches work items that have any of the tags

Update 5th Dec

In 2.19.x there is also the option to filter on any field in a work item, as well as tags:

  • @@WILOOP[ALL]:System.Title=This is a title:TAG 1@@
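To put that in context, a fragment of a template using the new filters might look something like the following sketch (the loop tags are as above, but the work item field expansion is purely illustrative; check the WIKI for the exact syntax):

@@WILOOP[ANY]:TAG1:TAG2@@
* ${widetail.id} - ${widetail.fields['System.Title']}
@@WILOOP@@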

For more details see the extension’s WIKI page.

Futures

My plan is to at some point deprecate my PowerShell based Release Notes Extension. I have updated the documentation for this older extension to state as much and to recommend the use of the newer Cross Platform Release Notes Extension.

At this time there is little that this older extension can do that cannot be done by my newer Cross Platform Release Notes Extension. Moving to it, I think, makes sense for everyone, for the…

  • Cross platform support
  • Use of the same means to find the associated items as the Microsoft UI to avoid confusion
  • Enhanced work item filtering

Let’s see if the new features and updated advisory documentation affect the two extensions’ relative download statistics.

Some Points on working with SQL Always On Availability Groups (when doing a TFS Migration)

Just shy of a full year from my last post. Crikey!

So it’s another TFS related post, but not specifically to do with TFS. I still perform TFS migrations/upgrades from time to time here at Black Marble as we still have many customers who can’t migrate to Azure DevOps for one reason or another.

A few weeks ago we performed a TFS upgrade for a customer, bringing them from TFS 2013.1 to Azure DevOps Server 2019.0.1.

The new SQL Server installation that underpinned this new environment was a SQL Server 2016 Always On Availability Group, with 2 synchronous nodes and one async node contained in a figurative shed somewhere a few miles away.

Now our initial migration/upgrade to this server had gone well; there were a dozen or so smaller collections we moved over with the initial migration, but due to deadlines/delivery/active sprints etc. there were a couple of teams whose collections couldn’t be moved in that initial setup. So we came back a few months later to move these last few collections, which coincidentally happened to be the biggest.

The biggest of the collections was approximately 160GB in size, which wasn’t the biggest collection I’d seen (not by a looooong shot), but not small by any means.

Could we get this thing into the Availability Group? Nope. Every time we completed the “Promote Database” Wizard the database would fail to appear in the “synchronized” or even “synchronizing” state on any of the nodes. Inspection of the file system on each of the secondary nodes didn’t even show the database files as being present.

So we had a think about it (rinsed and repeated a few times) and someone called their SQL DBA friend, who told us the GUI based wizard hates anything over ~150GB. We should let the wizard generate the script for us, and run it ourselves.
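For anyone hitting the same wall, the heart of the generated script boils down to something like this sketch (run via the SqlServer PowerShell module; the server, AG and database names are placeholders, and it assumes the full and log backups have already been restored to the secondaries WITH NORECOVERY, as the wizard’s data movement would do):

# on the primary replica, add the database to the availability group
Invoke-Sqlcmd -ServerInstance "SQLNODE1" `
    -Query "ALTER AVAILABILITY GROUP [TfsAG] ADD DATABASE [Tfs_BigCollection];"

# on each secondary replica, once the restores are done, start the database synchronizing
Invoke-Sqlcmd -ServerInstance "SQLNODE2" `
    -Query "ALTER DATABASE [Tfs_BigCollection] SET HADR AVAILABILITY GROUP = [TfsAG];"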

Well lo and behold the promotion of the databases worked…mostly. We saw the files appear on disk on all servers in the group, and they appeared as databases in the availability group but with warnings on them that they still weren’t syncing.

So on a hunch I re-ran the GUI wizard to promote the databases again, which among other things performs a series of validation checks. The key validation check (and the backbone of my hunch) is “is this database already in an AG?”. The answer was yes, and this seemed to shock SSMS into recognizing that the job was complete and the synchronization status of the synchronous and asynchronous replicas jumped into life.

My guess is that promoting a db into an AG is a bit of a brittle process, and if some thread in the background dies, or the listener object in memory waiting for that thread dies, then SSMS never knows what the state of the job is. Doing it via script is more resilient, but still not bulletproof.

Also worth noting for anyone who isn’t a SQL boffin: an async replica will never show a database as “synchronized”, only ever “synchronizing”. Makes sense when you think about it! (Don’t let your customers get hung up on it.)

Cannot queue a new build on Azure DevOps Server 2019.1 due to the way a SQL cluster was set up

I have recently been doing a TFS 2015 to Azure DevOps Server 2019.1 upgrade for a client. It was the first for a while, as I have been working with Azure DevOps Services mostly of late. Anyway, I saw an issue I had never seen before with any version of TFS, and I could find no information on the Internet.

The Problem

The error occurred when I tried to queue a new build after the upgrade; the build instantly failed with the error:

‘The module being executed is not trusted, Either the owner of the database of the module need to be granted authenticate permission, or the module needs to be digitally signed. Warning: Null value is eliminated by an aggregate or other SET operation, The statement has been terminated’.

The Solution

It turns out the issue was that the client was using an enterprise-wide SQL cluster to host the tfs_ databases. After the Azure DevOps upgrade the DBAs had enabled a trigger based logging system to monitor the databases, and this was causing the error.

As soon as this logging was switched off everything worked as expected.
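If you suspect something similar, a quick sketch to see what triggers are present in a collection database (again via the SqlServer PowerShell module, with placeholder names):

# list all triggers in the database, both table DML triggers and database DDL triggers
Invoke-Sqlcmd -ServerInstance "SQLNODE1" -Database "Tfs_DefaultCollection" `
    -Query "SELECT name, parent_class_desc, is_disabled FROM sys.triggers;"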

I would not recommend using such a logging tool on any ‘out of the box’ database for a product such as TFS/Azure DevOps Server, where the DBA team doesn’t own the database schema changes; changes to these databases should only occur when the product is upgraded.

Release of my video on ‘Introduction to GitHub Actions’

I recently posted on my initial experiences with GitHub Actions. I had hoped to deliver a session on this subject at DDD 14 in Reading; I even got so far as to propose a session.

However, life happened and I found I could not make it to the event. So I decided to do the next best thing and recorded a video of the session. I even went as far as to try to get the ‘DDD event feel’ by recording in front of a ‘live audience’ at Black Marble’s offices.

A first look at GitHub Actions – converting my Azure DevOps tasks to GitHub Actions

Introduction

GitHub Actions open up an interesting new way to provide CI/CD automation for your GitHub projects, beyond the historic options of Jenkins, Bamboo, TeamCity, Azure DevOps Pipelines etc. No longer do you have to leave the realm of GitHub to create a powerful CI/CD process or provision a separate system.

For people familiar with Azure DevOps YAML based Pipelines you will notice some common concepts in GitHub Actions. However, GitHub Actions’ YAML syntax is different and Actions are not Tasks. You can’t just re-use your old Azure DevOps tasks.

So my mind quickly went to the question ‘how much work is involved to allow me to re-use my Azure DevOps Pipeline Tasks?’. I know I probably won’t be moving them to Docker, but surely I can reuse my Node based ones somehow?

The Migration Process

The first thing to consider is ‘are they of any use?’.

Any task that used the Azure DevOps API was going to need loads of work, if it was even relevant on GitHub. But my Versioning tasks seemed good candidates. They still needed some edits, such as removing the logic to extract a version number from the build number. This is because GitHub Actions have no concept of build numbers (it is recommended that versioning is done using SemVer and branching).

Given all this I picked one for migration, my JSONVersioner.

The first step was to create a new empty GitHub repo for my new Action. I did this using the JavaScript template and followed the Getting Started instructions. This allowed me to make sure I had a working starting point.

I then copied my JSON file versioner task into the repo bit by bit:

  • Renamed my entry file ApplyVersionToJSONFile.ts to main.ts to keep in line with the template standard
  • Copied over the AppyVersionToJSONFuncitons.js
  • I removed any Azure DevOps specific code that was not needed.
  • In both files swapped the references to “vsts-task-lib/task” for “@actions/core” and updated the related function calls:
    • tl.getInput() → core.getInput()
    • tl.debug() → core.debug()
    • tl.warning() → core.warning()
    • tl.setResult(tl.TaskResult.Failed, …) → core.setFailed()
  • Altered my handling of input variable defaults to use GitHub Actions variables, as opposed to Azure DevOps Pipeline variables, to find the current working folder
  • Migrated the manifest from the task.json to the action.yaml
  • Updated the readme.md with suitable usage details.

And that was basically it. The Action just worked and I could call it from a test workflow in another GitHub repo.

However, I did decide to do a bit more work:

  • I moved my Mocha/Chai based tests over to use Jest, again to keep in line with the template example. This was actually the main area of effort for me. Jest runs its tests async, and this caused me problems with my temporary file handling, which had to be reworked. I also took the chance to improve the tests’ handling of the JSON comparison, making it more resilient for cross platform testing.
  • I also added TSLint to the npm build process, something I do for all my TypeScript based projects to keep up code quality.

Summary

So the basic act of migration from Azure DevOps Pipeline Extension to GitHub Actions is not that hard if you take it step by step.

The difficulty will be with what your tasks do: are they even relevant to GitHub Actions? And are any APIs you need available?

So migration of Azure DevOps Extension Tasks to GitHub Actions is not an impossible job. Have a look at my source at JSONFileVersioner, or at the actual Action in the GitHub Marketplace, where the usage is as follows:

jobs:
  build:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        node-version: [12.x]
    steps:
    - uses: actions/checkout@v1
    - uses: rfennell/JSONFileVersioner@v1
      with:
        Path: ''
        Field: 'version'
        FilenamePattern: '.json'
        Recursion: 'true'
    - name: Use Node.js ${{ matrix.node-version }}
      uses: actions/setup-node@v1
      with:
        node-version: ${{ matrix.node-version }}
    - name: npm install, build, and test
      run: |
        npm install
        npm run build --if-present
        npm test
      env:
        CI: true

There is a nice series of posts on Actions from Microsoft’s Abel Wang – GitHub Actions 2.0 Is Here!!!

Strange issue with multiple calls to the same REST WebClient in PowerShell

Hit a strange problem today trying to do a simple Work Item update via the Azure DevOps REST API.

To do a WI update you need to call the REST API:

  • Using the verb PATCH
  • With the Header “Content-Type” set to “application/json-patch+json”
  • Include in the Body the current WI update revision (to make sure you are updating the current version)

So the first step is to get the current WI values to find the current revision.

So my update logic was along the lines of:

  1. Create new WebClient with the Header “Content-Type” set to “application/json-patch+json”
  2. Do a Get call to API to get the current work item
  3. Build the update payload with my updated fields and the current revision.
  4. Do a PATCH call to API using the client created in step 1 to update the current work item

The problem was that at Step 4 I got a 400 error. A general error, not too helpful.

After much debugging I spotted that the issue was that after Step 2 my WebClient’s headers had changed; I had lost the content type – no idea why.

It all started to work if I recreated my WebClient after Step 2, so something like this (in PowerShell):

Function Get-WebClient
{
    param
    (
        [string]$pat,
        [string]$ContentType = "application/json"
    )

    $wc = New-Object System.Net.WebClient
    # a PAT is passed as the password half of a basic auth pair, with an empty username
    $pair = ":${pat}"
    $bytes = [System.Text.Encoding]::ASCII.GetBytes($pair)
    $base64 = [System.Convert]::ToBase64String($bytes)
    $wc.Headers.Add("Authorization", "Basic $base64")
    $wc.Headers["Content-Type"] = $ContentType
    $wc
}


function Update-WorkItemTitle {

    param
    (
        $baseUri,
        $teamproject,
        $workItemID,
        $pat,
        $title
    )

    $wc = Get-WebClient -pat $pat -ContentType "application/json-patch+json"
    $uri = "$($baseUri)/$teamproject/_apis/wit/workitems/$($workItemID)?api-version=5.1"

    # you can only update a work item if you also pass in the rev; this makes sure you are updating the latest version
    $jsondata = $wc.DownloadString($uri) | ConvertFrom-Json

    # recreate the WebClient, as the GET above loses the Content-Type header
    $wc = Get-WebClient -pat $pat -ContentType "application/json-patch+json"

    $data = @(
        @{
            op    = "test";
            path  = "/rev";
            value = $jsondata.Rev
        },
        @{
            op    = "add";
            path  = "/fields/System.Title";
            value = $title
        }
    ) | ConvertTo-Json

    $jsondata = $wc.UploadString($uri, "PATCH", $data) | ConvertFrom-Json
    $jsondata
}
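
For completeness, calling the function looks like this (the organisation URL, project and work item ID are placeholders, and the PAT needs work item write scope):

$pat = "<your PAT>"
Update-WorkItemTitle -baseUri "https://dev.azure.com/myorg" `
    -teamproject "MyProject" `
    -workItemID 42 `
    -pat $pat `
    -title "My new title"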

Authentication loops swapping organisations in Azure DevOps

I have recently been getting a problem swapping between different organisations in Azure DevOps. It happens when I swap between Black Marble ones and customer ones, where each is backed by a different Azure Active Directory (AAD) but I am using the same credentials, because I am either a member of that AAD or a guest.

The problem is that I get into an authentication loop. It happens to me in Chrome, but you might find the same problem in other browsers.

It seems to be a recent issue, maybe related to MFA changes in AAD?

I used to be re-prompted for my ID when I swapped organisations in a browser tab, but not asked for further authentication.

However, now the following happens:

  • I login to an organisation without a problem e.g. https://dev.azure.com/someorg using ID, password and MFA
  • In the same browser window, I connect to another organisation e.g. https://dev.azure.com/someotherorg
  • I am asked to pick an account, then there is the MFA challenge, but then I go back to the login
  • …. and repeat.

The fix is to go in the browser tab to https://dev.azure.com. As you are already authenticated you will be able to sign out; then all is OK and you can login again.

The other option is to make even more use of Chrome People: one ‘person’ per customer, as opposed to my current usage of one ‘person’ per ID.

You can’t use Azure DevOps Pipeline Gates to check services behind a firewall

I have recently been working on a release pipeline that deploys to a server behind a corporate firewall. This is done using an Azure DevOps private build agent and works fine.

As the service is a basic REST service and takes a bit of time to start up, I thought a gate was a perfect way to pause the release pipeline until the service was ready for the automated tests.

However, I hit a problem: the gates always failed as the internal server could not be resolved.

After a bit of thought I realised why. Gates are actually agentless tasks; they don’t run on the agent but on the server, so are outside the firewall. They could never connect to the private service without ports being opened, which was never going to happen.

So at this point in time I can’t use gates on this pipeline. Any similar logic to do the same job would have to be developed as scripts I can run on an agent, along the lines of the sketch below.
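
As a sketch, the agent-side equivalent could be as simple as a PowerShell polling loop in a pipeline script step (the health check URL and the timings are made up for illustration):

# poll a hypothetical internal health endpoint until it responds, or give up after 10 minutes
$uri = "https://internal-server/api/health"
$deadline = (Get-Date).AddMinutes(10)
do {
    try {
        Invoke-WebRequest -Uri $uri -UseBasicParsing | Out-Null
        Write-Host "Service is up"
        exit 0
    } catch {
        Write-Host "Service not ready, retrying in 30 seconds..."
        Start-Sleep -Seconds 30
    }
} while ((Get-Date) -lt $deadline)
Write-Error "Service did not start in time"
exit 1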

Surface Pro 3 Type Cover Not Working After Windows 10 1903 Image Applied

Symptoms:

  • Following imaging with Windows 10 1903 using Configuration Manager OSD, the Type Cover doesn’t work at all (keyboard, trackpad).
  • When rebooting the machine, the keyboard and trackpad both work when in the BIOS.
  • When imaging the machine, both the keyboard and trackpad work in Windows PE.

The Surface Pro 3 was imaged and then patched up-to-date and the most recent Surface Pro 3 drivers available from Microsoft were applied, however the issue persisted.

To correct this issue, complete the following steps:

  1. Open Control Panel and navigate to ‘Hardware and Sound’ and then ‘Devices and Printers’.
  2. Select the Surface Type Cover and open the properties for this device. Select the ‘Hardware’ tab on the dialog:
    Surface Pro 3 Type Cover properties
  3. In turn, select each of the device functions shown in the list and click the ‘Properties’ button:
    Surface Pro 3 Type Cover device function properties
  4. Click the ‘Change Settings’ button, then from the dialog that is shown select ‘Uninstall Device’. If offered the option to delete the driver software for this device, ensure that the checkbox to do so is selected (not all devices offer this option) and click ‘Uninstall’:
    Surface Pro 3 Type Cover uninstall device including driver
  5. Ensure this has been completed for all device functions shown in the list, then close the main properties dialog.
  6. Open the Device Manager for the computer, right-click the computer name at the top and select ‘Scan for Hardware Changes’.
  7. Expand the firmware section within Device Manager. For each of the items shown, right click the item and select ‘Update Driver’. Click ‘Search automatically for updated driver software’ from the dialog that is shown:
    Surface Pro 3 Type Cover update firmware
    Note that if you’ve installed the latest Surface Pro 3 drivers, none of the firmware items shown are likely to be updated, but attempt to update each item. If you’ve not installed the latest drivers, the firmware list may have more generic titles which will be updated as the appropriate firmware is applied.
  8. Repeat the process of updating the driver for each item under the Keyboards section of the Device Manager. Note that even with the most recent driver pack installed, all of these entries on the device I was working on were the generic ‘HID Keyboard Device’. We don’t know which one of the keyboard devices listed is the Type Cover (the PowerShell sketch after this list can help narrow the candidates down), however when you get to the correct one you’ll see that the driver that is installed is listed as ‘Surface Type Cover Filter Device’:
    Surface Pro 3 Type Cover driver updated
  9. As soon as this driver is installed, the Type Cover should start working again. In my case no reboot was required.
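
As an aside, if you want to identify the candidate keyboard devices from the command line before diving into Device Manager, a quick PowerShell sketch (Get-PnpDevice ships with Windows 10):

# list the keyboard-class devices with their status and instance IDs,
# to help work out which generic 'HID Keyboard Device' is the Type Cover
Get-PnpDevice -Class Keyboard |
    Select-Object FriendlyName, Status, InstanceId |
    Format-Table -AutoSize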

Review of ‘Azure DevOps Server 2019 Cookbook’ – well worth getting

It always amazes me that people find time to write tech books whilst having a full time job. So given the effort I know it will have been, it is great to see an update to Tarun Arora and Utkarsh Sigihalli’s book ‘Azure DevOps Server 2019 Cookbook’.

Azure DevOps Server 2019 Cookbook - Second Edition

I do like their format of ‘recipes’ that walk through common requirements. I find it particularly interesting that for virtually every recipe there is an associated Azure DevOps Extension that enhances the experience. It speaks well of the research the authors have done, and of the richness and variety of the 3rd party extensions in the Azure DevOps Marketplace.

I think because of this format there is something in this book for everyone, whether new to Azure DevOps Server 2019 or someone who has been around the product since the days of TFS 2005.

In my opinion, it is well worth having a copy on your shelf, whether physical or virtual.