Fix For: ‘The pipeline is not valid error: Unable to resolve latest version’ on an Azure DevOps YAML pipeline

The Issue

I have an Azure DevOps multi-stage YAML pipeline that started giving the error `The pipeline is not valid error: Unable to resolve latest version for pipeline templates: this could be due to inaccessible pipeline or no version is available` and failing instantly.


The Solution

This is not the most helpful message, but after some digging I found the problem.

The pipeline used another pipeline as a resource:

resources:
  pipelines:
  - pipeline: templates
    source: QueuesAndFunctionsDemo-CI
    branch: master

This referenced build had failed, so there was no successful build resource to load, hence the error.

Once the problem with the referenced build was fixed, the error message went away and I could trigger my build.
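
As an aside, if the upstream build cannot be fixed quickly, it should also be possible to unblock the consuming pipeline by pinning the resource to a known good run via the version property – a minimal sketch, with an illustrative run name:

resources:
  pipelines:
  - pipeline: templates
    source: QueuesAndFunctionsDemo-CI
    branch: master
    version: '20200401.1' # illustrative run name of a known good build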

Exporting Release Notes and WIKIs as PDFs using a new Azure DevOps Extension that wraps AzureDevOps.WikiPDFExport

A common question I get when people are using my Release Notes task for Azure DevOps is whether it is possible to get the release notes as a PDF.

In the past, the answer was that I did not know of any easy way. However, I have recently come across a command line tool by Max Melcher called AzureDevOps.WikiPDFExport that allows you to export a whole WIKI (or a single file) as a PDF. Its basic usage is

  • Clone a WIKI Repo
  • Run the command line tool passing in a path to the root of the cloned repo
  • The .order file is read
  • A PDF is generated
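
For reference, the manual process can be scripted as a pipeline step something like this – a minimal sketch, assuming the azuredevops-export-wiki.exe tool has already been downloaded to the working directory (run with no arguments from inside the cloned WIKI folder, it exports the whole WIKI); the repo URL is a placeholder:

steps:
- script: |
    git clone https://dev.azure.com/myorg/myproject/_git/myproject.wiki wiki
    cd wiki
    ..\azuredevops-export-wiki.exe
  displayName: 'Manually clone a WIKI repo and export it to PDF'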

This is a simple process, but it would be even better if it could be automated as part of a build pipeline.

After a bit of thought, I realised I had much of the code I needed to automate the process in my WikiUpdater extension, as those tasks are based around cloning repos.

So I am pleased to say I have just released a new Azure DevOps extension, WikiPDFExport, that wraps Max’s command line tool. It does the following

  • Downloads the latest release of the WikiPDFExport tool from GitHub to the build agent (the exe is too big to include in the VSIX package)
  • Optionally clones a Git-based WIKI repo. As with my WikiUpdater tasks, you can pass credentials for Azure DevOps or GitHub
  • Generates a PDF of a single file or the whole of a WIKI folder structure (based on the .order file) that was either cloned or already present on the agent

A sample of the YAML usage of the task is shown below. For full documentation see the extension’s wiki pages for general usage and troubleshooting and the full YAML specification.

- task: richardfennellBM.BM-VSTS-WikiPDFExport-Tasks.WikiPDFExportTask.WikiPdfExportTask@1 
  displayName: 'Export Single File generated by the release notes task'
  inputs:
    cloneRepo: false
    localpath: '$(System.DefaultWorkingDirectory)'
    singleFile: 'inline.md'
    outputFile: '$(Build.ArtifactStagingDirectory)\PDF\singleFile.pdf'
- task: richardfennellBM.BM-VSTS-WikiPDFExport-Tasks.WikiPDFExportTask.WikiPdfExportTask@1
  displayName: 'Export a public GitHub WIKI'
  inputs:
    cloneRepo: true
    repo: 'https://github.com/rfennell/AzurePipelines.wiki.git'
    useAgentToken: false
    localpath: '$(System.DefaultWorkingDirectory)\GitHubRepo'
    outputFile: '$(Build.ArtifactStagingDirectory)\PDF\publicGitHub.pdf'
- task: richardfennellBM.BM-VSTS-WikiPDFExport-Tasks.WikiPDFExportTask.WikiPdfExportTask@1
  displayName: 'Export a private Azure DevOps WIKI'
  inputs:
    cloneRepo: true
    repo: 'https://dev.azure.com/richardfennell/GitHub/_git/GitHub.wiki'
    useAgentToken: true
    localpath: '$(System.DefaultWorkingDirectory)\AzRepo'
    outputFile: '$(Build.ArtifactStagingDirectory)\PDF\Azrepo.pdf'

So hopefully this new extension will give teams another way to present their release notes, whether it be an export of a whole WIKI or just a single page.

Using the Post Build Cleanup Task from the Marketplace in YAML based Azure DevOps Pipelines

Disks filling up on our private Azure DevOps agents is a constant battle. We have maintenance jobs set up on the agent pools to clean out old build working folders nightly, but these don’t run often enough; we need a clean-out more than once a day due to the number and size of our builds.

To address this in UI-based builds, we successfully used the Post Build Cleanup extension. However, since we moved many of our builds to YAML, we found it did not work so well. It turned out the problem was due to the way the source code was got.

The Post Build Cleanup task is intelligent; it does not just delete folders on demand. It checks what the Get Sources ‘Clean’ setting was when the repo was cloned and bases what it deletes on that value e.g. nothing, source files, or everything. This behaviour is not that obvious.

In a UI-based build it is easy to check this setting, as you are always in the UI when editing the build. However, in a YAML pipeline it is easy to forget, as it is one of the few values that cannot be set in the YAML itself.

To make the Post Build Cleanup task actually delete folders in a YAML pipeline you need to

  1. Edit the pipeline
  2. Click the ellipsis menu (top right)
  3. Pick Triggers
  4. Pick YAML and select the ‘Get Source’ block
  5. Make sure the ‘Clean’ setting is set to ‘true’ and the right set of items to delete is selected – if this is not done the Post Build Cleanup task does nothing
  6. You can then add the Post Build Cleanup task at the end of the steps:
steps:
  - script: echo This is where you do stuff
  - task: mspremier.PostBuildCleanup.PostBuildCleanup-task.PostBuildCleanup@3
    displayName: 'Clean Agent Directories'
    condition: always()

Once this is done, the task behaves as expected.

Zwift and the joys of home networking

During the COVID-19 lockdown I have been doing plenty of Zwift’ing. However, I started having problems getting the Zwift Companion App to work reliably, when it used to work fine.

Basically, Zwift itself was fine, though very slow to save when exiting, but the Companion App could not seem to detect that I was actively Zwift’ing, even though its other functions were OK.

After much fiddling I found the issue was the network connection from my PC up to Zwift, and nothing to do with the phone app. In case it is of any use to others, here are the steps I took to diagnose it.

  • Ran a WiFi network analysis app and realised that
    • My local wireless environment is now very congested, I assume as more people are working from home.
    • Both the 2.4GHz and 5GHz networks were on the same channels as other strong signals.
    • They were also using the same SSID, which is meant to provide seamless swap-over between 2.4GHz and 5GHz. In reality, this meant there were connection problems as a connection flipped between frequencies.

This explained other problems I had seen

  • The Microsoft Direct Access VPN I use to connect to the office failing intermittently. Obviously, any problems I have connecting to the office to do work is far less important than Zwift connection issues.
  • My Samsung phone would drop calls for no reason. I now think this was when it had decided to use WiFi calling and got confused over networks.
    Note: I had fixed this by switching off WiFi calling.

To address the problems I changed the SSIDs so that my 2.4GHz and 5GHz networks had different names, so that I knew which one I was using. I also moved the channels to ones not used by my neighbours.

Test: Put the phone and the PC on the 2.4GHz network
Result: No improvement; the app did not work and the PC was slow to save

Test: Put the phone and the PC on 5GHz
Result: Small improvement; the app still did not work, but at least tried to show the in-game view before it dropped out. The PC was still slow to save

Test: Put the phone on either WiFi network but the PC on Ethernet over Power using TP-Link adaptors
Result: This fixed it

So it seems the problem was upload speed from my PC all along. Strange, as I would have expected the 5GHz network to be fine even if the 2.4GHz one was not; the 5GHz WiFi performs OK on a speed test.

Anyway, it is working now, but maybe it is time to consider a proper mesh network?

Timeout Errors ‘Extracting Schema’ when running SQLPackage for a Migration to Azure DevOps Services

The Problem

Whilst doing a migration from an on-premises TFS to Azure DevOps Services for a client, I had a strange issue with SQLPackage.exe.

I had previously completed the dry run of the migration without any issues and started the live migration with a fully defined process and timings for each stage.

When I came to export the detached Team Project Collection DB, I ran the same command as I had for the dry run:

& "C:\Program Files\Microsoft SQL Server\150\DAC\bin\SqlPackage.exe" /sourceconnectionstring:”Data Source=localhost\SQLExpress;Initial Catalog=Tfs_DefaultCollection;Integrated Security=True” /targetFile:C:\temp\Tfs_DefaultCollection.dacpac /action:extract /p:ExtractAllTableData=true /p:IgnoreUserLoginMappings=true /p:IgnorePermissions=true /p:Storage=Memory 

I had expected this to take around 30 minutes. However, it failed after 10 minutes with the error ‘Timeout, cannot reconnect to the database’ when trying to export the schema.

This was strange as nothing had changed on the system since the dry run. I tried all of the following, with no effect:

  • Running the command again – you can hope!
  • Restarting SQL Server and running the command again
  • Trying the export from SQL Server Management Studio as opposed to the command line – this just seemed to hang at the same point

The Solution

What resolved the problem was a complete reboot of the virtual machine. I assume the issue was some locked resource, but I have no idea why.


Updated 29th July 2020

I had the same problem with another client upgrade. This time a reboot did not fix it.

The solution at this site was to upgrade SQLPackage from the 32-bit 140 version to the 64-bit 150 version. Once this was done the command ran without a problem.

Fix for ‘System.BadImageFormatException’ when running x64-based tests inside an Azure DevOps Release

This is one of those blog posts I write to remind my future self how I fixed a problem.

The Problem

I have a release that installs VSTest and runs some integration tests that target .NET 4.6 x64. All these tests worked fine in Visual Studio. However, I got the following errors for all tests when they were run in a release

2020-04-23T09:30:38.7544708Z vstest.console.exe "C:\agent\_work\r1\a\PaymentServices\drop\testartifacts\PaymentService.IntegrationTests.dll"
2020-04-23T09:30:38.7545688Z /Settings:"C:\agent\_work\_temp\uxykzf03ik2.tmp.runsettings"
2020-04-23T09:30:38.7545808Z /Logger:"trx"
2020-04-23T09:30:38.7545937Z /TestAdapterPath:"C:\agent\_work\r1\a\PaymentServices\drop\testartifacts"
2020-04-23T09:30:39.2634578Z Starting test execution, please wait...
2020-04-23T09:30:39.4783658Z A total of 1 test files matched the specified pattern.
2020-04-23T09:30:40.8660112Z   X Can_Get_MIDs [521ms]
2020-04-23T09:30:40.8684249Z   Error Message:
2020-04-23T09:30:40.8684441Z    Test method PaymentServices.IntegrationTests.ControllerMIDTests.Can_Get_MIDs threw exception:
2020-04-23T09:30:40.8684574Z System.BadImageFormatException: Could not load file or assembly 'PaymentServices, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null' or one of its dependencies. An attempt was made to load a program with an incorrect format.
2020-04-23T09:30:40.8684766Z   Stack Trace:
2020-04-23T09:30:40.8684881Z       at PaymentServices.IntegrationTests.ControllerMIDTests.Can_Get_MIDs()
2020-04-23T09:30:40.9038788Z Results File: C:\agent\_work\_temp\TestResults\svc-devops_SVRHQAPP027_2020-04-23_10_30_40.trx
2020-04-23T09:30:40.9080344Z Total tests: 22
2020-04-23T09:30:40.9082348Z      Failed: 22
2020-04-23T09:30:40.9134858Z ##[error]Test Run Failed.

Solution

I needed to tell vstest.console.exe to run x64 as opposed to its default of x86 (32-bit). This can be done with the command line override /Platform:x64.
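
With the VSTest task this override can be supplied via its ‘Other console options’ input. In YAML that would look something like this – a minimal sketch, with an illustrative assembly pattern:

- task: VSTest@2
  displayName: 'Run x64 integration tests'
  inputs:
    testAssemblyVer2: |
      **\*.integrationtests.dll
      !**\obj\**
    otherConsoleOptions: '/Platform:x64'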


Swapping my Azure DevOps Pipeline Extensions release process to use Multistage YAML pipelines

In the past I have documented the build and release process I use for my Azure DevOps Pipeline Extensions and also detailed how I have started to move the build phases to YAML.

Well, now I consider that multistage YAML pipelines are mature enough to allow me to do my whole release pipeline in YAML; hence this post.


My pipeline has a number of stages; you can find a sample pipeline here. Note that I have made every effort to extract variables into variable groups to aid reuse of the pipeline definition, and I have added documentation as to where variables are stored and what they are used for.

The stages are as follows

Build

The build phase does the following

  • Updates all the TASK.JSON files so that the help text has the correct version number
  • Calls a YAML template (build-Node-task) that performs all the tasks to transpile a TypeScript-based task – if my extension contained multiple tasks this template would be called a number of times
    • Get NPM packages
    • Run Snyk to check for vulnerabilities – if any vulnerabilities are found the build fails
    • Lint and transpile the TypeScript – if any issues are found the build fails
    • Run any unit tests and publish the results – if any test fails the build fails
    • Package up the task (remove dev dependencies)
  • Downloads the TFX client
  • Packages up the extension VSIX package and publishes it as a pipeline artifact
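
To give a feel for the shape of this stage, the template is consumed something like this – a minimal sketch, where the template path and parameter are hypothetical:

stages:
- stage: Build
  jobs:
  - job: Build
    pool:
      vmImage: 'ubuntu-latest'
    steps:
    - template: YAMLTemplates/build-Node-task.yml # hypothetical path
      parameters:
        taskRoot: 'Extensions/WikiPDFExport' # hypothetical parameter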

Private

The private phase does the following

  • Using another YAML template (publish-extension), publishes the extension to the Azure DevOps Marketplace, with flags so it is private and only accessible to my account for testing
    • Download the TFX client
    • Publish the extension to the Marketplace

This phase is done as a deployment job and is linked to an environment. However, no special approval requirements are set on this environment, because I am happy for the release to the private instance to happen automatically, assuming the build stage completed without error.
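
The shape of such a stage is something like the following – a minimal sketch, where the environment name, template path, and parameter are illustrative:

- stage: Private
  dependsOn: Build
  jobs:
  - deployment: Publish_Private
    environment: 'Marketplace-Private' # illustrative environment name
    strategy:
      runOnce:
        deploy:
          steps:
          - template: YAMLTemplates/publish-extension.yml # hypothetical path
            parameters:
              extensionVisibility: 'private' # hypothetical parameter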

Test

This is where the pipeline gets interesting. The test phase does the following

  • Runs any integration tests. These could be anything, dependent on the extension being deployed. Unfortunately there is no option at present in multistage pipelines for a manual task to say ‘do the manual tests’, but you could simulate something similar by sending an email or the like.

The clever bit here is that I don’t want this stage to run until the new private version of the extension has been published and is available; there can be a delay between TFX saying the extension is published and it being downloadable by an agent. This can cause a problem in that you think you are testing the new version of the extension when you are actually running against the previous one. To get around this problem I have implemented a check on the environment that this stage’s deployment job is linked to. This check runs an Azure Function to check the version of the extension in the Marketplace. It is exactly the same Azure Function I already use in my UI-based pipelines to perform the same job.

The only issue here is that this Azure Function is used as an exit gate in my UI-based pipelines, to not allow the pipeline to exit the private stage until the extension is published. I cannot do this in a multistage YAML pipeline, as environment checks are only done on entry to the environment. This means I have had to use an extra Test stage to associate the entry check with. This was set up as follows

  • Create a new environment
  • Click the ellipsis (…) and pick ‘Approvals and checks’
  • Add a new Azure Function check
  • Provide the details, documented in my previous post, to link to your Azure Function. Note that you can, in the ‘Control options’ section of the configuration, link to a variable group. This is a good place to store all the values you need to provide
    • The URL of the Azure Function
    • The key to use the function
    • The function header
    • The body – this one is interesting (see the sketch after this list). You need to provide the build number and the GUID of a task in the extension for my Azure Function. It would be really good if both of these could be picked up from the pipeline trying to use the environment, as this would allow a single ‘test’ environment to be created for use by all my extensions, in the same way there is only a single ‘private’ and ‘public’ environment. However, there is a problem: the build number is picked up OK, but as far as I can see I cannot access custom pipeline variables, so I cannot get the task GUID I need dynamically. I assume this is because the environment entry check is run outside of the pipeline. The only solution I can find is to place the task GUID as a hard-coded value in the check declaration (or, I suppose, in the variable group). The downside of this is that I have to have an environment dedicated to each extension, each with a different task GUID. Not perfect, but not too much of a problem
    • In the ‘Advanced’ section, set the check logic
    • In ‘Control options’, link to the variable group containing any variables used
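
For illustration, the body might look something like this – the property names are hypothetical, and the GUID is a placeholder for the hard-coded task GUID:

{
  "buildNumber": "$(Build.BuildNumber)",
  "taskGuid": "00000000-0000-0000-0000-000000000000"
}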

Documentation

The documentation stage again uses a YAML template (generate-wiki-docs), which generates the documentation for the extension and pushes it to a WIKI.

Public

The public stage is also a deployment job linked to an environment. This environment has an approval set, so I have to approve any release of the public version of the extension.

This stage does the same as the private stage, except that the extension is published with the public flag set.

Summary

It took a bit of trial and error to get this going, but I think I have a good solution now. The fact that the bulk of the work is done using shared templates means I should get good reuse of the work I have done. I am sure I will be able to improve the templates as time goes on, but it is a good start.

My Azure DevOps Pipeline is not triggering on a GitHub Pull request – fixed

I have recently hit a problem where some of my Azure DevOps YAML pipelines, which I use to build my Azure DevOps Pipeline Extensions, were not triggering when a new PR was created on GitHub.

I did not get to the bottom of why this was happening, but I found a fix:

  • Check, and make a note of, any UI-declared variables in the Azure DevOps YAML pipeline that is not triggering
  • Delete the pipeline
  • Re-add the pipeline, linking to the YAML file hosted on GitHub. You might be asked to re-authorise the link between Azure DevOps Pipelines and GitHub.
  • Re-enter any variables that are declared via the Pipelines UI and save the changes

Your pipeline should start to be triggered again.

Experiences setting up Azure Active Directory single sign-on (SSO) integration with GitHub Enterprise

Background

GitHub is a great system for individuals and OSS communities, for both public and private projects. However, corporate customers commonly want more control over their system than the standard GitHub offering provides. It is for this reason that GitHub offers GitHub Enterprise.

For most corporates, the essential feature that GitHub Enterprise offers is Single Sign-On (SSO) i.e. allowing users to log in to GitHub using their corporate directory accounts.

I wanted to see how easy this was to setup when you are using Azure Active Directory (AAD).

Luckily there is a step-by-step tutorial from Microsoft on how to set this up. Though detailed, this tutorial has a strange structure in that it shows the default values, not the required ones. Hence, the tutorial requires close reading; don’t just look at the pictures!

Even with close reading, I still hit a problem, all of my own making, as I went through this tutorial.

The Issue – a stray / in a URL

I entered all the AAD URLs and certs as instructed (or so I thought) by the tutorial into the Security page of GitHub Enterprise.

When I pressed the ‘Validate’ button in GitHub, to test the SSO settings, I got an error

‘The client has not listed any permissions for ‘AAD Graph’ in the requested permissions in the client’s application registration’

This sent me down a rabbit hole looking at user permissions, which wasted a lot of time.

However, it turned out the issue was that I had a // in a URL where there should have been a /. This was because I had made a cut-and-paste error when editing the tutorial’s sample URL to add my organisation details.

Once I fixed this typo the validation worked, I was able to complete the setup, and I could then invite my AAD users to my GitHub Enterprise organisation.

Summary

So, in summary: if you follow the tutorial, setting up SSO from AAD to GitHub Enterprise is easy enough to do; just be careful over the detail.

Where did all my test results go?

Problem

I recently tripped myself up whilst adding SonarQube analysis to a rather complex Azure DevOps build.

The build has two VsTest steps, both of which were using the same folder for their test result files. When the first VsTest task ran, it created the expected .TRX and .COVERAGE files and published its results to Azure DevOps; but when the second VsTest task ran, it overwrote this folder, deleting the files already present, before it generated and published its results.

This meant that the build itself had all the test results published, but when SonarQube looked for the files for analysis only the second set of tests was present, so its analysis was incorrect.

Solution

The solution was easy: use different folders for each set of test results.

This gave me a build, the key items of which are shown below, where one VsTest step does not overwrite the previous results before they can be processed by any 3rd-party tasks such as SonarQube.

steps:
- task: SonarSource.sonarqube.15B84CA1-B62F-4A2A-A403-89B77A063157.SonarQubePrepare@4
  displayName: 'Prepare analysis on SonarQube'
  inputs:
    SonarQube: Sonarqube
    projectKey: 'Services'
    projectName: 'Services'
    projectVersion: '$(major).$(minor)'
    extraProperties: |
      # Additional properties that will be passed to the scanner
      sonar.cs.vscoveragexml.reportsPaths=$(System.DefaultWorkingDirectory)/**/*.coveragexml
      sonar.cs.vstest.reportsPaths=$(System.DefaultWorkingDirectory)/**/*.trx


… other build steps


- task: VSTest@2
  displayName: 'VsTest – Internal Services'
  inputs:
    testAssemblyVer2: |
      **\*.unittests.dll
      !**\obj\**
    searchFolder: '$(System.DefaultWorkingDirectory)/src/Services'
    resultsFolder: '$(System.DefaultWorkingDirectory)\TestResultsServices'
    overrideTestrunParameters: '-DeploymentEnabled false'
    codeCoverageEnabled: true
    testRunTitle: 'Services Unit Tests'
    diagnosticsEnabled: True
  continueOnError: true

- task: VSTest@2
  displayName: 'VsTest - External'
  inputs:
    testAssemblyVer2: |
      **\*.unittests.dll
      !**\obj\**
    searchFolder: '$(System.DefaultWorkingDirectory)/src/ExternalServices'
    resultsFolder: '$(System.DefaultWorkingDirectory)\TestResultsExternalServices'
    vsTestVersion: 15.0
    codeCoverageEnabled: true
    testRunTitle: 'External Services Unit Tests'
    diagnosticsEnabled: True
  continueOnError: true

- task: BlackMarble.CodeCoverage-Format-Convertor-Private.CodeCoverageFormatConvertor.CodeCoverage-Format-Convertor@1
  displayName: 'CodeCoverage Format Convertor'
  inputs:
    ProjectDirectory: '$(System.DefaultWorkingDirectory)'

- task: SonarSource.sonarqube.6D01813A-9589-4B15-8491-8164AEB38055.SonarQubeAnalyze@4
  displayName: 'Run Code Analysis'

- task: SonarSource.sonarqube.291ed61f-1ee4-45d3-b1b0-bf822d9095ef.SonarQubePublish@4
  displayName: 'Publish Quality Gate Result'