Getting the approver for release to an environment within an Azure DevOps Multi-Stage YAML pipeline

I recently had the need to get the email address of the approver of a deployment to an environment from within a multi-stage YAML pipeline. Turns out it was not as easy as I might have hoped given the available documented APIs.

Background

My YAML pipeline included a manual approval to allow deployment to a given environment. Within the stage protected by the approval, I needed the approver’s details, specifically their email address.

I managed to achieve this but had to use undocumented API calls. These were discovered by looking at Azure DevOps UI operations using development tools within my browser.

The Solution

The process was as follows:

  • Make a call to the build’s timeline to get the current stage’s GUID – this is a documented API call.
  • Make a call to the Contribution/HierarchyQuery API to get the approver details – this is the undocumented API call.

The code to do this is sketched below. It makes use of predefined variables to pass in the details of the current run and stage.

Note that I had to re-create the web client object between each API call. If I did not do this, I got a 400 Bad Request on the second API call – it took me ages to figure this out!
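Below is a minimal C# sketch of the two calls, assuming a PAT held in a hypothetical AZDO_PAT secret variable. The timeline request is the documented API; the HierarchyQuery route and the dataProviderContext property names are assumptions based on the browser traffic, so verify them against what the UI actually sends in your organisation.

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Text.Json;
using System.Threading.Tasks;

class GetStageApprover
{
    static async Task Main()
    {
        // Predefined variables, surfaced to the script as environment variables
        string collectionUri = Environment.GetEnvironmentVariable("SYSTEM_COLLECTIONURI"); // e.g. https://dev.azure.com/myorg/
        string project = Environment.GetEnvironmentVariable("SYSTEM_TEAMPROJECT");
        string buildId = Environment.GetEnvironmentVariable("BUILD_BUILDID");
        string stageName = Environment.GetEnvironmentVariable("SYSTEM_STAGENAME");
        string pat = Environment.GetEnvironmentVariable("AZDO_PAT"); // hypothetical secret variable

        // Call 1 - the documented build timeline API, to find the current stage's GUID
        string stageGuid = null;
        using (var client = CreateClient(pat))
        {
            var timelineJson = await client.GetStringAsync(
                $"{collectionUri}{project}/_apis/build/builds/{buildId}/timeline?api-version=6.0");
            using var timeline = JsonDocument.Parse(timelineJson);
            foreach (var rec in timeline.RootElement.GetProperty("records").EnumerateArray())
            {
                if (rec.GetProperty("type").GetString() == "Stage" &&
                    rec.TryGetProperty("identifier", out var id) &&
                    id.GetString() == stageName)
                {
                    stageGuid = rec.GetProperty("id").GetString();
                }
            }
        }

        // Call 2 - the undocumented Contribution/HierarchyQuery API.
        // Note the client is re-created: reusing the first one returned a 400 Bad Request.
        using (var client = CreateClient(pat))
        {
            var body = new StringContent(
                // The property names here are assumptions - inspect the UI's network
                // traffic to see the exact dataProviderContext it sends
                $"{{ \"dataProviderContext\": {{ \"properties\": {{ \"buildId\": \"{buildId}\", \"stageId\": \"{stageGuid}\" }} }} }}",
                Encoding.UTF8, "application/json");
            var response = await client.PostAsync(
                $"{collectionUri}_apis/Contribution/HierarchyQuery/project/{project}?api-version=5.0-preview.1",
                body);
            var result = await response.Content.ReadAsStringAsync();
            // The approver's identity, including the email address, is within the
            // returned data provider payload
            Console.WriteLine(result);
        }
    }

    static HttpClient CreateClient(string pat)
    {
        var client = new HttpClient();
        client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue(
            "Basic", Convert.ToBase64String(Encoding.ASCII.GetBytes($":{pat}")));
        return client;
    }
}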

Fixing my SQLite Error 5: ‘database is locked’ error in Entity Framework

I have spent too long today trying to track down an intermittent “SQLite Error 5: ‘database is locked’” error in .NET Core Entity Framework.

I have read plenty of documentation and even tried swapping from SQLite to SQL Server, but this just resulted in the error ‘There is already an open DataReader associated with this Connection which must be closed first’.

So everything pointed to it being a mistake I had made.

And it was: it turned out I had the dbContext.SaveChanges() call inside the foreach loop that was iterating the query’s results, so the update was being executed while the query’s DataReader was still open on the same connection.

It was

using (var dbContext = scope.ServiceProvider.GetRequiredService<MyDbContext>()) { // MyDbContext stands in for your real DbContext type
    var itemsToQueue = dbContext.CopyOperations.Where(o => o.RequestedStartTime < DateTime.UtcNow && o.Status == OperationStatus.Queued);
    foreach (var item in itemsToQueue) {
        item.Status = OperationStatus.StartRequested;
        item.StartTime = DateTime.UtcNow;
        dbContext.SaveChanges();
    }
}

And it should have been

using (var dbContext = scope.ServiceProvider.GetRequiredService<MyDbContext>()) {
    var itemsToQueue = dbContext.CopyOperations.Where(o => o.RequestedStartTime < DateTime.UtcNow && o.Status == OperationStatus.Queued);
    foreach (var item in itemsToQueue) {
        item.Status = OperationStatus.StartRequested;
        item.StartTime = DateTime.UtcNow;
    }
    dbContext.SaveChanges();
}

Once this change was made, my error disappeared.

What to do when moving your Azure DevOps organisation from one region to another is delayed

There are good reasons why you might wish to move an existing Azure DevOps organisation from one region to another. The most common ones are probably:

  • A new Azure DevOps region has become available since you created your organisation that is a ‘better home’ for your projects.
  • New or changing national regulations require your source code to be stored in a specific location.
  • You want your repositories as close to your workers as possible, to reduce network latency.

One of these reasons meant I recently had to move an Azure DevOps organisation, so I followed the documented process. This requires you to:

  1. Whilst logged in as the Azure DevOps organisation owner, open the Azure DevOps Virtual Support Agent
  2. Select the quick action ‘Change Organization Region’
  3. Follow the wizard to pick the new region and the date for the move.

You are warned that there could be a short loss of service during the move. Much of the move is done as a background process. It is only the final switch over that can interrupt service, hence this interruption being short.

I followed this process, but after the planned move date I found my organisation had not moved. In the Virtual Support Agent, I found the message:

Please note that region move requests are currently delayed due to ongoing deployments. We may not be able to perform the change at your requested time and may ask you to reschedule. We apologize for the potential delay and appreciate your patience!

I received no other emails (I suspect overly aggressive spam filters were the cause of that), so I was unclear what to do next. Should I:

  1. Just wait, i.e. do not reschedule anything, even though the target date is now in the past
  2. Reschedule the existing move request to a date in the future using the virtual assistant wizard
  3. Cancel the old request and start the process again from scratch

After asking the question in the Visual Studio Developer Community forums, I was told the correct action is to cancel the old request and request a new move date. It seems that once your requested date has passed, the move will not take place no matter how long you wait.

Hence, I created a new request, which all went through exactly as planned.

Running UWP Unit Tests as part of an Azure DevOps Pipeline

I was reminded recently of the hoops you have to jump through to run UWP unit tests within an Azure DevOps automated build.

The key steps you need to remember are as follows.

Desktop Interaction

The build agent must not be running as a service; it must be able to interact with the desktop.

If you did not set this mode during configuration, this post from Donovan Brown shows how to swap the agent over without a complete reconfiguration.

Test Assemblies

The UWP unit test projects are not built as a DLL, but as an EXE.

I stupidly just made my VSTest task look for the generated EXE and run the tests it contained. This does not work, generating the somewhat confusing error:

Test run will use DLL(s) built for framework .NETFramework,Version=v4.0 and platform X86. Following DLL(s) do not match framework/platform settings.
BlackMarble.Spectrum.FridgeManagement.Client.OneWire.UnitTests.exe is built for Framework .NETCore,Version=v5.0 and Platform X86.

What you should search for as the entry point for the tests is the .appxrecipe file. Once I used this, my tests ran.

So my pipeline YML to run all the tests in a built solution was:

- task: VisualStudioTestPlatformInstaller@1
  inputs:
    packageFeedSelector: 'nugetOrg'
    versionSelector: 'latestPreRelease'

- task: VSTest@2
  displayName: 'VSTest - testAssemblies'
  inputs:
    platform: 'x86'
    configuration: '$(BuildConfiguration)'
    testSelector: 'testAssemblies'
    testAssemblyVer2: | # Required when testSelector == TestAssemblies
      **\*unittests.dll
      **\*unittests.build.appxrecipe
      !**\*TestAdapter.dll
      !**\obj\**
    searchFolder: '$(Build.SourcesDirectory)/src'
    resultsFolder: '$(System.DefaultWorkingDirectory)\TestResults'
    runInParallel: false
    codeCoverageEnabled: true
    rerunFailedTests: false
    runTestsInIsolation: true
    runOnlyImpactedTests: false

- task: PublishTestResults@2
  displayName: 'Publish Test Results **/TEST-*.xml'
  condition: always()

Out of Memory running SonarQube Analysis on a large project

Whilst adding SonarQube analysis to a large project I started getting memory errors during the analysis phase. The solution was to increase the memory available to the SonarQube scanner on my build agent, not the memory on the SonarQube server as I had first thought. This is done with an environment variable, as per the documentation, but how best to do this within our Azure DevOps build systems?

The easiest way to set the environment variable `SONAR_SCANNER_OPTS` on every build agent is to set it via an Azure Pipelines variable. This works because the build agent makes all pipeline variables available as environment variables at runtime.

So, as I was using a YML pipeline, I set a variable within the build job:

job: build
timeoutInMinutes: 240
variables:
- name: BuildConfiguration
  value: 'Release'
- name: SONAR_SCANNER_OPTS
  value: -Xmx4096m
steps:

I found I had to quadruple the memory allocated to the scanner. Once this was done, my analysis completed.

Getting confused over Azure DevOps Pipeline variable evaluation

Introduction

The use of variables is important in Azure DevOps pipelines, especially when using YML templates. They allow a single pipeline to be used for multiple branches/configurations etc.

The most common forms of variables you see are the predefined built-in variables, e.g. $(Build.BuildNumber), and your own custom ones, e.g. $(var). Usually the values of these variables are set before/as the build is run, as an input condition.

But this is not the only way variables can be used. As noted in the documentation there are different ways to access a variable…

In a pipeline, template expression variables ${{ variables.var }} get processed at compile time, before runtime starts. Macro syntax variables $(var) get processed during runtime before a task runs. Runtime expressions $[variables.var] also get processed during runtime but were designed for use with conditions and expressions.

Azure DevOps Documentation
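As a minimal illustration of the three forms (the variable names here are arbitrary):

variables:
  var: 'value'
  computed: $[variables.var] # runtime expression; must make up the entire right-hand side

steps:
- script: echo ${{ variables.var }} # template expression, expanded at compile time
- script: echo $(var)               # macro syntax, replaced just before the task runs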

99% of the time I have been fine using just the $(var) syntax, but I was recently working on a case where this would not work for me.

The Issue

I had a pipeline that made heavy use of YML templates and conditional task insertion to include sets of tasks based upon manually entered and pre-defined variables.

The problem was that one of the tasks, used in a template, set a boolean output variable, outvar, by calling

echo '##vso[task.setvariable variable=outvar;isOutput=true]true'

This task created an output variable that could be accessed by other tasks as the variable $(mytask.outvar), but as it was set at runtime it was not available at the time of the YML compilation.
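For reference, the mytask prefix comes from the name property of the step that sets the variable; a minimal sketch of such a step:

- script: echo '##vso[task.setvariable variable=outvar;isOutput=true]true'
  name: mytask # this name is what makes the variable addressable as $(mytask.outvar)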

This caused me a problem, as it meant the variable could not be used in the template’s conditional task inclusion blocks, because it was not present at compile time when this code is evaluated e.g.

- ${{ if eq(mytask.outvar, 'true') }}:
  # the task to run if the condition is met
  - task: Some.Task@1 
    ....

I tried referencing the variable using every form of the $-plus-brackets syntax I could think of, but it did not help.

The lesson here is that you cannot make a runtime value a compile time value by wishing it to change.

The only solution I could find was to make use of the runtime variable in a place where it can be resolved. If you wish to enable or disable a task based on the variable value, the only option is to use the condition parameter:

  # the task to run if the condition is met
  - task: Some.Task@1 
    condition: and(succeeded(), eq(variables['mytask.outvar'], 'true'))
    ....

The only downside of this way of working, as opposed to conditional insertion, is visibility:

  • With conditional insertion, tasks that are not required are never shown in the pipeline, as they are not compiled into it.
  • With the condition property, an excluded task still appears in the log, but it can be seen that it has not been run.

So I got there in the end; it was just not as neat as I had hoped, but I do have a clearer understanding of compile-time and runtime variables in Azure DevOps YML.

How to export Azure DevOps Classic Builds and Release to YAML

This is another one of those posts so I can remember where some useful information is….

If you are migrating your Azure DevOps Classic Builds and Releases to Multi-Stage YAML, then an important step is to export all the existing builds, task groups and releases as YAML files.

You can do this by hand within the Pipeline UI, with a lot of cutting and pasting, but much easier is to use the excellent Yamlizr – Azure DevOps Classic-to-YAML Pipelines CLI from Alex Vincent. A single CLI command exports everything within a Team Project into a neat folder structure of template-based YAML.

I cannot recommend the tool enough.

Getting my ThinkPad Active Pen working with my Lenovo X1 Carbon Extreme

I have had a ThinkPad Active Pen (model SD60G957200) ever since I got my Lenovo X1 Carbon Extreme.

The pen, when it works, has worked well. However, the problem has been that whether the pen and PC detected each other seemed very hit and miss.

Today I found the root cause. It was not drivers or dodgy Bluetooth as I had thought, but a weak spring inside the pen. It was not so weak that the battery rattled, but weak enough that the electrical circuit was not being closed reliably on the battery.

The fix was to replace the weak spring with a new one out of an old ballpoint pen. Once this was done the pen became instantly reliable.

Wish I had spotted that sooner.

Updated 11 Nov 2020: I may have spoken too soon, it is back to its old behaviour today 🙁

However, I think it could just be the AAAA battery. It seems it is not a good idea to leave a battery in when the pen is not in use, given the pen has no power switch.

Using GitVersion when your default branch is not called ‘master’

The Black Lives Matter movement has engendered many conversations, hopefully starting changes for the good. Often these changes involve the use of language. One such change has been the move to stop using the name master and switching to the name main for the trunk/default branch in Git repos. This change is moving apace, driven by tools such as GitHub and Azure DevOps.

I have recently had need, for the first time since swapping my default branch name in new repos to main, to use Semantic Versioning and the GitVersion tool.

‘Out of the box’ I hit a problem. The current shipping version of GitVersion (5.3.2) by default makes the assumption that the trunk branch is called master, and hence throws an exception if this branch cannot be found.

Looking at the project’s repo you can find PRs, tagged for a future release, that address this constraint. However, you don’t have to wait for a new version to ship to use this excellent tool in repos with other branch naming conventions.

The solution is to create an override file, GitVersion.yml, in the root of your repo with the following content to alter the regex used to find branches. Note that the content below is a minimum; you can override any other default configuration values in this file as needed.

branches:
  master:
    regex: ^master$|^main$

With this override file the default branch can be either master or main.

You can of course use a different name, or limit the regex to a single name, as you need.
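For example, to accept only main as the trunk branch, the regex could be tightened to:

branches:
  master:
    regex: ^main$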

How do you handle PRs for Azure DevOps YAML Pipelines if the YAML templates are in a different repo?

Azure DevOps YAML-based pipelines allow the pipeline definitions to be treated like any other code. So you make changes in a branch and PR them into main/trunk when they are approved.

This works well if all the YAML files are in the same repo, but not so well if you are using YAML templates and the templated YAML is stored in a different repo. This is because an Azure DevOps PR is limited to a single repo. So testing a change to a YAML template in a different repo needs a bit of thought.

Say, for example, you have a template called core.yml in a repo called YAMLTemplates and you make a change to it and start a PR. Unless you have a test YAML pipeline in that repo (not a stupid idea, but not always possible, depending on the complexity of your build process), there is no way to test the change inside that repo.

The answer is to create a temporary branch in a repo that consumes the shared YAML template. In this temporary branch, edit the repository resource that references the shared YAML repo so that it points to the updated branch containing the PR:

resources:
  repositories:
  - repository: YAMLTemplates
    type: git
    name: 'Git Project/YAMLTemplates'
    # defaults to ref: 'refs/heads/master'
    ref: 'refs/heads/newbranch'

You don’t need to make any change to the line where the template is used:

extends:
  template: core.yml@YAMLTemplates
  parameters:
    customer: ${{parameters.Customer}}
    useSonarQube: ${{parameters.useSonarQube}}

You can then use this updated pipeline to validate your PR. Once you are happy it works, you can:

  1. Complete the PR in the YAML Templates repo
  2. Delete the temporary branch in your consuming repo.