The importance of blogging – or how to do your future self a favour

Yesterday, yet again, I was thankful for my past self taking time to blog about a technical solution I had found.

I had an error when trying to digitally sign a package. On searching on the error code I came across my own blog post with the solution. This was, as usual, one I had no recollection of writing.

I find this happens all the time. It is a little disturbing when you search for an issue and the only reference is a post you wrote and have forgotten; you are the de facto expert, and nobody else seems to know anything about the subject. Still, it is better than having no solution at all.

Too often I ask people if they have documented the hints, tips and solutions they find, and the response I get is 'I will remember'. Trust me, you won't. Write something down where it is discoverable by your team and your future self. This can be in any format that works for you: an email, OneNote, a wiki or, the one I find most useful, a blog. Just make sure it is easily searchable.

Your future self will thank you.

Using Azure DevOps Stage Dependency Variables with Conditional Stage and Job Execution

I have been doing some work with Azure DevOps multi-stage YAML pipelines using stage dependency variables and conditions. They can get confusing quickly: you need one syntax in one place and another elsewhere.

So, here are a few things I have learnt…

What are stage dependency variables?

Stage Dependencies are the way you define which stage follows another in a multi-stage YAML pipeline, as opposed to just relying on the order the stages appear in the YAML file (the default). Hence, they are critical to creating complex pipelines.

Stage Dependency variables are the way you can pass variables from one stage to another. Special handling is required because you can't just use ordinary output variables (which are, in effect, environment variables on the agent) as you might within a job; there is no guarantee the stages and jobs are running on the same agent.

With stage dependency variables, it is not the way you create output variables that differs from the standard manner; the difference is in how you retrieve them.

In my sample, I used a Bash script to set the output variable based on a parameter passed into the pipeline, but you can create output variables using any script or task.

  - stage: SetupStage
    displayName: 'Setup Stage'
    jobs:
      - job: SetupJob
        displayName: 'Setup Job'
        steps:
          - checkout: none
          - bash:  |
              set -e # need to avoid trailing " being added to the variable https://github.com/microsoft/azure-pipelines-tasks/issues/10331
              echo "##vso[task.setvariable variable=MyVar;isOutput=true]${{parameters.value}}"
            name: SetupStep
            displayName: 'Setup Step'

Possible ways to access a stage dependency variable

There are two basic ways to access stage dependency variables, both using array objects:

stageDependencies.STAGENAME.JOBNAME.outputs['STEPNAME.VARNAME']
dependencies.STAGENAME.outputs['JOBNAME.STEPNAME.VARNAME']

Which one you use, in which place, and whether you need a local alias is where the complexity lies.

How to access a stage dependency variable in a script

To access a stage dependency variable in a script, or a task, there are two key requirements:

  • The stage containing the consuming job, and hence the script or task, must be declared as dependent on the stage that created the output variable
  • You have to declare a local alias for the value in the stageDependencies array within the consuming stage. This local alias will be used as the local name by scripts and tasks.

Once this is configured, you can access the variable like any other local YAML variable:

  - stage: Show_With_Dependancy
    displayName: 'Show Stage With dependancy'
    dependsOn:
      - SetupStage
    variables:
      localMyVarViaStageDependancies: $[stageDependencies.SetupStage.SetupJob.outputs['SetupStep.MyVar']]
    jobs:
      - job: Job
        displayName: 'Show Job With dependancy'
        steps:
          - bash: |
              echo "localMyVarViaStageDependancies - $(localMyVarViaStageDependancies)"

Tip: If you are having a problem with the value of a stage dependency variable not being set, look in the pipeline execution log at the job level and check the 'Job preparation parameters' section to see what is being evaluated. This will show if you are using the wrong array object or have a typo, as any incorrect declaration evaluates to null.

How to use a stage dependency as a stage condition

You can use stage dependency variables as controlling conditions for running a stage. In this use case you use the dependencies array, not the stageDependencies array used when aliasing variables.

  - stage: Show_With_Dependancy_Condition
    condition: and(succeeded(), eq(dependencies.SetupStage.outputs['SetupJob.SetupStep.MyVar'], 'True'))
    displayName: 'Show Stage With dependancy Condition'

From my experiments with this use case, you don't seem to need a dependsOn entry declaring the stage that exposed the output variable for this to work. This is very useful for complex pipelines where you want to skip a later stage based on a much earlier stage on which there is no direct dependency.

A side effect of using a stage condition is that subsequent stages have to have their execution conditions edited, as you can no longer rely on the default behaviour of running only when the prior stage succeeded. This is because prior stages could now be either succeeded or skipped. Hence, all following stages need to use the condition:

condition: and(not(failed()), not(canceled()))
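For example, a stage that follows the conditional stage (the stage name below is illustrative) would be declared as:

  - stage: Later_Stage
    dependsOn:
      - Show_With_Dependancy_Condition
    condition: and(not(failed()), not(canceled()))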

How to use a stage dependency as a job condition

To avoid the need to alter all the subsequent stages' execution conditions, you can set a condition at the job or task level. Unlike setting the condition at the stage level, you have to create a local alias (see above) and check the condition against that:

  - stage: Show_With_Dependancy_Condition_Job
    displayName: 'Show Stage With dependancy Condition'
    dependsOn:
      - SetupStage
    variables:
      localMyVarViaStageDependancies: $[stageDependencies.SetupStage.SetupJob.outputs['SetupStep.MyVar']]
    jobs:
      - job: Job
        condition: and(succeeded(), eq(variables.localMyVarViaStageDependancies, 'True'))
        displayName: 'Show Job With dependancy'

This technique will work for both agent-based and agentless (server) jobs.

A warning though: if your job makes use of an environment with a manual approval, the environment approval check is evaluated before the job condition. This is probably not what you are after, so if you are using conditions with environments that require manual approvals, the condition is probably best set at the stage level, accepting the knock-on issues with the states of subsequent stages mentioned above.

An alternative, if you are only using the environment for manual approval, is to look at using an agentless job with a manual approval. Agentless job manual approvals are evaluated after the job condition, so they do not suffer the same problem.
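As a minimal sketch of that alternative (the job name, notification email and instruction text are illustrative; it assumes the standard ManualValidation task and the local alias declared as in the earlier examples), an agentless job guarded by the condition might look like:

  - job: ManualApprovalJob
    pool: server # agentless (server) job
    condition: and(succeeded(), eq(variables.localMyVarViaStageDependancies, 'True'))
    steps:
      - task: ManualValidation@0
        timeoutInMinutes: 1440
        inputs:
          notifyUsers: 'someone@example.com'
          instructions: 'Please approve the deployment'
          onTimeout: 'reject'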

If you need to use a stage dependency variable in a later stage, as a job condition or script variable, but do not wish to add a direct dependency between the stages, you could consider 'republishing' the variable as an output of the intermediate stage(s):

  - stage: Intermediate_Stage
    dependsOn:
      - SetupStage
    variables:
      localMyVarViaStageDependancies: $[stageDependencies.SetupStage.SetupJob.outputs['SetupStep.MyVar']]
    jobs:
      - job: RepublishMyVar
        steps:
          - checkout: none
          - bash: |
              set -e # need to avoid trailing " being added to the variable https://github.com/microsoft/azure-pipelines-tasks/issues/10331
              echo "##vso[task.setvariable variable=MyVar;isOutput=true]$(localMyVarViaStageDependancies)"
            name: RepublishStep

Summing Up

So I hope this post will help you, and the future me, navigate the complexities of stage dependency variables.

You can find the YAML for the test harness I have been using in this GitHub Gist.

Setting Azure DevOps ‘All Repositories’ Policies via the CLI

The Azure DevOps CLI provides plenty of commands to update Team Projects, but it does not cover everything you might want to set. A good example is setting branch policies. For a given repo you can set the policies using the az repos command, e.g.:

az repos policy approver-count update --project <projectname> --blocking true --enabled true --branch main --repository-id <guid> --minimum-approver-count 2 --reset-on-source-push true --creator-vote-counts false --allow-downvotes false

However, you hit a problem if you wish to set the 'All Repositories' policies for a Team Project. The issue is that the above command requires you to target a specific repository via the --repository-id parameter.

I can find no way around this using the published CLI tools, but there is an option using the REST API.

You could of course check the API documentation to work out the exact call and payload. However, I usually find it quicker to perform the action I require in the Azure DevOps UI and monitor the network traffic in the browser developer tools to see what calls are made to the API.

Using this technique, I have created the following script that sets the All Repositories branch policies.
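A minimal sketch of the kind of call involved is shown below (the organisation and project names and the PAT environment variable are placeholders, and the payload shape and policy type GUID are my assumptions based on the policy configurations REST API, so verify them against the traffic you capture yourself):

$organisation = 'myorg'      # placeholder
$project      = 'MyProject'  # placeholder
$headers = @{
    Authorization = 'Basic ' + [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(":$($env:AZDO_PAT)"))
}

# Payload for a 'minimum number of reviewers' policy; a null repositoryId scopes it to 'All Repositories'
$body = @{
    isEnabled  = $true
    isBlocking = $true
    type       = @{ id = 'fa4e907d-c16b-4a4c-9dfa-4906e5d171dd' } # 'Minimum number of reviewers' policy type
    settings   = @{
        minimumApproverCount = 2
        creatorVoteCounts    = $false
        allowDownvotes       = $false
        resetOnSourcePush    = $true
        scope                = @(@{
            repositoryId = $null # null = apply to all repositories in the Team Project
            refName      = 'refs/heads/main'
            matchKind    = 'Exact'
        })
    }
} | ConvertTo-Json -Depth 10

Invoke-RestMethod -Method Post -Headers $headers -ContentType 'application/json' -Body $body `
    -Uri "https://dev.azure.com/$organisation/$project/_apis/policy/configurations?api-version=6.0"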

Note that you can use this same script to set a specific repo’s branch policies by setting the repositoryId in the JSON payloads.

The case of the self-cancelling Azure DevOps pipeline

The Issue

Today I came across a strange issue with a reasonably old multi-stage YAML pipeline: it appeared to be cancelling itself.

The Build stage ran OK, but the Release stage kept being shown as cancelled with a strange error. The strangest thing was that it did not happen every time, which I guess is the reason the problem had not been picked up sooner.

Looking at the logs for the Release stage, I saw that the main job, which was meant to be the only job, had completed successfully. But I had gained an extra, unexpected job that was being cancelled in over 90% of my runs.

This extra job was trying to run on a hosted Ubuntu agent and failing to make a connection. All very strange, as all the jobs were meant to be using private Windows-based agents.

The Solution

Turns out, as you might expect, the issue was a typo in the YAML.

- stage: Release
  dependsOn: Build
  condition: succeeded()
  jobs:
  - job:
  - template: releasenugetpackage.yml@YAMLTemplates
    parameters:

The problem was the stray '- job:' line. This caused an attempt to connect to a hosted agent and then check out the code. Interestingly, a hosted Ubuntu agent was requested even though no pool was defined.

As soon as the extra line was removed, the problems went away.
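For reference, the corrected stage, with the stray line removed (the template parameters are elided here, as they were in the snippet above), is:

- stage: Release
  dependsOn: Build
  condition: succeeded()
  jobs:
  - template: releasenugetpackage.yml@YAMLTemplates
    parameters: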

Making SonarQube Quality Checks a required PR check on Azure DevOps

This is another of those posts to remind me in the future. I searched the documentation for this answer for ages and found nothing, eventually getting the solution by asking on the SonarQube Forum.

When you link SonarQube into an Azure DevOps pipeline that is used for branch protection, the success or failure of the PR branch analysis is shown as an optional PR check.

The question was 'how do I make it a required check?'. It turns out the answer is to add an extra Azure DevOps branch policy status check for 'SonarQube/quality gate'.

When you press the + (add) button, it turns out 'SonarQube/quality gate' is available in the drop-down.

Once this change was made, the SonarQube Quality Check became a required PR check.

How I dealt with a strange problem with PSRepositories and dotnet NuGet sources

Background

We regularly re-build our Azure DevOps private agents using Packer and Lability, as I have posted about before.

Since the latest re-build, we have seen all sorts of problems, all related to pulling packages and tools from NuGet-based repositories, and none of them seen with any previous generation of our agents.

The Issue

The issue turned out to be related to registering a private PowerShell repository.

$RegisterSplat = @{
    Name               = 'PrivateRepo'
    SourceLocation     = 'https://psgallery.mydomain.co.uk/nuget/PowerShell'
    PublishLocation    = 'https://psgallery.mydomain.co.uk/nuget/PowerShell'
    InstallationPolicy = 'Trusted'
}

Register-PSRepository @RegisterSplat

Running this command caused the default dotnet NuGet source to be unregistered, i.e. the command dotnet nuget list source was expected to return:

Registered Sources:
  1.  PrivateRepo
      https://psgallery.mydomain.co.uk/nuget/Nuget
  2.  nuget.org [Enabled]
      https://www.nuget.org/api/v2/
  3.  Microsoft Visual Studio Offline Packages [Enabled]
      C:\Program Files (x86)\Microsoft SDKs\NuGetPackages

But it returned:

Registered Sources:
  1.  PrivateRepo
      https://psgallery.mydomain.co.uk/nuget/Nuget
  2.  Microsoft Visual Studio Offline Packages [Enabled]
      C:\Program Files (x86)\Microsoft SDKs\NuGetPackages

The Workaround

You can't call this a solution, as I cannot see why it is really needed, but the following command does fix the problem:

 dotnet nuget add source https://api.nuget.org/v3/index.json -n nuget.org

Automating the creation of Team Projects in Azure DevOps

Creating a new project in Azure DevOps with your desired process template is straightforward. However, it is only the start of the job for most administrators. They will commonly want to set up other configuration settings, such as branch protection rules and default pipelines, before giving the team access to the project. All this administration can be very time-consuming and, of course, prone to human error.

To make this process easier, quicker and more consistent, I have developed a process to automate all of this work. It uses a mixture of the following:

A sample team project that contains a Git repo containing the base code I want in my new Team Project's default Git repo. In my case this includes:

  • An empty Azure Resource Management (ARM) template
  • A .NET Core Hello World console app with an associated .NET Core Unit Test project
  • A YAML pipeline to build and test the above items, as well as generating release notes into the Team Project WIKI

A PowerShell script that uses both az devops and the Azure DevOps REST API to do the following (a sketch of the first steps appears after the list):

  • Create a new Team Project
  • Import the sample project Git repo into the new Team Project
  • Create a WIKI in the new Team Project
  • Add a SonarQube/SonarCloud Service Endpoint
  • Update the YAML file for the pipeline to point to the newly created project resources
  • Update the branch protection rules
  • Grant access privileges as needed for service accounts
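As a minimal sketch of the first two of those steps (the organisation URL, project and sample repo names are placeholders; it assumes the azure-devops CLI extension is installed and you are logged in):

$organisation = 'https://dev.azure.com/myorg' # placeholder
$projectName  = 'NewProject'                  # placeholder

# Create the new Team Project
az devops project create --name $projectName --organization $organisation --process Agile --visibility private

# Import the sample project's Git repo into the new Team Project's default repo
az repos import create --git-url 'https://dev.azure.com/myorg/SampleProject/_git/SampleProject' `
    --organization $organisation --project $projectName --repository $projectName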

The script is far from perfect and could do much more, but it covers the core requirements I need.

You could of course enhance it as required: removing features you don't need, adding code to do jobs such as creating any standard Work Items you require at the start of a project, or altering the contents of the sample repo to be cloned to better match your most common project needs.

You can find the PowerShell script in my AzureDevOpsPowershell GitHub repo; I hope you find it useful.

Getting the approver for release to an environment within an Azure DevOps Multi-Stage YAML pipeline

I recently had the need to get the email address of the approver of a deployment to an environment from within a multi-stage YAML pipeline. It turned out not to be as easy as I might have hoped, given the available documented APIs.

Background

My YAML pipeline included a manual approval to allow deployment to a given environment. Within the stage protected by the approval, I needed the approver’s details, specifically their email address.

I managed to achieve this but had to use undocumented API calls. These were discovered by looking at Azure DevOps UI operations using development tools within my browser.

The Solution

The process was as follows:

  • Make a call to the build's timeline to get the current stage's GUID – this is a documented API call
  • Make a call to the Contribution/HierarchyQuery API to get the approver details – this is the undocumented API call

The code to do this is as shown below. It makes use of predefined variables to pass in the details of the current run and stage.

Note that I had to re-create the web client object between each API call. If I did not do this, I got a 400 Bad Request on the second API call – it took me ages to figure this out!
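As a sketch of the shape of the two calls (the HierarchyQuery payload – the data provider id and property names – is my assumption of what the captured browser traffic contains, so verify it against your own capture; it also assumes the job can read System.AccessToken):

# Uses predefined variables to identify the current run and stage
$collectionUri = $env:SYSTEM_TEAMFOUNDATIONCOLLECTIONURI
$project       = $env:SYSTEM_TEAMPROJECT
$buildId       = $env:BUILD_BUILDID
$stageName     = $env:SYSTEM_STAGENAME
$headers       = @{ Authorization = "Bearer $($env:SYSTEM_ACCESSTOKEN)" }

# Call 1: the documented timeline API, to find the current stage's GUID
$timeline = Invoke-RestMethod -Headers $headers `
    -Uri "$collectionUri$project/_apis/build/builds/$buildId/timeline?api-version=6.0"
$stageId = ($timeline.records | Where-Object { $_.type -eq 'Stage' -and $_.identifier -eq $stageName }).id

# Call 2: the undocumented HierarchyQuery API (payload shape is an assumption)
$body = @{
    contributionIds     = @('ms.vss-build-web.checks-panel-data-provider')
    dataProviderContext = @{
        properties = @{
            buildId  = $buildId
            stageIds = $stageId
        }
    }
} | ConvertTo-Json -Depth 10

# A fresh request is made for this call - reusing the first client gave a 400 Bad Request
$approvals = Invoke-RestMethod -Method Post -Headers $headers -ContentType 'application/json' -Body $body `
    -Uri "$($collectionUri)_apis/Contribution/HierarchyQuery/project/$project?api-version=5.0-preview.1"

# The approver's identity, including the email address, is inside the returned dataProviders payload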

Fixing my SQLite Error 5: ‘database is locked’ error in Entity Framework

I have spent too long today trying to track down an intermittent "SQLite Error 5: 'database is locked'" error in .NET Core Entity Framework.

I have read plenty of documentation and even tried swapping from SQLite to SQL Server, but this just resulted in the error 'There is already an open DataReader associated with this Connection which must be closed first'.

So everything pointed to it being a mistake I had made.

And it was. It turns out the issue was that I had the dbContext.SaveChanges() call inside a foreach loop that was still enumerating a query on the same context, so each save was attempted while a DataReader was still open on the connection.

It was

using (var dbContext = scope.ServiceProvider.GetRequiredService<MyDbContext>()) { // MyDbContext is a placeholder for your DbContext type
    var itemsToQueue = dbContext.CopyOperations.Where(o => o.RequestedStartTime < DateTime.UtcNow && o.Status == OperationStatus.Queued);
    foreach (var item in itemsToQueue) {
        item.Status = OperationStatus.StartRequested;
        item.StartTime = DateTime.UtcNow;
        dbContext.SaveChanges(); // saving while still enumerating itemsToQueue - this is the bug
    }
}

And it should have been

using (var dbContext = scope.ServiceProvider.GetRequiredService<MyDbContext>()) { // MyDbContext is a placeholder for your DbContext type
    var itemsToQueue = dbContext.CopyOperations.Where(o => o.RequestedStartTime < DateTime.UtcNow && o.Status == OperationStatus.Queued);
    foreach (var item in itemsToQueue) {
        item.Status = OperationStatus.StartRequested;
        item.StartTime = DateTime.UtcNow;
    }
    dbContext.SaveChanges(); // a single save once the enumeration is complete
}

Once this change was made my error disappeared.

What to do when moving your Azure DevOps organisation from one region to another is delayed

There are good reasons why you might wish to move an existing Azure DevOps organisation from one region to another. The most common ones are probably:

  • A new Azure DevOps region has become available since you created your organisation that is a ‘better home’ for your projects.
  • New or changing national regulations require your source code to be stored in a specific location.
  • You want your repositories as close to your workers as possible, to reduce network latency.

One of these reasons meant I recently had to move an Azure DevOps organisation, so I followed the documented process. This requires you to:

  1. Whilst logged in as the Azure DevOps organisation owner, open the Azure DevOps Virtual Support Agent
  2. Select the quick action ‘Change Organization Region’
  3. Follow the wizard to pick the new region and the date for the move.

You are warned that there could be a short loss of service during the move. Much of the move is done as a background process; it is only the final switch-over that can interrupt service, hence the interruption being short.

I followed this process, but after the planned move date I found my organisation had not moved. In the Virtual Support Agent, I found the message:

Please note that region move requests are currently delayed due to ongoing deployments. We may not be able to perform the change at your requested time and may ask you to reschedule. We apologize for the potential delay and appreciate your patience!

I received no other emails (I suspect overly aggressive spam filters were the cause of that), but it meant I was unclear what to do next. Should I:

  1. Just wait, i.e. do not reschedule anything, even though the target date is now in the past
  2. Reschedule the existing move request to a date in the future using the virtual assistant wizard
  3. Cancel the old request and start the process again from scratch

After asking the question in the Visual Studio Developer Community forums, I was told the correct action is to cancel the old request and request a new move date. It seems that once your requested date has passed, the move will not take place no matter how long you wait.

Hence, I created a new request, which all went through exactly as planned.