Don’t skimp on resources for GHES demo instances

I wanted to have a look at some GitHub Enterprise Server (GHES) upgrade scenarios, so I decided to create a quick GHES install on my local test Hyper-V instance. Because I skimped on resources, and made a typo, creating this instance was much harder than it should have been.

The first issue was that I gave it a tiny data disk, down to a typo in my GB-to-bytes conversion when specifying the size. Interestingly, the GHES setup does not initially complain but sits on the ‘reloading system services’ stage until it times out. If you check /setup/config.log you see many Nomad-related 500 errors. A reboot of the VM showed the real problem: the log then filled with out-of-disk-space messages.

reloading system services does take a while

The easiest fix was just to start again with a data disk of a reasonable size.
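One way to avoid the GB-to-bytes conversion typo entirely is to let PowerShell do the arithmetic with its GB suffix. A minimal sketch, assuming the Hyper-V PowerShell module and a placeholder path and size:

# Create the data disk with PowerShell doing the GB-to-bytes conversion
# (path and size are placeholders - check the GHES system requirements
# for the size you actually need)
New-VHD -Path 'D:\VMs\ghes-data.vhdx' -SizeBytes 150GB -Dynamic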

I next hit problems caused by my skimping on resources. I am not sure why I chose to limit them; old habits from using systems with scarce resources, I guess.

I had only given the VM 10GB of memory and 1 vCPU. The Hyper-V host was not production-grade, but it could certainly supply more than that.

  • The lack of at least 14GB of memory causes GHES to fail to boot, with a nice clear error message
  • The single CPU meant the ‘reloading application services’ step failed; /setup/config.log showed the message
Task Group "treelights" (failed to place 1 allocation):
* Resources exhausted on 1 nodes
* Dimension "cpu" exhausted on 1 nodes

As soon as I stopped the VM, provided 14GB of memory and multiple vCPUs, and rebooted, the setup completed as expected.
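For reference, resizing the VM is a one-liner per resource in Hyper-V PowerShell. A minimal sketch, assuming a VM named ‘GHES’ that has already been stopped (the vCPU count is illustrative):

# Give the stopped VM the memory GHES needs to boot, and more than one vCPU
Set-VMMemory    -VMName 'GHES' -StartupBytes 14GB
Set-VMProcessor -VMName 'GHES' -Count 4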

So the top tip is to read the GHES system requirements and actually follow them, even if it is just a test/demo instance.

Fix for ‘TypeError: Cannot read property’ when Dependabot submits a PR to upgrade a Jest module

GitHub’s Dependabot is a great tool to help keep your dependencies up to date, and most of the time the PR it generates just merges without a problem. However, sometimes there are issues with other related dependencies.

This was the case with a recent PR to update jest-circus to 28.x. The PR failed with the error:

TypeError: Cannot read property 'enableGlobally' of undefined
    at jestAdapter (node_modules/jest-circus/build/legacy-code-todo-rewrite/jestAdapter.js:39:25)
    at TestScheduler.scheduleTests (node_modules/@jest/core/build/TestScheduler.js:333:13)
    at runJest (node_modules/@jest/core/build/runJest.js:404:19)
    at _run10000 (node_modules/@jest/core/build/cli/index.js:320:7)
    at runCLI (node_modules/@jest/core/build/cli/index.js:173:3)

In the end, the fix was simple: make sure all the other Jest-related packages were updated to 28.x versions. Once I did this, using a GitHub Codespace, the PR merged without a problem.
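In practice that meant bumping the companion Jest packages in lock-step. A sketch of the sort of command involved (the package list is illustrative; check your own package.json for the full set):

# jest-environment-jsdom became its own package in Jest 28, so it often
# needs adding explicitly alongside the version bumps
npm install --save-dev jest@^28.0.0 jest-circus@^28.0.0 jest-environment-jsdom@^28.0.0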

A workaround for not being able to access custom variables via stageDependencies when they are set in deployment jobs in Azure DevOps Pipelines

I have blogged in the past (here and here) about the complexities and possible areas of confusion with the different types of Azure DevOps pipeline variables. I have also seen issues raised over how to access custom variables across jobs and stages. Safe to say, this is an area where it is really easy to get things wrong and end up with a null value.

I have recently come across another edge case to add to the list of gotchas.

It seems you cannot use stageDependencies to access a variable declared in a deployment job, i.e. when you are using an environment to get approval for a release.

The workaround is to add a job that depends on the deployment job and set the custom variable within it. This variable can then be accessed by a later stage, as shown below:

- stage: S1
  jobs:
  - deployment: D1
    strategy:
      runOnce:
        deploy:
          steps:
          - checkout: none
          - bash: echo "Can't access the variable if set in here"
  - job: J1
    dependsOn: D1
    steps:
    - checkout: none
    - bash: echo "##vso[task.setvariable variable=myvar;isOutput=true]True"
      name: BashStep

- stage: S2
  condition: always()
  dependsOn:
  - S1
  jobs:
  - job: Use_Variable
    variables: # add a local alias for the variable
      myvar: $[stageDependencies.S1.J1.outputs['BashStep.myvar']]
    steps:
    - checkout: none
    - bash: echo "Script gets run when myvar is true"
      condition: eq(variables['myvar'], 'True')

The importance of blogging – or how to do your future self a favour

Yesterday, yet again, I was thankful for my past self taking time to blog about a technical solution I had found.

I had an error when trying to digitally sign a package. Searching for the error code, I came across my own blog post with the solution. It was, as usual, one I had no recollection of writing.

I find this happens all the time. It is a little disturbing when you search for an issue and the only reference is a post you wrote and have forgotten, making you the de facto expert because nobody else knows any more on the subject, but it is better than having no solution at all.

Too often I ask people if they have documented the hints, tips and solutions they find, and the response I get is ‘I will remember’. Trust me, you won’t. Write things down where they are discoverable for your team and your future self. This can be any format that works for you: an email, OneNote, a wiki or, the one I find most useful, a blog. Just make sure it is easily searchable.

Your future self will thank you.

Using Azure DevOps Stage Dependency Variables with Conditional Stage and Job Execution

I have been doing some work with Azure DevOps multi-stage YAML pipelines using stage dependency variables and conditions. They can get confusing quickly: you need one syntax in one place and another elsewhere.

So, here are a few things I have learnt…

What are stage dependency variables?

Stage dependencies are the way you define which stage follows another in a multi-stage YAML pipeline, as opposed to just relying on the default order in which they appear in the YAML file. Hence, they are critical to creating complex pipelines.

Stage dependency variables are the way you pass variables from one stage to another. Special handling is required, as you can’t just use ordinary output variables (which are in effect environment variables on the agent) as you might within a job, because there is no guarantee that the stages and jobs run on the same agent.

For stage dependency variables, it is not the creation of the output variables that differs, as that is done in the standard manner; the difference is in how you retrieve them.

In my sample, I used a Bash script to set the output variable based on a parameter passed into the pipeline, but you can create output variables using scripts or tasks:

  - stage: SetupStage
    displayName: 'Setup Stage'
    jobs:
      - job: SetupJob
        displayName: 'Setup Job'
        steps:
          - checkout: none
          - bash:  |
              set -e # need to avoid trailing " being added to the variable https://github.com/microsoft/azure-pipelines-tasks/issues/10331
              echo "##vso[task.setvariable variable=MyVar;isOutput=true]${{parameters.value}}"
            name: SetupStep
            displayName: 'Setup Step'

Possible ways to access a stage dependency variable

There are two basic ways to access stage dependency variables, both using array objects:

stageDependencies.STAGENAME.JOBNAME.outputs['STEPNAME.VARNAME']
dependencies.STAGENAME.outputs['JOBNAME.STEPNAME.VARNAME']

Which one you use, where you use it, and whether you need a local alias is where the complexity lies.

How to access a stage dependency in a script?

To access a stage dependency variable in a script, or a task, there are two key requirements:

  • The stage containing the consuming job, and hence the script/task, must be set as dependent on the stage that created the output variable
  • You have to declare a local alias for the value in the stageDependencies array within the consuming stage; this local alias is the name used by scripts and tasks

Once this is configured, you can access the variable like any other local YAML variable:

  - stage: Show_With_Dependancy
    displayName: 'Show Stage With dependancy'
    dependsOn:
      - SetupStage
    variables:
      localMyVarViaStageDependancies: $[stageDependencies.SetupStage.SetupJob.outputs['SetupStep.MyVar']]
    jobs:
      - job: Job
        displayName: 'Show Job With dependancy'
        steps:
          - bash: |
              echo "localMyVarViaStageDependancies - $(localMyVarViaStageDependancies)"

Tip: If you are having a problem with the value of a stage dependency variable not being set, look in the pipeline execution log at the job level and check the ‘Job preparation parameters’ section to see what is being evaluated. This will show whether you are using the wrong array object, or have a typo, as any incorrect declaration evaluates as null.

How to use a stage dependency as a stage condition

You can use stage dependency variables as controlling conditions for running a stage. In this use-case you use the dependencies array, not the stageDependencies array used when aliasing variables.

  - stage: Show_With_Dependancy_Condition
    condition: and(succeeded(), eq(dependencies.SetupStage.outputs['SetupJob.SetupStep.MyVar'], 'True'))
    displayName: 'Show Stage With dependancy Condition'

From my experiments with this use-case, you don’t seem to need a dependsOn entry to declare the stage that exposed the output variable for this to work. So this is very useful for complex pipelines where you want to skip a later stage based on a much earlier stage on which there is no direct dependency.

A side effect of using a stage condition is that many subsequent stages have to have their execution conditions edited, as you can no longer rely on the default stage completion state of succeeded. This is because the prior stages could now be either succeeded or skipped. Hence all following stages need to use the condition:

condition: and( not(failed()), not(canceled()))

How to use a stage dependency as a job condition

To avoid the need to alter all the subsequent stages’ execution conditions, you can set a condition at the job or task level. Unlike setting the condition at the stage level, you have to create a local alias (see above) and check the condition on that:

  - stage: Show_With_Dependancy_Condition_Job
    displayName: 'Show Stage With dependancy Condition'
    dependsOn:
      - SetupStage
    variables:
      localMyVarViaStageDependancies: $[stageDependencies.SetupStage.SetupJob.outputs['SetupStep.MyVar']]
    jobs:
      - job: Job
        condition: and(succeeded(), eq(variables.localMyVarViaStageDependancies, 'True'))
        displayName: 'Show Job With dependancy'

This technique works for both agent-based and agent-less (server) jobs.

A warning though: if your job makes use of an environment with a manual approval, the environment approval check is evaluated before the job condition. This is probably not what you are after, so if you are using conditions with environments that require manual approvals, the condition is probably best set at the stage level, with the knock-on issues for the states of subsequent stages mentioned above.

An alternative, if you are only using the environment for manual approval, is to use an agent-less job with a manual approval. Agent-less job manual approvals are evaluated after the job condition, so do not suffer the same problem; a sketch of this pattern is shown below.
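A minimal sketch, reusing the localMyVarViaStageDependancies alias declared at the stage level as above (the ManualValidation task inputs are placeholders):

      - job: Approval
        pool: server # an agent-less (server) job
        condition: and(succeeded(), eq(variables.localMyVarViaStageDependancies, 'True'))
        steps:
          - task: ManualValidation@0
            inputs:
              notifyUsers: 'someone@example.com' # placeholder
              instructions: 'Approve to continue the deployment'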

If you need to use a stage dependency variable in a later stage, as a job condition or script variable, but do not wish to add a direct dependency between the stages, you could consider ‘re-publishing’ the variable as an output of the intermediate stage(s):

  - stage: Intermediate_Stage
    dependsOn:
      - SetupStage
    variables:
      localMyVarViaStageDependancies: $[stageDependencies.SetupStage.SetupJob.outputs['SetupStep.MyVar']]
    jobs:
      - job: RepublishMyVar
        steps:
          - checkout: none
          - bash: |
              set -e # need to avoid a trailing " being added to the variable https://github.com/microsoft/azure-pipelines-tasks/issues/10331
              echo "##vso[task.setvariable variable=MyVar;isOutput=true]$(localMyVarViaStageDependancies)"
            name: RepublishStep

Summing Up

So I hope this post will help you, and the future me, navigate the complexities of stage dependency variables.

You can find the YAML for the test harness I have been using in this GitHub Gist.

Setting Azure DevOps ‘All Repositories’ Policies via the CLI

The Azure DevOps CLI provides plenty of commands to update Team Projects, but it does not cover everything you might want to set. A good example is setting branch policies. For a given repo you can set the policies using the az repos command, e.g.:

az repos policy approver-count update --project <projectname> --blocking true --enabled true --branch main --repository-id <guid> --minimum-approver-count 2 --reset-on-source-push true --creator-vote-counts false --allow-downvotes false

However, you hit a problem if you wish to set the ‘All Repositories’ policies for a Team Project. The issue is that the above command has to be scoped to a single repository via its --repository-id parameter.

I can find no way around this using any of the published CLI tools, but there is an option using the REST API.

You could of course check the API documentation to work out the exact call and payload. However, I usually find it quicker to perform the action I require in the Azure DevOps UI and monitor the network traffic in the browser developer tools to see what calls are made to the API.

Using this technique, I have created the following script that sets the All Repositories branch policies.

Note that you can use this same script to set a specific repo’s branch policies by setting the repositoryId in the JSON payloads.
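A minimal sketch of the key call is shown below, assuming a PAT in the $env:AZDO_PAT environment variable and placeholder organisation/project names; the GUID is the standard type ID for the ‘Minimum number of reviewers’ policy, and a null repositoryId in the scope is what targets all repositories:

# Set the 'Minimum number of reviewers' policy for ALL repos in a project.
# fa4e907d-c16b-4a4c-9dfa-4906e5d171dd is the well-known type ID for that
# policy; repositoryId = $null scopes it to every repository in the project.
$org     = 'https://dev.azure.com/myorg'   # placeholder
$project = 'MyProject'                     # placeholder
$pat     = $env:AZDO_PAT                   # PAT with Code (Read & Write) scope

$body = @{
    isEnabled  = $true
    isBlocking = $true
    type       = @{ id = 'fa4e907d-c16b-4a4c-9dfa-4906e5d171dd' }
    settings   = @{
        minimumApproverCount = 2
        creatorVoteCounts    = $false
        allowDownvotes       = $false
        resetOnSourcePush    = $true
        scope                = @(
            @{ repositoryId = $null; refName = 'refs/heads/main'; matchKind = 'Exact' }
        )
    }
} | ConvertTo-Json -Depth 5

$headers = @{ Authorization = 'Basic ' +
    [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(":$pat")) }

Invoke-RestMethod -Method Post -ContentType 'application/json' -Body $body `
    -Headers $headers -Uri "$org/$project/_apis/policy/configurations?api-version=6.0"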

Making SonarQube Quality Checks a required PR check on Azure DevOps

This is another of those posts to remind my future self. I searched the documentation for an answer for ages and found nothing, eventually getting the solution by asking on the SonarQube Forum.

When you link SonarQube into an Azure DevOps pipeline that is used for branch protection, the success, or failure, of the PR branch analysis is shown as an optional PR check.

The question was ‘how do I make it a required check?’. It turns out the answer is to add an extra Azure DevOps branch policy status check for ‘SonarQube/quality gate’.

When you press the + (add) button, ‘SonarQube/quality gate’ is available in the drop-down.

Once this change was made, the SonarQube Quality Check became a required PR check.

How I dealt with a strange problem with PSRepositories and dotnet NuGet sources

Background

We regularly re-build our Azure DevOps private agents using Packer and Lability, as I have posted about before.

Since the latest re-build, we have seen all sorts of problems, all related to pulling packages and tools from NuGet-based repositories; problems we had never seen with any previous generation of our agents.

The Issue

The issue turned out to be related to registering a private PowerShell repository:

$RegisterSplat = @{
    Name               = 'PrivateRepo'
    SourceLocation     = 'https://psgallery.mydomain.co.uk/nuget/PowerShell'
    PublishLocation    = 'https://psgallery.mydomain.co.uk/nuget/PowerShell'
    InstallationPolicy = 'Trusted'
}

Register-PSRepository @RegisterSplat

Running this command caused the default dotnet NuGet source to be unregistered, i.e. the command dotnet nuget list source was expected to return:

Registered Sources:
  1.  PrivateRepo
      https://psgallery.mydomain.co.uk/nuget/Nuget
  2.  nuget.org [Enabled]
      https://www.nuget.org/api/v2/
  3.  Microsoft Visual Studio Offline Packages [Enabled]
      C:\Program Files (x86)\Microsoft SDKs\NuGet\Packages

But it returned

Registered Sources:
  1.  PrivateRepo
      https://psgallery.mydomain.co.uk/nuget/Nuget
  2.  Microsoft Visual Studio Offline Packages [Enabled]
      C:\Program Files (x86)\Microsoft SDKs\NuGet\Packages

The Workaround

You can’t really call this a solution, as I cannot see why it should be needed, but the following command does fix the problem:

 dotnet nuget add source https://api.nuget.org/v3/index.json -n nuget.org

Automating the creation of Team Projects in Azure DevOps

Creating a new project in Azure DevOps with your desired process template is straightforward. However, it is only the start of the job for most administrators. They will commonly want to set up other configuration settings such as branch protection rules, default pipelines etc. before giving the team access to the project. All this administration can be very time-consuming and, of course, prone to human error.

To make this process easier, quicker and more consistent, I have developed a process to automate all of this work. It uses a mixture of the following:

A sample team project that contains a Git repo with the base code I want in my new Team Project’s default Git repo. In my case this includes:

  • An empty Azure Resource Management (ARM) template
  • A .NET Core Hello World console app with an associated .NET Core Unit Test project
  • A YAML pipeline to build and test the above items, as well as generating release notes into the Team Project WIKI

A PowerShell script that uses both az devops and the Azure DevOps REST API to:

  • Create a new Team Project
  • Import the sample project Git repo into the new Team Project
  • Create a WIKI in the new Team Project
  • Add a SonarQube/SonarCloud Service Endpoint
  • Update the YAML file for the pipeline to point to the newly created project resources
  • Update the branch protection rules
  • Grant access privileges as needed for service accounts

The script is far from perfect, and it could do much more, but it covers the core requirements I need.
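As a flavour of the first two steps, here is a minimal sketch using the az devops extension (the organisation, project and repo names are placeholders):

# Create the new Team Project (assumes 'az extension add --name azure-devops'
# has been run and you are logged in, e.g. via 'az devops login')
az devops project create --name 'NewProject' --process 'Agile' `
    --organization 'https://dev.azure.com/myorg'

# Import the sample Git repo into the new project's default repo
az repos import create --git-source-url 'https://dev.azure.com/myorg/Sample/_git/Sample' `
    --organization 'https://dev.azure.com/myorg' --project 'NewProject' --repository 'NewProject'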

You could of course enhance it as required, removing features you don’t need and adding code for jobs such as creating any standard Work Items you require at the start of a project, or altering the contents of the sample repo to better match your most common project needs.

You can find the PowerShell script in my AzureDevOpsPowershell GitHub repo; I hope you find it useful.

Getting the approver for release to an environment within an Azure DevOps Multi-Stage YAML pipeline

I recently needed to get the email address of the approver of a deployment to an environment from within a multi-stage YAML pipeline. It turned out not to be as easy as I might have hoped, given the available documented APIs.

Background

My YAML pipeline included a manual approval to allow deployment to a given environment. Within the stage protected by the approval, I needed the approver’s details, specifically their email address.

I managed to achieve it, but had to use undocumented API calls, discovered by watching the Azure DevOps UI’s operations with the development tools in my browser.

The Solution

The process was as follows:

  • Make a call to the build’s timeline to get the current stage’s GUID; this is a documented API call
  • Make a call to the Contribution/HierarchyQuery API to get the approver details; this is the undocumented API call

The code to do this is shown below. It makes use of predefined variables to pass in the details of the current run and stage.

Note that I had to re-create the web client object between each API call. If I did not do this, I got a 400 Bad Request on the second API call; it took me ages to figure this out!
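A minimal sketch of the two calls, using Invoke-RestMethod rather than a raw web client object. Because the second call is undocumented, the contribution ID and payload shape shown here are assumptions captured from a browser network trace and should be verified against your own:

param (
    $token,          # System.AccessToken
    $collectionUri,  # System.CollectionUri, e.g. https://dev.azure.com/myorg/
    $teamProject,    # System.TeamProject
    $buildId,        # Build.BuildId
    $stageName       # System.StageName
)

$headers = @{ Authorization = 'Basic ' +
    [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(":$token")) }

# 1. Documented call: walk the run's timeline to find the current stage's GUID
$timeline = Invoke-RestMethod -Headers $headers `
    -Uri "$($collectionUri)$teamProject/_apis/build/builds/$buildId/timeline?api-version=6.0"
$stageId = ($timeline.records |
    Where-Object { $_.type -eq 'Stage' -and $_.identifier -eq $stageName }).id

# 2. Undocumented call: query the data provider the UI's checks panel uses
#    (contribution ID and property names are assumptions from a network trace)
$body = @{
    contributionIds     = @('ms.vss-build-web.checks-panel-data-provider')
    dataProviderContext = @{
        properties = @{
            buildId    = "$buildId"
            stageIds   = $stageId
            sourcePage = @{ routeValues = @{ project = $teamProject } }
        }
    }
} | ConvertTo-Json -Depth 10

$approvals = Invoke-RestMethod -Method Post -Headers $headers `
    -ContentType 'application/json' -Body $body `
    -Uri "$($collectionUri)_apis/Contribution/HierarchyQuery?api-version=5.0-preview.1"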