Don’t skimp on resources for GHES demo instances

I wanted to have a look at some GitHub Enterprise Server (GHES) upgrade scenarios, so I decided to create a quick GHES install on my local test Hyper-V instance. Because I skimped on resources, and made a typo, creating this instance was much harder than it should have been.

The first issue was that I gave it a tiny data disk, down to a typo in my GB-to-bytes conversion when specifying the size. Interestingly, the GHES setup does not initially complain; it just sits on the ‘reloading system services’ stage until it times out. If you check /setup/config.log you see many Nomad-related 500 errors. A reboot of the VM showed the real problem: the log then filled with out-of-disk-space messages.

The ‘reloading system services’ stage does take a while

The easiest fix was to just start again with a data disk of a reasonable size.

I next hit problems due to my skimping on resources. I am not sure why I chose to limit them; old habits from using systems with scarce resources, I guess.

I had only given the VM 10GB of memory and 1 vCPU. The Hyper-V host was not production-grade, but it could certainly supply more than that.

  • The lack of at least 14GB of memory causes GHES to fail to boot, but at least with a nice clear error message
  • The single vCPU meant the ‘reloading application services’ step failed; /setup/config.log showed the message
Task Group "treelights" (failed to place 1 allocation):
* Resources exhausted on 1 nodes
* Dimension "cpu" exhausted on 1 nodes

As soon as I stopped the VM, provided 14GB of memory and multiple vCPUs, and rebooted, the setup completed as expected.
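
For anyone building a similar demo VM, this is roughly the Hyper-V PowerShell I should have used from the start. The VM name, paths and sizes are only examples (and the GB suffix avoids my GB-to-bytes conversion typo); check the current GHES system requirements for the real minimums.

# Sketch of creating a GHES demo VM with sensible resources (example names/sizes)
# The boot disk is the GHES VHD downloaded from GitHub
New-VM -Name 'GHES-Demo' -Generation 1 -MemoryStartupBytes 16GB `
    -VHDPath 'D:\VMs\github-enterprise.vhd' -SwitchName 'External'
Set-VMProcessor -VMName 'GHES-Demo' -Count 4

# A separate, reasonably sized data disk - 150GB here, not a typo'd byte count
New-VHD -Path 'D:\VMs\GHES-Demo-data.vhdx' -SizeBytes 150GB -Dynamic
Add-VMHardDiskDrive -VMName 'GHES-Demo' -Path 'D:\VMs\GHES-Demo-data.vhdx'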

So the top tip is to read the GHES system requirements and actually follow them, even if it is just a test/demo instance.

Fix for can’t add second outlook.com account to Outlook Desktop – requires non-existent PIN

I was recently trying to set up a copy of the Outlook desktop app with two outlook.com email accounts and hit a strange issue.

I was able to add the first one without any problems, but when I tried to add the second one I got an error message asking for a PIN and, most importantly, the same dialog showed the second outlook.com email address.

This is confusing as I know of no way to set a PIN on an outlook.com account.

It turns out it is just a badly designed dialog: it wants the PIN of the user currently logged into the Windows PC (made worse in my case because that PIN was not what I thought it was and had to be reset).

Once I had a working PIN for the local user login, and entered it when prompted, all was OK.

More examples of using custom variables in Azure DevOps multi-stage YML

I have blogged in the past (here, here and here) about the complexities and possible areas of confusion with the different types of Azure DevOps pipeline variables.

Well here is another example of how to use variables and what can trip you up.

The key in this example is the scope of a variable: whether it is available outside a job, and the syntax needed to access it.

Variables local to the Job

So, if you create your variable as shown below

write-host "##vso[task.setvariable variable=standardvar]$MyPowerShellVar"

It is only available in the current job, in the form $(standardvar).

Variable with a wider scope

If you want it to be available in another job or stage, you have to declare it thus, adding ;isOutput=true

write-host "##vso[task.setvariable variable=stagevar;isOutput=true]$MyPowerShellVar"

But there is also a change in how you access it.

  • You need to give the script step that declares the variable a name so it can be referenced
  • You need to add dependsOn associations between stages/jobs
  • The syntax used to access the variable changes depending on whether you are in the same job, the same stage but a different job, or a completely different stage

Below is a fully worked example.
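
This sketch uses illustrative stage, job and step names (Build, Deploy, SetVars, SetVarsStep); the important parts are the name on the step that declares the output variable, the dependsOn associations, and the three different expressions used to read the value.

stages:
- stage: Build
  jobs:
  - job: SetVars
    steps:
    - pwsh: |
        $MyPowerShellVar = "42"
        Write-Host "##vso[task.setvariable variable=standardvar]$MyPowerShellVar"
        Write-Host "##vso[task.setvariable variable=stagevar;isOutput=true]$MyPowerShellVar"
      name: SetVarsStep # the step must be named so the output variable can be referenced
    - pwsh: |
        # Same job: the local variable is used directly, the output one via the step name
        Write-Host "Same job: $(standardvar)"
        Write-Host "Same job, output variable: $(SetVarsStep.stagevar)"
  - job: OtherJobSameStage
    dependsOn: SetVars
    variables:
      # Different job, same stage: dependencies.<job>.outputs['<step>.<variable>']
      mappedVar: $[ dependencies.SetVars.outputs['SetVarsStep.stagevar'] ]
    steps:
    - pwsh: |
        Write-Host "Other job, same stage: $(mappedVar)"
- stage: Deploy
  dependsOn: Build
  jobs:
  - job: OtherStageJob
    variables:
      # Different stage: stageDependencies.<stage>.<job>.outputs['<step>.<variable>']
      mappedVar: $[ stageDependencies.Build.SetVars.outputs['SetVarsStep.stagevar'] ]
    steps:
    - pwsh: |
        Write-Host "Other stage: $(mappedVar)"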

Updating the Azure Application client_secret used by Packer

As I have posted about previously, we create our Azure DevOps build agent images using the same Packer definitions as used by Microsoft. This time, when I ran my Packer command to build an updated VHD, I got the error

Build 'vhd' errored after 135 milliseconds 708 microseconds: adal: Refresh request failed. Status Code = '401'. Response body: {"error":"invalid_client","error_description":"AADSTS7000222: The provided client secret keys for app '6425416f-aa94-4c20-8395-XXXXXXX' are expired. Visit the Azure portal to create new keys for your app: https://aka.ms/NewClientSecret, or consider using certificate credentials for added security: https://aka.ms/certCreds.\r\nTrace ID: 65a200cf-8423-4d52-af07-67bf26225200\r\nCorrelation ID: 0f86de87-33fa-443b-8186-4de3894972e1\r\nTimestamp: 2022-05-03 08:36:50Z","error_codes":[7000222],"timestamp":"2022-05-03 08:36:50Z","trace_id":"65a200cf-8423-4d52-af07-67bf26225200","correlation_id":"0f86de87-33fa-443b-8186-4de3894972e1","error_uri":"https://login.microsoftonline.com/error?code=7000222"} Endpoint https://login.microsoftonline.com/545a7a95-3c4d-4e88-9890-baa86d5fdacb/oauth2/token

==> Builds finished but no artifacts were created.

As the error message made clear, the client_secret had expired.

This value was originally set/generated when the Azure Service Principal was created. However, as I didn't want a new SP, this time I just wanted to update the secret via the Azure Portal (Home > AAD > App registrations > [My Packer App]).

The overview showed that the old secret had expired, and I was able to create a new one on the Certificates & secrets tab. However, when I updated my Packer configuration file and re-ran the command, it still failed.

It only worked after I deleted the expired secret. I am not sure if this is a requirement (it is not something I have seen before) or just some propagation/cache delay.
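
For reference, the same rotation can be done with the az CLI rather than the portal. This is only a sketch, using the truncated app id from the error above as a placeholder; the final delete is the step that mattered in my case.

# create an additional client secret for the app registration
az ad app credential reset --id 6425416f-aa94-4c20-8395-XXXXXXX --append

# list the credentials to find the keyId of the expired secret ...
az ad app credential list --id 6425416f-aa94-4c20-8395-XXXXXXX

# ... and remove it, as for me the expired secret seemed to block the new one
az ad app credential delete --id 6425416f-aa94-4c20-8395-XXXXXXX --key-id <keyId-of-expired-secret>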

But it is worth a blog post as a reminder to my future self and anyone else with a similar issue.

How to fix Azure Pipeline YAML parsing errors seen after renaming the default Git branch

If in Azure DevOps you rename your Git Repo’s default branch, say from ‘master’ to ‘main’, you will probably see an error in the form ‘Encountered error(s) while parsing pipeline YAML: Could not get the latest source version for repository BlackMarble.NET.App hosted on Azure Repos using ref refs/heads/master.‘ when you try to manually queue a pipeline run.

You could well think, as I did, ‘all I need to do is update the YAML build files with a find and replace for master to main’, but this does not fix the problem.

The issue is in the part of the Azure DevOps pipeline settings that is still managed by the UI and not the YAML file: the association of the Git repo and branch. To edit this setting, use the following process (and yes, it is well hidden)

  • In the Azure DevOps browser UI, open the pipeline for editing (it shows the YAML page)
  • On the ellipsis menu (… top right) pick Triggers
  • Select the YAML tab (on the left)
  • Then select the ‘Get sources’ section, where you can change the default branch
  • Save the changes

Hope this post saves someone some time.

Tidying up local branches with a Git Alias and a PowerShell Script

It is easy to get your local branches in Git out of sync with the upstream repository, leaving old dead branches locally that you can’t remember creating. You can use the prune option on your Git Fetch command to remove the remote branch references, but that command does nothing to remove local branches.
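
For reference, the prune can either be requested per fetch or made the default behaviour:

git fetch --prune
# or make pruning the default for every fetch
git config --global fetch.prune true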

A good while ago, I wrote a small PowerShell script to wrap the running of the Git Fetch and then, based on the deletions, remove any matching local branches, before finally returning me to my trunk branch.

Note: This script was based on some sample I found, but I can’t remember where to give credit, sorry.
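
The script itself is nothing fancy; something along these lines (assuming ‘main’ is the trunk branch to return to) does the job.

# Remove-DeletedGitBranches.ps1 - a sketch of the approach
param(
    [switch]$force
)

# Fetch with prune, capturing the output so we can see which remote branches were deleted
$fetchOutput = git fetch --prune 2>&1 | Out-String

# Pruned branches are reported as e.g. " - [deleted]  (none) -> origin/feature/foo"
$deletedBranches = $fetchOutput -split "`n" | ForEach-Object {
    if ($_ -match '\[deleted\].*->\s+origin/(.+)$') { $Matches[1].Trim() }
}

# Delete any matching local branch, forcing the delete if requested
foreach ($branch in $deletedBranches) {
    Write-Host "Removing local branch '$branch'"
    if ($force) { git branch -D $branch } else { git branch -d $branch }
}

# Finally, return to the trunk branch
git checkout main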

I used to just run this script from the command line, but I recently thought it would be easier if it became a Git Alias. As Git Aliases run in a bash shell, this meant I needed to shell out to PowerShell 7. Hence, my Git Config ended up as shown below

[user]
        name = Richard Fennell
        email = richard@blackmarble.co.uk
[filter "lfs"]
        required = true
        clean = git-lfs clean -- %f
        smudge = git-lfs smudge -- %f
        process = git-lfs filter-process
[init]
        defaultBranch = main
[alias]
        tidy = !pwsh.exe C:/Users/fez/OneDrive/Tools/Remove-DeletedGitBranches.ps1 -force

I can just run ‘git tidy‘ and all my branches get sorted out.

Business Process Automation and Integration in the Cloud

Organisations are facing increased, and unprecedented, pressure from the market to transform digitally, and as a result they need to think about how they can become more attractive in their specific market space. The most common area to concentrate on is becoming more efficient, and this can be thought of in discrete parts: streamlining internal, often complex, business processes; making the organisation easier to work with for suppliers or customers; providing better cross-business views of information; improving service delivery; and saving on manual processing. All these efficiencies are brought to bear by automation, whether that is person-to-system data sharing, system-to-system, or business-to-business; the same principles apply, but different techniques may be used.

Today most organisations run their business on systems spread across multiple heterogeneous environments, which poses interoperability issues and other such difficulties. To address this, businesses need to look at integrating their data solutions to provide easy access to standard data across internal and external systems, with business process automation layered on top.

Business process automation is how an organisation defines its business processes and automates them, so they can be run repeatedly and reliably, at low cost, and with a reduction in human error.