The importance of blogging – or how to do your future self a favour

Yesterday, yet again, I was thankful for my past self taking the time to blog about a technical solution I had found.

I had an error when trying to digitally sign a package. Searching on the error code, I came across my own blog post with the solution. This was, as usual, one I had no recollection of writing.

I find this happens all the time. It is a little disturbing when you search for an issue and the only reference is a post you made and have forgotten; you are the de facto expert, yet nobody knows any more on the subject. Still, it is better than having no solution.

Too often I ask people if they have documented the hints, tips and solutions they find, and the response I get is ‘I will remember’. Trust me, you won’t. Write something down where it is discoverable by your team and your future self. This can be in any format that works for you: an email, OneNote, a wiki or, the one I find most useful, a blog. Just make sure it is easily searchable.

Your future self will thank you.

Using Azure DevOps Stage Dependency Variables with Conditional Stage and Job Execution

I have been doing some work with Azure DevOps multi-stage YAML pipelines using stage dependency variables and conditions. They can get confusing quickly, as you need one syntax in one place and another elsewhere.

So, here are a few things I have learnt…

What are stage dependency variables?

Stage dependencies are the way you define which stage follows another in a multi-stage YAML pipeline, as opposed to just relying on the order in which they appear in the YAML file (the default order). Hence, they are critical to creating complex pipelines.

Stage dependency variables are the way you can pass variables from one stage to another. Special handling is required: you can’t just use the ordinary output variables (which are in effect environment variables on the agent) as you might within a job, because there is no guarantee the stages and jobs are running on the same agent.

For stage dependency variables, it is not the creation of output variables that differs from the standard manner; the difference is in how you retrieve them.

In my sample, I used a Bash script to set the output variable based on a parameter passed into the pipeline, but you can create output variables using scripts or tasks.

  - stage: SetupStage
    displayName: 'Setup Stage'
    jobs:
      - job: SetupJob
        displayName: 'Setup Job'
        steps:
          - checkout: none
          - bash:  |
              set -e # need to avoid trailing " being added to the variable https://github.com/microsoft/azure-pipelines-tasks/issues/10331
              echo "##vso[task.setvariable variable=MyVar;isOutput=true]${{parameters.value}}"
            name: SetupStep
            displayName: 'Setup Step'

Possible ways to access a stage dependency variable

There are two basic ways to access stage dependency variables, both using array objects:

stageDependencies.STAGENAME.JOBNAME.outputs['STEPNAME.VARNAME']
dependencies.STAGENAME.outputs['JOBNAME.STEPNAME.VARNAME']

Which one you use, in which place, and whether via a local alias, is where the complexity lies.

How to access a stage dependency variable in a script

To access a stage dependency variable in a script, or a task, there are two key requirements:

  • The stage containing the consuming job, and hence the script/task, must be set as dependent on the stage that created the output variable.
  • You have to declare a local alias for the value in the stageDependencies array within the consuming stage. This local alias will be used as the local name by scripts and tasks.

Once this is configured, you can access the variable like any other local YAML variable:

  - stage: Show_With_Dependancy
    displayName: 'Show Stage With dependancy'
    dependsOn:
      - SetupStage
    variables:
      localMyVarViaStageDependancies: $[stageDependencies.SetupStage.SetupJob.outputs['SetupStep.MyVar']]
    jobs:
      - job: Job
        displayName: 'Show Job With dependancy'
        steps:
          - bash: |
              echo "localMyVarViaStageDependancies - $(localMyVarViaStageDependancies)"

Tip: If you are having a problem with the value not being set for a stage dependency variable, look in the pipeline execution log, at the job level, and check the ‘Job preparation parameters’ section to see what is being evaluated. This will show if you are using the wrong array object, or have a typo, as any incorrect declaration evaluates as null.

How to use a stage dependency as a stage condition

You can use stage dependency variables as controlling conditions for running a stage. In this use-case you use the dependencies array, and not the stageDependencies array used when aliasing variables.

  - stage: Show_With_Dependancy_Condition
    condition: and(succeeded(), eq(dependencies.SetupStage.outputs['SetupJob.SetupStep.MyVar'], 'True'))
    displayName: 'Show Stage With dependancy Condition'

From my experiments with this use-case, you don’t seem to need a dependsOn entry to declare the stage that exposed the output variable for this to work. So, this is very useful for complex pipelines where you want to skip a later stage based on a much earlier stage on which there is no direct dependency.

A side effect of using a stage condition is that many subsequent stages have to have their execution conditions edited, as you can no longer rely on the default stage completion state of succeeded. This is because the prior stages could now be either succeeded or skipped. Hence, all following stages need to use the condition:

condition: and(not(failed()), not(canceled()))
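
In context, a later stage would look something like this (a minimal sketch; the stage and job names are illustrative):

  - stage: Later_Stage
    displayName: 'Runs even if a prior stage was skipped'
    dependsOn:
      - Show_With_Dependancy_Condition
    condition: and(not(failed()), not(canceled()))
    jobs:
      - job: Job
        steps:
          - bash: echo "This job runs whether the prior stage succeeded or was skipped"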

How to use a stage dependency as a job condition

To avoid the need to alter all the subsequent stages’ execution conditions, you can set a condition at the job or task level. Unlike setting the condition at the stage level, you have to create a local alias (see above) and check the condition on that:

  - stage: Show_With_Dependancy_Condition_Job
    displayName: 'Show Stage With dependancy Condition'
    dependsOn:
      - SetupStage
    variables:
      localMyVarViaStageDependancies: $[stageDependencies.SetupStage.SetupJob.outputs['SetupStep.MyVar']]
    jobs:
      - job: Job
        condition: and(succeeded(), eq(variables.localMyVarViaStageDependancies, 'True'))
        displayName: 'Show Job With dependancy'

This technique will work for both agent-based and agentless (server) jobs.

A warning though: if your job makes use of an environment with a manual approval, the environment approval check is evaluated before the job condition. This is probably not what you are after, so if you are using conditions with environments that use manual approvals, the condition is probably best set at the stage level, with the knock-on issues for the states of subsequent stages mentioned above.

An alternative, if you are just using the environment for manual approval, is to look at using an agentless job with a manual approval. Agentless job manual approvals are evaluated after the job condition, so do not suffer the same problem.
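
As a sketch, an agentless approval job using the ManualValidation task might look like this, assuming the stage declares the localMyVarViaStageDependancies alias as shown above (the notification address and timeouts are illustrative):

  - job: WaitForApproval
    condition: and(succeeded(), eq(variables.localMyVarViaStageDependancies, 'True'))
    pool: server # agentless (server) job
    timeoutInMinutes: 4320 # job-level window within which the approval must be given
    steps:
      - task: ManualValidation@0
        timeoutInMinutes: 1440
        inputs:
          notifyUsers: 'someone@example.com'
          instructions: 'Approve to allow the release to continue'
          onTimeout: 'reject'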

If you need to use a stage dependency variable in a later stage, as a job condition or script variable, but do not wish to add a direct dependency between the stages, you could consider ‘republishing’ the variable as an output of the intermediate stage(s):

  - stage: Intermediate_Stage
    dependsOn:
      - SetupStage
    variables:
      localMyVarViaStageDependancies: $[stageDependencies.SetupStage.SetupJob.outputs['SetupStep.MyVar']]
    jobs:
      - job: RepublishMyVar
        steps:
          - checkout: none
          - bash: |
              set -e # need to avoid trailing " being added to the variable https://github.com/microsoft/azure-pipelines-tasks/issues/10331
              echo "##vso[task.setvariable variable=MyVar;isOutput=true]$(localMyVarViaStageDependancies)"
            name: RepublishStep

Summing Up

So, I hope this post will help you, and the future me, navigate the complexities of stage dependency variables.

You can find the YAML for the test harness I have been using in this GitHub Gist.

Setting Azure DevOps ‘All Repositories’ Policies via the CLI

The Azure DevOps CLI provides plenty of commands to update Team Projects, but it does not cover everything you might want to set. A good example is setting branch policies. For a given repo you can set the policies using the az repos command, e.g.:

az repos policy approver-count update --project <projectname> --blocking true --enabled true --branch main --repository-id <guid> --minimum-approver-count <count> --reset-on-source-push true --creator-vote-counts false --allow-downvotes false

However, you hit a problem if you wish to set the ‘All Repositories’ policies for a Team Project. The issue is that the above command requires a specific --repository-id parameter.

I can find no way around this using any published CLI tools, but using the REST API there is an option.

You could of course check the API documentation to work out the exact call and payload. However, I usually find it quicker to perform the action I require in the Azure DevOps UI and monitor the network traffic in the browser developer tools to see what calls are made to the API.

Using this technique, I have created the following script that sets the All Repositories branch policies.
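
A minimal sketch of the approach, using Bash and curl, looks something like this (the organisation, project and PAT values are placeholders; fa4e907d-c16b-4a4c-9dfa-4906e5d171dd is the standard ‘Minimum number of reviewers’ policy type ID, and the null repositoryId is what scopes the policy to all repositories):

#!/bin/bash
# Sketch: create a 'Minimum number of reviewers' policy scoped to All Repositories
# via the Policy Configurations REST API. Values below are placeholders.
ORG="https://dev.azure.com/myorg"   # placeholder organisation URL
PROJECT="MyProject"                 # placeholder Team Project name
PAT="<personal-access-token>"       # PAT with permission to manage policies

curl -s -u ":${PAT}" \
  -H "Content-Type: application/json" \
  -X POST "${ORG}/${PROJECT}/_apis/policy/configurations?api-version=6.0" \
  -d '{
    "isEnabled": true,
    "isBlocking": true,
    "type": { "id": "fa4e907d-c16b-4a4c-9dfa-4906e5d171dd" },
    "settings": {
      "minimumApproverCount": 1,
      "creatorVoteCounts": false,
      "allowDownvotes": false,
      "resetOnSourcePush": true,
      "scope": [
        { "repositoryId": null, "refName": "refs/heads/main", "matchKind": "Exact" }
      ]
    }
  }'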

Note that you can use this same script to set a specific repo’s branch policies by setting the repositoryId in the JSON payloads.

How to fix Azure Pipeline YAML parsing errors seen after renaming the default Git branch

If in Azure DevOps you rename your Git Repo’s default branch, say from ‘master’ to ‘main’, you will probably see an error in the form ‘Encountered error(s) while parsing pipeline YAML: Could not get the latest source version for repository BlackMarble.NET.App hosted on Azure Repos using ref refs/heads/master.‘ when you try to manually queue a pipeline run.

You could well think, as I did, ‘all I need to do is update the YAML build files with a find and replace for master to main’, but this does not fix the problem.

The issue is in the part of the Azure DevOps pipeline settings that is still managed by the UI and not the YAML file: the association of the Git repo and branch. To edit this setting use the following process (and yes, it is well hidden):

  • In the Azure DevOps browser UI open the pipeline for editing (it shows the YAML page)
  • On the ellipsis menu (… top right) pick Triggers
  • Select the YAML tab (on left)
  • Then select the ‘Get Sources’ section where you can change the default branch
  • Save the changes

Hope this post saves someone some time.

New features for my Azure DevOps Release Notes Extension

Over the past couple of weeks, I have shipped three user-requested features for my Azure DevOps Release Notes Extension.

Generate multiple documents in a single run

You can now specify multiple templates and output files. This allows a single instance of the task to generate multiple release note documents with different formats/content.

This is useful when the generation of the dataset is slow, but you need a variety of document formats for different consumers.

To use this feature you just need to specify comma-separated template and output file names:

  - task: XplatGenerateReleaseNotes@3
    displayName: 'Release notes with multiple templates'
    inputs:
      templatefile: '$(System.DefaultWorkingDirectory)/template1.md,$(System.DefaultWorkingDirectory)/template2.md'
      outputfile: '$(System.DefaultWorkingDirectory)/out1.md, $(System.DefaultWorkingDirectory)/out2.md'
      outputVariableName: 'outputvar'
      templateLocation: 'File'

Notes

  • The number of template and output files listed must match.
  • The pipeline output variable is set to the contents generated from the first template listed.

Select Work Items using a query

There is now a new parameter where you can provide the WHERE part of a WIQL query. This allows work items to be returned, based on the query, completely separately from the current build/release.

  - task: richardfennellBM.BM-VSTS-XplatGenerateReleaseNotes-DEV1.XplatGenerate-Release-Notes.XplatGenerateReleaseNotes@3
    inputs:
      wiqlWhereClause: '[System.TeamProject] = "MyProject" and [System.WorkItemType] = "Product Backlog Item"'

Notes

  • You cannot use the @project, @currentIteration or @me variables in the WHERE clause, but @today is OK.
  • To work out the WHERE clause I recommend using the WIQL Editor extension.

The results of this WIQL are available in a new independent Handlebars template array queryWorkItems. By independent I mean it is completely separate from all the other WI arrays generated from build associations.

This array can be used in the same way as the other work item arrays:

# WIQL list of WI ({{queryWorkItems.length}})
{{#forEach queryWorkItems}}
   *  **{{this.id}}** {{lookup this.fields 'System.Title'}}
{{/forEach}}

Manually associated Work Items

It has recently been found that if you manually associate work items with a build, these work items are not listed by the API calls my task previously used. Hence, they don’t appear in release notes.

If you have this form of association there is now a new option to enable them to be detected. To enable it, use the new parameter checkForManuallyLinkedWI:

  - task: XplatGenerateReleaseNotes@3
    displayName: 'Release notes with manually linked work items'
    inputs:
      checkForManuallyLinkedWI: true

If this parameter is set to true, extra calls will be made to add these WI into the main work item array.

The case of the self-cancelling Azure DevOps pipeline

The Issue

Today I came across a strange issue with a reasonably old multi-stage YAML pipeline: it appeared to be cancelling itself.

The Build stage ran OK, but the Release stage kept being shown as cancelled with a strange error. The strangest thing was it did not happen all the time. I guess this is the reason the problem had not been picked up sooner.

If I looked at the logs for the Release stage, I saw that the main job, which was meant to be the only job, had completed successfully. But I had gained an extra, unexpected job that was being cancelled in 90+% of my runs.

This extra job was trying to run on a hosted Ubuntu agent and failing to make a connection. All very strange, as all the jobs were meant to be using private Windows-based agents.

The Solution

Turns out, as you might expect, the issue was a typo in the YAML.

- stage: Release
  dependsOn: Build
  condition: succeeded()
  jobs:
  - job:
  - template: releasenugetpackage.yml@YAMLTemplates
    parameters:

The problem was the stray job: line. This was causing the attempt to connect to a hosted agent and then check out the code. Interestingly, a hosted Ubuntu agent was requested given there was no pool defined.

As soon as the extra line was removed, the problems went away.

Automating adding issues to Beta GitHub Projects using GitHub Actions

The new GitHub Issues Beta is a big step forward in project management over what was previously possible with the old ‘simple’ form of Issues. The Beta adds many great features such as:

  • Project Boards/Lists
  • Actionable Tasks
  • Custom Fields including Iterations
  • Automation

However, one thing that is not available out of the box is a means to automatically add newly created issues to a project.

Looking at the automations available within a project you might initially think that there is a workflow to do this job, but no.

The ‘item added to project’ workflow triggers when the issue, or PR, is added to the project, not when it is created. Now, this might change when custom workflows become available in the future, but not at present.

However, all is not lost. We can use GitHub Actions to do the job. In fact, the beta documentation even gives a sample to do just this job. But I hit a problem.

The sample shows adding PRs to a project on their creation, but it assumes you are using GitHub Enterprise, as it makes use of the ‘organization’ object to find the target project.

The problem is that the ‘organization’ object was not available to me as I was using a GitHub Pro account (it would be the same for anyone using a free account).

So, below is a reworked sample that adds issues to a project when no organization is available, instead making use of the ‘user’ object, which also exposes the projectNext method.
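
A sketch of such a workflow looks something like this (the username, project number and the ADD_TO_PROJECT_PAT secret name are placeholders; a PAT is needed because the default GITHUB_TOKEN cannot access a user’s beta projects):

name: Add new issue to project
on:
  issues:
    types: [opened]
jobs:
  track_issue:
    runs-on: ubuntu-latest
    steps:
      - name: Get project ID
        env:
          GITHUB_TOKEN: ${{ secrets.ADD_TO_PROJECT_PAT }}
          USER: myname          # placeholder GitHub username
          PROJECT_NUMBER: 1     # placeholder project number
        run: |
          # Query the user object (rather than organization) for the beta project ID
          gh api graphql -f query='
            query($user: String!, $number: Int!) {
              user(login: $user) {
                projectNext(number: $number) {
                  id
                }
              }
            }' -f user=$USER -F number=$PROJECT_NUMBER > project_data.json
          echo "PROJECT_ID=$(jq -r '.data.user.projectNext.id' project_data.json)" >> $GITHUB_ENV
      - name: Add issue to project
        env:
          GITHUB_TOKEN: ${{ secrets.ADD_TO_PROJECT_PAT }}
          ISSUE_ID: ${{ github.event.issue.node_id }}
        run: |
          # Add the newly created issue to the project
          gh api graphql -f query='
            mutation($project: ID!, $issue: ID!) {
              addProjectNextItem(input: {projectId: $project, contentId: $issue}) {
                projectNextItem {
                  id
                }
              }
            }' -f project=$PROJECT_ID -f issue=$ISSUE_ID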

Making SonarQube Quality Checks a required PR check on Azure DevOps

This is another of those posts to remind me in the future. I searched the documentation for this answer for ages and found nothing, eventually getting the solution by asking on the SonarQube Forum.

When you link SonarQube into an Azure DevOps pipeline that is used for branch protection, the success, or failure, of the PR branch analysis is shown as an optional PR check.

The question was ‘how do I make it a required check?’. It turns out the answer is to add an extra Azure DevOps branch policy status check for ‘SonarQube/quality gate’.

When you press the + (add) button, ‘SonarQube/quality gate’ is available in the drop-down.

Once this change was made, the SonarQube Quality Check became a required PR check.

My cancer story – thus far

This is a somewhat different post to my usual technical ones…

In December 2017 I had major surgery. This was to remove an adrenal cortical carcinoma (ACC) that had grown on one of my adrenal glands and then up my inferior vena cava (IVC) into my heart.

Early on I decided, though not hiding the fact I was ill, to not live every detail on social media. So, it is only now that I am back to a reasonable level of health and with some distance that I feel I can write about my experiences. I hope they might give people some hope that there can be a good outcome when there is a cancer diagnosis.

I had known I was ill for a good while before I was diagnosed in May 2017. I had seen my Parkrun times slowing week on week to the point where I could not run at all, and I had also had a couple of failed blood donations due to low haemoglobin levels.

It was clear I was unwell, and getting worse, but there was no obvious root cause. All sorts of things had been considered, from heart to thyroid. Cancer was suspected, but a tumour could not be found. Try as they might, my GP had failed to find a test that showed anything other than that my blood numbers were not right. I was just continuing to get weaker; by that spring I was unable to walk more than a few hundred metres without getting out of breath, with my heart beating at well over 170 BPM.

The problem was that ACC is a rare form of cancer and mine had presented in a hard to find way. There are two basic forms of ACC. One shuts down your adrenal system, and you notice this very quickly. The other form shows no symptoms until the tumour starts to physically impact something. This was the form I had. In my case, the tumour was increasingly blocking blood flow in my IVC and heart.

In the end, the tumour was found because of a lower abdominal ultrasound. By the time I had the ultrasound scan it was about the only diagnostic that had not been tried. It was a strange mixture of shock and relief to be immediately told after the scan by the sonographer that ‘the doctor would like a word before you go home’. So, at least I knew the cause of why I felt so ill. I left the hospital that day with a diagnosis of an adrenal tumour that was most likely benign but may be malignant, on blood thinning injections and with a whole set of appointments to find out just how bad it was.

At this point the NHS did what it does best, react to a crisis. Over the next couple of weeks, I seemed to live at the regional cancer centre at St James Hospital in Leeds having all sorts of tests.

My health, and the time I was spending at the hospital, meant there was no way I could continue to work. I was lucky I was able to transition quickly onto long-term sick leave in such a way that I did not have the financial worries many cancer patients have to contend with on top of their illness. I would not be seeing work again for over 9 months.

The next phase of diagnostic tests was wide-ranging. Plenty of blood was taken, I had to collect my urine for 48 hours, and there were CT scans and PET scans, all to get a clearer idea of how bad it was. The real clincher test as to whether the tumour was benign or malignant was a biopsy. One of those strangely pain-free tests, due to the local anaesthetics, but accompanied by much poking, pushing and strange crunching noises. Then a six-hour wait flat on my back on a recovery ward before I could sit up, let alone go home.

It was whilst lying down post-test that I had probably my best meal on the NHS. I had just missed the lunch service on the recovery ward (a good move, from past experience), but a nurse produced a huge pile of toast and jam. A perfect meal for the reclined patient.

It was also during this post test recovery time that I first met other cancer patients and had a chance to have a proper chat with them. No matter how bad your case seems to be you always seem to be meeting people with a worse prognosis. Whilst on the biopsy recovery ward I met a man who told me his story. A check-up because he did not feel well led to the discovery of a large brain tumour which then spread throughout his body. He knew he only had a short time left. The conversation opened my eyes to the reality of my and other patients’ situations.

A couple of weeks later we got the bad news that the cancer was malignant and very advanced. We had clung onto the hope it was benign. The news was delivered in a very matter of fact way, that I probably would not see Christmas unless a treatment plan could be found, and the options were not good. There were tears.

However, there was at least some good news, the tumour was a single mass, it had not spread around my body. The problem was that there was no obvious surgical option due to its size and position. All that could be done was to start chemotherapy to see if the tumour could be shrunk. So, a very ‘old school’, and hence harsh, three cycle course of chemotherapy was started in July 2017.

I dealt with all of this in a very step by step way. People seemed surprised by this, that I was not more emotionally in pieces. I assume that is just my nature. I think this whole phase of my illness was much harder on my partner and family. They had to watch me getting more ill with no obvious route to recovery. For me it was just a case of get up and doing whatever the tasks were for the day. Whether they be tests, treatments or putting things in place like a Lasting Power of Attorney.

Life became a cycle of three days of eight-hour blocks of chemotherapy, then a month to try to recover. On each cycle I recovered less than on the previous one.

The chemotherapy ward is strangely like flying business class. The seats look comfortable, but after eight hours they are not. You can’t go to the toilet without issues: on an airplane it is getting out of the row; on the chemotherapy ward it is taking the drip with you. In both cases, the toilet is too small. You feel tired all the time, just like jet lag, and of course, the food is questionable at best.

As I had seen on other wards, there was a strong camaraderie on the chemotherapy ward. Everyone is going through life changing treatment. Some people looked very ill, others as if there is nothing obviously wrong with them, but irrespective of their condition I found the patients, as well as the staff, very supportive. It was far from an unhappy place. Not something I had expected.

In many ways the worst side effect of chemotherapy, beyond the expected weight loss, hair loss, nausea and lack of energy was that my attention span disappeared. For the first time in my adult life I stopped reading. I struggled to make it through a single paragraph without forgetting where I was. I remember one afternoon in a hospital waiting room, whilst waiting for yet more test results, trying to read a page in a novel. I never got to the end of the page, just starting it over and over. It was also at this time I realised I had to stop driving, I felt my attention was too poor and my reactions too slow.

As I said, by this point I was very weak. This made most day-to-day activities very hard, but the strange thing was I found I could still swim. I had had the theory that though my IVC was blocked, hence not bringing blood from the lower half of my body, if I swam with a pull-buoy just using my arms, I would be OK. This turned out to be correct, much to the surprise of the medical professionals. So, I started to do some easy swimming in the recovery phases between chemotherapy cycles when I was able. It turned out the biggest issue was I got cold quickly due to my weight loss. So, swim sessions were limited to 15 to 20 minutes and just a few hundred metres.

After the planned three chemotherapy cycles all the tests were rerun, and it was found that the tumour seemed unaffected. There had always been a very low chance of success. I had already decided I was unlikely to start a 4th cycle as I felt so ill; it was just no life. I did not want any more chemotherapy when the chance of success was so low. Better to have some quality of life before the end.

This is where I got lucky because I was being treated at a major cancer research centre. I had been told there was no adrenal cancer surgical option for the way my ACC had presented. However, the hospital’s renal cancer surgical team had seen something similar and were willing to operate with the support of the cardiac and vascular teams. A veritable who’s who of senior surgeons at St James as I was informed by the nurse when I was being admitted for the operation in December 2017.

My operation meant stopping the heart, removing the tumour along with an adrenal gland, and a kidney (collateral damage as there was nothing wrong with it other than its proximity to the tumour) and then patching me all back together. Over 10 hours on the operating table and a transfusion of a couple of pints of blood.

When you see a very similar version of your operation on the BBC series on cutting edge surgery ‘Edge of Life’ you realise how lucky you are. Just a few years ago or living in another city and the operation would not have been possible.

Given my heart had to be stopped, I was treated as a cardiac patient, and the cardiac department moves you through recovery fast. Most of the people on the ward were having heart bypasses, so I was ‘the interestingly different’ case to many of the staff. I did take longer than the usual 5 days taken by bypass patients, but I still managed to get out of hospital in 10 days, in time for Christmas. It is surprising how fast you can get over being opened up from the top of your chest to your groin, and how little pain there was.

At this point I was, in theory, cured: the tumour was removed and blood was flowing again, but I was very weak and recovery was going to be a long road. I started with walks of only a few minutes, with the rest of the day spent resting. The great news was that I could walk again without getting out of breath and my heart rate going through the roof.

So, over the next few months, I gradually regained my health, some weight, some hair and my attention span. I was able to ease back into work part time in the early summer of 2018.

However, the surgery was not the end of my treatment. The surgeons were confident they had got all of the tumour they could see. They said it was well defined, so cancerous and normal tissue could be differentiated, but there was always the chance of microscopic cancerous cells remaining. So, I was put on two years of Mitotane tablet-based chemotherapy. This was the treatment with the best evidence base, but that is not saying much; there are not that many research studies into ACC treatment options as it is so rare. My treatment plan was based on a small Italian and German study of 177 people, most of whom did not complete the plan, but it did show a statistically significant reduction in the chance of recurrence within 5 years.

Mitotane stops cell division and I had not realised how hard this would make my recovery and specifically regaining some fitness. I was OK for day to day living, but an activity like running was not possible. I twice started Couch to 5K but had to give up as I could not progress beyond the walking stages.

The mental weight of everything did not catch up with me until a good year or so after surgery, by which time I was back at work and living a ‘normal’ life. Previously people had kept asking ‘how are you doing?’. As I said, I felt they expected me to be in pieces, and I was just going step by step. It is only when the main treatment stopped and life returned to normal that everything that had occurred hit me. A seemingly unrelated family incident, fairly small in the scheme of things, caused it all to come flooding back and completely stopped me in my tracks.

It was at this time that I reached out to the support services of the Macmillan charity, and specifically the Robert Ogden Centre at St James, for help. This was something I had not done before, though my partner had used their family support services earlier in my treatment. With their counselling help, I worked my way through my problems and got back to some form of normal.

In the autumn of 2019 I came off Mitotane and once it was out of my system I could at last try to get fit again. So, it was back to Couch to 5K and with a few repeated weeks I was able to run 5K again. I was back running Parkrun in November 2019. It was great to get back to my local Roundhay Parkrun community, though I had been volunteering whenever my health allowed throughout my illness. I was running much slower than before I was ill, but running.

Since then, I have to say Covid lockdown has helped me, giving me a structure to my training. I have certainly got a reasonable level of endurance back, but any speed seems to elude me.

I have always had a fairly high maximum heart rate, over 200 well into my 40s, and before getting cancer it was still in the 190s. Now, post illness, I struggle to reach 160, and my bike and run maximum heart rates are very similar. I have tried to do a maximum heart rate test; I can get to a heart rate of around 150-160 in a tempo run, but it barely goes any higher when I sprint. So, I have a question for anyone with experience of training after cancer and heart surgery. Is it expected after stopping the heart that my maximum heart rate should be way lower? Or is the problem that my hormone levels are different due to the lack of one of my adrenal glands? Or is it just that I am getting older and have lost muscle mass? I am not sure I will ever know the answer to that one; it is not exactly a question the NHS is set up to answer. All their post-operative guidance is aimed at day-to-day levels of exertion, not the elevated levels caused by sports.

But that is a minor gripe, I am reasonably fit again. I have recently completed my first triathlon in 5 years and between lockdowns walked the 268 miles of the Pennine Way with my partner. I am not as fast as I was, but I am 5 years older and have had major heart surgery. Hell, I am alive.

Like all cancer patients, this is not the end of the road for my treatment. I am still on steroids and have annual CT scans, but all the signs seem good that the surgery got the tumour and there is no reason I should not live to a ripe old age.

I would not have got here without the support of my partner and family, and the unbelievable work of the NHS and the support services I have used. I can’t thank you all enough.

Leeds Hospital Charity – the charity of Leeds Teaching Hospitals
Macmillan Cancer Support – support for cancer patients and their families
NHS Blood Transfusion Service – please consider giving blood, without regular donations surgery like mine is not possible.

But what if I can’t use GitHub Codespaces? Welcome to github.dev

Yesterday GitHub released Codespaces as a commercial offering, a new feature I have been using during its beta phase.

Codespaces provides a means for developers to easily edit GitHub hosted repos in Visual Studio Code on a high-performance VM.

No longer does the new developer on the team have to spend ages getting their local device set up ‘just right’. They can, in a couple of clicks, provision a Codespace that is preconfigured for the exact needs of the project, i.e. the correct VM performance, the right VS Code extensions and the debug environment configured. All billed on a pay-as-you-go basis and accessible from any client.

It could be a game-changer for many development scenarios.

However, there is one major issue. Codespaces, at launch, are only available on GitHub Team or Enterprise subscriptions. They are not available on individual accounts, as yet.

But all is not lost: hidden within the documentation, but widely tweeted about, is the github.dev editor. You can think of this as ‘Codespaces Lite’, i.e. it is completely browser-based, so there is no backing VM resource.

To use this feature, alter your URL from https://github.com/myname/myrepo to https://github.dev/myname/myrepo. Or, when browsing the repo, just press the . (period) key and you swap into a browser-hosted version of VS Code.

You can install a good number of extensions, just as long as they don’t require external compute resources.

So, this is a great tool for any quick edit that requires multiple files to be touched in the same commit.

I think it is going to be interesting to see how github.dev and Codespaces are used. Maybe we will see the end of massive developer PCs?

Or will that have to wait until the Codespaces VMs on offer include GPUs?