How to fix Azure Pipeline YAML parsing errors seen after renaming the default Git branch

If you rename your Git repository’s default branch in Azure DevOps, say from ‘master’ to ‘main’, you will probably see an error of the form ‘Encountered error(s) while parsing pipeline YAML: Could not get the latest source version for repository BlackMarble.NET.App hosted on Azure Repos using ref refs/heads/master.‘ when you try to manually queue a pipeline run.

You could well think, as I did, ‘all I need to do is update the YAML build files with a find and replace for master to main’, but this does not fix the problem.

The issue lies in the part of the Azure DevOps pipeline settings that is still managed by the UI and not the YAML file: the association of the Git repo and branch. To edit this setting, use the following process (and yes, it is well hidden):

  • In the Azure DevOps browser UI open the pipeline for editing (it shows the YAML page)
  • On the ellipsis menu ( … top right) pick Triggers
  • Select the YAML tab (on left)
  • Then select the ‘Get Sources’ section where you can change the default branch
  • Save the changes
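If you would rather script the change, the same setting can be altered via the Build Definitions REST API, where the ‘Get Sources’ branch lives in the definition’s repository.defaultBranch property. Below is a minimal sketch of the idea; the organisation, project, definition id and PAT are all placeholders, and in practice you would GET the definition first and PUT back the full, current revision.

```python
# Sketch: repoint a pipeline definition's default branch via the Azure DevOps
# Build Definitions REST API. Org/project/id/PAT below are placeholders.
import base64
import json
import urllib.request

def set_default_branch(definition: dict, branch: str) -> dict:
    """Point the definition's 'Get Sources' repository at a new branch."""
    definition["repository"]["defaultBranch"] = f"refs/heads/{branch}"
    return definition

def put_definition(org: str, project: str, definition: dict, pat: str) -> None:
    """PUT the updated definition back (the body must be the full definition)."""
    url = (f"https://dev.azure.com/{org}/{project}/_apis/build/definitions/"
           f"{definition['id']}?api-version=6.0")
    token = base64.b64encode(f":{pat}".encode()).decode()
    req = urllib.request.Request(
        url,
        data=json.dumps(definition).encode(),
        method="PUT",
        headers={"Content-Type": "application/json",
                 "Authorization": f"Basic {token}"})
    urllib.request.urlopen(req)

# A (heavily truncated) definition as returned by the GET endpoint
definition = {"id": 42, "repository": {"defaultBranch": "refs/heads/master"}}
print(set_default_branch(definition, "main")["repository"]["defaultBranch"])
# → refs/heads/main
```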

Hope this post saves someone some time

Tidying up local branches with a Git Alias and a PowerShell Script

It is easy for your local branches in Git to get out of sync with the upstream repository, leaving old dead branches locally that you can’t remember creating. You can use the prune option on the git fetch command to remove the remote branch references, but that command does nothing to remove local branches.

A good while ago, I wrote a small PowerShell script to wrap the running of git fetch and then, based on the reported deletions, remove any matching local branches, finally returning me to my trunk branch.

Note: This script was based on a sample I found, but I can’t remember where, so I’m unable to give credit, sorry.

I used to just run this script from the command line, but I recently thought it would be easier if it became a Git alias. As Git aliases run in a bash shell, this meant I needed to shell out to PowerShell 7. Hence, my Git config ended up as shown below:

[user]
        name = Richard Fennell
        email =
[filter "lfs"]
        required = true
        clean = git-lfs clean -- %f
        smudge = git-lfs smudge -- %f
        process = git-lfs filter-process
[init]
        defaultBranch = main
[alias]
        tidy = !pwsh.exe C:/Users/fez/OneDrive/Tools/Remove-DeletedGitBranches.ps1 -force

Now I can just run ‘git tidy‘ and all my branches get sorted out.
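For reference, the core logic of the script can be sketched as follows. The script itself is PowerShell; this Python sketch is just illustrative, and the regex assumes the usual git fetch deletion lines of the form ‘ - [deleted] … -> origin/branch-name’.

```python
# Sketch of the tidy-up logic: prune remote refs, work out which local
# branches match the deleted remote branches, delete them, return to trunk.
import re
import subprocess

def deleted_branches(fetch_output: str) -> list[str]:
    """Extract branch names from 'git fetch --prune' deletion lines,
    e.g. ' - [deleted]  (none) -> origin/feature-x' yields 'feature-x'."""
    pattern = re.compile(r"\[deleted\].*->\s+origin/(\S+)")
    return [m.group(1)
            for m in map(pattern.search, fetch_output.splitlines()) if m]

def tidy(trunk: str = "main") -> None:
    """Prune, delete the matching local branches, then check out trunk."""
    result = subprocess.run(["git", "fetch", "--prune"],
                            capture_output=True, text=True)
    for branch in deleted_branches(result.stderr):  # git logs these to stderr
        subprocess.run(["git", "branch", "-D", branch])
    subprocess.run(["git", "checkout", trunk])
```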

Business Process Automation and Integration in the Cloud

Organisations are facing increased, and unprecedented, pressure from the market to transform digitally, and as a result, they need to think how they can become more attractive in their specific market space. The most common area to concentrate on is becoming more efficient, and this can be thought of in discrete parts: streamlining internal, often complex, business processes, making the organisation easier to work with for suppliers or customers, having better cross business views on information, improving service delivery, or saving on manual processing. All these efficiencies are brought to bear by automation, whether this is person to system data sharing, system to system, business to business; the same principles apply but different techniques may be used.

Today most organisations run their business on systems spread across multiple heterogeneous environments, which poses interoperability issues and other such difficulties. To address these, businesses need to look at integrating their data solutions to provide easy access to standard data across internal and external systems, layering business process automation on top.

Business process automation is how an organisation defines its business processes and automates them, so they can be run repeatedly and reliably, at a low cost, with a reduction in human error.

Sponsoring DDD 2020

A few weeks back, I wrote about how we aren’t asking for sponsors for our online Developer Day, as there aren’t any significant costs to cover. Instead, we were directing people towards making a donation to The National Museum of Computing, an organisation which does great things for our industry, but has been finding this year challenging. And many thanks to those of you who have already donated.

However, some organisations, who usually sponsor DDD, have been in touch about still sponsoring DDD in some way, as they want to show their support.

Therefore, if an organisation wishes to donate to TNMOC using the link above, we will still count that as sponsorship, and we are delighted for their ongoing support. Please contact us for more details.

Thank you again for your support, and hope you can join us at DDD on the 12th December!

DDD Logo
The National Museum of Computing

How can I automatically create Azure DevOps Release Notes and how can I publish them?

A question I am often asked when consulting on Azure DevOps is ‘how can I automatically create release notes and how can I publish them?’.

Well, it is for just this requirement that I have written a set of Azure DevOps Pipeline Tasks:

  • Release Note Generator – to generate release notes. I strongly recommend this Cross-platform Node-based version. I plan to deprecate my older PowerShell version in the not too distant future as it uses ‘homegrown logic’, as opposed to standard Azure DevOps API calls, to get associated items.
  • Wiki Updater – to upload a page to a WIKI.
  • WIKI PDF Generator – to convert a generated page, or whole WIKI, to PDF format.

So let’s deal with these tools in turn.

Generating Release Notes

The Release Note Generator task generates release notes by getting the items associated with a build (or release) from the Azure DevOps API and generating a document based on a Handlebars-based template.

  • The artefacts that can be included in the release notes are details of the build/release and associated Work Items, Commits/Changesets, Tests and Pull Requests.
  • Most of the sample templates provided are for markdown format files. However, they could easily be converted for other text-based formats such as HTML if needed.
  • The use of Handlebars as the templating language makes for a very flexible and easily extensible means of document generation. There are samples of custom extensions provided with the templates.

Sample YAML for this task is as follows; note it is using an inline template, but it is also possible to load the template from a file path.

        - task: richardfennellBM.BM-VSTS-XplatGenerateReleaseNotes.XplatGenerate-Release-Notes.XplatGenerateReleaseNotes@3
          displayName: 'Generate Release Notes'
          inputs:
            outputfile: '$(System.DefaultWorkingDirectory)'
            outputVariableName: OutputText
            templateLocation: InLine
            inlinetemplate: |
              # Notes for build
              **Build Number**: {{buildDetails.buildNumber}}
              **Build Trigger PR Number**: {{lookup buildDetails.triggerInfo 'pr.number'}}

              # Associated Pull Requests ({{pullRequests.length}})
              {{#forEach pullRequests}}
              {{#if isFirst}}### Associated Pull Requests (only shown if PR) {{/if}}
              *  **PR {{this.pullRequestId}}**  {{this.title}}
              {{/forEach}}

              # Builds with associated WI/CS ({{builds.length}})
              {{#forEach builds}}
              {{#if isFirst}}## Builds {{/if}}
              ##  Build {{this.build.buildNumber}}
              {{#forEach this.commits}}
              {{#if isFirst}}### Commits {{/if}}
              - CS {{this.id}}
              {{/forEach}}
              {{#forEach this.workitems}}
              {{#if isFirst}}### Workitems {{/if}}
              - WI {{this.id}}
              {{/forEach}}
              {{/forEach}}

              # Global list of WI ({{workItems.length}})
              {{#forEach workItems}}
              {{#if isFirst}}## Associated Work Items (only shown if WI) {{/if}}
              *  **{{this.id}}**  {{lookup this.fields 'System.Title'}}
                - **WIT** {{lookup this.fields 'System.WorkItemType'}}
                - **Tags** {{lookup this.fields 'System.Tags'}}
              {{/forEach}}

              {{#forEach commits}}
              {{#if isFirst}}### Associated commits{{/if}}
              * ** ID{{this.id}}**
                -  **Message:** {{this.message}}
                -  **Committed by:** {{this.author.displayName}}
                -  **FileCount:** {{this.changes.length}}
              {{#forEach this.changes}}
                    -  **File path (TFVC or TfsGit):** {{this.item.path}}
                    -  **File filename (GitHub):** {{this.filename}}
              {{/forEach}}
              {{/forEach}}

How to Publish The Notes

Once the document has been generated, a decision is needed as to how to publish it. There are a few options:

  • Attach the markdown file as an artefact to the Build or Pipeline. Note you can’t do this with UI-based Releases as they have no concept of artefacts, but this is becoming less of a concern as people move to multistage YAML.
  • Save it in some other location, e.g. Azure Storage or, if on-premises, a UNC file share.
  • Send the document as an email – I have used Rene van Osnabrugge’s Send Email Task for this job.
  • Upload it to a WIKI using my WIKI Updater Task.
  • Convert the markdown release note document, or the whole WIKI, to a PDF using first my WIKI PDF Exporter Task, then publish the result using any of the above options.

I personally favour the 1st and 4th options used together: attach the document to the pipeline and then upload it to a WIKI.

A sample of suitable YAML is shown below, uploading the document to an Azure DevOps WIKI. Please note that the repo URL and authentication can trip you up here so have a good read of the provided documentation before you use this task.

  - task: richardfennellBM.BM-VSTS-WIKIUpdater-Tasks.WikiUpdaterTask.WikiUpdaterTask@1
    displayName: 'Git based WIKI Updater'
    inputs:
      repo: ''
      filename: 'xPlatReleaseNotes/'
      dataIsFile: true
      sourceFile: '$(System.DefaultWorkingDirectory)'
      message: 'Update from Build'
      gitname: builduser
      gitemail: 'build@demo'
      useAgentToken: true

But when do I generate the release notes?

I would suggest you always generate release notes for every build/pipeline run, i.e. a document of the changes since the last successful run of that build definition. This should be attached as an artefact.

However, this per-build document will usually be too granular for use as ‘true’ release notes, i.e. something to hand to a QA team, an auditor or a client.

To address this second use case I suggest, within a multistage YAML pipeline (or a UI based release), having a stage specifically for generating release notes.

My task has a feature whereby it checks for the last successful release of the pipeline/release to the stage in which the task is defined, and bases the release notes on the changes since that release. If this ‘documentation’ stage is only run when you are doing a ‘formal’ release, the release notes generated will cover everything since the last formal release. Exactly what a QA team, auditor or client might want.
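The ‘since the last successful release to this stage’ behaviour can be pictured with a small sketch. The run/stage model below is simplified and illustrative only; it is not the task’s real data model.

```python
# Illustrative sketch: choose the commits for the release notes as
# "everything since the last run that succeeded in the given stage".
from dataclasses import dataclass, field

@dataclass
class Run:
    number: int
    commits: list[str]
    succeeded_stages: set[str] = field(default_factory=set)

def notes_range(history: list[Run], current: Run, stage: str) -> list[str]:
    """Commits from all runs after the last one that succeeded in `stage`."""
    commits: list[str] = []
    for run in reversed(history):          # walk newest first
        if stage in run.succeeded_stages:  # last 'formal' release found
            break
        commits = run.commits + commits
    return commits + current.commits

history = [
    Run(1, ["a1"], {"Build", "Docs"}),   # last run where the Docs stage ran
    Run(2, ["b1", "b2"], {"Build"}),     # built, but no formal release
    Run(3, ["c1"], {"Build"}),
]
print(notes_range(history, Run(4, ["d1"], set()), "Docs"))
# → ['b1', 'b2', 'c1', 'd1']
```

So a release-notes stage that only runs on formal releases naturally accumulates everything since the previous formal release.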

In conclusion

So I hope that this post provides some ideas as to how you can use my tasks to generate some useful release notes.

Some Points with working with SQL Always On Availability Groups (when doing a TFS Migration)

Just shy of a full year from my last post. Crikey!

So it’s another TFS related post, but not specifically to do with TFS. I still perform TFS migrations/upgrades from time to time here at Black Marble as we still have many customers who can’t migrate to Azure DevOps for one reason or another.

A few weeks ago we performed a TFS upgrade for a customer, bringing them from TFS 2013.1 to Azure DevOps Server 2019.0.1.

The new SQL Server installation that underpinned this new environment was a SQL Server 2016 Always On Availability Group, with 2 synchronous nodes and one async node contained in a figurative shed somewhere a few miles away.

Now our initial migration/upgrade to this server had gone well; there were a dozen or so smaller collections we moved over with the initial migration, but due to deadlines/delivery/active sprints etc. there were a couple of teams whose collections couldn’t be moved in that initial setup. So we came back a few months later to move these last few collections, which coincidentally happened to be the biggest.

The biggest of the collections was approximately 160 GB in size, which wasn’t the biggest collection I’d seen (not by a looooong shot), but not small by any means.

Could we get this thing into the Availability Group? Nope. Every time we completed the “Promote Database” Wizard the database would fail to appear in the “synchronized” or even “synchronizing” state on any of the nodes. Inspection of the file system on each of the secondary nodes didn’t even show the database files as being present.

So we had a think about it (rinse and repeat a few times) and someone called their SQL DBA friend, who told us the GUI-based wizard hates anything over roughly 150 GB: we should let the wizard generate the script for us and run it ourselves.

Well lo and behold the promotion of the databases worked…mostly. We saw the files appear on disk on all servers in the group, and they appeared as databases in the availability group but with warnings on them that they still weren’t syncing.

So on a hunch I re-ran the GUI wizard to promote the databases again, which among other things performs a series of validation checks. The key validation check (and the backbone of my hunch) is “is this database already in an AG?”. The answer was yes, and this seemed to shock SSMS into recognizing that the job was complete and the synchronization status of the synchronous and asynchronous replicas jumped into life.

My guess is that promoting a db into an AG is a bit of a brittle process, and if some thread in the background dies or the listener object in memory waiting for that thread dies then SSMS never knows what the state of the job is. Doing it via script is more resilient, but still not bullet proof.

Also worth noting for anyone who isn’t a SQL boffin: an async replica will never show a database as “synchronized”, only ever “synchronizing”. Makes sense when you think about it! (Don’t let your customers get hung up on it.)

Logic App Flat File Schemas and BizTalk Flat File Schemas

I recently started working on a new Logic App for a customer using the Enterprise Integration Pack for Visual Studio 2015. I was greeted with a familiar sight: the schema generation tools from BizTalk, but with a new lick of paint.

The Logic App requires the use of Flat File schemas, so I knocked up a schema from the instance I’d been provided and went to validate it against that same instance (since it should validate).

My Flat File was a bit of a pain, to be frank, in that it had ragged endings; that is to say, some rows had an additional column on the end that the others lacked.

This is something I’ve worked with before… but I couldn’t quite remember exactly how I’d solved it, other than by tinkering with the element properties.

I generated the schema against the lines with the additional column and erroneously set the last field’s Nillable property to True. When I went to validate the instance, lo and behold, it wasn’t a valid instance, and I had little information about why.

So I fired up my BizTalk 2013 R2 virtual machine (I could have used my 2016 one to be fair if I hadn’t sent it to the farm with old yellow last week) and rinsed and repeated the Flat File Schema Wizard.

So I got a bit more information this time, namely that the sample I’d been provided was missing a CR/LF on the final line, and that the Nillable setting I’d applied to the last column was throwing a wobbler by messing up the following lines.

Setting the field’s Nillable property back to false, but its Min and Max Occurs to 0 and 1 respectively, gave me a valid working schema.

So I copied the schema back to my Logic Apps VM and attempted to revalidate my file (with its final line CR/LF amended). To my annoyance, invalid instance!

I was boggled at this quite frankly but some poking around the internet led me to this fellow’s Blog.

In short, there’s an attribute added on a Flat File schema which denotes the extension class to be used by the schema editor. When built by the BizTalk Flat File Schema Wizard it’s set to


When generated by the Enterprise Integration Pack it’s


Changing this attribute value in my BizTalk generated Flat File schema and presto, the schema could validate the instance it was generated from.

So, in short, I’ll say the flavour of the schema designer tools in the Enterprise Integration Pack seems to report errors and verbosity a little differently to its BizTalk ancestor; it still throws out mostly the same information, but in different places:

  • EIP

You only get a generic error message in the output log. Go and examine the errors log for more information.


  • BizTalk

You get the errors in the output log, and in the error log.


In my case, the combination of my two errors (the flat file being slightly malformed, and my Nillable field change) in the EIP only gave me the error “Root element is missing”, which wasn’t particularly helpful; the BizTalk tooling gave me a better fault diagnosis.

On the bright side, the two are more or less interchangeable. Something to bear in mind if you’re struggling with a Flat File schema and have a BizTalk development environment on hand.

SharePoint 2010 Integrations with TFS 2017.3

An interesting aspect of a TFS Upgrade I found myself in recently was to do with SharePoint Integrations with Team Foundation Server. As a bit of a pre-amble, back in the day, when TFS was a bit more primitive, it didn’t have a rich dashboard experience like it does today, and so leaned on its distant cousin SharePoint for Project Portal support. A lot of my customers, as a consequence of this, have significant document repositories in SharePoint Foundation instances installed as part of their Team Foundation Server installations. This in itself isn’t an issue, as you can decouple a SharePoint server from TFS and have the two operate independently of one another, but some users will miss the “Navigate to Project Portal” button from their project dashboard.

I found an easy answer to this is to use the Markdown widget to give them the link they want/need so they can still access their project documentation, but of course the work item and report add-ins will cease to function. Encouraging users to adopt the newer TFS/Azure DevOps dashboards is key here.

But enough waffling. A technical conundrum faced in this recent upgrade I thought worth mentioning was around the SharePoint integrations themselves. The customer was migrating from a 2012.4 server to a 2017.3 server (as tempting as 2018.3 was, IT governance in their organization meant SQL 2016 onwards was a no-no). Their old TFS server was cohabiting with a SharePoint Foundation 2010 server, which was their BA’s document repository and point of entry for having a look at the state of play for each Team Project. The server had two Team Project Collections for two different teams, who were getting two shiny new TFS servers.

One team was going to the brand new world and still needed the SharePoint integrations. The other was going to the new world with no SharePoint (huzzah!). Initially the customer had wanted to move the SharePoint server to one of the new servers, and thus be able to decommission the old Windows Server 2008R2 server it was living on.

This proved to be problematic: even had we upgraded SharePoint to the last version that supported TFS integration, which was SharePoint 2013, that version is too old and creaky to run on Windows Server 2016. So we had to leave the SharePoint server where it was. This then presented a new problem. According to the official documentation, to point the SharePoint 2010 server at TFS 2017 I would have had to install the SharePoint 2010 Connectivity Extensions which shipped with the version of TFS I was using, i.e. TFS 2017’s SharePoint 2010 extensions. But we already had a set of SharePoint 2010 extensions installed on that server… specifically TFS 2012’s SharePoint 2010 extensions.

To install 2017’s version of the extensions I’d have had to uninstall or upgrade TFS 2012.4 since two versions of TFS don’t really like cohabiting on the same server and the newer version tends to try (helpfully I should stress) upgrading the older version. The customer didn’t like the idea of their fall-back/mothballed server being rendered unusable if they needed to spin it up again. This left me in something of a pickle between a rock and a hard place.

So on a hunch, I figured why not go off the beaten path and ignore the official documentation (something I don’t make a habit of), and suggested to the customer “let’s just point TFS 2012’s SharePoint 2010 extensions at TFS 2017 and see what happens!”. So, I reconfigured the SharePoint 2010 extensions in the TFS 2017 Admin Console and guess what? It worked like a charm. The SharePoint 2010 Project Portals quite happily talked to the TFS 2017 server with no hiccups and no loss of functionality.

Were I to make an educated guess, I’d say TFS 2017’s SharePoint 2010 extensions are the same as TFS 2012’s, but with a new sticker on the front, or with only minor changes.