How can I automatically create Azure DevOps Release Notes and how can I publish them?

A question I am often asked when consulting on Azure DevOps is ‘how can I automatically create release notes and how can I publish them?’.

Well, it is for just this requirement that I have written a set of Azure DevOps Pipeline Tasks:

  • Release Note Generator – to generate release notes. I strongly recommend this Cross-platform Node-based version. I plan to deprecate my older PowerShell version in the not too distant future as it uses ‘homegrown logic’, as opposed to standard Azure DevOps API calls, to get associated items.
  • Wiki Updater – to upload a page to a WIKI.
  • WIKI PDF Generator – to convert a generated page, or whole WIKI, to PDF format.

So let's deal with these tools in turn.

Generating Release Notes

The Release Note task generates release notes by getting the items associated with a build (or release) from the Azure DevOps API and generating a document based on a Handlebars template.

  • The artefacts that can be included in the release notes are details of the build/release and associated Work Items, Commits/Changesets, Tests and Pull Requests.
  • Most of the sample templates provided are for markdown format files. However, they could easily be converted for other text-based formats such as HTML if needed.
  • The use of Handlebars as the templating language makes for a very flexible and easily extensible means of document generation. There are samples of custom extensions provided with the templates.

Sample YAML for this task is as follows; note it is using an inline template, but it is also possible to load the template from a file path (a sketch of that option follows the sample).

        - task: richardfennellBM.BM-VSTS-XplatGenerateReleaseNotes.XplatGenerate-Release-Notes.XplatGenerateReleaseNotes@3
          displayName: 'Generate Release Notes'
          inputs:
            outputfile: '$(System.DefaultWorkingDirectory)/inline.md'
            outputVariableName: OutputText
            templateLocation: InLine
            inlinetemplate: |
              # Notes for build 
              **Build Number**: {{buildDetails.id}}
              **Build Trigger PR Number**: {{lookup buildDetails.triggerInfo 'pr.number'}} 

              # Associated Pull Requests ({{pullRequests.length}})
              {{#forEach pullRequests}}
              {{#if isFirst}}### Associated Pull Requests (only shown if PR) {{/if}}
              *  **PR {{this.id}}**  {{this.title}}
              {{/forEach}}

              # Builds with associated WI/CS ({{builds.length}})
              {{#forEach builds}}
              {{#if isFirst}}## Builds {{/if}}
              ##  Build {{this.build.buildNumber}}
              {{#forEach this.commits}}
              {{#if isFirst}}### Commits {{/if}}
              - CS {{this.id}}
              {{/forEach}}
              {{#forEach this.workitems}}
              {{#if isFirst}}### Workitems {{/if}}
              - WI {{this.id}}
              {{/forEach}} 
              {{/forEach}}

              # Global list of WI ({{workItems.length}})
              {{#forEach workItems}}
              {{#if isFirst}}## Associated Work Items (only shown if WI) {{/if}}
              *  **{{this.id}}**  {{lookup this.fields 'System.Title'}}
                - **WIT** {{lookup this.fields 'System.WorkItemType'}} 
                - **Tags** {{lookup this.fields 'System.Tags'}}
              {{/forEach}}

              {{#forEach commits}}
              {{#if isFirst}}### Associated commits{{/if}}
              * ** ID{{this.id}}** 
                -  **Message:** {{this.message}}
                -  **Committed by:** {{this.author.displayName}} 
                -  **FileCount:** {{this.changes.length}} 
              {{#forEach this.changes}}
                    -  **File path (TFVC or TfsGit):** {{this.item.path}}  
                    -  **File filename (GitHub):** {{this.filename}}  
              {{/forEach}}
              {{/forEach}}
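
If you would rather keep the template under source control than inline in the pipeline, the task can also load it from a file. Below is a minimal sketch of that option; I am assuming the file-based inputs are named templateLocation: File and templatefile, so please check the task's documentation for the exact parameter names before relying on it.

        - task: richardfennellBM.BM-VSTS-XplatGenerateReleaseNotes.XplatGenerate-Release-Notes.XplatGenerateReleaseNotes@3
          displayName: 'Generate Release Notes (file-based template)'
          inputs:
            outputfile: '$(System.DefaultWorkingDirectory)/inline.md'
            outputVariableName: OutputText
            # assumed inputs - the template is read from a file in the repo rather than inline
            templateLocation: File
            templatefile: '$(Build.SourcesDirectory)/release-notes-template.md'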

How to Publish The Notes

Once the document has been generated, you need to decide how to publish it. There are a few options:

  • Attach the markdown file as an artefact to the Build or Pipeline. Note you can't do this with UI-based Releases as they have no concept of artefacts, but this is becoming less of a concern as people move to multistage YAML.
  • Save it to some other location, e.g. Azure Storage or, if on-premises, a UNC file share.
  • Send the document as an email – I have used Rene van Osnabrugge's Send Email Task for this job.
  • Upload it to a WIKI using my WIKI Updater Task.
  • Convert the markdown release note document, or the whole WIKI, to a PDF using my WIKI PDF Exporter Task, then publish it using any of the above options.

I personally favour the first and fourth options used together: attach the document to the pipeline run and then upload it to a WIKI. A sketch of the artefact publish step is shown below, followed by the WIKI upload sample.
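
For the artefact option, a standard publish step after the generation task is all that is needed. The snippet below is a minimal sketch using the built-in PublishPipelineArtifact task; the source path assumes the output file produced by the generation step shown earlier.

        - task: PublishPipelineArtifact@1
          displayName: 'Attach release notes to the pipeline run'
          inputs:
            targetPath: '$(System.DefaultWorkingDirectory)/inline.md'
            artifact: 'ReleaseNotes'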

A sample of suitable YAML for uploading the document to an Azure DevOps WIKI is shown below. Please note that the repo URL and authentication can trip you up here, so have a good read of the provided documentation before you use this task.

        - task: richardfennellBM.BM-VSTS-WIKIUpdater-Tasks.WikiUpdaterTask.WikiUpdaterTask@1
          displayName: 'Git based WIKI Updater'
          inputs:
            repo: 'dev.azure.com/richardfennell/Git%20project/_git/Git-project.wiki'
            filename: 'xPlatReleaseNotes/build-Windows-handlebars.md'
            dataIsFile: true
            sourceFile: '$(System.DefaultWorkingDirectory)/inline.md'
            message: 'Update from Build'
            gitname: builduser
            gitemail: 'build@demo'
            useAgentToken: true

But when do I generate the release notes?

I would suggest you always generate release notes for every build/pipeline run, i.e. a document of the changes since the last successful build/pipeline of that build definition. This should be attached as an artefact.

However, this per-build document will usually be too granular for use as 'true' release notes, i.e. something to hand to a QA team, auditor or client.

To address this second use case I suggest, within a multistage YAML pipeline (or a UI based release), having a stage specifically for generating release notes.

My task has a feature whereby it checks for the last successful release of the pipeline/release to the stage it is defined in, so it bases the release notes on everything since the last successful release to that given stage. If this 'documentation' stage is only run when you are doing a 'formal' release, the release notes generated cover everything since the last formal release – exactly what a QA team, auditor or client might want. A rough sketch of this pattern is shown below.
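
As an illustration, the sketch below shows a multistage pipeline with a separate documentation stage. The stage, job and file names are just examples, and the task inputs are trimmed to the ones shown earlier; you would normally gate the documentation stage with an approval or a condition so it only runs for formal releases.

    stages:
    - stage: Build
      jobs:
      - job: BuildAndNotes
        steps:
        # build and test steps for the application go here
        - task: richardfennellBM.BM-VSTS-XplatGenerateReleaseNotes.XplatGenerate-Release-Notes.XplatGenerateReleaseNotes@3
          displayName: 'Generate per-build release notes'
          inputs:
            outputfile: '$(System.DefaultWorkingDirectory)/buildnotes.md'
            templateLocation: InLine
            inlinetemplate: |
              # Notes for build {{buildDetails.id}}
        - task: PublishPipelineArtifact@1
          displayName: 'Attach notes to the run'
          inputs:
            targetPath: '$(System.DefaultWorkingDirectory)/buildnotes.md'
            artifact: 'ReleaseNotes'

    - stage: Documentation
      dependsOn: Build
      jobs:
      - job: PublishNotes
        steps:
        # because this task runs in the Documentation stage, it gathers everything
        # since the last successful run of this stage, not just the last build
        - task: richardfennellBM.BM-VSTS-XplatGenerateReleaseNotes.XplatGenerate-Release-Notes.XplatGenerateReleaseNotes@3
          displayName: 'Generate release notes since the last formal release'
          inputs:
            outputfile: '$(System.DefaultWorkingDirectory)/releasenotes.md'
            templateLocation: InLine
            inlinetemplate: |
              # Release notes for build {{buildDetails.id}}
        - task: richardfennellBM.BM-VSTS-WIKIUpdater-Tasks.WikiUpdaterTask.WikiUpdaterTask@1
          displayName: 'Publish the notes to the WIKI'
          inputs:
            repo: 'dev.azure.com/richardfennell/Git%20project/_git/Git-project.wiki'
            filename: 'xPlatReleaseNotes/formal-release-notes.md'
            dataIsFile: true
            sourceFile: '$(System.DefaultWorkingDirectory)/releasenotes.md'
            message: 'Update from release pipeline'
            gitname: builduser
            gitemail: 'build@demo'
            useAgentToken: true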

In conclusion

So I hope that this post provides some ideas as to how you can use my tasks to generate some useful release notes.

Some Points on Working with SQL Always On Availability Groups (when doing a TFS Migration)

Just shy of a full year since my last post. Crikey!

So it's another TFS-related post, but not specifically to do with TFS. I still perform TFS migrations/upgrades from time to time here at Black Marble, as we still have many customers who can't migrate to Azure DevOps for one reason or another.

A few weeks ago we performed a TFS upgrade for a customer, bringing them from TFS 2013.1 to Azure DevOps Server 2019.0.1.

The new SQL Server installation that underpinned this new environment was a SQL Server 2016 Always On Availability Group, with 2 synchronous nodes and one async node contained in a figurative shed somewhere a few miles away.

Now, our initial migration/upgrade to this server had gone well; there were a dozen or so smaller collections we moved over with the initial migration, but due to deadlines, delivery and active sprints there were a couple of teams whose collections couldn't be moved in that initial setup. So we came back a few months later to move these last few collections, which coincidentally happened to be the biggest.

The biggest of the collections was ~160 GB in size, which wasn't the biggest collection I'd seen (not by a looooong shot), but not small by any means.

Could we get this thing into the Availability Group? Nope. Every time we completed the “Promote Database” Wizard the database would fail to appear in the “synchronized” or even “synchronizing” state on any of the nodes. Inspection of the file system on each of the secondary nodes didn’t even show the database files as being present.

So we had a think about it (rinsed and repeated a few times) and someone called their SQL DBA friend, who told us the GUI-based wizard hates anything over ~150 GB. We should let the wizard generate the script for us, and run it ourselves.

Well, lo and behold, the promotion of the databases worked… mostly. We saw the files appear on disk on all servers in the group, and they appeared as databases in the availability group, but with warnings on them that they still weren't syncing.

So on a hunch I re-ran the GUI wizard to promote the databases again, which among other things performs a series of validation checks. The key validation check (and the backbone of my hunch) is “is this database already in an AG?”. The answer was yes, and this seemed to shock SSMS into recognizing that the job was complete and the synchronization status of the synchronous and asynchronous replicas jumped into life.

My guess is that promoting a db into an AG is a bit of a brittle process, and if some thread in the background dies or the listener object in memory waiting for that thread dies then SSMS never knows what the state of the job is. Doing it via script is more resilient, but still not bullet proof.

Also worth noting for anyone who isn't a SQL boffin: an async replica will never show a database as "synchronized", only ever "synchronizing". Makes sense when you think about it! (Don't let your customers get hung up on it.)

Logic App Flat File Schemas and BizTalk Flat File Schemas

I recently started working on a new Logic App for a customer using the Enterprise Integration Pack for Visual Studio 2015. I was greeted with a familiar sight: the schema generation tools from BizTalk, but with a new lick of paint.

The Logic App requires the use of Flat File schemas, so I knocked up a schema from the instance I'd been provided and went to validate it against the instance used to generate it (since it should validate).

My flat file was a bit of a pain, to be frank, in that it had ragged endings; that is to say, some sample rows might look a bit like:

1,2,3,4,5,6

1,2,3,4,5

1,2,3,4,5,6

Which I've worked with before… but I couldn't quite remember how I'd solved it, other than by tinkering with the element properties.

I generated the schema against the lines with the additional column and erroneously set the last field's Nillable property to True. When I went to validate the instance, lo and behold, it wasn't a valid instance and I had little information about why.

So I fired up my BizTalk 2013 R2 virtual machine (I could have used my 2016 one to be fair if I hadn’t sent it to the farm with old yellow last week) and rinsed and repeated the Flat File Schema Wizard.

So I got a bit more information this time, namely that the sample I'd been provided was missing a CR/LF on the final line, and that the Nillable setting I'd applied to the last column was throwing a wobbler by messing up the following lines.

Setting the field's Nillable property back to false, but its Min Occurs and Max Occurs to 0 and 1 respectively, gave me a valid working schema.

So I copied the schema back to my Logic Apps VM and attempted to revalidate my file (with its final line CR/LF amended). To my annoyance, invalid instance!

I was boggled at this, quite frankly, but some poking around the internet led me to this fellow's blog.

https://blogs.msdn.microsoft.com/david_burgs_blog/2018/03/26/generate-and-validate-flat-file-native-instances-from-flat-file-schemas/

In short, there's an attribute on a Flat File schema which denotes the extension class to be used by the schema editor. When built by the BizTalk Flat File Schema Wizard it's set to:

Microsoft.BizTalk.FlatFileExtension.FlatFileExtension

When generated by the Enterprise Integration Pack it's:

Microsoft.Azure.Integration.DesignTools.FlatFileExtension.FlatFileExtension

I changed this attribute value in my BizTalk-generated Flat File schema and, presto, the schema could validate the instance it was generated from.

So, in short, the flavour of the schema designer tools in the Enterprise Integration Pack seems to report errors and verbosity a little differently to its BizTalk ancestor; it still gives out mostly the same information, but in different places:

  • EIP

You only get a generic error message in the output log. Go and examine the errors log for more information.


  • BizTalk

You get the errors in the output log, and in the error log.


In my case, the combination of my two errors (the flat file being slightly malformed, and my Nillable field change) only gave me the error "Root element is missing" in the EIP, which wasn't particularly helpful; the BizTalk tooling gave me a better fault diagnosis.

On the bright side, the two are more or less interchangeable – something to bear in mind if you're struggling with a Flat File schema and have a BizTalk development environment on hand.

SharePoint 2010 Integrations with TFS 2017.3

An interesting aspect of a TFS upgrade I found myself in recently was to do with SharePoint integrations with Team Foundation Server. As a bit of a pre-amble: back in the day, when TFS was a bit more primitive, it didn't have a rich dashboard experience like it does today, and so leaned on its distant cousin SharePoint for Project Portal support. As a consequence, a lot of my customers have significant document repositories in SharePoint Foundation instances installed as part of their Team Foundation Server installations. This in itself isn't an issue, as you can decouple a SharePoint server from TFS and have the two operate independently of one another, but some users will miss the "Navigate to Project Portal" button from their project dashboard.

I found an easy answer to this is to use the Markdown widget to give them the link they want/need so they can still access their project documentation, but of course the work item and report add-ins will cease to function. Encouraging users to adopt the newer TFS/Azure DevOps dashboards is key here.

But enough waffling. A technical conundrum I faced in this recent upgrade, which I thought worth mentioning, was around the SharePoint integrations themselves. The customer was migrating from a 2012.4 server to a 2017.3 server (as tempting as 2018.3 was, IT governance in their organization meant SQL 2016 onwards was a no-no). Their old TFS server was cohabiting with a SharePoint Foundation 2010 server, which was their BAs' document repository and point of entry for having a look at the state of play for each Team Project. The server had two Team Project Collections for two different teams who were getting two shiny new TFS servers.

One team was going to the brand new world and still needed the SharePoint integrations. The other was going to the new world with no SharePoint (huzzah!). Initially the customer had wanted to move the SharePoint server to one of the new servers, and thus be able to decommission the old Windows Server 2008R2 server it was living on.

This proved to be problematic: the new server was on Windows Server 2016, and even had we upgraded SharePoint to the last version which supported TFS integration (SharePoint 2013), that version is too old and creaky to run on Windows Server 2016. So we had to leave the SharePoint server where it was. This then presented a new problem. According to the official documentation, to point the SharePoint 2010 server at TFS 2017 I would have had to install the SharePoint 2010 Connectivity Extensions which shipped with the version of TFS I was using – so that would be TFS 2017's SharePoint 2010 extensions. But we already had a set of SharePoint 2010 extensions installed on that server… specifically TFS 2012's SharePoint 2010 extensions.

To install 2017's version of the extensions I'd have had to uninstall or upgrade TFS 2012.4, since two versions of TFS don't really like cohabiting on the same server and the newer version tends to try (helpfully, I should stress) to upgrade the older version. The customer didn't like the idea of their fall-back/mothballed server being rendered unusable if they needed to spin it up again. This left me in something of a pickle.

So, on a hunch, I figured why not go off the beaten path and ignore the official documentation (something I don't make a habit of), and suggested to the customer "let's just point TFS 2012's SharePoint 2010 extensions at TFS 2017 and see what happens!". So I reconfigured the SharePoint 2010 Extensions in the TFS 2017 Admin Console and guess what? It worked like a charm. The SharePoint 2010 Project Portals quite happily talked to the TFS 2017 server with no hiccups and no loss of functionality.

Were I to make an educated guess, I'd say the TFS 2017 SharePoint 2010 extensions are the same as TFS 2012's but with a new sticker on the front, or at most only minor changes.

TFS 2018.3 Upgrade – Where's my Code Content in the Code Hub?

Firstly it’s been a while (about 4 years or so) since I last blogged, and so there’s going to be a bit of a gap in what I blog about now versus what I used to blog about. Why the big gap, and what’s been happening between I might talk about in a future post but for now I thought I’d share a recent war story of a TFS Upgrade gone wrong, which had a pretty arcane error.

The client had previously been using an on-premises TFS 2012.4 server and had expressed an interest in upgrading to a newer version, TFS 2018.3 to be precise.

Naturally our first recommendation was to move to Azure DevOps (formerly VSTS, VSO), but this wasn't a good fit for them at that point in time – issues around data sovereignty and all that, and Brexit generally putting the spook into everyone in the UK who had been eyeing the European data centres.

Nonetheless, we built some future-proofing into the new TFS server we built and configured for them, chiefly using SQL Server 2017 to mitigate any problems with them updating to Azure DevOps Server 2019/2020, which is due for release before the end of the year, or just after the turn of the year if the previous release cadence of TFS is any indicator.

We performed a dry-run upgrade installation and then a production run. I suspect the client brushed through their dry-run testing a little too quickly and failed to notice an issue which appeared in the production run.


There was no file content in the Code Hub.



Opening the editor also just showed a blank editing pane. Examining the page load using web debugging tools showed us the following JavaScript error:


: Script error for “BuiltInExtensions/Scripts/TFS.Extension”
http://requirejs.org/docs/errors.html#scripterror

We also saw certain page assets failing to load in the network trace.


Editor.main.css looked to be quite important given the context of the page we were in. It was also noted in the network trace that we had quite a number of 401s, and many of the page assets were not displaying correctly in TFS (like the Management/Admin cog, the TFS logo in the top right, and the folder and branch icons in the Code Hub source control navigator). We were stumped at first; a support call to Microsoft in the end illuminated the issue. The client had a group policy setting which prevented membership of a role assignment in Local Policies from being modified.


When adding the IIS feature to Windows Server, the local user group IIS_IUSRS normally gets added to this role. In the client's case, because of this group policy setting which prevented role assignments from being made, this had not occurred. No error had been raised during the feature enablement, so no one knew anything had gone amiss when setting up the server.

This local user group contains (as I understand it) the application pool identities created when setting up application pools in IIS. TFS's app pool needs this impersonation policy to load certain page assets as the currently signed-in user. Some group policy changes later, we were able to add the local group to the role assignment by hand and resolve the issue (combined with an iisreset command). It's been explained to me that this used to be a more common error back in the days of TFS 2010 and 2012, but it is something of a rarity these days, hence no luck with any Google-Fu or other enquiries we made about the error.

Interestingly, a week later I was performing another TFS upgrade for a different client. They were going to TFS 2017.3, and the Code Hub, Extension Management and Dashboards were affected by the same error; fortunately recent experience helped us resolve that quickly.


DDD Submission Issues

Some people have reported issues submitting sessions for DDD. If you do have a problem, please tweet us a message and we will get in touch.

b.


DDD is now live for Session Submission!

I am delighted to announce the DDD site is now live for session submissions! Get in early to make sure you don’t forget. Always a great opportunity to get in front of an enthusiastic crowd.

Please do submit, and we would welcome first-timers. The audience is friendly, and many speakers that are now on the international circuit got their start at a DDD event. We hope to be able to do something more to support first-time speakers, so let us know if you fall into that category.

Once again DDD is free for all to attend, thanks to some great sponsors – if you would like to sponsor DDD, please email ddd@blackmarble.com. Every bit of sponsorship helps make this a great event.

Safer Cities Summit

I am delighted to be joining SixPivot, our amazing Australian partner, in taking part in the Safer Cities Summit in Brisbane this week. We are launching tuServ, our award-winning policing solution, into the Australian market at the event, as the first step to taking it global!

I’ll be joined by Faith Rees, SixPivot’s CEO, and I am so excited to share with the Australian public sector the transformation that tuServ makes possible inside an organisation. 

The Public Sector Network's Safer Cities Summit is an Australian-based event focused on bringing together key civic-safety-focused government stakeholders from around the world to tackle these important challenges. It is tailored towards Australian and international stakeholders in public safety, emergency and disaster response, and urban resilience.


b.

DDD (South) is back again


After last year's successful DDD, it will be in Reading for its thirteenth outing, on Saturday, 23rd of June.

Once again DDD is free for all to attend, thanks to some great sponsors – if you would like to sponsor DDD, please email ddd@blackmarble.com.

As ever, I would love to see new speakers and new topics in the DDD line-up, so please submit a session. This year, for new speakers, we will be trying to provide support before the event to ensure they are super successful.

Session submission will open soon – watch @DeveloperDay for more news.

I look forward to seeing you all in June, and we will be announcing more DDD dates in the next few weeks.

b.