A task for documenting your Azure DevOps Pipeline extensions for YAML usage

In the past I have posted a quick script to generate markdown documentation for the YAML usage of Azure DevOps Pipeline extensions. I decided that having this script as a task in its own right would be a good idea, so I wrote it, and I am pleased to say I have just released it to the marketplace.

The YAML Documenter task scans an extension's vss-extension.json and task.json files to find the details it needs to build markdown documentation of the YAML usage. It can also, optionally, copy the extension's readme.md as the extension's primary documentation.
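The core of the generation is simple enough to sketch. Something like the following (a simplified sketch, not the extension's actual source, though the property names are the standard ones found in a task.json):

```typescript
// Minimal sketch of the idea: read a task.json and emit a markdown
// description of the task's YAML usage.
import * as fs from "fs";

interface TaskInput {
  name: string;
  helpMarkDown?: string;
  defaultValue?: string;
  required?: boolean;
}

interface TaskManifest {
  name: string;
  friendlyName: string;
  description: string;
  version: { Major: number; Minor: number; Patch: number };
  inputs?: TaskInput[];
}

const task: TaskManifest = JSON.parse(fs.readFileSync("task.json", "utf8"));
const fence = "`".repeat(3); // avoids a literal nested code fence in this post

const doc = [
  `## ${task.friendlyName}`,
  "",
  task.description,
  "",
  `${fence}yaml`,
  // YAML usage is 'task: <name>@<major version>' plus each input as a parameter
  `- task: ${task.name}@${task.version.Major}`,
  "  inputs:",
  ...(task.inputs ?? []).map(
    (i) =>
      `    ${i.name}: ${i.defaultValue ?? ""} # ${i.required ? "Required." : "Optional."} ${i.helpMarkDown ?? ""}`
  ),
  fence,
].join("\n");

fs.writeFileSync("YAMLUsage.md", doc);
```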

I am starting to use this extension, together with my WIKIUpdater extension, in my release pipelines to make sure my extension's GitHub wiki is up to date.


It is going to take a bit of work to update all my pipelines, but the eventual plan is to use the YAML document generator in the builds, adding the readme and YAML markdown files to the build as artefacts, then deploying these files to the wiki in a later stage of the pipeline.

Hope some of you find it of use.

Programmatically adding User Capabilities to Azure DevOps Agents

I am automating the process by which we keep our build agents up to date. The basic process is to use a fork of the standard Microsoft Azure DevOps Pipeline agent image that has the additional code we need included, notably BizTalk.

Once I have the Packer-created VM up and running, I need to install the agent. This is well documented; just run .\config.cmd --help for details. However, there is no option to add user capabilities to the agent.

I know I could set them via environment variables, but I don’t want the same user capabilities on each agent on a VM (we use multiple agents on a single VM).

There was no documented Azure DevOps API I could find to add capabilities, but a bit of hacking around with Chrome dev tools and Postman got me a solution, which I have provided as a GIST.
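The GIST has the working version, but the shape of the call is roughly as below (a hedged sketch: the endpoint and api-version were reverse-engineered rather than documented at the time, so treat the details as indicative):

```typescript
// Sketch: set an agent's user capabilities via the distributedtask REST API.
const org = "https://dev.azure.com/myorg"; // your organisation URL
const poolId = 1;   // find via GET _apis/distributedtask/pools
const agentId = 42; // find via GET _apis/distributedtask/pools/{poolId}/agents
const pat = process.env.AZDO_PAT!; // a PAT with Agent Pools (read, manage) scope

// The PUT appears to replace the whole set of user capabilities, so include
// any existing ones you want to keep, not just the new entries.
const capabilities = { BizTalk: "2016", MyCustomCapability: "true" };

async function setUserCapabilities(): Promise<void> {
  const response = await fetch(
    `${org}/_apis/distributedtask/pools/${poolId}/agents/${agentId}/usercapabilities?api-version=5.0-preview.1`,
    {
      method: "PUT",
      headers: {
        "Content-Type": "application/json",
        // PATs use basic auth with an empty username
        Authorization: `Basic ${Buffer.from(`:${pat}`).toString("base64")}`,
      },
      body: JSON.stringify(capabilities),
    }
  );
  if (!response.ok) {
    throw new Error(`Failed: ${response.status} ${await response.text()}`);
  }
}

setUserCapabilities().catch(console.error);
```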

Azure Pipeline YAML support on VSCode

A major problem when moving from the graphical editing of Azure Pipelines builds to YAML has been the difficulty of knowing the options available, and of course avoiding typos.

Microsoft have just released a VSCode extension to help address this problem – it is called Azure Pipelines

I have yet to give it a really good workout, but first impressions are good.

It does not remove the need for good documentation of task options (there is still a place for my script that generates YAML documentation from a task.json file), but anything extra to ease editing helps.

Logic App Flat File Schemas and BizTalk Flat File Schemas

I recently started working on a new Logic App for a customer using the Enterprise Integration Pack for Visual Studio 2015, and I was greeted with a familiar sight: the schema generation tools from BizTalk, but with a new lick of paint.

The Logic App requires the use of Flat File schemas, so I knocked up a schema from the instance I'd been provided and went to validate it against the very instance used to generate it (which should, of course, validate).

My flat file was, to be frank, a bit of a pain in that it had ragged endings; that is to say, some sample rows might look a bit like:

1,2,3,4,5,6

1,2,3,4,5

1,2,3,4,5,6

I've worked with files like this before, but couldn't quite remember exactly how I'd solved it, other than by tinkering with the element properties.

I generated the schema against the lines with the additional column and erroneously set the last field's Nillable property to True. When I went to validate the instance, lo and behold, it wasn't a valid instance, and I had little information about why.

So I fired up my BizTalk 2013 R2 virtual machine (I could have used my 2016 one to be fair if I hadn’t sent it to the farm with old yellow last week) and rinsed and repeated the Flat File Schema Wizard.

This time I got a bit more information, namely that the sample I'd been provided was missing a CR/LF on the final line, and that the Nillable property I'd set on the last column was throwing a wobbler by messing up the following lines.

Setting the field's Nillable property back to False, and its Min Occurs and Max Occurs to 0 and 1 respectively, gave me a valid working schema.

So I copied the schema back to my Logic Apps VM and attempted to revalidate my file (with its final line CR/LF amended). To my annoyance, invalid instance!

Quite frankly I was boggled by this, but some poking around the internet led me to this fellow's blog:

https://blogs.msdn.microsoft.com/david_burgs_blog/2018/03/26/generate-and-validate-flat-file-native-instances-from-flat-file-schemas/

In short, there's an attribute on a Flat File schema which denotes the extension class to be used by the schema editor. When the schema is built by the BizTalk Flat File Schema Wizard it's set to:

Microsoft.BizTalk.FlatFileExtension.FlatFileExtension

When generated by the Enterprise Integration Pack it's:

Microsoft.Azure.Integration.DesignTools.FlatFileExtension.FlatFileExtension

I changed this attribute value in my BizTalk-generated Flat File schema and, presto, the schema could validate the instance it was generated from.
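If you have several schemas to fix up, the swap is easily scripted. A minimal sketch, assuming (as described in the linked post) that the class name appears once as an attribute value in the schema's designer annotation:

```typescript
// Sketch: repoint a BizTalk-generated Flat File schema at the EIP designer
// by swapping the extension class name in the XSD's annotation.
import * as fs from "fs";

const biztalkClass = "Microsoft.BizTalk.FlatFileExtension.FlatFileExtension";
const eipClass =
  "Microsoft.Azure.Integration.DesignTools.FlatFileExtension.FlatFileExtension";

// String.replace swaps the first occurrence, which is all we expect to find.
const xsd = fs.readFileSync("MyFlatFile.xsd", "utf8");
fs.writeFileSync("MyFlatFile.xsd", xsd.replace(biztalkClass, eipClass));
```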

So in short, the flavour of the schema designer tools in the Enterprise Integration Pack seems to throw out errors and verbosity a little differently to its BizTalk ancestor; it still throws out mostly the same information, but in different places:

  • EIP – you only get a generic error message in the output log; go and examine the errors log for more information.

  • BizTalk – you get the errors in both the output log and the error log.

In my case the combination of my two errors (the slightly malformed flat file, and my Nillable field change) gave me only the error “Root element is missing” in the EIP, which wasn't particularly helpful; the BizTalk tooling gave me a better fault diagnosis.

On the bright side, the two are more or less interchangeable. Something to bear in mind if you're struggling with a Flat File schema and have a BizTalk development environment on hand.

Keeping Azure DevOps organisations' inherited process templates in sync

The problem

If you are like me, for historic reasons you have multiple Azure DevOps organisations (instances) backed by the same Azure Active Directory (AAD). In my case, for example, one was created when Azure DevOps was first released as TFSPreview.com, another came from our migration from on-premises TFS using the DB Migration Tools method, and I have others.

I make active use of all of these for different purposes, though one is primary, with the majority of work done on it, and so I want to make sure the inherited process templates are the same on each of them, using the primary organisation as the master for customisation.

Note I have already converted all my old on-premises XML process models to inherited process templates.

There is no out-of-the-box way to keep processes in sync, but it is possible using a few tools. The main one is the Microsoft Process Migrator for Node on GitHub.

The Solution

Firstly I cloned the Microsoft Process Migrator and built it as per the instructions on the repo.
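For reference, the tool is driven by a JSON configuration file along these lines (field names quoted from memory of the repo's README, so check the repo for the current format):

```json
{
  "sourceAccountUrl": "https://dev.azure.com/primary-org",
  "sourceAccountToken": "<PAT for the source organisation>",
  "targetAccountUrl": "https://dev.azure.com/secondary-org",
  "targetAccountToken": "<PAT for the target organisation>",
  "sourceProcessName": "My Process",
  "targetProcessName": "My Process 1",
  "options": {
    "logLevel": "information"
  }
}
```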

With a config file along those lines in place I ran the tool. On one organisation it ran fine; however, on another I got errors like:

[ERROR] [2018-11-26T14:35:44.880Z] Process import validation failed. Process with same name already exists on target account.
[ERROR] [2018-11-26T14:39:54.206Z] Import failed, see log file for details. Create field ‘Location’ failed, see log for details

This was because I had in the past manually duplicated the inherited process template onto this organisation, so there was a process with the same name and fields of the same names.

The first error was easy to fix: import the template with a new (temporary) name.

The second is more problematic. I had two choices: delete the duplicated fields on the target organisation by hand (viable only while they hold no data that matters), or use a tool to migrate the data out of the duplicated fields so they could be removed and recreated by the import.

As I only had a few duplicated, unused fields on a single organisation I picked the former. If I had had many organisations to sort out I would have picked the latter.

So my process ended up being:

  1. Run the Microsoft Process Migrator to migrate ‘My Process’ on the source organisation to ‘My Process 1’ on the target organisation
  2. It gave an error, providing the name of the duplicated field
  3. I checked on the target organisation using a work item query that the field was empty or only had defaulted data (if it had not been I would have used Martin’s tool to migrate the data to a temporary field and then deleted the problem field, moving the data back to the correct field from the temporary field when the import of the process template was completed)
  4. I deleted the field from the work item type that referenced it
  5. I deleted the field
  6. I deleted the process template ‘My Process 1’ (a failed import leaves a half-created process)
  7. I went back to step 1 and repeated until the import completed without error
  8. I tested my migrated inherited process was OK
  9. On the target organisation I then renamed ‘My Process’ to ‘My Process – Old’
  10. I then renamed ‘My Process 1’ to ‘My Process’
  11. In my case I also made ‘My Process’ the default; you might not do this if another process is your default, but step 13 does require that the process template being deleted is not the default
  12. I moved all the team projects using the process template now called ‘My Process – Old’ to ‘My Process’
  13. I was then able to delete the process template ‘My Process – Old’ as it had no associated team projects and was not the default

As I customise my primary organisation's process templates I can repeat this process to keep the processes in sync between organisations. Note that in future migrations I won't have to do steps 2 to 6, as there are no longer any manually created duplicated fields, so it should be more straightforward.

So a valid solution until any similar functionality is built into Azure DevOps, and there is no sign of that on the roadmap.

DPI problems after upgrading from Camtasia 8 to 2018

This is another of those posts I do so I don’t forget how I fixed something.

I have a requirement to record videos for a client in 720p resolution. As I use a Surface Book with a high-resolution screen, I have found the best way to do this is to set my Windows screen resolution to 1280×720 and do all my recording at this native resolution. Any attempt to record smaller portions of the screen, or to scale video in production, has led to quality problems, especially as remote desktops within remote desktops are required.

This has been working fine with Camtasia 8, but when I upgraded to Camtasia 2018.0.7 I got problems. The whole UI of the tool was unusable; it ignored the resizing/DPI changes.

The only fix I could find was to create a desktop shortcut to the EXE, then go to Properties > Compatibility > Change high DPI settings, check 'Override high DPI scaling behaviour', and set it to 'System'.


Even after doing this I still found the preview in the editing screen a little blurred, but usable. The final produced MP4s were OK.

Just released a new Azure Pipelines Extension to update Git based WIKIs

I have just released a new Azure DevOps Pipelines extension to update a page in a Git-based wiki.

It has been tested against:

  • Azure DevOps WIKI – running as the build agent (so the same Team Project)
  • Azure DevOps WIKI – using provided credentials (so any Team Project)
  • GitHub – using provided credentials

It takes a string (markdown) input and writes it to a new page, or updates the page if it already exists. It is designed to be used with my Generate Release Notes extension, but you will no doubt find other uses.
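Under the hood the principle is straightforward, as both wiki types are just Git repositories. A simplified sketch of the approach (not the task's actual source; the real task handles credentials and edge cases more robustly, and the URL below is hypothetical):

```typescript
// Sketch: a Git-based wiki page update is just clone, write, commit, push.
import { execSync } from "child_process";
import * as fs from "fs";
import * as os from "os";
import * as path from "path";

function updateWikiPage(repoUrl: string, pageName: string, markdown: string): void {
  const clone = fs.mkdtempSync(path.join(os.tmpdir(), "wiki-"));

  execSync(`git clone ${repoUrl} ${clone}`);
  execSync(`git config user.email "build@example.com"`, { cwd: clone }); // identity for the commit
  execSync(`git config user.name "Build Agent"`, { cwd: clone });

  // Overwrite (or create) the page and push the change back.
  // (A commit fails if the page content is unchanged; a real task guards against this.)
  fs.writeFileSync(path.join(clone, `${pageName}.md`), markdown);
  execSync("git add .", { cwd: clone });
  execSync(`git commit -m "Updated ${pageName} from the pipeline"`, { cwd: clone });
  execSync("git push", { cwd: clone });
}

// e.g. pushing release notes generated earlier in the pipeline
updateWikiPage(
  "https://<PAT>@dev.azure.com/org/project/_git/project.wiki",
  "Release-Notes",
  "# Release Notes\n\nGenerated by the build...",
);
```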

SharePoint 2010 Integrations with TFS 2017.3

An interesting aspect of a TFS upgrade I found myself in recently was to do with SharePoint integrations with Team Foundation Server. As a bit of preamble: back in the day, when TFS was a bit more primitive, it didn't have the rich dashboard experience it does today, and so leaned on its distant cousin SharePoint for Project Portal support. As a consequence, a lot of my customers have significant document repositories in SharePoint Foundation instances installed as part of their Team Foundation Server installations. This in itself isn't an issue, as you can decouple a SharePoint server from TFS and have the two operate independently of one another, but some users will miss the “Navigate to Project Portal” button from their project dashboard.

I found an easy answer to this is to use the Markdown widget to give them the link they want so they can still access their project documentation, but of course the work item and report add-ins will cease to function. Encouraging users to adopt the newer TFS/Azure DevOps dashboards is key here.

But enough waffling. A technical conundrum faced in this recent upgrade that I thought worth mentioning was around the SharePoint integrations themselves. The customer was migrating from a 2012.4 server to a 2017.3 server (as tempting as 2018.3 was, IT governance in their organisation meant SQL 2016 onwards was a no-no). Their old TFS server was cohabiting with a SharePoint Foundation 2010 server, which was their BAs' document repository and point of entry for having a look at the state of play for each Team Project. The server had two Team Project Collections for two different teams, who were getting two shiny new TFS servers.

One team was going to the brand new world and still needed the SharePoint integrations. The other was going to the new world with no SharePoint (huzzah!). Initially the customer had wanted to move the SharePoint server to one of the new servers, and thus be able to decommission the old Windows Server 2008R2 server it was living on.

This proved to be problematic in that the new server was on Windows Server 2016. Even had we upgraded SharePoint to the last version which supported TFS integration, SharePoint 2013, that version is too old and creaky to run on Windows Server 2016. So we had to leave the SharePoint server where it was. This then presented a new problem. According to the official documentation, to point the SharePoint 2010 server at TFS 2017 I would have had to install the SharePoint 2010 Connectivity Extensions which shipped with the version of TFS I was using, i.e. TFS 2017's SharePoint 2010 extensions. But we already had a set of SharePoint 2010 extensions installed on that server… specifically TFS 2012's SharePoint 2010 extensions.

To install 2017's version of the extensions I'd have had to uninstall or upgrade TFS 2012.4, since two versions of TFS don't really like cohabiting on the same server and the newer version tends to try (helpfully, I should stress) to upgrade the older version. The customer didn't like the idea of their fall-back/mothballed server being rendered unusable if they needed to spin it up again. This left me between a rock and a hard place.

So on a hunch I figured why not go off the beaten path and ignore the official documentation (something I don't make a habit of), and suggested to the customer: “let's just point TFS 2012's SharePoint 2010 extensions at TFS 2017 and see what happens!” So I reconfigured the SharePoint 2010 extensions in the TFS 2017 Admin Console and guess what? It worked like a charm. The SharePoint 2010 Project Portals quite happily talked to the TFS 2017 server with no hiccups and no loss of functionality.

Were I to make an educated guess, I'd say TFS 2017's SharePoint 2010 extensions are the same as TFS 2012's, but with a new sticker on the front or only minor changes.

TFS 2018.3 Upgrade – Where's my Code Content in the Code Hub?

Firstly, it's been a while (about 4 years or so) since I last blogged, so there's going to be a bit of a gap in what I blog about now versus what I used to blog about. Why the big gap, and what's been happening in between, I might talk about in a future post; but for now I thought I'd share a recent war story of a TFS upgrade gone wrong, which had a pretty arcane error.

The client had previously been using an on-premises TFS 2012.4 server and had expressed an interest in upgrading to a newer version, TFS 2018.3 to be precise.

Naturally our first recommendation was to move to Azure DevOps (formerly VSTS/VSO), but this wasn't a good fit for them at that point in time: issues around data sovereignty and all that, with Brexit generally putting the spook into everyone in the UK who had been eyeing the European data centres.

Nonetheless we built some future-proofing into the new TFS server we built and configured for them, chiefly using SQL Server 2017 to mitigate any problems with them later updating to Azure DevOps Server 2019/2020, which is due for release before the end of the year, or just after the turn of the year if the previous release cadence of TFS is any indicator.

We performed a dry-run upgrade installation and then a production run. The client, I suspect, brushed through their dry-run testing a little too quickly and failed to notice an issue which appeared in the production run.


There was no file content displayed in the Code Hub.



Opening the editor also just showed a blank editing pane. Examining the page load using web debugging tools showed us the following JavaScript error:


Script error for “BuiltInExtensions/Scripts/TFS.Extension”
http://requirejs.org/docs/errors.html#scripterror

We also saw certain page assets failing to load in the network trace.


In the network trace, Editor.main.css looked to be quite important given the context of the page we were in. We also noted quite a number of 401s, and many of the page assets were not displaying correctly in TFS (like the Management/Admin cog, the TFS logo in the top right, and the folder and branch icons in the Code Hub source control navigator). We were stumped at first; in the end a support call to Microsoft illuminated the issue. The client had a group policy setting which prevented the membership of a role assignment in Local Policies from being modified.


When adding the IIS feature to Windows Server, the local user group IIS_IUSRS normally gets added to this role. In the client's case, because of the group policy setting which prevented role assignments from being made, this had not occurred. No error had been raised during feature enablement, so no one knew anything had gone amiss when setting up the server.

This local user group contains (as I understand it) the application pool identities created for application pools in IIS. TFS's app pool needs this impersonation policy to load certain page assets as the currently signed-in user. Some group policy changes later, we were able to add the local group to this role by hand and resolve the issue (combined with an iisreset command). It's been explained to me this used to be a more common error back in the days of TFS 2010 and 2012, but it is something of a rarity these days, hence no luck with any Google-Fu or the other inquiries we made about the error.

Interestingly, a week later I was performing another TFS upgrade for a different client. They were going to TFS 2017.3, and the Code Hub, Extension Management, and Dashboards were affected by the same error; fortunately recent experience helped us resolve it quickly.


DDD North is Coming

I am super happy to announce that DDD North is set to take place on February the 9th in the sunny northern city of Hull.

The amazing people at Hull University have arranged the venue, and we will shortly be announcing the call for sessions and dates for ticket availability.

We had hoped to have DDD this side of Christmas, but with such a great opportunity available we felt we could wait a bit.

I look forward to seeing a lot of old and new faces at the next DDD.

b.