When I updated the server it is installed on the other day, it was great to see that Windows Admin Center is now updated automatically along with other Windows Server OS updates.
One fewer thing to have to check for manually!
I am super happy to announce that DDD North 2019 is Go.
Apologies for the slight false start, but DDD North 2019 will be hosted in the fair city of Hull, at the University of Hull, on the 2nd of March.
Session submissions are open now on the DDD North site: http://www.dddnorth.co.uk/
Please submit a session and help build an amazing developer day.
In the past I have posted a quick script to generate markdown documentation for the YAML usage of Azure DevOps Pipeline extensions. I decided that having this script as a task in its own right would be a good idea, so I wrote one, and I am pleased to say I have just released it to the marketplace.
The YAML Documenter task scans an extension's vss-extension.json and task.json files to find the details it needs to build the markdown documentation of the YAML usage. It can also, optionally, copy the extension's readme.md as the extension's primary documentation.
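If you are curious how little is involved, the core of it is just a JSON-to-markdown transformation. Below is a minimal sketch of that idea in PowerShell; the file paths and output layout are illustrative only, not the extension's actual output.

```powershell
# A minimal sketch of the idea behind the task: read a pipeline task's
# task.json and write a markdown file describing its YAML usage.
# Paths and output layout are illustrative, not the extension's actual output.
$task = Get-Content -Raw '.\task.json' | ConvertFrom-Json

$md = @("## $($task.friendlyName)", '', $task.description, '', 'YAML usage:', '')
$md += "    - task: $($task.name)@$($task.version.Major)"
$md += '      inputs:'
foreach ($i in $task.inputs) {
    $note = if ($i.required) { 'Required' } else { 'Optional' }
    $md += "        $($i.name): # $note. $($i.helpMarkDown)"
}
($md -join [Environment]::NewLine) | Set-Content '.\YAMLUsage.md'
```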
It is going to take a bit of work to update all my pipelines, but the eventual plan is to use the YAML document generator in the builds, adding the readme and YAML markdown files to the build as artefacts, then deploying these files to the wiki in a later stage of the pipeline.
Hope some of you find it of use.
I am automating the process by which we keep our build agents up to date. The basic approach is to use a fork of the standard Microsoft Azure DevOps Pipeline agent that includes the additional code we need, notably BizTalk.
Once I have the Packer-created VM up and running, I need to install the agent. This is well documented; just run .\config.cmd --help for details. However, there is no option to add user capabilities to the agent.
I know I could set them via environment variables, but I don’t want the same user capabilities on each agent on a VM (we use multiple agents on a single VM).
There was no documented Azure DevOps API I could find to add capabilities, but a bit of hacking around with Chrome Dev Tools and Postman got me a solution, which I have provided as a GIST.
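For flavour, the call was along these lines. This is a sketch only: the usercapabilities endpoint was undocumented at the time, so the URL shape and api-version are assumptions to verify against your own Dev Tools trace, and the organisation, pool, agent and PAT values are placeholders.

```powershell
# Sketch of setting an agent's user capabilities via the (then undocumented)
# usercapabilities endpoint. URL shape and api-version are assumptions from
# the Dev Tools/Postman session - verify them before relying on this.
$org     = 'myorg'   # placeholder organisation
$poolId  = 1         # find via GET _apis/distributedtask/pools
$agentId = 42        # find via GET _apis/distributedtask/pools/{poolId}/agents
$pat     = 'xxxxx'   # PAT with Agent Pools (Read & Manage) scope

$headers = @{ Authorization = 'Basic ' +
    [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(":$pat")) }
$body = @{ Biztalk = '2016' } | ConvertTo-Json   # example capability to set

Invoke-RestMethod -Method Put `
    -Uri "https://dev.azure.com/$org/_apis/distributedtask/pools/$poolId/agents/$agentId/usercapabilities?api-version=5.0-preview.1" `
    -Headers $headers -ContentType 'application/json' -Body $body
```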
A major problem when moving from the graphical editing of Azure Pipeline builds to YAML has been the difficulty in knowing the options available, and of course the ease of making typos.
Microsoft have just released a VSCode extension to help address this problem; it is called Azure Pipelines.
I have yet to give it a really good workout, but first impressions are good.
It does not remove the need for good documentation of task options (there is still a place for my script that generates YAML documentation from a task.json file), but anything extra to ease editing helps.
I recently started working on a new Logic App for a customer, using the Enterprise Integration Pack for Visual Studio 2015. I was greeted with a familiar sight: the schema generation tools from BizTalk, but with a new lick of paint.
The Logic App requires the use of Flat File schemas, so I knocked up a schema from the instance I’d been provided and went to validate it against the instance used to generate it (since it should validate).
My flat file was, to be frank, a bit of a pain, in that it had ragged endings; that is to say, some sample rows might look a bit like this:
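(The original sample is long gone, so the rows below are a made-up illustration of the shape of the problem.)

```
JOHN,SMITH,SALES,LEEDS
JANE,JONES,ACCOUNTS
BOB,BROWN,IT,HULL
```

Most rows carry the full set of fields, but some are missing the trailing field and its delimiter entirely.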
This is something I've worked with before, but I couldn't quite remember exactly how I'd solved it, other than by tinkering with the element properties.
I generated the schema against the lines with the additional column and erroneously set the last field's Nillable property to True. When I went to validate the instance, lo and behold, it wasn't a valid instance and I had little information as to why.
So I fired up my BizTalk 2013 R2 virtual machine (to be fair, I could have used my 2016 one if I hadn't sent it to the farm with Old Yeller last week) and rinsed and repeated with the Flat File Schema Wizard.
This time I got a bit more information, namely that the sample I'd been provided was missing a CR/LF on the final line, and that the Nillable property I'd set on the last column was throwing a wobbler by messing up the following lines.
Setting the field's Nillable property back to False, but its Min Occurs and Max Occurs to 0 and 1 respectively, gave me a valid working schema.
So I copied the schema back to my Logic Apps VM and attempted to revalidate my file (with its final line CR/LF amended). To my annoyance, invalid instance!
Quite frankly I was boggled by this, but some poking around the internet led me to this fellow's blog.
In short, there's an attribute added to a Flat File schema which denotes the extension class to be used by the schema editor. When the schema is built by the BizTalk Flat File Schema Wizard it is set to the BizTalk flat file extension class; when generated by the Enterprise Integration Pack it is set to the EIP's own equivalent class.
I changed this attribute value in my BizTalk-generated Flat File schema and, presto, the schema could validate the instance it was generated from.
So, in short, I'll say the flavour of the schema designer tools in the Enterprise Integration Pack reports errors a little differently to its BizTalk ancestor; it gives you mostly the same information, but in different places:
In the Enterprise Integration Pack, you only get a generic error message in the output log; go and examine the errors log for more information.
In BizTalk, you get the errors in both the output log and the error log.
In my case, for the combination of my two errors (the slightly malformed flat file and my Nillable field change), the EIP only gave me the error 'Root element is missing', which wasn't particularly helpful; the BizTalk tooling gave me a better fault diagnosis.
On the bright side, the two are more or less interchangeable. Something to bear in mind if you're struggling with a Flat File schema and have a BizTalk development environment on hand.
If you are like me, for historic reasons you have multiple Azure DevOps organisations (instances) backed by the same Azure Active Directory (AAD). In my case, for example, one was created when Azure DevOps was first released as TFSPreview.com, another came from our migration from on-premises TFS using the DB Migration Tools method, and I have others.
I make active use of all of these for different purposes, though one is primary, with the majority of work done on it, and so I want to make sure the inherited process templates are the same on each of them, using the primary organisation as the master for customisation.
Note I have already converted all my old on-premises XML process models to inherited process templates.
There is no out-of-the-box way to keep processes in sync, but it is possible using a few tools, the main one being the Microsoft Process Migrator for Node on GitHub.
Firstly I cloned the Microsoft Process Migrator and built it as per the instructions on the repo.
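The build is the usual Node.js routine; the commands below are from memory, so check the repo's README for the authoritative instructions.

```powershell
# Clone and build the Microsoft Process Migrator - standard Node.js steps
# (from memory; the repo README is the authoritative source)
git clone https://github.com/Microsoft/process-migrator.git
cd process-migrator
npm install    # restore dependencies
npm run build  # compile the tool
```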
I created a config file and then ran the tool. On one organisation it ran fine; however, on another I got errors like:
[ERROR] [2018-11-26T14:35:44.880Z] Process import validation failed. Process with same name already exists on target account.
[ERROR] [2018-11-26T14:39:54.206Z] Import failed, see log file for details. Create field ‘Location’ failed, see log for details
This was because I had in the past manually duplicated the inherited process template onto this organisation, so there was already a process with the same name, containing fields with the same names.
The first error was easy to fix: import the template with a new (temporary) name.
The second was more problematic. I had two choices: delete the duplicated fields by hand, or script the clean-up.
As I only had a few duplicated, unused fields on a single organisation, I picked the former. If I had had many organisations to sort out, I would have picked the latter.
So my process ended up being:
As I customise my primary organisation's process templates, I can repeat this process to keep the processes in sync between organisations. Note that in future migrations I won't have to do steps 2..6, as there are no longer any manually created duplicated fields, so it should be more straightforward.
So, a workable solution until similar functionality is built into Azure DevOps, and there is no sign of that on the roadmap.
This is another of those posts I do so I don’t forget how I fixed something.
I have a requirement to record videos for a client in 720p resolution. As I use a Surface Book with a high-res screen, I have found the best way to do this is to set my Windows screen resolution to 1280×720 and do all my recording at this native resolution. Any attempt to record smaller portions of the screen, or to scale video in production, has led to quality problems, especially as remote desktops within remote desktops are required.
This had been working fine with Camtasia 8, but when I upgraded to Camtasia 2018.0.7 I hit problems. The whole UI of the tool was unusable, as it ignored the resizing/DPI changes.
The only fix I could find was to create a desktop shortcut to the EXE, go to Properties > Compatibility > Change high DPI settings, check 'Override high DPI scaling behaviour', and set this to 'System'.
Even after doing this I still found the preview in the editing screen a little blurred, but usable. The final produced MP4s were OK.
I have just released a new Azure DevOps Pipelines extension to update a page in a Git-based wiki.
It has been tested against:
It takes a string (markdown) input and writes it to a new page, or updates the page if it already exists. It is designed to be used with my Generate Release Notes extension, but you will no doubt find other uses.
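Under the covers a Git-based wiki is just a Git repo, so the manual equivalent of what the task automates looks something like the sketch below; the organisation, project, page name and PAT are all placeholders.

```powershell
# Manual equivalent of what the task automates: clone the wiki's backing
# Git repo, write (or overwrite) a page, then commit and push.
# Organisation, project, page name and PAT are all placeholders.
$pat = $env:WIKI_PAT   # a PAT with Code (Read & Write) scope
$markdown = "# Release 1.2.3`nGenerated release notes go here"

git clone "https://pat:$pat@dev.azure.com/myorg/myproject/_git/myproject.wiki" wiki
Set-Content -Path '.\wiki\MyPage.md' -Value $markdown
Set-Location wiki
git add MyPage.md
git commit -m 'Update MyPage'
git push
```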
An interesting aspect of a TFS upgrade I found myself involved in recently was to do with SharePoint integration with Team Foundation Server. As a bit of preamble: back in the day, when TFS was a bit more primitive, it didn't have the rich dashboard experience it does today, and so leaned on its distant cousin SharePoint for Project Portal support. As a consequence, a lot of my customers have significant document repositories in SharePoint Foundation instances installed as part of their Team Foundation Server installations. This in itself isn't an issue, as you can decouple a SharePoint server from TFS and have the two operate independently of one another, but some users will miss the 'Navigate to Project Portal' button from their project dashboard.
I found an easy answer to this is to use the Markdown widget to give them the link they want/need so they can still access their project documentation, though of course the work item and report add-ins will cease to function. Encouraging users to adopt the newer TFS/Azure DevOps dashboards is key here.
But enough waffling. A technical conundrum faced in this recent upgrade, which I thought worth mentioning, was around the SharePoint integrations themselves. The customer was migrating from a 2012.4 server to a 2017.3 server (as tempting as 2018.3 was, IT governance in their organisation meant SQL 2016 onwards was a no-no). Their old TFS server was cohabiting with a SharePoint Foundation 2010 server which was their BAs' document repository and point of entry for having a look at the state of play for each Team Project. The server had two Team Project Collections for two different teams, who were getting two shiny new TFS servers.
One team was going to the brand new world and still needed the SharePoint integrations. The other was going to the new world with no SharePoint (huzzah!). Initially the customer had wanted to move the SharePoint server to one of the new servers, and thus be able to decommission the old Windows Server 2008R2 server it was living on.
This proved to be problematic: the new server was on Windows Server 2016, and even had we upgraded SharePoint to the last version which supported TFS integration, SharePoint 2013, that version is too old and creaky to run on Windows Server 2016. So we had to leave the SharePoint server where it was. This then presented a new problem. According to the official documentation, to point the SharePoint 2010 server at TFS 2017 I would have had to install the SharePoint 2010 Connectivity Extensions which shipped with the version of TFS I was using, so that would be TFS 2017's SharePoint 2010 extensions. But we already had a set of SharePoint 2010 extensions installed on that server… specifically TFS 2012's SharePoint 2010 extensions.
To install 2017's version of the extensions I'd have had to uninstall or upgrade TFS 2012.4, since two versions of TFS don't really like cohabiting on the same server and the newer version tends to try (helpfully, I should stress) to upgrade the older one. The customer didn't like the idea of their fall-back/mothballed server being rendered unusable if they needed to spin it up again. This left me in something of a pickle.
So, on a hunch, I figured why not go off the beaten path and ignore the official documentation (something I don't make a habit of), and suggested to the customer: "let's just point TFS 2012's SharePoint 2010 extensions at TFS 2017 and see what happens!". So I reconfigured the SharePoint 2010 extensions in the TFS 2017 Admin Console and guess what? It worked like a charm. The SharePoint 2010 Project Portals quite happily talked to the TFS 2017 server with no hiccups and no loss of functionality.
Were I to make an educated guess, I'd say TFS 2017's SharePoint 2010 extensions are the same as TFS 2012's, but with a new sticker on the front or only minor changes.