Zwift and the joys of home networking

During the Covid-19 lockdown I have been doing plenty of Zwifting. However, I started having problems getting the Zwift Companion App to work reliably, when it used to work fine.

Basically, Zwift itself was fine, though very slow to save when exiting, but the Companion App could not seem to detect that I was actively Zwifting, even though its other functions were OK.

After much fiddling I found the issue was the network connection from my PC up to Zwift, and nothing to do with the phone app. But in case it is of any use to others, here are the steps I took to track it down.

  • Ran a Wi-Fi network analysis app and realised that
    • My local wireless environment is now very congested, I assume because more people are working from home.
    • Both the 2.4GHz and 5GHz networks were on the same channels as other strong signals.
    • They were also using the same SSID, which is meant to provide seamless swap-over between 2.4GHz and 5GHz; in reality this meant there were connection problems as a device flipped between frequencies.

This explained other problems I had seen:

  • The Microsoft DirectAccess VPN I use to connect to the office failing intermittently. Obviously, any problems I have connecting to the office to do work are far less important than Zwift connection issues.
  • My Samsung phone would drop calls for no reason. I now think this was when it had decided to use Wi-Fi calling and got confused over networks.
    Note: I had fixed this by switching off Wi-Fi calling.

To address the problems I changed the SSIDs so that my 2.4GHz and 5GHz networks had different names, so that I knew which one I was using. I also moved the channels to ones not used by my neighbours. I then ran some tests:

| Test | Result |
|------|--------|
| Put the phone and the PC on the 2.4GHz network | No improvement; the app did not work and the PC was slow to save |
| Put the phone and the PC on 5GHz | Small improvement; the app still did not work, but at least tried to show the in-game view before it dropped out. The PC was still slow to save |
| Put the phone on either Wi-Fi network but the PC on Ethernet over Power using TP-Link adaptors | This fixed it |

So it seems the problem was the upload speed from my PC all along. This is strange, as I would have expected the 5GHz network to be fine even if the 2.4GHz one was not, and the 5GHz Wi-Fi seems to perform OK in a speed test.

Anyway it is working now, but maybe it is time to consider a proper mesh network?

Bringing Stage-based release notes in Multi-Stage YAML to my Cross Platform Release Notes Extension

I have just released Version 3.1.7 of my Azure DevOps Pipeline XplatGenerateReleaseNotes Extension.

This new version allows you to build release notes within a Multi-Stage YAML build, covering everything since the last successful release to the current (or a named) stage in the pipeline, as opposed to just the last fully successful build.

This gives more feature parity with the older UI based Releases functionality.

To enable this new feature you need to set the checkStage: true flag, and optionally the overrideStageName: AnotherStage parameter if you wish the comparison to be made against a stage other than the current one (see the second example below).

- task: XplatGenerateReleaseNotes@3
  inputs:
    outputfile: '$(Build.ArtifactStagingDirectory)\releasenotes.md'
    outputVariableName: 'outputvar'
    templateLocation: 'InLine'
    checkStage: true
    inlinetemplate: |
      # Notes for build 
      **Build Number**: {{buildDetails.id}}
      ...
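
If you wish the comparison to be made against a stage other than the one currently running, a minimal variant of the example above (with AnotherStage as a placeholder stage name, as in the description) would be:

- task: XplatGenerateReleaseNotes@3
  inputs:
    outputfile: '$(Build.ArtifactStagingDirectory)\releasenotes.md'
    outputVariableName: 'outputvar'
    templateLocation: 'InLine'
    checkStage: true
    overrideStageName: 'AnotherStage'
    inlinetemplate: |
      # Notes for build 
      **Build Number**: {{buildDetails.id}}
      ...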

Timeout Errors ‘Extracting Schema’ when running SQLPackage for a Migration to Azure DevOps Services

The Problem

Whilst doing a migration from an on-premises TFS to Azure DevOps Services for a client I had a strange issue with SQLPackage.exe.

I had previously completed the dry run of the migration without any issues and started the live migration with a fully defined process and timings for each stage.

When I came to export the detached Team Project Collection DB I ran the same command as I had for the dry run:

& "C:\Program Files\Microsoft SQL Server\150\DAC\bin\SqlPackage.exe" /sourceconnectionstring:”Data Source=localhost\SQLExpress;Initial Catalog=Tfs_DefaultCollection;Integrated Security=True” /targetFile:C:\temp\Tfs_DefaultCollection.dacpac /action:extract /p:ExtractAllTableData=true /p:IgnoreUserLoginMappings=true /p:IgnorePermissions=true /p:Storage=Memory 

I had expected this to take around 30 minutes. However, it failed after 10 minutes with an error when trying to export the schema: 'Timeout, cannot reconnect to the database'.

This was strange as nothing had changed on the system since the dry run. I tried all of the following with no effect:

  • Just running the command again, you can hope!
  • Restarted SQL and ran the command again
  • Tried the export from SQL Server Management Studio as opposed to the command line; this just hung at the same point.

The Solution

What resolved the problem was a complete reboot of the virtual machine. I assume the issue was some locked resource, but I have no idea why.

Getting started with Aggregator CLI for Azure DevOps Work Item Roll-up

Back in the day I wrote a tool, TFS Alerts DSL, to do work item roll-up for TFS. Over time I updated this to support VSTS (as Azure DevOps was then called); its final version is still available in the Azure DevOps Marketplace as the Azure DevOps Service Hooks DSL. So when I recently had a need for work item roll-up I did consider using my own tool, just for a short while. However, I quickly realised a much better option was to use the Aggregator CLI. This is a successor to the TFS Aggregator Plug-in and is a far more mature project than my tool, and actively under development.

However, I have found the Aggregator CLI a little hard to get started with. The best 'getting started' documentation seems to be in the command examples, but it is not that easy to find. So I thought this blog post was a good idea, so I don't forget the details in the future.

Architecture

In this latest version of the Aggregator the functionality is delivered using Azure Functions, one per rule. These are linked to Azure DevOps service hook events. The command line tool manages the whole setup process: creating the Azure resources, registering the Azure DevOps events and managing the rules.

Preparation

    1. Open the Azure Portal
    2. Select the Azure Active Directory (AAD) instance to create an App Registration in.
    3. Create a new App Registration
      1. Provide a name; you can leave the rest as the defaults
      2. Press Register
    4. From the root of the Azure Portal pick the Subscription you wish to create the Azure Functions in
    5. In the Access control (IAM) section grant the 'Contributor' role on the subscription to the newly created App Registration.

Using the Aggregator CLI

At a command prompt we can now start to use the tool to link up the Azure services and Azure DevOps.

  • First we log the CLI tool into Azure. You can find the values required in the Azure Portal, in the Subscription overview and the App Registration overview. You create a password in the 'Certificates & secrets' section of the App Registration. We also need to log the tool into Azure DevOps using a PAT.

.\aggregator-cli.exe logon.azure --subscription <sub-id> --client <client-id> --tenant <tenant-id> --password <pwd>

.\aggregator-cli.exe logon.ado --url https://dev.azure.com/<org> --mode PAT --token <pat>

  • Now we can create the instance of the Aggregator in Azure.

    Note: I had long delays and timeout problems here due to what turned out to be a poor Wi-Fi link. The strange thing was it was not an obviously failing Wi-Fi connection, just unstable enough to cause issues. As soon as I swapped to Ethernet the problems went away.

    The basic form of the command is as follows. This will create a new resource group in Azure and then the required Web App, Storage, Application Insights etc. As this is done using an ARM template it is idempotent, i.e. it can be re-run as many times as you wish and will just update the Azure services if they already exist.

    .\aggregator-cli.exe install.instance --verbose --name yourinstancename --location westeurope

  • When this completes, you can see the new resources in the Azure Portal, or check them from the command line

    .\aggregator-cli.exe list.instances

  • You next need to register your rules. You can register as many as you wish. A few samples are provided in the \test folder of the downloaded ZIP; these are good for a quick test, though you will usually create your own for production use. When you add a rule, behind the scenes it creates an Azure Function with the same name as the rule.

    .\aggregator-cli.exe add.rule --verbose --instance yourinstancename --name test1 --file test\test1.rule

  • Finally you map a rule to an event in your Azure DevOps instance.

    .\aggregator-cli.exe map.rule --verbose --project yourproject --event workitem.updated --instance rfado --rule test1

And once all this is done you should have a working system. If you are using the test rules then the quickest option to see that it is working is to:

  1. Go into the Azure Portal
  2. Find the created Resource Group
  3. Pick the App Service for the Azure Functions
  4. Pick the Function for the rule under test
  5. Pick the Monitor
  6. Pick Logs
  7. Open Live Metric
  8. You should see log entries when you perform the event on a work item you mapped to the function.

So I hope this helps my future self remember how to get this tool set up quickly.

How to do local template development for my Cross platform Release notes task

The testing cycle for Release Notes Templates can be slow, requiring a build and release cycle. To try to speed up this process for users I have created a local test harness that allows the same calls to be made from a development machine as would be made within a build or release.

However, running this is not as simple as you might expect, so please read the instructions before proceeding.

Setup and Build

  1. Clone the repo containing the Azure DevOps Extension.
  2. Change to the folder

    <repo root>\Extensions\XplatGenerateReleaseNotes\V2\testconsole

  3. Build the tool using NPM (this does assume Node is already installed)

    npm install
    npm run build

Running the Tool

The task the testconsole runs takes many parameters and reads runtime Azure DevOps environment variables. These have to be passed into the local tester. Given the number, and the fact that most probably won't need to be altered, they are provided in a settings JSON file. Samples are provided for a build and a release. For details of these parameters see the task documentation.

The only values not stored in the JSON files are the PATs required to access the REST API. This reduces the chance of them being committed to source control by mistake.

Two PATs are potentially used.

  • Azure DevOps PAT (required) – within a build or release this is automatically picked up, but for this tool it must be provided.
  • GitHub PAT – this is an optional parameter for the task; you only need to provide it if you are working with private GitHub repos as your code store, so usually it can be ignored.

Test Template Generation for a Build

To run the tool against a build:

  1. In the settings file make sure the TeamFoundationCollectionUri, TeamProject and BuildID are set to the build you wish to run against, and that the ReleaseID is empty.
  2. Run the command

    node .\GenerateReleaseNotesConsoleTester.js build-settings.json <your-Azure-DevOps-PAT> <Optional: your GitHub PAT>

  3. Assuming you are using the sample settings you should get an output.md file with your release notes.

Test Template Generation for a Release

Running the tool against a release is a bit more complex. This is because the logic looks back to find the most recent successful run, so if your release ran to completion you will get no release notes, as there have been no changes; it is itself the last successful release.

You have two options:

  • Allow a release to trigger, but cancel it. You can then use its ReleaseID to compare with the last release.
  • Add a stage to your release that is skipped, or only run on a manual request, and use this as the comparison stage to look for differences.

To run the tool:

  1. In the settings file make sure the TeamFoundationCollectionUri, TeamProject, BuildID, EnvironmentName (a stage in your process), ReleaseID and releaseDefinitionId are set for the release you wish to run against.
  2. Run the command

    node .\GenerateReleaseNotesConsoleTester.js release-settings.json <your-Azure-DevOps-PAT> <Optional: your GitHub PAT>

  3. Assuming you are using the sample settings you should get an output.md file with your release notes.

Hope you find it useful

New feature for Cross Platform Release notes – get parent and child work items

I have added another new feature to my Cross Platform release note generator. Now, when using Handlebars based templates, you can optionally get the parent or child work items for any work item associated with the build/release.

To enable the feature, as it is off by default, you need to set the getParentsAndChildren: true parameter for the task, either in YAML or in the handlebars section of the configuration.
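
A minimal YAML sketch is shown below; the task and the other inputs are reused from the stage-based release notes example earlier in this post, and only getParentsAndChildren is the new parameter here.

- task: XplatGenerateReleaseNotes@3
  inputs:
    outputfile: '$(Build.ArtifactStagingDirectory)\releasenotes.md'
    outputVariableName: 'outputvar'
    templateLocation: 'InLine'
    getParentsAndChildren: true
    inlinetemplate: |
      # Notes for build 
      **Build Number**: {{buildDetails.id}}
      ...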

This will add an extra array, relatedWorkItems, that the template can access. It contains all the work items associated with the build/release plus their direct parents and children, and can be accessed in the template as follows:

{{#forEach this.workItems}}
{{#if isFirst}}### WorkItems {{/if}}
* **{{this.id}}**  {{lookup this.fields 'System.Title'}}
- **WIT** {{lookup this.fields 'System.WorkItemType'}}
- **Tags** {{lookup this.fields 'System.Tags'}}
- **Assigned** {{#with (lookup this.fields 'System.AssignedTo')}} {{displayName}} {{/with}}
- **Description** {{{lookup this.fields 'System.Description'}}}
- **Parents**
{{#forEach this.relations}}
{{#if (contains this.attributes.name 'Parent')}}
{{#with (lookup_a_work_item ../../relatedWorkItems  this.url)}}
      - {{this.id}} - {{lookup this.fields 'System.Title'}}
{{/with}}
{{/if}}
{{/forEach}}
- **Children**
{{#forEach this.relations}}
{{#if (contains this.attributes.name 'Child')}}
{{#with (lookup_a_work_item ../../relatedWorkItems  this.url)}}
      - {{this.id}} - {{lookup this.fields 'System.Title'}}
{{/with}}
{{/if}}
{{/forEach}}
{{/forEach}}

This is a complex way to present the extra work items, but very flexible.

Hope people find the new feature useful.

And another new feature for my Cross Platform Release Notes Azure DevOps Task – commit/changeset file details

The addition of Handlebars based templating to my Cross Platform Release Notes Task has certainly made it much easier to release new features. The legacy templating model, it seems, is what had been holding development back.

In the past month or so I have added support for generating release notes based on PRs and Tests. I am now happy to say I have just added support for the actual files associated with a commit or changeset.

Enriching the commit/changeset data with the details of the files edited has been a repeated request over the years. The basic commit/changeset object only detailed the commit message and the author. With this new release of my task there is now a .changes property on the commit objects that exposes the details of the actual files in the commit/changeset.

This is used in a Handlebars based template as follows:

# Global list of CS ({{commits.length}})
{{#forEach commits}}
{{#if isFirst}}### Associated commits{{/if}}
* ** ID{{this.id}}** 
   -  **Message:** {{this.message}}
   -  **Committed by:** {{this.author.displayName}} 
   -  **FileCount:** {{this.changes.length}} 
{{#forEach this.changes}}
      -  **File path (use this for TFVC or TfsGit):** {{this.item.path}}  
      -  **File filename (using this for GitHub):** {{this.filename}}  
      -  **this will show all the properties available for file):** {{json this}}  
{{/forEach}}. 
{{/forEach}}

Another feature for my Cross Platform Release Notes Azure DevOps Extension – access to test results

Over the weekend I got another new feature for my Cross Platform Release Notes Azure DevOps Extension working. The test results associated with build artefacts or releases are now exposed to Handlebars based templates.

The new objects you can access are:

  • In builds
    • tests – all the tests run as part of the current build
  • In releases
    • tests – all the tests run as part of any of the current build artefacts, or run prior to the release notes task within the release environment
    • releaseTests – all the tests run within a release environment
    • builds.test – all the tests run as part of any build artefacts, grouped by build artefact

These can be used as follows in a release template:

# Builds with associated WI/CS/Tests ({{builds.length}})
{{#forEach builds}}
{{#if isFirst}}## Builds {{/if}}
##  Build {{this.build.buildNumber}}
{{#forEach this.commits}}
{{#if isFirst}}### Commits {{/if}}
- CS {{this.id}}
{{/forEach}}
{{#forEach this.workitems}}
{{#if isFirst}}### Workitems {{/if}}
- WI {{this.id}}
{{/forEach}}
{{#forEach this.tests}}
{{#if isFirst}}### Tests {{/if}}
- Test {{this.id}}
-  Name: {{this.testCase.name}}
-  Outcome: {{this.outcome}}
{{/forEach}}
{{/forEach}}

# Global list of tests ({{tests.length}})
{{#forEach tests}}
{{#if isFirst}}### Tests {{/if}}
* ** ID{{this.id}}**
-  Name: {{this.testCase.name}}
-  Outcome: {{this.outcome}}
{{/forEach}}


For more details see the documentation in the WIKI

Running SonarQube for a .NET Core project in Azure DevOps YAML multi-stage pipelines

We have been looking at migrating some of our common .NET Core libraries into new NuGet packages and have taken the chance to change our build process to use Azure DevOps Multi-stage Pipelines. Whilst doing this I hit a problem getting SonarQube analysis working, as the documentation I found was a little confusing.

The Problem

As part of the YAML pipeline re-design we were moving away from building Visual Studio SLN solution files and swapping to the .NET Core command line for the build and testing of .csproj files. Historically we had used the SonarQube build tasks that can be found in the Azure DevOps Marketplace to control SonarQube analysis. However, when we used these tasks in the new YAML pipeline we quickly found that the SonarQube analysis failed, saying it could find no projects:

##[error]No analysable projects were found. SonarQube analysis will not be performed. Check the build summary report for details.

So I next swapped to using the SonarScanner for .NET Core, assuming the issue was down to not using .NET Core commands. This gave YAML as follows:

- task: DotNetCoreCLI@2
  displayName: 'Install Sonarscanner'
  inputs:
    command: 'custom'
    custom: 'tool'
    arguments: 'install --global dotnet-sonarscanner --version 4.9.0'

- task: DotNetCoreCLI@2
  displayName: 'Begin Sonarscanner'
  inputs:
    command: 'custom'
    custom: 'sonarscanner'
    arguments: 'begin /key:"$(SonarQubeProjectKey)" /name:"$(SonarQubeName)" /d:sonar.host.url="$(SonarQubeUrl)" /d:sonar.login="$(SonarQubeProjectAPIKey)" /version:$(Major).$(Minor)'

… Build and test the project

- task: DotNetCoreCLI@2
  displayName: 'End Sonarscanner'
  inputs:
    command: 'custom'
    custom: 'sonarscanner'
    arguments: 'end /key:"$(SonarQubeProjectKey)"'

However, this gave exactly the same problem.

The Solution

The solution, it turns out, was nothing to do with either of the ways of triggering SonarQube analysis; it was down to the fact that the .NET Core .csproj files did not have unique GUIDs. Historically this had not been an issue, as when you trigger SonarQube analysis via a Visual Studio solution the GUIDs are automatically injected. The move to building using the .NET Core command line was the problem, but the fix was simple: just add a unique GUID to each .csproj file.

<Project Sdk="MSBuild.Sdk.Extras">
  <PropertyGroup>
     <TargetFramework>netstandard2.0</TargetFramework>
     <PublishRepositoryUrl>true</PublishRepositoryUrl>
     <EmbedUntrackedSources>true</EmbedUntrackedSources>
     <ProjectGuid>e2bb4d3a-879c-4472-8ddc-94b2705abcde</ProjectGuid>

Once this was done, either way of running SonarQube worked.

After a bit of thought, I decided to stay with the same tasks I have used historically to trigger the analysis. This was for a few reasons:

  • I can use a central service connection to manage the credentials used to access SonarQube
  • The tasks manage the installation and update of the SonarQube tools on the agent
  • I need to pass fewer parameters around due to the use of the service connection
  • I can more easily include the SonarQube analysis report in the build

So my YAML now looks like this:

- task: SonarSource.sonarqube.15B84CA1-B62F-4A2A-A403-89B7A063157.SonarQubePrepare@4
  displayName: 'Prepare analysis on SonarQube'
  inputs:
    SonarQube: Sonarqube
    projectKey: '$(sonarqubeKey)'
    projectName: '$(sonarqubeName)'
    projectVersion: '$(Major).$(Minor)'
    extraProperties: |
      # Additional properties that will be passed to the scanner,
      # Put one key=value per line, example:
      # sonar.exclusions=**/*.bin
      sonar.dependencyCheck.reportPath=$(Build.SourcesDirectory)/dependency-check-report.xml
      sonar.dependencyCheck.htmlReportPath=$(Build.SourcesDirectory)/dependency-check-report.html
      sonar.cpd.exclusions=**/AssemblyInfo.cs,**/*.g.cs
      sonar.cs.vscoveragexml.reportsPaths=$(System.DefaultWorkingDirectory)/**/*.coveragexml
      sonar.cs.vstest.reportsPaths=$(System.DefaultWorkingDirectory)/**/*.trx

… Build & Test with DotNetCoreCLI

- task: SonarSource.sonarqube.6D01813A-9589-4B15-8491-8164AEB38055.SonarQubeAnalyze@4
  displayName: 'Run Code Analysis'

- task: SonarSource.sonarqube.291ed61f-1ee4-45d3-b1b0-bf822d9095ef.SonarQubePublish@4
  displayName: 'Publish Quality Gate Result'

Addendum..

Even though I don’t use it in the YAML, I still found a use for the .NET Core SonarScanner commands.

We use the Developer Edition of SonarQube, which understands Git branches and PRs. This edition has a requirement that you must perform an analysis on the master branch before any analysis of other branches can be done, because branch analysis is measured relative to the quality of the master branch. I have found the easiest way to establish this baseline, even if it is of an empty project, is to run SonarScanner from the command line on my PC, just to set up the base for any PR to be measured against.

Announcing the deprecation of my Azure DevOps Pester Extension as it has been migrated to the Pester Project and republished under a new ID

Back in early 2016 I wrote an Azure DevOps Extension to wrap Pester, the PowerShell unit testing tool. Over the years I updated it, and then passed its support over to someone who knows much more about PowerShell and Pester than I do, Chris Gardner, who continued to develop it.

With the advent of the cross-platform PowerShell Core we realised that the current extension implementation had a fundamental limitation: Azure DevOps tasks can only be executed by the agent using the Windows version of PowerShell or Node, and there is no option for execution by PowerShell Core (and probably never will be). As Pester is now supported by PowerShell Core this was a serious limitation.

To get around this problem I wrote a Node wrapper that allows the existing PowerShell task to be executed using Node, by running a Node script that then shells out to PowerShell or PowerShell Core, a technique I have since used to make other extensions of mine cross-platform.

Around this time we started to discuss whether my GitHub repo was really the best home for this Pester extension, and in the end decided that this major update to provide cross-platform support was a good point to move it to a new home under the ownership of the Pester project.

So, given all that history, I am really pleased to say that I am deprecating my Pester Extension. Though my extension is not going away, and will continue to work as it currently does, it will not be updated again, and all users should consider swapping over to the new cross-platform version of the extension: the next generation of the same code base, but now owned and maintained by the Pester project (well, still Chris in reality).

Unfortunately, Azure DevOps provides no way to migrate ownership of an extension, so swapping to the new version will require some work. If you are using YAML the conversion is only a case of changing the task name/ID (see the sketch below). If you are using UI-based builds or releases you need to add the new task and do some copy-typing of parameters. The good news is that all the parameter options remain the same, so it should be a quick job.
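
As an illustration only, the YAML change might look something like the sketch below. The task references, versions and inputs shown are placeholders and assumptions, not the real IDs of either extension; check the task picker or the extension documentation for the exact values.

# Before: the task from my (now deprecated) extension
# (placeholder task reference and example inputs)
- task: Pester@8
  inputs:
    scriptFolder: '$(System.DefaultWorkingDirectory)\Tests'
    resultsFile: '$(System.DefaultWorkingDirectory)\Test-Pester.XML'

# After: the equivalent task from the Pester project's new extension
# (again a placeholder task reference; the inputs carry across unchanged)
- task: PesterRunner@9
  inputs:
    scriptFolder: '$(System.DefaultWorkingDirectory)\Tests'
    resultsFile: '$(System.DefaultWorkingDirectory)\Test-Pester.XML'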

Also please note that any outstanding issues not fixed in the new release have been migrated over to the extension's new home; they have not been forgotten.

So I hope you all like the new enhanced version of the Pester Extension, and thanks to Chris for sorting out the migration and all his work supporting it.