How I dealt with a strange problem with PSRepositories and dotnet NuGet sources

Background

We regularly re-build our Azure DevOps private agents using Packer and Lability, as I have posted about before.

Since the latest re-build, we have seen all sorts of problems, all related to pulling packages and tools from NuGet-based repositories, and none of them problems we had seen with any previous generation of our agents.

The Issue

The issue turned out to be related to registering a private PowerShell repository.

$RegisterSplat = @{
    Name               = 'PrivateRepo'
    SourceLocation     = 'https://psgallery.mydomain.co.uk/nuget/PowerShell'
    PublishLocation    = 'https://psgallery.mydomain.co.uk/nuget/PowerShell'
    InstallationPolicy = 'Trusted'
}

Register-PSRepository @RegisterSplat

Running this command caused the default dotnet NuGet source to be unregistered, i.e. the command dotnet nuget list source was expected to return

Registered Sources:
  1.  PrivateRepo
      https://psgallery.mydomain.co.uk/nuget/Nuget
  2.  nuget.org [Enabled]
      https://www.nuget.org/api/v2/
  3.  Microsoft Visual Studio Offline Packages [Enabled]
      C:\Program Files (x86)\Microsoft SDKs\NuGetPackages\

But it returned

Registered Sources:
  1.  PrivateRepo
      https://psgallery.mydomain.co.uk/nuget/Nuget
  2.  Microsoft Visual Studio Offline Packages [Enabled]
      C:\Program Files (x86)\Microsoft SDKs\NuGetPackages\

The Workaround

You can’t really call this a solution, as I cannot see why it should be needed, but the following command does fix the problem:

 dotnet nuget add source https://api.nuget.org/v3/index.json -n nuget.org
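
If you rebuild agents regularly, it may be worth guarding against the problem in the provisioning script; a minimal PowerShell sketch of that idea (the source name and URL are just the standard nuget.org ones) could be:

# Re-add the default nuget.org source if registering the PSRepository has removed it
$sources = dotnet nuget list source
if (-not ($sources -match 'nuget\.org')) {
    dotnet nuget add source https://api.nuget.org/v3/index.json -n nuget.org
}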

Porting my Visual Studio Parameters.xml Generator tool to Visual Studio 2022 Preview

As I am sure you are all aware, the preview of Visual Studio 2022 has just dropped, so it is time for me to update my Parameters.xml Generator Tool to support this new version of Visual Studio.

But what does my extension do?

As the Marketplace description says…

A tool to generate parameters.xml files for MSdeploy from the existing web.config file or from an app.config file for use with your own bespoke configuration transformation system.

Once the VSIX package is installed, to use it right-click on a web.config or app.config file in Solution Explorer and a parameters.xml file will be generated using the current entries for both configuration/applicationSettings and configuration/appSettings. The value attributes will contain TAG style entries suitable for replacement at deployment time.
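
For illustration (this is not output copied from the tool, just the standard MSDeploy parameters.xml shape), an appSettings entry named WebServiceUrl would typically end up represented along these lines, with the TAG style default value ready for replacement at deployment time:

<parameters>
  <parameter name="WebServiceUrl" description="WebServiceUrl" defaultValue="__WEBSERVICEURL__" tags="">
    <parameterEntry kind="XmlFile" scope="\\web.config$" match="/configuration/appSettings/add[@key='WebServiceUrl']/@value" />
  </parameter>
</parameters>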

If a parameters.xml file already exists in the folder (even if it is not a file in the project), you will be prompted before it is overwritten.

Currently, the version of the Parameters.xml Generator Tool in the Marketplace supports Visual Studio 2015, 2017 & 2019.

Adding Visual Studio 2022 Support

The process to add 2022 support is more complicated than for past new versions, where all that was usually required was an update to the manifest. This is due to the move to 64-bit.

Luckily the process is fairly well documented, but of course I still had a few problems.

MSB4062: The “CompareBuildTaskVersion” task could not be loaded from the assembly

When I tried to build the existing solution, without any changes, in Visual Studio 2022 I got the error

MSB4062: The “CompareBuildTaskVersion” task could not be loaded from the assembly D:\myproject\packages\Microsoft.VSSDK.BuildTools.15.8.3253\tools\VSSDK\Microsoft.VisualStudio.Sdk.BuildTasks.15.0.dll. Could not load file or assembly.

This was fixed by updating the package Microsoft.VSSDK.BuildTools from 15.1.192 to 16.9.1050.
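
If, like me, the project still uses packages.config, something like the following in the Package Manager Console should do it (or update via the NuGet Package Manager UI):

Update-Package Microsoft.VSSDK.BuildTools -Version 16.9.1050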

Modernizing the Existing VSIX project

I did not modernize the existing VSIX project before I started the migration. When I clicked ‘Migrate packages.config to PackageReference…’ it said my project was not a suitable version, so I just moved on to the next step.

Adding Link Files

After creating the shared code project, that contains the bulk of the files, I needed to add links to some of the resources i.e. the license file, the package icon and .VSCT file.

When I tried to add the links, I got an error in the form

 Cannot add another link for the same file in another project

I tried exiting Visual Studio and cleaning the solution, but nothing helped. The solution was to edit the .CSPROJ file manually in a text editor, e.g.

 <ItemGroup>
    <Content Include="Resources\License.txt">
      <CopyToOutputDirectory>Always</CopyToOutputDirectory>
    </Content>
    <Content Include="..\ParametersXmlAddinShared\Resources\Package.ico">
      <Link>Package.ico</Link>
      <IncludeInVSIX>true</IncludeInVSIX>
    </Content>
    <Content Include="Resources\Package.ico">
      <CopyToOutputDirectory>Always</CopyToOutputDirectory>
    </Content>
    <Content Include="..\ParametersXmlAddinShared\Resources\License.txt">
      <Link>License.txt</Link>
      <IncludeInVSIX>true</IncludeInVSIX>
    </Content>
    <EmbeddedResource Include="Resources\ParametersUppercaseTransform.xslt" />
    <VSCTCompile Include="..\ParametersXmlAddinShared\ParametersXmlAddin.vsct">
      <Link>ParametersXmlAddin.vsct</Link>
      <ResourceName>Menus.ctmenu</ResourceName>
    </VSCTCompile>
  </ItemGroup>

Publishing the new Extension

Once I had completed the migration steps, I had a pair of VSIX files. The previously existing one that supported Visual Studio 2015, 2017 & 2019 and the new Visual Studio 2022 version.

The migration notes say that in the future we will be able to upload both VSIX files to a single Marketplace entry and the Marketplace will sort out delivering the correct version.

Unfortunately, that feature is not available at present. So for now the new Visual Studio 2022 VSIX is published separately from the old one with a preview flag.

As soon as I can, I will merge the new VSIX into the old Marketplace entry and remove the preview 2022 version of the VSIX.

Automating the creation of Team Projects in Azure DevOps

Creating a new project in Azure DevOps with your desired process template is straightforward. However, it is only the start of the job for most administrators. They will commonly want to set up other configuration settings such as branch protection rules, default pipelines etc. before giving the team access to the project. All this administration can be very time consuming and of course prone to human error.

To make this process easier, quicker and more consistent, I have developed a process to automate all of this work. It uses a mixture of the following:

A sample team project whose Git repo contains the base code I want in my new Team Project’s default Git repo. In my case this includes:

  • An empty Azure Resource Management (ARM) template
  • A .NET Core Hello World console app with an associated .NET Core Unit Test project
  • A YAML pipeline to build and test the above items, as well as generating release notes into the Team Project WIKI

A PowerShell script, sketched below, that uses both az devops and the Azure DevOps REST API to

  • Create a new Team Project
  • Import the sample project Git repo into the new Team Project
  • Create a WIKI in the new Team Project
  • Add a SonarQube/SonarCloud Service Endpoint
  • Update the YAML file for the pipeline to point to the newly created project resources
  • Update the branch protection rules
  • Grant access privileges as needed for service accounts
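
The full script is linked at the end of this post; the sketch below shows the kind of az devops commands it is built around. The organisation, project and sample repo names are placeholders, and error handling and the REST API calls are omitted.

# Illustrative sketch only - the real script adds error handling, REST calls and more
$org        = 'https://dev.azure.com/myorg'                       # placeholder organisation
$newProject = 'MyNewProject'                                      # placeholder project name
$sampleRepo = 'https://dev.azure.com/myorg/Sample/_git/Sample'    # placeholder sample repo to import

# Create the new Team Project (assumes az devops is already logged in)
az devops project create --name $newProject --organization $org --process Agile --source-control git

# Import the sample Git repo into the new project's default repo
az repos import create --git-source-url $sampleRepo --repository $newProject --project $newProject --organization $org

# Create the project WIKI
az devops wiki create --name "$newProject.wiki" --project $newProject --organization $org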

The script is far from perfect and it could do much more, but it covers the core requirements I need.

You could of course enhance it as required, removing features you don’t need and adding code to do jobs such as adding any standard Work Items you require at the start of a project, or altering the contents of the sample repo to be cloned to better match your most common project needs.

You can find the PowerShell script in my AzureDevOpsPowershell GitHub repo; I hope you find it useful.

Getting the approver for release to an environment within an Azure DevOps Multi-Stage YAML pipeline

I recently had the need to get the email address of the approver of a deployment to an environment from within a multi-stage YAML pipeline. Turns out it was not as easy as I might have hoped given the available documented APIs.

Background

My YAML pipeline included a manual approval to allow deployment to a given environment. Within the stage protected by the approval, I needed the approver’s details, specifically their email address.

I managed to achieve this but had to use undocumented API calls. These were discovered by looking at Azure DevOps UI operations using development tools within my browser.

The Solution

The process was as follows

  • Make a call to the build’s timeline to get the current stage’s GUID – this is a documented API call.
  • Make a call to the Contribution/HierarchyQuery API to get the approver details – this is the undocumented API call.

The code to do this makes use of predefined variables to pass in the details of the current run and stage; a sketch of the approach is shown below.

Note that I had to re-create the web client object between each API call. If I did not do this I got a 400 Bad Request on the second API call – it took me ages to figure this out!
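
This is not the original script, just a minimal sketch; the HierarchyQuery payload shape, the data provider name and the path to the approver in the response are all assumptions based on watching the UI traffic, so inspect the response to confirm them.

# Illustrative sketch only
# System.AccessToken must be mapped into the script's environment in the YAML
$collectionUri = $env:SYSTEM_TEAMFOUNDATIONCOLLECTIONURI   # ends with a trailing /
$project       = $env:SYSTEM_TEAMPROJECT
$buildId       = $env:BUILD_BUILDID
$stageName     = $env:SYSTEM_STAGENAME
$headers       = @{ Authorization = "Bearer $($env:SYSTEM_ACCESSTOKEN)" }

# 1. Documented API: read the build timeline and find the GUID of the current stage
$timelineUri = "{0}{1}/_apis/build/builds/{2}/timeline?api-version=6.0" -f $collectionUri, $project, $buildId
$timeline    = Invoke-RestMethod -Uri $timelineUri -Headers $headers
$stageId     = ($timeline.records | Where-Object { $_.type -eq 'Stage' -and $_.identifier -eq $stageName }).id

# 2. Undocumented API: ask the checks-panel data provider for the approval on that stage.
#    Each call here is a fresh web request, which avoids the 400 seen when re-using one client object.
$hierarchyUri = "{0}_apis/Contribution/HierarchyQuery/project/{1}?api-version=5.0-preview.1" -f $collectionUri, $project
$body = @{
    contributionIds     = @('ms.vss-build-web.checks-panel-data-provider')   # assumed data provider name
    dataProviderContext = @{ properties = @{ buildId = $buildId; stageIds = $stageId } }   # assumed payload shape
} | ConvertTo-Json -Depth 10

$response = Invoke-RestMethod -Uri $hierarchyUri -Method Post -Headers $headers -ContentType 'application/json' -Body $body

# The exact path to the approver in the response is also an assumption
$panelData     = $response.dataProviders.'ms.vss-build-web.checks-panel-data-provider'
$approverEmail = $panelData.approvals[0].steps[0].actualApprover.uniqueName
Write-Host "Stage approved by $approverEmail"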

Loading drivers for cross-browser testing with Selenium

Another post so I don’t forget how I fixed a problem….

I have been making sure some Selenium UX tests that were originally written against Chrome also work with other browsers. I have had a few problems, with the browser under test failing to load or Selenium not being able to find elements.

Turns out the solution is to use the custom driver start-up options; the default constructors don’t seem to work for browsers other than Chrome and Firefox.

Hence, I now have a helper method that creates a driver for me based on a configuration parameter

  using System.Configuration;
  using System.IO;
  using OpenQA.Selenium;
  using OpenQA.Selenium.Chrome;
  using OpenQA.Selenium.Edge;
  using OpenQA.Selenium.Firefox;
  using OpenQA.Selenium.IE;

  // Creates the WebDriver named by the 'webdriver' setting in the test configuration
  internal static IWebDriver GetWebDriver()
    {
        var driverName = GetWebConfigSetting("webdriver");
        switch (driverName)
        {
            case "Chrome":
                return new ChromeDriver();
            case "Firefox":
                return new FirefoxDriver();
            case "IE":
                InternetExplorerOptions caps = new InternetExplorerOptions();
                caps.IgnoreZoomLevel = true;
                caps.EnableNativeEvents = false;
                caps.IntroduceInstabilityByIgnoringProtectedModeSettings = true;
                caps.EnablePersistentHover = true;
                return new InternetExplorerDriver(caps);
            case "Edge-Chromium":
                var service = EdgeDriverService.CreateDefaultService(Directory.GetCurrentDirectory(), "msedgedriver.exe");
                return new EdgeDriver(service);
            default:
                throw new ConfigurationErrorsException($"{driverName} is not a known Selenium WebDriver");
        }
    }

A first look at the beta of GitHub Issue Forms

Update 10 May 2021: Remember that GitHub Issue Forms are in early beta, so you need to keep an eye on the regular new releases as they come out. For example, my GitHub Issue Forms stopped showing last week. This was due to me using now-deprecated lines in the YAML definition files. Once I edited the files to update the YAML, they all leapt back into life.


GitHub Issues are core to tracking work in GitHub. Their flexibility is both their biggest advantage and their biggest disadvantage. As a maintainer of a project, I always need specific information when an issue is raised, whether it be a bug or a feature request.

Historically, I have used Issue Templates, but these templates are not enforced. They add suggested text for the issue, but this can be ignored by the person raising the issue, and I can assure you it often is.

I have been lucky enough to have a look at GitHub Issue Forms, which is currently in early private beta. This new feature aims to address the problem by making the creation of issues form-based, using YAML templates.

I have swapped to using them on my most active repos: Azure DevOps Pipeline extensions and GitHub Release Notes Action. My initial experience has been very good; the usual YAML issue of incorrect indenting, but nothing more serious. They allow the easy creation of rich forms that are specific to the project.

The next step is to see if the quality of the logged issues improves.

Tidying up local branches with a Git Alias and a PowerShell Script

It is easy to get your local branches in Git out of sync with the upstream repository, leaving old dead branches locally that you can’t remember creating. You can use the prune option on your Git Fetch command to remove the remote branch references, but that command does nothing to remove local branches.

A good while ago, I wrote a small PowerShell script to wrapper the running of Git Fetch and then, based on the deletions, remove any matching local branches, finally returning me to my trunk branch.

Note: This script was based on a sample I found, but I can’t remember where, so I am unable to give credit, sorry.

I used to just run this command from the command line, but I recently thought it would be easier if it became a Git Alias. As Git Aliases run in a bash shell, this meant I needed to shell out to PowerShell 7. Hence, my Git Config ended up as shown below

[user]
        name = Richard Fennell
        email = richard@blackmarble.co.uk
[filter "lfs"]
        required = true
        clean = git-lfs clean -- %f
        smudge = git-lfs smudge -- %f
        process = git-lfs filter-process
[init]
        defaultBranch = main
[alias]
        tidy = !pwsh.exe C:/Users/fez/OneDrive/Tools/Remove-DeletedGitBranches.ps1 -force

I can now just run ‘git tidy’ and all my branches get sorted out.
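
The script itself lives in my tools folder; a minimal sketch of the same idea (not the original script, and assuming the trunk branch is called main) would be:

# Remove-DeletedGitBranches.ps1 - illustrative sketch, not the original script
param([switch]$Force)

# Fetch and prune, capturing the remote branches Git reports as deleted
$deleted = git fetch --prune 2>&1 |
    Select-String -Pattern 'deleted.*?origin/(.+)$' |
    ForEach-Object { $_.Matches[0].Groups[1].Value.Trim() }

foreach ($branch in $deleted) {
    # -D force-deletes the local branch; -d only deletes it if it is fully merged
    if ($Force) { git branch -D $branch } else { git branch -d $branch }
}

# Return to the trunk branch (assumed to be 'main')
git checkout main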

Added Manual Test Plan support to my Azure DevOps Cross-Platform Release notes Task

I have just released 3.46.4 of my Azure DevOps Cross-Platform Release Notes task, which adds support for generating release notes based on the results of Azure DevOps Test Plans.

There has been support in the task for automated tests, run as part of the build or release process, for a while. However, until this release, there was no way to generate release notes based on manual tests.

Manual Test results are now made available to the templating engine using two new objects:

  • manualTests – the array of manual Test Plan runs associated with any of the builds linked to the release. This includes sub-objects detailing each test.
    Note: Test runs are also available under the builds array when the task is used in a release; for each build object there is a list of its manual tests, as well as its commits, work items etc.
  • manualTestConfigurations – the array of manual test configurations the tests have been run against.

The second object, storing the test configurations, is required because the test results only contain the ID of the configuration used, not any useful detail such as a name or description. The extra object allows a lookup to be done if this information is required in the release notes, e.g. if you have chosen to list out each test, and each test is run multiple times in a test run against different configurations such as UAT and Live.

So you can now generate release notes with summaries of manual test runs

Using a template in this form

## Manual Test Plans
| Run ID | Name | State | Total Tests | Passed Tests |
| --- | --- | --- | --- | --- |
{{#forEach manualTests}}
| [{{this.id}}]({{this.webAccessUrl}}) | {{this.name}} | {{this.state}} | {{this.totalTests}} | {{this.passedTests}} |
{{/forEach}}

Or detailing out all the individual tests

with a template like this

## Manual Test Plans with test details
{{#forEach manualTests}}
### [{{this.id}}]({{this.webAccessUrl}}) {{this.name}} - {{this.state}}
| Test | Outcome | Configuration |
| - | - | - |
{{#forEach this.TestResults}}
| {{this.testCaseTitle}} | {{this.outcome}} | {{#with (lookup_a_test_configuration ../../manualTestConfigurations this.configuration.id)}} {{this.name}} {{/with}} |
{{/forEach}}
{{/forEach}}

Fixing my SQLite Error 5: ‘database is locked’ error in Entity Framework

I have spent too long today trying to track down an intermittent “SQLite Error 5: ‘database is locked’” error in .NET Core Entity Framework.

I read plenty of documentation and even tried swapping to SQL Server, as opposed to SQLite, but this just resulted in the error ‘There is already an open DataReader associated with this Connection which must be closed first’.

So everything pointed to it being a mistake I had made.

And it was. It turns out the issue was that I had the dbContext.SaveChanges() call inside a foreach loop, so each save was attempted while the connection was still streaming the results of the query being enumerated by that loop.

It was

using (var dbContext = scope.ServiceProvider.GetRequiredService<MyDbContext>()) { // MyDbContext is a placeholder for the app's DbContext type
    var itemsToQueue = dbContext.CopyOperations.Where(o => o.RequestedStartTime < DateTime.UtcNow && o.Status == OperationStatus.Queued);
    foreach (var item in itemsToQueue) {
        item.Status = OperationStatus.StartRequested;
        item.StartTime = DateTime.UtcNow;
        dbContext.SaveChanges();
    }
}

And it should have been

using (var dbContext = scope.ServiceProvider.GetRequiredService<MyDbContext>()) { // MyDbContext is a placeholder for the app's DbContext type
    var itemsToQueue = dbContext.CopyOperations.Where(o => o.RequestedStartTime < DateTime.UtcNow && o.Status == OperationStatus.Queued);
    foreach (var item in itemsToQueue) {
        item.Status = OperationStatus.StartRequested;
        item.StartTime = DateTime.UtcNow;
    }
    dbContext.SaveChanges();
}

Once this change was made my error disappeared.

What to do when moving your Azure DevOps organisation from one region to another is delayed

There are good reasons why you might wish to move an existing Azure DevOps organisation from one region to another. The most common ones are probably:

  • A new Azure DevOps region has become available since you created your organisation that is a ‘better home’ for your projects.
  • New or changing national regulations require your source code to be stored in a specific location.
  • You want your repositories as close to your workers as possible, to reduce network latency.

One of these reasons meant I recently had to move an Azure DevOps organisation, so I followed the documented process. This requires you to:

  1. Whilst logged in as the Azure DevOps organisation owner, open the Azure DevOps Virtual Support Agent
  2. Select the quick action ‘Change Organization Region’
  3. Follow the wizard to pick the new region and the date for the move.

You are warned that there could be a short loss of service during the move. Much of the move is done as a background process. It is only the final switch-over that can interrupt service, hence the interruption being short.

I followed this process, but after the planned move date I found my organisation had not moved. In the Virtual Support Agent, I found the message:

Please note that region move requests are currently delayed due to ongoing deployments. We may not be able to perform the change at your requested time and may ask you to reschedule. We apologize for the potential delay and appreciate your patience!

I received no other emails; I suspect overly aggressive spam filters were the cause of that, but it meant I was unclear what to do next. Should I:

  1. Just wait i.e. do not reschedule anything, even though the target date is now in the past
  2. Reschedule the existing move request to a date in the future using the virtual assistant wizard
  3. Cancel the old request and start the process again from scratch

After asking the question in the Visual Studio Developer Community forums, I was told the correct action is to cancel the old request and submit a new one with a new move date. It seems that once your requested date has passed, the move will not take place no matter how long you wait.

Hence, I created a new request, which all went through exactly as planned.