GitHub Issues are core to tracking work in GitHub. Their flexibility is their biggest advantage and disadvantage. As a project maintainer, I always need specific information when an issue is raised, whether it is a bug or a feature request.
Historically, I have used Issue Templates, but these templates are not enforced. They add a suggestion for the issue text, but this can be ignored by the person raising the issue, and I can assure you they often do.
I have been lucky enough to have a look at GitHub Issue Forms, which is currently in early private beta. This new feature aims to address the problem by making the creation of issues form-based using YML templates.
I have swapped to using them on my most active repos, Azure DevOps Pipeline extensions and GitHub Release Notes Action. My initial experience has been very good: the usual YML issue of incorrect indenting, but nothing more serious. They allow the easy creation of rich forms that are specific to the project.
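As an illustration, an issue form is a YML file placed under `.github/ISSUE_TEMPLATE`. The example below is a minimal sketch of a bug-report form; the labels and field IDs are my own choices, and the schema may have evolved since the beta.

```yaml
# .github/ISSUE_TEMPLATE/bug.yml - illustrative issue form sketch
name: Bug report
description: Report a problem with the extension
labels: [bug]
body:
  - type: input
    id: version
    attributes:
      label: Version
      description: Which version of the extension are you running?
    validations:
      required: true       # the person raising the issue cannot skip this
  - type: textarea
    id: repro
    attributes:
      label: Steps to reproduce
      description: What did you do, and what happened?
    validations:
      required: true
```

The `required` validations are the key difference from classic templates: the form cannot be submitted without the mandatory fields being filled in.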
The next step is to see if the quality of the logged issues improves.
It is easy to get your local branches in Git out of sync with the upstream repository, leaving old dead branches locally that you can’t remember creating. You can use the prune option on your Git Fetch command to remove the remote branch references, but that command does nothing to remove local branches.
A good while ago, I wrote a small PowerShell script to wrap the running of Git Fetch and then, based on the reported deletions, remove any matching local branches, before finally returning me to my trunk branch.
Note: This script was based on a sample I found, but I can’t remember where, so I am unable to give credit, sorry.
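I cannot reproduce the original script, but a minimal sketch of the approach (parse the prune output for deleted remote branches, delete the matching local branches, then return to the trunk) might look like the following; the branch-matching regex and the trunk branch name are assumptions for illustration.

```powershell
# Sketch of a Remove-DeletedGitBranches.ps1 style script (illustrative, not the original)
param([switch]$Force)

# Fetch and prune, capturing the '[deleted]' messages emitted for removed remote refs
$deleted = git fetch --prune 2>&1 |
    Select-String -Pattern '\[deleted\].+origin/(\S+)' |
    ForEach-Object { $_.Matches[0].Groups[1].Value }

# Remove any local branch that matches a pruned remote branch
foreach ($branch in $deleted) {
    if ($Force) { git branch -D $branch } else { git branch -d $branch }
}

# Finally return to the trunk branch (assumed here to be 'main')
git checkout main
```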
I used to just run this command from the command line, but I recently thought it would be easier if it became a Git alias. As Git aliases run in a bash shell, this meant I needed to shell out to PowerShell 7. Hence, my Git config ended up as shown below:
[user]
	name = Richard Fennell
	email = richard@blackmarble.co.uk
[filter "lfs"]
	required = true
	clean = git-lfs clean -- %f
	smudge = git-lfs smudge -- %f
	process = git-lfs filter-process
[init]
	defaultBranch = main
[alias]
	tidy = !pwsh.exe C:/Users/fez/OneDrive/Tools/Remove-DeletedGitBranches.ps1 -force
I can just run `git tidy` and all my branches get sorted out.
The task has supported automated tests, run as part of the build or release process, for a while. However, until this release, there was no way to generate release notes based on manual tests.
Manual Test results are now made available to the templating engine using two new objects:
manualTests – the array of manual Test Plan runs associated with any of the builds linked to the release. This includes sub-objects detailing each test. Note: test runs are also available under the builds array when the task is used in a release; for each build object there is a list of its manual tests, as well as commits, work items, etc.
manualTestConfigurations – the array of manual test configurations that tests have been run against.
The second object, storing the test configurations, is required because the test results contain only the ID of the configuration used, not any useful detail such as a name or description. The extra object allows a lookup to be done if this information is required in the release notes, e.g. if you have chosen to list out each test, and each test is run multiple times in a test run against different configurations such as UAT and Live.
So you can now generate release notes with summaries of manual test runs, using a template in this form:
## Manual Test Plans
| Run ID | Name | State | Total Tests | Passed Tests |
| --- | --- | --- | --- | --- |
{{#forEach manualTests}}
| [{{this.id}}]({{this.webAccessUrl}}) | {{this.name}} | {{this.state}} | {{this.totalTests}} | {{this.passedTests}} |
{{/forEach}}
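If you want to list individual tests with their configuration names, a lookup against `manualTestConfigurations` is needed. The sketch below is illustrative only: the sub-object name `this.tests` and the field `configurationId`, and indexing the configurations array by that ID, are assumptions, not confirmed field names from the task.

```handlebars
{{#forEach manualTests}}
### {{this.name}}
{{#forEach this.tests}}
<!-- 'this.tests' and 'configurationId' are assumed names for illustration -->
- {{this.testCaseTitle}} ({{lookup ../manualTestConfigurations this.configurationId}})
{{/forEach}}
{{/forEach}}
```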
I have spent too long today trying to track down an intermittent “SQLite Error 5: ‘database is locked’” error in .NET Core Entity Framework.
I have read plenty of documentation and even tried swapping to use SQL Server, as opposed to SQLite, but this just resulted in the error ‘There is already an open DataReader associated with this Connection which must be closed first.’.
So everything pointed to it being a mistake I had made.
And it was: it turns out the issue was that I had the dbContext.SaveChanges() call inside a foreach loop.
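The pattern looked something like the sketch below; the `Order` entity and `Processed` property are illustrative names, not taken from my actual code.

```csharp
// Buggy: SaveChanges() runs while the query is still being enumerated on the
// same DbContext, so a DataReader is still open on the connection. On SQLite
// this surfaces as the intermittent "database is locked" error.
foreach (var order in dbContext.Orders.Where(o => !o.Processed))
{
    order.Processed = true;
    dbContext.SaveChanges();
}

// Fixed: materialise the query first, then save once after the loop.
foreach (var order in dbContext.Orders.Where(o => !o.Processed).ToList())
{
    order.Processed = true;
}
dbContext.SaveChanges();
```

Moving the save outside the loop also avoids one round trip to the database per row, so it is the better pattern regardless of the locking error.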
SPWakeUp version 1.3.0 has been released. This version includes the ability to run on a non-SharePoint server to wake arbitrary URLs.
Use the ‘-NotASharePointServer’ command line parameter to specify that SPWakeUp should not check for the presence of SharePoint and should not attempt to evaluate a list of web applications, site collections and sub-sites.
Note: You MUST include one or both of the ‘-Include’ and ‘-IncludeFile’ command line parameters when using ‘-NotASharePointServer’ to specify a list of URLs to be woken. Failure to do so will result in SPWakeUp not attempting to wake any URLs.
Select the quick action ‘Change Organization Region’
Follow the wizard to pick the new region and the date for the move.
You are warned that there could be a short loss of service during the move. Much of the move is done as a background process. It is only the final switch over that can interrupt service, hence this interruption being short.
I followed this process, but after the planned move date I found my organisation had not moved. In the Virtual Support Agent, I found the message:
Please note that region move requests are currently delayed due to ongoing deployments. We may not be able to perform the change at your requested time and may ask you to reschedule. We apologize for the potential delay and appreciate your patience!
I received no other emails (I suspect overly aggressive spam filters were the cause), but it meant I was unclear what to do next. Should I:
Just wait i.e. do not reschedule anything, even though the target date is now in the past
Reschedule the existing move request to a date in the future using the virtual assistant wizard
Cancel the old request and start the process again from scratch
After asking the question in the Visual Studio Developer Community Forums, I was told the correct action is to cancel the old request and request a new move date. It seems that once your requested date has passed, the move will not take place no matter how long you wait.
Hence, I created a new request, which all went through exactly as planned.
One of my most popular Azure DevOps Extensions is my Release Notes Pipeline task. This allows the creation of release notes using information obtained from the Azure DevOps API and formatted using a Handlebars Template.
Given the popularity of GitHub Actions, I got to wondering whether porting this extension was viable.
The basic concept of this new action is the same as for the older task: get information on the pipeline/workflow run using the API and then format it using a Handlebars template. However, the information that can be returned is different, which stands to reason, as GitHub is not Azure DevOps. This is especially true when you consider the differences between the simplicity of GitHub Issues and the complexity, and variability of format, of Azure DevOps Work Items.
The new action is focused on the workflow run it is called from. It makes an API call to get the details of the run, which contains a lot of information about the run and URL links to associated items. Using these links, the associated information is retrieved using the API and the results added to the objects available in the Handlebars template. In this initial version of the action, the object tree available in a template includes:
runDetails – the details of the current workflow run
pull_requests – the array of pull requests associated with the run
commits – the array of commits associated with the PR
comments – the array of comments associated with the PR
linkedIssues – the array of issues linked to the PR
As with my Azure DevOps extension, I have made use of the extensibility of Handlebars. My new action includes all the Handlebars helpers, plus some action-specific helpers I have written, and you have the ability to add your own custom Handlebars helpers if needed.
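A template for this action might look like the sketch below. The field names (`run_number`, `number`, `title`, `commit.message`) are assumptions based on the standard GitHub REST API response shapes, not taken from the action's documentation, and the built-in `#each` helper is used for iteration.

```handlebars
# Release {{runDetails.run_number}}

## Pull Requests
{{#each pull_requests}}
- PR {{this.number}}: {{this.title}}
{{/each}}

## Commits
{{#each commits}}
- {{this.commit.message}}
{{/each}}
```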
So I hope people find this new action useful; I guess only time will tell.
Organisations are facing increased, and unprecedented, pressure from the market to transform digitally, and as a result they need to think about how they can become more attractive in their specific market space. The most common area to concentrate on is becoming more efficient. This can be thought of in discrete parts: streamlining internal, often complex, business processes; making the organisation easier to work with for suppliers or customers; having better cross-business views on information; improving service delivery; or saving on manual processing. All these efficiencies are brought to bear by automation. Whether it is person-to-system data sharing, system-to-system, or business-to-business, the same principles apply but different techniques may be used.
Today most organisations run their business on systems spread across multiple heterogeneous environments, which poses interoperability issues and other such difficulties. To address this, businesses need to look to integrating their data solutions to provide easy access to standard data across internal and external systems, layering business process automation on top.
Business process automation is how an organisation defines its business processes and automates them, so they can be run repeatedly and reliably, at low cost, with a reduction in human error.
I was reminded recently of the hoops you have to jump through to run UWP unit tests within an Azure DevOps automated build.
The key steps you need to remember are as follows
Desktop Interaction
The build agent should not be running as a service; it must be able to interact with the desktop.
If you did not set this mode during configuration this post from Donovan Brown shows how to swap the agent over without a complete reconfiguration.
Test Assemblies
The UWP unit test projects are not built as a DLL, but as an EXE.
I stupidly just made my VSTest task look for the generated EXE and run the tests it contained. This does not work, generating the somewhat confusing error:
Test run will use DLL(s) built for framework .NETFramework,Version=v4.0 and platform X86. Following DLL(s) do not match framework/platform settings. BlackMarble.Spectrum.FridgeManagement.Client.OneWire.UnitTests.exe is built for Framework .NETCore,Version=v5.0 and Platform X86.
What you should search for as the entry point for the tests is the .appxrecipe file. Once I used this, my tests ran.
So my pipeline YML to run all the tests in a built solution was:
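A sketch of such a step is shown below; the search pattern, folder, and variable names are assumptions for illustration, though the `VSTest@2` inputs used are standard ones.

```yaml
# Sketch of a VSTest@2 step targeting .appxrecipe files rather than DLLs/EXEs
- task: VSTest@2
  inputs:
    testSelector: 'testAssemblies'
    testAssemblyVer2: |
      **\*.appxrecipe
    searchFolder: '$(Build.ArtifactStagingDirectory)'   # assumed output location
    platform: '$(buildPlatform)'
    configuration: '$(buildConfiguration)'
```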