A fix for "Error: SignerSign() failed." (-2146958839/0x80080209) with SignTool.exe

I have spent too long recently trying to sign a UWP .MSIXBUNDLE generated from an Azure DevOps build using SignTool.exe and our code signing certificate. I kept getting the error:

Done Adding Additional Store
Error information: "Error: SignerSign() failed." (-2146958839/0x80080209)

From past experience, SignTool errors are usually caused by the publisher details in the XML manifest files not matching the subject details of the PFX file being used for signing (in this case, unpack the bundle with MakeAppx.exe and look in AppxMetadata\AppxBundleManifest.xml, and also check the manifests in the bundled .MSIX files).

Or so I thought…..

Turns out you can get this error too if you use the wrong version of SignTool, but the error gives no clue to this fact.

So the top tip is …

Make sure you use the SignTool.exe from the same folder as the MakeAppx.exe tool. In my case that was “C:\Program Files (x86)\Windows Kits\10\bin\10.0.17763.0\x64”.

Once I did this, after of course updating all the manifest files with the correct publisher details, I was able to sign my bundle as I wanted.
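To put that together, the whole process looks something like this (the bundle and certificate names are made up for illustration, and the kit path is just the one from my machine):

# Use MakeAppx.exe and SignTool.exe from the same Windows Kits folder so the versions match
$kit = 'C:\Program Files (x86)\Windows Kits\10\bin\10.0.17763.0\x64'

# Unpack the bundle to check the publisher details in the manifests
& "$kit\MakeAppx.exe" unbundle /p .\MyApp.msixbundle /d .\Unpacked

# Sign the bundle with the matching-version SignTool ($pfxPassword holds the certificate password)
& "$kit\SignTool.exe" sign /fd SHA256 /f .\CodeSigning.pfx /p $pfxPassword .\MyApp.msixbundle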

Migrating a GUI based build to YAML in Azure DevOps Pipelines

Introduction

I use Azure DevOps Pipelines for the build and release of my Azure DevOps Pipeline extensions; I previously detailed my process here.

For a good few months now YAML builds have been available. These provide the key advantage that the build is defined in a YAML text file that is stored with your product’s source code, thus allowing you to more easily track build changes. Also bulk editing becomes easier as a simple text editor can be used.

I have been putting off moving my current GUI-based builds as there is a bit of work involved; this post documents the steps I took.

Process

Getting the old build content

First I created a new branch in my local copy of the GitHub repo that stores the source for my extensions.

I then created an empty file, azure-pipelines-build.yaml, in the same folder as the root of the extension whose build I was replacing. I created the file empty because the current ‘create new build’ UI allows you to pick an existing file or create one for you, but if you let it create one you get no control over where it is placed or how it is named.

In the existing build I then clicked the pipeline-level ‘View YAML’ link.


Note:  Initially I found this link disabled, but if you click around the UI, into the task details, variables etc, it eventually becomes enabled. I have no idea why.

I copied this YAML into my newly created azure-pipelines-build.yaml file, committed the file and pushed it to GitHub on the new branch.
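As a recap, the steps so far as commands (the branch, folder and extension names are just examples):

# Create a branch, add the empty YAML file, then commit and push it
git checkout -b yaml-build
New-Item -ItemType File -Path .\MyExtension\azure-pipelines-build.yaml
# ...paste the YAML exported via 'View YAML' into the file, then...
git add .\MyExtension\azure-pipelines-build.yaml
git commit -m "Add YAML build definition"
git push --set-upstream origin yaml-build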

Creating the YAML build

I then created a new YAML based build, picking in my case GitHub as the source host, the correct branch, and correct file.

This YAML contains the core of what is needed, but the build was missing some items such as triggers, the build number and variables.

I added

  • the name (build number format)
  • the PR triggers

to the YAML file, but decided to declare my variables within the build definition in Azure DevOps, as they contained secrets.
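For illustration, this is the sort of header block I mean, shown here being prepended to the file with PowerShell (the build number format and branch name are examples, not the values from my build):

# Prepend an illustrative build number format and PR trigger to the exported YAML
$header = @'
name: $(Major).$(Minor)$(Rev:.r)
pr:
  branches:
    include:
    - master
'@
$file = '.\MyExtension\azure-pipelines-build.yaml'
Set-Content -Path $file -Value ($header + "`n" + (Get-Content -Path $file -Raw))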

The final YAML file can be viewed here.

What I fixed in passing

In the past I used to package up my extensions twice, once packaged as private (for testing) and once as public. This was due to limitations of the Azure DevOps Marketplace and the release tasks I was using at the time. While I was at it, I took the chance to change to building only the public VSIX package, and updated my release pipeline process to dynamically inject the settings for private testing. This was done using the newer Azure DevOps Extensions Tasks.

As a side note, I had to upgrade to these newer release tasks anyway, as the older ones had ceased to work due to using old API calls.

Swapping the new build into the release process

To replace the old GUI build with the new YAML build I did the following:

  • Renamed my old GUI build and disabled it (disabling it is vital, else it continues to be triggered by GitHub PRs, even if the triggers are removed in the build)
  • Renamed my new YAML build to the old GUI build name (not vital, but it felt neater)
  • Updated my release pipeline to pick the new YAML build as opposed to the old GUI build. Even though the names were the same, their internal IDs are not, so this needs to be swapped. I made sure my ‘source alias’ did not change, so I did not have to make other changes to my release pipeline. 

Once this was done I triggered a new GitHub PR and everything worked as expected.

What Next

I have kept the old build around just in case there is a problem I have not spotted, but I intend to delete it soon.

I now need to make the same changes for all my other builds. The only difference from this process will be for builds that make use of Task Groups, such as all those for Node-based extensions. Task Groups cannot be exported as YAML at this time, so I will have to manually rebuild these steps in a text editor. This is more prone to human error, but I think it needs to be done.

So a nice back burner project. I will probably update them as I release new versions of the extensions.

SCCM OSD on Surface Pro 6

Today we attempted to re-image Rik’s new Surface Pro 6 using the usual set of task sequences that we have configured for all of the PCs that are in use, and hit an issue.

The task sequence failed (very quickly) with error 0x80070490. Looking at what was going on onscreen, it was obvious that the partitioning of the disk within the device was failing.

Initially I assumed that it was driver related and so pulled down the Surface Pro 6 driver pack from Microsoft, added it to SCCM and updated the boot media to include appropriate drivers. This didn’t solve the issue however.

Looking at the disk configuration, it became apparent that the disk number associated with the SSD within the Pro 6 was not the ‘0’ that I expected, but ‘2’ instead! It appears, following some reading, that the ‘disk’ in this device, which is a 1TB drive, is actually two SSDs configured as a RAID 0 set, hence the disk number being ‘2’.

Copying the task sequence that Rik wanted to use to deploy the OS and software to the device allowed us to modify the disk number that would be used to ‘2’, which allowed the task sequence to complete successfully.

We have a couple of options available to us for deployment of these task sequences in the future:

  • Create an additional device collection and populate with the Surface Pro 6 devices to target the modified task sequence and keep a separate task sequence for deploying the OS to these devices, or
  • Use some conditional queries to determine whether we’re dealing with a Surface Pro which has two disks configured as RAID 0 and hence has a disk ‘2’.

The latter is the more elegant method and means that I won’t need to keep even more task sequences around.

To implement this, we can utilise a couple of WMI queries to determine whether we’re dealing with one of these devices:

SELECT * FROM Win32_ComputerSystemProduct WHERE name = "Surface Pro"

SELECT * FROM Win32_DiskDrive WHERE Index = 2 AND InterfaceType = "SCSI"

Both are in the standard root\cimv2 namespace.
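Before wiring these into the task sequence, you can sanity-check them on a candidate device with PowerShell (this is just a quick test, not part of the task sequence itself):

# The first query matches Surface Pro hardware; the second only returns an object
# when a disk with index 2 is present, as on the 1TB RAID 0 configuration
Get-CimInstance -Namespace 'root\cimv2' -Query 'SELECT * FROM Win32_ComputerSystemProduct WHERE name = "Surface Pro"'
Get-CimInstance -Namespace 'root\cimv2' -Query 'SELECT * FROM Win32_DiskDrive WHERE Index = 2 AND InterfaceType = "SCSI"'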

Within the task sequence, the default UEFI partitioning step should target disk 0 and the options should look like this:

Surface Pro 6 disk configuration detection

The Surface Pro 6 1TB UEFI partitioning step should target disk 2 and the conditions should have ‘all’ rather than ‘none’ in the IF statement.

Configuring PowerChute Network Shutdown on Server Core

Everyone installing Hyper-V servers is installing them as Server Core servers, right?

I recently hit an issue configuring APC’s PowerChute Network Shutdown (PCNS) software on a Server Core installation of Windows Server 1809 (the most recent release of the semi-annual channel): while the installation appeared to complete successfully, I could not communicate with the service to configure it post-installation.

After a little digging, it turned out that the installer had created the firewall rule exemptions for the wrong profile (i.e. public rather than domain). The solution was to run the following PowerShell to update the profile for the PCNS firewall rules to match the network profile the server was operating on:

Get-NetFirewallRule | where {$_.DisplayName -like "PCNS*"} | Set-NetFirewallRule -Profile Domain

Once the firewall rules were updated, communication was restored and configuration could be completed from a browser running on another machine.
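If you want to check what is going on first, you can see which profile the server is actually on, and then confirm the PCNS rules match it:

# Show the network profile the server is connected on...
Get-NetConnectionProfile | Select-Object Name, NetworkCategory

# ...and list the PCNS rules with the profile they apply to
Get-NetFirewallRule | Where-Object { $_.DisplayName -like 'PCNS*' } | Format-Table DisplayName, Profile, Enabled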

Windows Admin Center Updating Automatically

When I updated the server on which it is installed the other day, it was great to see that Windows Admin Center is now updated automatically along with other Windows Server OS updates:

Windows Admin Center Update in Updates List

One fewer thing to have to check for manually!

DDD North is Go

I am super happy to announce that DDD North 2019 is Go.

Apologies for the slight false start, but DDD North 2019 will be hosted in the fair city of Hull at the University of Hull on the 2nd of March.

Submissions for sessions are open now on the DDDNorth site: http://www.dddnorth.co.uk/

Please submit a session and build an amazing developer day.



A task for documenting your Azure DevOps Pipeline extensions for YAML usage

I have posted in the past a quick script to generate some markdown documentation for the YAML usage of Azure DevOps Pipeline extensions. Well, I decided that having this script as a task itself would be a good idea, so I wrote it, and I am pleased to say I have just released it to the marketplace.

The YAML Documenter task scans an extension’s vss-extension.json and task.json files to find the details it needs to build the markdown documentation on the YAML usage. It can also, optionally, copy the extension’s readme.md as the extension’s primary documentation.

I am starting to use this extension, with my WIKIUpdater extension, in my release pipelines to make sure my extension’s GitHub Wiki is up to date.


It is going to take a bit of work to update all my pipelines, but the eventual plan is to use the YAML document generator in the builds, adding the readme and YAML markdown files to the build as artefacts, then deploying these files to the wiki in a later stage of the pipeline.

Hope some of you find it of use.

Programmatically adding User Capabilities to Azure DevOps Agents

I am automating the process by which we keep our build agents up to date. The basic process is to use a fork of the standard Microsoft Azure DevOps Pipeline agent image that includes the additional software we need, notably BizTalk.

Once I have the Packer-created VM up and running, I need to install the agent. This is well documented; just run .\config.cmd --help for details. However, there is no option to add user capabilities to the agent.
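For completeness, a typical unattended configuration looks something like this (the organisation, pool and agent names are examples); note there is no switch for user capabilities:

# Configure the agent unattended and run it as a service ($pat holds a personal access token)
.\config.cmd --unattended --url https://dev.azure.com/myorg --auth pat --token $pat --pool Default --agent BuildAgent01 --runAsService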

I know I could set them via environment variables, but I don’t want the same user capabilities on each agent on a VM (we use multiple agents on a single VM).

There was no documented Azure DevOps API I could find to add capabilities, but a bit of hacking around with Chrome dev tools and Postman got me a solution, which I have provided as a gist.
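The shape of the call is roughly as below. Treat the URL, api-version and payload as assumptions reverse-engineered from the traffic – the gist has the tested version:

# Set the user capabilities on an agent via the undocumented usercapabilities endpoint
# (the PUT replaces the full set, so include any existing capabilities too)
$org     = 'https://dev.azure.com/myorg'   # example organisation URL
$poolId  = 1    # look up via _apis/distributedtask/pools
$agentId = 42   # look up via _apis/distributedtask/pools/{poolId}/agents
$headers = @{ Authorization = 'Basic ' + [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(":$pat")) }
$body    = @{ 'Biztalk' = '2016' } | ConvertTo-Json

Invoke-RestMethod -Method Put -Uri "$org/_apis/distributedtask/pools/$poolId/agents/$agentId/usercapabilities?api-version=5.0-preview.1" -Headers $headers -ContentType 'application/json' -Body $body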

Azure Pipeline YAML support on VSCode

A major problem when moving from the graphic editing of Azure Pipeline builds to YAML has been the difficulty in knowing the options available, and of course making typos.

Microsoft have just released a VSCode extension to help address this problem – it is called Azure Pipelines.

I have yet to give it a really good workout, but first impressions are good.

It does not remove the need for good documentation of task options – there is still a need for my script to generate YAML documentation from a task.json file – but anything extra to ease editing helps.

Logic App Flat File Schemas and BizTalk Flat File Schemas

I recently started working on a new Logic App for a customer using the Enterprise Integration Pack for Visual Studio 2015. I was greeted with a familiar sight: the schema generation tools from BizTalk, but with a new lick of paint.

The Logic App requires the use of Flat File schemas, so I knocked up a schema from the instance I’d been provided and went to validate it against the instance used to generate it (since it should validate).

My flat file was, to be frank, a bit of a pain in that it had ragged endings; that is to say, some sample rows might look a bit like:

1,2,3,4,5,6

1,2,3,4,5

1,2,3,4,5,6

I’ve worked with this before, but couldn’t quite remember exactly how I solved it, other than by tinkering with the element properties.

I generated the schema against the lines with the additional column and erroneously set the last field’s Nillable property to True. When I went to validate the instance, lo and behold, it wasn’t a valid instance and I had little information about why.

So I fired up my BizTalk 2013 R2 virtual machine (I could have used my 2016 one to be fair if I hadn’t sent it to the farm with old yellow last week) and rinsed and repeated the Flat File Schema Wizard.

This time I got a bit more information, namely that the sample I’d been provided was missing a CR/LF on the final line, and that the Nillable property I’d set on the last column was throwing a wobbler by messing up the following lines.

Setting the field’s Nillable property back to False, and its Min Occurs and Max Occurs to 0 and 1 respectively, gave me a valid working schema.

So I copied the schema back to my Logic Apps VM and attempted to revalidate my file (with its final line CR/LF amended). To my annoyance, invalid instance!

I was quite frankly boggled by this, but some poking around the internet led me to this fellow’s blog:

https://blogs.msdn.microsoft.com/david_burgs_blog/2018/03/26/generate-and-validate-flat-file-native-instances-from-flat-file-schemas/

In short, there’s an attribute on a Flat File schema which denotes the extension class to be used by the schema editor. When built by the BizTalk Flat File Schema Wizard it’s set to:

Microsoft.BizTalk.FlatFileExtension.FlatFileExtension

When generated by the Enterprise Integration Pack it’s:

Microsoft.Azure.Integration.DesignTools.FlatFileExtension.FlatFileExtension

I changed this attribute value in my BizTalk-generated Flat File schema and, presto, the schema could validate the instance it was generated from.
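If you have a few schemas to fix up, the attribute swap is easy to script (the file name here is an example):

# Swap the BizTalk extension class for the Enterprise Integration Pack one
$path = '.\MySchema.xsd'
$old  = 'Microsoft.BizTalk.FlatFileExtension.FlatFileExtension'
$new  = 'Microsoft.Azure.Integration.DesignTools.FlatFileExtension.FlatFileExtension'
(Get-Content -Path $path -Raw).Replace($old, $new) | Set-Content -Path $path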

So, in short, the flavour of the schema designer tools in the Enterprise Integration Pack seems to report errors and verbosity a little differently to its BizTalk ancestor; it still throws out mostly the same information, but in different places:

  • EIP

You only get a generic error message in the output log. Go and examine the errors log for more information.


  • BizTalk

You get the errors in the output log, and in the error log.


In my case, the combination of my two errors (the slightly malformed flat file, and my Nillable field change) in the EIP only gave me the error “Root element is missing”, which wasn’t particularly helpful; the BizTalk tooling gave me a better fault diagnosis.

On the bright side, the two are more or less interchangeable. Something to bear in mind if you’re struggling with a Flat File schema and have a BizTalk development environment on hand.