SCCM OSD on Surface Pro 6

Today we attempted to re-image Rik’s new Surface Pro 6 using the usual set of task sequences we have configured for all of our PCs, and hit an issue.

The task sequence failed (very quickly) with error 0x80070490. Looking at what was going on onscreen, it was obvious that the partitioning of the disk within the device was failing.

Initially I assumed that it was driver related, so I pulled down the Surface Pro 6 driver pack from Microsoft, added it to SCCM and updated the boot media to include the appropriate drivers. This didn’t solve the issue, however.

Looking at the disk configuration, it became apparent that the disk number associated with the SSD within the Pro 6 was not the ‘0’ I expected, but ‘2’! It appears, following some reading, that the ‘disk’ in this device, which is a 1TB drive, is actually two SSDs configured as a RAID 0 set; presumably the two underlying SSDs take disk numbers 0 and 1, with the RAID volume presented as disk 2.

Copying the task sequence that Rik wanted to use to deploy the OS and software to the device allowed us to change the disk number used in the partitioning step to ‘2’, which let the task sequence complete successfully.

We have a couple of options available to us for deployment of these task sequences in the future:

  • Create an additional device collection and populate with the Surface Pro 6 devices to target the modified task sequence and keep a separate task sequence for deploying the OS to these devices, or
  • Use some conditional queries to determine whether we’re dealing with a Surface Pro which has two disks configured as RAID 0 and hence has a disk ‘2’.

The latter is the more elegant method and means that I won’t need to keep even more task sequences around.

To implement this, we can utilise a couple of WMI queries to determine whether we’re dealing with one of these devices:

SELECT * FROM Win32_ComputerSystemProduct WHERE Name = "Surface Pro"

SELECT * FROM Win32_DiskDrive WHERE Index = 2 AND InterfaceType = "SCSI"

Both are in the standard root\cimv2 namespace.
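If you want to sanity-check these conditions on a device before wiring them into the task sequence, a quick PowerShell sketch (run on the device itself, assuming the CIM cmdlets are available) would be:

# Check the model name reported by the firmware
Get-CimInstance -Namespace root\cimv2 -ClassName Win32_ComputerSystemProduct | Select-Object Name

# Check whether a disk with index 2 on the SCSI interface is present
Get-CimInstance -Namespace root\cimv2 -ClassName Win32_DiskDrive |
    Where-Object { $_.Index -eq 2 -and $_.InterfaceType -eq 'SCSI' } |
    Select-Object Index, Model, InterfaceType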

Within the task sequence, the default UEFI partitioning step should target disk 0 and the options should look like this:

Surface Pro 6 disk configuration detection

The Surface Pro 6 1TB UEFI partitioning step should target disk 2 and the conditions should have ‘all’ rather than ‘none’ in the IF statement.

Configuring PowerChute Network Shutdown on Server Core

Everyone installing Hyper-V servers is installing them as Server Core servers, right?

I recently hit an issue configuring APC’s PowerChute Network Shutdown (PCNS) software on a Server Core installation of Windows Server, version 1809 (the most recent semi-annual channel release): while the installation appeared to complete successfully, I could not communicate with the service to configure it post-installation.

After a little digging, it turned out that the installer had created the firewall rule exemptions for the wrong profile (i.e. Public rather than Domain). The solution was to run the following PowerShell to update the profile for the PCNS firewall rules to match the network profile the server was operating on:

Get-NetFirewallRule | where {$_.DisplayName -like "PCNS*"} | Set-NetFirewallRule -Profile Domain

Once the firewall rules were updated, communication was restored and configuration could be completed from a browser running on another machine.
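To confirm the change, the rules’ profiles can be checked with something like this (using the same filter):

Get-NetFirewallRule | where {$_.DisplayName -like "PCNS*"} | Select-Object DisplayName, Profile, Enabled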

Windows Admin Center Updating Automatically

It was great to see, when I updated the server on which it is installed the other day, that Windows Admin Center is now updated automatically along with other Windows Server OS updates:

Windows Admin Center Update in Updates List

One fewer thing to have to check for manually!

DDD North is Go

I am super happy to announce that DDD North 2019 is Go.

Apologies for the slight false start, but DDD North 2019 will be hosted in the fair city of Hull, at the University of Hull, on the 2nd of March.

Submissions for sessions are open now on the DDD North site: http://www.dddnorth.co.uk/

Please submit a session and help build an amazing developer day.



A task for documenting your Azure DevOps Pipeline extensions for YAML usage

In the past I have posted a quick script to generate some markdown documentation for the YAML usage of Azure DevOps Pipeline extensions. Well, I decided that having this script as a task itself would be a good idea, so I wrote it, and I’m pleased to say I have just released it to the marketplace.

The YAML Documenter task scans an extension’s vss-extension.json and task.json files to find the details it needs to build the markdown documentation of the YAML usage. It can also, optionally, copy the extension’s readme.md as the extension’s primary documentation.
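As a rough illustration of the general idea (this is a minimal sketch, not the task’s actual implementation, and the file paths are just examples), the core of the approach looks like this in PowerShell:

# Illustrative sketch only - reads a task.json and emits a basic YAML usage snippet as markdown
$task = Get-Content -Raw -Path '.\MyTask\task.json' | ConvertFrom-Json   # example path
$md = @("## YAML usage for $($task.friendlyName)", '', "- task: $($task.name)@$($task.version.Major)", '  inputs:')
foreach ($input in $task.inputs) {
    $md += "    $($input.name): '$($input.defaultValue)'  # $($input.helpMarkDown)"
}
$md | Set-Content -Path '.\YAMLUsage.md'   # example output file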

I am starting to use this extension, with my WIKIUpdater extension, in my release pipelines to make sure my extension’s GitHub Wiki is up to date.


It is going to take a bit of work to update all my pipelines, but the eventual plan is to use the YAML document generator in the builds, adding the readme and YAML markdown files to the build as artefacts, then deploying these files to the wiki in a later stage of the pipeline.

Hope some of you find it of use.

Programmatically adding User Capabilities to Azure DevOps Agents

I am automating the process by which we keep our build agent up to date. The basic process is to use a fork of the standard Microsoft Azure DevOps Pipeline agent that has the additional code we need included, notably BizTalk.

Once I have the Packer-created VM up and running, I need to install the agent. This is well documented; just run .\config.cmd --help for details. However, there is no option to add user capabilities to the agent.

I know I could set them via environment variables, but I don’t want the same user capabilities on each agent on a VM (we use multiple agents on a single VM).

There was no documented Azure DevOps API I could find to add capabilities, but a bit of hacking around with Chrome Dev Tools and Postman got me a solution, which I have provided as a Gist.
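The Gist has the working script, but the rough shape of the call is shown below. Note the endpoint and api-version here are my best guess at what the web UI uses (they are undocumented), so treat this as illustrative only:

# Illustrative only - endpoint and api-version are assumptions; see the Gist for the working version
$org     = 'https://dev.azure.com/myorg'      # example organisation URL
$pat     = '<personal access token>'
$poolId  = 1                                  # example agent pool id
$agentId = 42                                 # example agent id
$headers = @{ Authorization = 'Basic ' + [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(":$pat")) }
$body    = @{ Biztalk = '2016' } | ConvertTo-Json    # example user capabilities to set

Invoke-RestMethod -Method Put `
    -Uri "$org/_apis/distributedtask/pools/$poolId/agents/$agentId/usercapabilities?api-version=5.0-preview.1" `
    -Headers $headers -ContentType 'application/json' -Body $body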

Azure Pipeline YAML support on VSCode

A major problem when moving from the graphical editing of Azure Pipeline builds to YAML has been the difficulty of knowing the options available, and of course making typos.

Microsoft have just released a VSCode extension to help address this problem: it is called Azure Pipelines.

I have yet to give it a really good workout, but first impressions are good.

It does not remove the need for good documentation of task options (there is still a need for my script to generate YAML documentation from a task.json file), but anything extra to ease editing helps.

Logic App Flat File Schemas and BizTalk Flat File Schemas

I recently started working on a new Logic App for a customer using the Enterprise Integration Pack for Visual Studio 2015, and was greeted with a familiar sight: the schema generation tools from BizTalk, but with a new lick of paint.

The Logic App requires the use of Flat File schemas, so I knocked up a schema from the instance I’d been provided and went to validate it against the instance used to generate it (since it should validate).

My flat file was, to be frank, a bit of a pain in that it had ragged line endings; that is to say, some sample rows might look a bit like:

1,2,3,4,5,6

1,2,3,4,5

1,2,3,4,5,6

I’ve worked with this before, but couldn’t quite remember exactly how I solved it, other than by tinkering with the element properties.

I generated the schema against the lines with the additional column and erroneously set the last field’s Nillable property to True. When I went to validate the instance, lo and behold, it wasn’t a valid instance and I had little information about why.

So I fired up my BizTalk 2013 R2 virtual machine (I could have used my 2016 one to be fair if I hadn’t sent it to the farm with old yellow last week) and rinsed and repeated the Flat File Schema Wizard.

This time I got a bit more information, namely that the sample I’d been provided was missing a CR/LF on the final line, and that the Nillable property I’d set on the last column was throwing a wobbler by messing up the following lines.

Setting the field’s Nillable property back to False, and its Min Occurs and Max Occurs to 0 and 1 respectively, gave me a valid working schema.

So I copied the schema back to my Logic Apps VM and attempted to revalidate my file (with its final line CR/LF amended). To my annoyance, invalid instance!

Quite frankly I was boggled by this, but some poking around the internet led me to this fellow’s blog:

https://blogs.msdn.microsoft.com/david_burgs_blog/2018/03/26/generate-and-validate-flat-file-native-instances-from-flat-file-schemas/

In short, there’s an attribute added to a Flat File schema which denotes the extension class to be used by the schema editor. When the schema is built by the BizTalk Flat File Schema Wizard it’s set to:

Microsoft.BizTalk.FlatFileExtension.FlatFileExtension

When generated by the Enterprise Integration Pack it’s:

Microsoft.Azure.Integration.DesignTools.FlatFileExtension.FlatFileExtension

I changed this attribute value in my BizTalk-generated Flat File schema and, presto, the schema could validate the instance it was generated from.
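If you find yourself doing this swap repeatedly, it is just a string replacement in the .xsd; something along these lines (the file name is an example):

# Swap the designer extension class in a BizTalk-generated schema so the EIP tooling will accept it
$path = '.\MyFlatFile.xsd'   # example file name
(Get-Content -Raw -Path $path) -replace 'Microsoft\.BizTalk\.FlatFileExtension\.FlatFileExtension', 'Microsoft.Azure.Integration.DesignTools.FlatFileExtension.FlatFileExtension' | Set-Content -Path $path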

So, in short, the flavour of the schema designer tools in the Enterprise Integration Pack seems to throw out errors and verbosity a little differently to its BizTalk ancestor; it still throws out mostly the same information, but in different places:

  • EIP

You only get a generic error message in the output log. Go and examine the errors log for more information.


  • BizTalk

You get the errors in the output log, and in the error log.


In my case the combination of my two errors (the flat file being slightly malformed, and my Nillable field change) only gave me a “Root element is missing” error in the EIP, which wasn’t particularly helpful; the BizTalk tooling gave me a better fault diagnosis.

On the bright side the two are more or less interchangeable. Something to bear in mind if you’re struggling with a Flat File schema and have a BizTalk development environment on hand.

Keeping Azure DevOps organisations’ inherited process templates in sync

The problem

If you are like me, for historic reasons you have multiple Azure DevOps organisations (instances) backed by the same Azure Active Directory (AAD). In my case, for example, one was created when Azure DevOps was first released as TFSPreview.com, another came from our migration from on-premises TFS using the DB Migration Tools method, and I have others.

I make active use of all of these for different purposes, though one is primary with the majority of work done on it, and so I want to make sure the inherited process templates are the same on each of them, using the primary organisation as the master for customisation.

Note I have already converted all my old on-premises XML process models to inherited process templates.

There is no out-of-the-box way to keep processes in sync, but it is possible using a few tools. The main one is the Microsoft Process Migrator for Node on GitHub.

The solution

Firstly I cloned the Microsoft Process Migrator and built it as per the instructions on the repo.

I created a config file and then ran the tool. On one organisation it ran fine. However on another I had errors like:

[ERROR] [2018-11-26T14:35:44.880Z] Process import validation failed. Process with same name already exists on target account.
[ERROR] [2018-11-26T14:39:54.206Z] Import failed, see log file for details. Create field 'Location' failed, see log for details

This was because I had in the past manually duplicated the inherited process template onto this organisation, so there was a process with the same name and fields of the same names.

The first error was easy to fix: import the template with a new (temporary) name.

The second is more problematic. I had two choices: manually tidy up the duplicated fields on the target organisation, or script the clean-up.

As I only had a few duplicated, unused fields on a single organisation I picked the former. If I had many organisations to sort out I would have picked the latter.

So my process ended up being:

  1. Run the Microsoft Process Migrator to migrate ‘My Process’ on the source organisation to ‘My Process 1’ on the target organisation
  2. It gave an error, providing the name of the duplicated field
  3. I checked on the target organisation using a work item query that the field was empty or only had defaulted data (if it had not been I would have used Martin’s tool to migrate the data to a temporary field and then deleted the problem field, moving the data back to the correct field from the temporary field when the import of the process template was completed)
  4. I deleted the field from the work item type that referenced it
  5. I deleted the field
  6. I deleted the process template ‘My Process 1’, as a failed import leaves a half-created process
  7. I went back to step 1 and repeated until the import completed without error
  8. I tested my migrated inherited process was OK
  9. On the target organisation I then renamed ‘My Process’ to ‘My Process – Old’
  10. I then renamed ‘My Process 1’ to ‘My Process’
  11. In my case I also made ‘My Process’ the default; you might not do this if another process is the default, but step 13 does require that the process template being deleted is not the default
  12. I moved all the team projects using the process template now called ‘My Process – Old’ to ‘My Process’
  13. I was then able to delete the process template ‘My Process – Old’, as it had no associated team projects and was not the default

As I customise my primary organisation’s process templates I can repeat this process to keep the processes in sync between organisations. Note that in future migrations I won’t have to do steps 2 to 6, as there are no longer any manually created duplicated fields, so it should be more straightforward.

So this is a valid solution until similar functionality is built into Azure DevOps, and there is currently no sign of that on the roadmap.

DPI problems after upgrading from Camtasia 8 to 2018

This is another of those posts I do so I don’t forget how I fixed something.

I have a requirement to record videos for a client in 720p resolution. As I use a Surface Book with a high-resolution screen, I have found the best way to do this is to set my Windows screen resolution to 1280×720 and do all my recording at this native resolution. Any attempt to record smaller portions of a screen, or to scale video in production, has led to quality problems, especially as remote desktops within remote desktops are required.

This had been working fine with Camtasia 8, but when I upgraded to Camtasia 2018.0.7 I hit problems. The whole UI of the tool was unusable; it ignored the resizing/DPI changes.

The only fix I could find was to create a desktop shortcut to the EXE, open Properties > Compatibility > Change high DPI settings, check ‘Override high DPI scaling behaviour’ and set it to ‘System’.
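If you would rather script that setting than click through the dialog, the same override appears to live under the per-user AppCompatFlags registry key. The flag value below is an assumption on my part, so check it against what the dialog actually writes on your machine:

# Assumption: '~ DPIUNAWARE' corresponds to the 'System' high DPI scaling override - verify before relying on it
$layers = 'HKCU:\Software\Microsoft\Windows NT\CurrentVersion\AppCompatFlags\Layers'
$exe    = 'C:\Program Files\TechSmith\Camtasia 2018\CamtasiaStudio.exe'   # example install path
New-Item -Path $layers -Force | Out-Null          # ensure the key exists
New-ItemProperty -Path $layers -Name $exe -Value '~ DPIUNAWARE' -PropertyType String -Force | Out-Null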


Even after doing this I still found the preview in the editing screen a little blurred, but usable. The final produced MP4s were OK.