Azure DevOps Services & Server Alerts DSL – an alternative to TFS Aggregator?

Whilst listening to a recent Radio TFS episode it was mentioned that TFS Aggregator uses the C# SOAP-based Azure DevOps APIs, and hence needs a major re-write as these APIs are being deprecated.

Did you know that there was a REST API alternative to TFS Aggregator?

My Azure DevOps Services & Server Alerts DSL is out there, and has been for a while, but I don’t think it is used by many people. It aims to do the same as TFS Aggregator, but is based around Python scripting.

However, I do have to say it is more limited in flexibility, as it has only been developed for my (and a few of my clients’) needs, but it is an alternative that is based on the REST APIs.

Scripts take the following form; this one changes the state of a parent work item when all its children are done:

import sys
# Expect 2 args: the event type and the unique ID of the work item
if sys.argv[0] == "workitem.updated":
    wi = GetWorkItem(int(sys.argv[1]))
    parentwi = GetParentWorkItem(wi)
    if parentwi is None:
        LogInfoMessage("Work item '" + str(wi.id) + "' has no parent")
    else:
        LogInfoMessage("Work item '" + str(wi.id) + "' has parent '" + str(parentwi.id) + "'")

        # Find any child work items that are not yet 'Done'
        results = [c for c in GetChildWorkItems(parentwi) if c["fields"]["System.State"] != "Done"]
        if len(results) == 0:
            LogInfoMessage("All child work items are 'Done'")
            parentwi["fields"]["System.State"] = "Done"
            UpdateWorkItem(parentwi)
            msg = "Work item '" + str(parentwi.id) + "' has been set as 'Done' as all its child work items are done"
            SendEmail("richard@blackmarble.co.uk", "Work item '" + str(parentwi.id) + "' has been updated", msg)
            LogInfoMessage(msg)
        else:
            LogInfoMessage("Not all child work items are 'Done'")
else:
    LogErrorMessage("Was not expecting to get here")
    LogErrorMessage(sys.argv)

I have recently done a fairly major update to the project. The key changes are:

  • Rename of project, repo, and namespaces to reflect Azure DevOps (the namespace change is a breaking change for existing users)
  • The script that is run can now be selected by
    • A fixed file name for the web instance running the service
    • The event type sent to the service
    • The subscription ID, thus allowing many scripts (new)
  • A single instance of the web site running the events processor can now handle calls from many Azure DevOps instances.
  • Improved installation process on Azure (well, at least I have tried to make the documentation clearer and sorted out a couple of MSDeploy issues)

Full details of the project can be seen on the solution’s WIKI; maybe you will find it of use. Let me know if the documentation is good enough.

Major new release of my VSTS Cross Platform Extension to build Release Notes

Today I have published a major new release, V2, of my VSTS Cross Platform Extension to build release notes. This new version is all down to the efforts of Greg Pakes, who has completely re-written the task to use newer VSTS APIs.

A minor issue is that this re-write has introduced a couple of breaking changes, as detailed below and on the project wiki.

  • oAuth script access has to be enabled on the agent running the task


  • There are minor changes in the template format, but for the better, as it means both TFVC and Git based releases now use a common template format. Samples can be found in the project repo

Because of the breaking changes, we made the decision to release both V1 and V2 of the task in the same extension package, so no one is forced to update unless they wish to. This is a technique I have not tried before, but it seems to work well in testing.

Hope people still find the task of use, and thanks again to Greg for all the work on the extension.

Using VSTS Gates to help improve my deployment pipeline of VSTS Extensions to the Visual Studio Marketplace

My existing VSTS CI/CD process has a problem: the deployment of a VSTS extension, from the moment it is uploaded to when its tasks are available to a build agent, is not instantaneous. The process can potentially take a few minutes to roll out. The problem this delay causes is a perfect candidate for using VSTS Release Gates; using a gate to make sure the expected version of a task is available to an agent before running the next stage of the CD pipeline, e.g. waiting after deploying a private build of an extension before trying to run functional tests.

The problem is: how do you achieve this with the current VSTS gate options?

What did not work

My first thought was to use the Invoke HTTP REST API gate, calling the VSTS API https://<your vsts instance name>.visualstudio.com/_apis/distributedtask/tasks/<GUID of Task>. This API call returns a block of JSON containing details about the deployed task visible to the specified VSTS instance. In theory you can parse this data with a JSONPATH query in the gate's success criteria parameter to make sure the correct version of the task is deployed, e.g. eq($.value[?(@.name == "BuildRetensionTask")].contributionVersion, "1.2.3")

However, there is a problem. At this time the Invoke HTTP REST API gate does not support the == equality operator in its success criteria field. I understand this will be addressed in the future, but the fact it is currently missing is a blocker for my current needs.

Next I thought I could write a custom VSTS gate. These are basically ‘run on server’ tasks with a suitably crafted JSON manifest. The problem here is that this type of task does not allow any code (Node.js or PowerShell) to be run. They only have a limited capability to invoke HTTP APIs or write messages to a Service Bus, so I could not implement the code I needed to process the API response. Another dead end.

What did work

The answer, after a suggestion from the VSTS Release Management team at Microsoft, was to try the Azure Function gate.

To do this I created a new Azure Function using the Azure Portal, picking the consumption billing model and C#, and securing the function with a function key: basically the default options.

I then added the C# function code (stored in GitHub) to my newly created Azure Function. This function code takes:

  • The name of the VSTS instance
  • A personal access token (PAT) to access the VSTS instance
  • The GUID of the task to check for
  • The version to check for

It then returns a JSON block containing true or false based on whether the required task version can be found. If any of the parameters are invalid an API error is returned.

By passing in this set of arguments my idea was that a single Azure Function could be used to check for the deployment of all my tasks.
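
The real logic lives in the C# function linked above; as a rough PowerShell sketch of the same check (the response field names used here are assumptions about the distributedtask/tasks endpoint's JSON shape):

# Sketch only: approximates the Azure Function's check in PowerShell.
# $instance, $pat, $taskGuid and $version mirror the function's JSON body parameters.
param($instance, $pat, $taskGuid, $version)

# Build a basic auth header from the PAT (standard VSTS REST authentication)
$auth = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(":$pat"))
$headers = @{ Authorization = "Basic $auth" }

# Ask VSTS which versions of the task it can currently see
$uri = "https://$instance.visualstudio.com/_apis/distributedtask/tasks/$taskGuid"
$response = Invoke-RestMethod -Uri $uri -Headers $headers

# 'Deployed' is true only if the expected version is in the returned list
$found = $response.value | Where-Object {
    "$($_.version.major).$($_.version.minor).$($_.version.patch)" -eq $version
}
@{ Deployed = [bool]$found } | ConvertTo-Json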

Note: I do realise I could also have created a release pipeline for the Azure Function, but I chose to just create it via the Azure Portal. I know this is not best practice, but this was just a proof of concept. As usual, the danger here is that this proof of concept might be one of those that is too useful and lives forever!

To use the Azure Function

Using the Azure Function is simple:

    • Add an Azure Function gate to a VSTS release
    • Set the URL parameter for the Azure Function. This value can be found in the Azure Portal. Note that you don’t need the Function Code query parameter in the URL, as this is provided with the next gate parameter. I chose to use a variable group variable for this parameter so it was easy to reuse between many CD pipelines
    • Set the Function Key parameter for the Azure Function; again you get this from the Azure Portal. This time I used a secure variable group variable
    • Set the Method parameter to POST
    • Set the Header content type as JSON
{
      "Content-Type": "application/json"
}
    • Set the Body to contain the details of the VSTS instance and task to check. This time I used a mixture of variable group variables, release specific variables (the GUID) and environment build/release variables. The key here is that I got the version from the primary release artifact $(BUILD.BUILDNUMBER), so the correct version of the task is tested for automatically
{
     "instance": "$(instance)",
     "pat": "$(pat)",
     "taskguid": "$(taskGuid)",
     "version": "$(BUILD.BUILDNUMBER)"
}
    • Finally set the Advanced/Completion Event to ApiResponse with the success criteria of
    eq(root['Deployed'], 'true')

Once this was done I was able to use the Azure Function as a VSTS gate as required.


Summary

So I now have a gate that makes sure that for a given VSTS instance a task of a given version has been deployed.

If you need this functionality all you need to do is create your own Azure Function instance, drop in my code and configure the VSTS gate appropriately.

When the == equality operator becomes available for JSONPATH in the REST API gate I might consider swapping back to a basic REST call, as it is less complex to set up, but we shall see. The Azure Function model does appear to work well.

New release of my ‘Generate Parameters.xml’ tool to add support for app.config files

I recently released an updated version of my Generate Parameters.xml tool for Visual Studio. This release adds support for generating parameters.xml files from app.config files as well as web.config files.

You might ask why add support for app.config files when the parameters.xml model is only part of WebDeploy?

Well, at Black Marble we like the model of updating a single file using a tokenised set of parameters from within our DevOps CI/CD pipelines. It makes it easy to take release variables and write them, at deploy time, into a parameters.xml file to be injected into a machine’s configuration. We wanted to extend this to configuring services and the like, where for example a DLL based service is configured with a mycode.dll.config file.

The injection process of the parameters.xml into a web.config file is automatically done as part of the WebDeploy process (or a VSTS extension wrapping WebDeploy), but if you want to use a similar model for app.config files then you need some PowerShell.

For example, if we have the app.config:

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
    <applicationSettings>
      <Service.Properties.Settings>
        <setting name="Directory1" serializeAs="String">
          <value>C:\ABC1111</value>
        </setting>
        <setting name="Directory2" serializeAs="String">
          <value>C:\abc2222</value>
        </setting>
      </Service.Properties.Settings>
    </applicationSettings>
    <appSettings>
      <add key="AppSetting1" value="123" />
      <add key="AppSetting2" value="456" />
    </appSettings>
    <startup>
        <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.6" />
    </startup>
</configuration>

My extension generates a tokenised parameters.xml file:

<parameters>
  <parameter name="AppSetting1" description="Description for AppSetting1" defaultvalue="__APPSETTING1__" tags="">
     <parameterentry kind="XmlFile" scope="\\App.config$" match="/configuration/appSettings/add[@key='AppSetting1']/@value" />
  </parameter>
  <parameter name="AppSetting2" description="Description for AppSetting2" defaultvalue="__APPSETTING2__" tags="">
     <parameterentry kind="XmlFile" scope="\\App.config$" match="/configuration/appSettings/add[@key='AppSetting2']/@value" />
  </parameter>
  <parameter name="Directory1" description="Description for Directory1" defaultvalue="__DIRECTORY1__" tags="">
     <parameterentry kind="XmlFile" scope="\\App.config$" match="/configuration/applicationSettings/Service.Properties.Settings/setting[@name='Directory1']/value/text()" />
  </parameter>
  <parameter name="Directory2" description="Description for Directory2" defaultvalue="__DIRECTORY2__" tags="">
     <parameterentry kind="XmlFile" scope="\\App.config$" match="/configuration/applicationSettings/Service.Properties.Settings/setting[@name='Directory2']/value/text()" />
  </parameter>
 </parameters>

The values in this parameters.xml file can be updated using a CI/CD replace tokens task; we use the Replace Tokens task from Colin’s ALM Corner Build & Release Tools, in exactly the same way as we would for a web.config.

Finally, some PowerShell can be used to update the app.config from this parameters.xml.
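
A minimal sketch of the idea, assuming the tokens in the parameters.xml have already been replaced with real values (the file paths are illustrative):

# Sketch only: applies each parameterentry in a parameters.xml to a config file.
# Assumes tokens such as __APPSETTING1__ have already been replaced with real values.
param(
    [string]$ParametersFile = ".\parameters.xml",
    [string]$ConfigFile = ".\mycode.dll.config"
)

[xml]$parameters = Get-Content $ParametersFile
[xml]$config = Get-Content $ConfigFile

foreach ($parameter in $parameters.parameters.parameter) {
    foreach ($entry in $parameter.parameterentry) {
        # The match expression is an XPath pointing at either an attribute
        # (.../@value) or a text node (.../value/text()); both expose .Value
        foreach ($node in $config.SelectNodes($entry.match)) {
            $node.Value = $parameter.defaultvalue
        }
    }
}

$config.Save((Resolve-Path $ConfigFile).Path)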

This gives a consistent way of updating configuration files for both web.config and app.config files.

Creating test data for my Generate Release Notes Extension for use in CI/CD process

As part of the continued improvement to my CI/CD process I needed to provide a means so that whenever I test my Generate Release Notes task, within its CI/CD process, new commits and work item associations are made. This is required because the task only picks up new commits and work items since the last successful run of a given build. So if the last release of the task extension was successful, the next set of tests would have no associations to go in the release notes, not exactly exercising all the code paths!

In the past I added this test data by hand, a new manual commit to the repo prior to a release; but why have a dog and bark yourself? Better to automate the process.

This can be done using a PowerShell script, run inline or stored in the build’s source repo and run within a VSTS build. You can pass in the required parameters, but I set sensible defaults for my purposes.
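
A minimal sketch of such a script (the repo URL, work item ID and file name are hypothetical placeholders) might look like this:

# Sketch only: commits a trivial change linked to a work item, so the next
# run of the Generate Release Notes task has fresh associations to pick up.
param(
    [string]$RepoUrl = "https://myinstance.visualstudio.com/_git/MyRepo", # hypothetical
    [string]$WorkItemId = "123",                                          # hypothetical
    [string]$ScratchFile = "testdata.txt"
)

# Use the build's OAuth token to authenticate; the agent phase must have
# 'Allow scripts to access OAuth token' enabled
git -c http.extraheader="AUTHORIZATION: bearer $env:SYSTEM_ACCESSTOKEN" clone $RepoUrl repo
Set-Location repo
git config user.email "buildagent@example.com"
git config user.name "Build Agent"

# Make a trivial change so there is something to commit
Add-Content -Path $ScratchFile -Value ("Test data generated {0}" -f (Get-Date))

git add $ScratchFile
# Mentioning #<id> in the message associates the commit with that work item
git commit -m "Adding test data #$WorkItemId"
git push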

For this PowerShell code to work you do need to make some security changes to allow the build agent service user to write to the Git repo. This is documented by Microsoft.

The PowerShell task to run this code is placed in a build as the only task.


This build is then triggered as part of the release process.


Note that the triggering of this build has to be such that it runs on a non-blocking build agent as discussed in my previous posts. In my case I trigger the build to add the extra commits and work items just before triggering the validation build on my private Azure hosted agent.

Now, there is no reason you can’t just run the PowerShell directly within the release if you want to. I chose to use a build so that it could be reused between different VSTS extension CI/CD pipelines; remember I have two Generate Release Notes extensions, PowerShell and Node.js based.

So another step to fully automating the whole release process.

Announcing a new VSTS Extension for Starting and Stopping Azure DevTest Labs VMs

Background

I have recently been posting on using Azure to host private VSTS build/release agents to avoid agent queue deadlocking issues with more complex release pipelines.

One of the areas discussed is reducing the cost of running a private agent in Azure by only running the agent within a limited time range, when you guess it might be needed. I have done this using the DevTest Labs Auto Start and Auto Stop features. This works, but is it not better to start the agent VM only when it is actually needed, rather than when you guess it might be? I need this private agent only when working on my VSTS extensions, not something I do every day. Why waste CPU cycles that are never used?

New VSTS Extension

I had expected there would already be a VSTS extension to start and stop DevTest Labs VMs, but the Microsoft provided extension for DevTest Labs only provides tasks for the creation and deletion of VMs within a lab.

So I am pleased to announce the release of my new DevTest Labs VSTS Extension to fill this gap, adding tasks to start and stop a DevTest Lab VM on demand from within a build or a release.

My Usage

I have been able to use the tasks in this extension to start my private Azure hosted agent only when I need it for functional tests within a release.

However, they could equally be used for a variety of different testing scenarios where any form of pre-built/configured VM needs to be started or stopped, as opposed to the slower process of creating/deploying a new DevTest Labs VM.

In my case I added an extra agent phase to my release pipeline to start the VM prior to it being needed.


I could also have used another agent phase to stop the VM once the tests were completed. However, I made the call to leave the VM running and let DevTest Labs’ Auto Stop shut it down at the end of the day. The reason for this is that VM start up and shutdown are still fairly slow, a minute or two, and I often find I need to run a set of functional tests a few times during my development cycle, so it is a bit more efficient to leave the VM running until the end of the day, only taking the start-up cost once.

You may of course have different needs, hence my providing both the Start and Stop tasks.

Development

This new extension aims to act as a supplement to the Microsoft provided Azure DevTest Labs extension. Hence, to make development and adoption easier, it uses exactly the same source code structure and task parameters as the Microsoft provided extension. The task parameters are:

  • Azure RM Subscription – Azure Resource Manager subscription to configure before running.
  • Source Lab VM ID – Resource ID of the source lab VM. The source lab VM must be in the selected lab, as the custom image will be created using its VHD file. You can use any variable such as $(labVMId), the output of calling Create Azure DevTest Labs VM, that contains a value in the form /subscriptions/{subId}/resourceGroups/{rgName}/providers/Microsoft.DevTestLab/labs/{labName}/virtualMachines/{vmName}.

The issue I had was that the DevTest Labs PowerShell API did not provide a command to start or stop a VM in a lab. I needed to load the Azure PowerShell library to use the Invoke-AzureRmResourceAction command. This requires you to first call Login-AzureRmAccount to authenticate before making the actual Invoke-AzureRmResourceAction call, which in turn required a bit of extra code to get and reuse the AzureRM endpoint to find the authentication details.

# Get the parameters
$ConnectedServiceName = Get-VstsInput -Name "ConnectedServiceName"
# Get the endpoint from the name passed as a parameter
$Endpoint = Get-VstsEndpoint -Name $ConnectedServiceName -Require
# Get the service principal authentication details from the endpoint
$clientID = $Endpoint.Auth.parameters.serviceprincipalid
$key = $Endpoint.Auth.parameters.serviceprincipalkey
$tenantId = $Endpoint.Auth.parameters.tenantid
$SecurePassword = $key | ConvertTo-SecureString -AsPlainText -Force
$cred = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList $clientID, $SecurePassword
# Authenticate as the service principal
Login-AzureRmAccount -Credential $cred -TenantId $tenantId -ServicePrincipal
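
Once authenticated, the start (or stop) itself is a single resource action against the lab VM. A sketch, assuming the resource ID is held in $labVMId (the -ApiVersion value is an assumption):

# Start the DevTest Labs VM; $labVMId is the resource ID in the form
# /subscriptions/{subId}/resourceGroups/{rgName}/providers/Microsoft.DevTestLab/labs/{labName}/virtualMachines/{vmName}
Invoke-AzureRmResourceAction -ResourceId $labVMId -Action Start -ApiVersion 2016-05-15 -Force
# The Stop task issues the equivalent stop action
# Invoke-AzureRmResourceAction -ResourceId $labVMId -Action Stop -ApiVersion 2016-05-15 -Force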

It is important to note that for this code to work you have to set the task’s task.json to use the PowerShell3 execution handler and package the PowerShell VSTS API module in with the task.

"execution": {
  "PowerShell3": {
     "target": "$(currentDirectory)\\StartVM.ps1",
     "argumentFormat": "",
     "workingDirectory": "$(currentDirectory)"
    }
  }

If the folder structure is correct, changing to PowerShell3 will automatically load the required module from the task’s ps_modules folder.

In Summary

I have certainly found this extension useful, and I have learnt more than I expected I would about VSTS endpoints and Azure authentication.

Hope it is useful to you too.

Creating a VSTS build agent on an Azure DevTest Labs Windows Server VM with no GUI

As I posted recently, I have been trying to add more functional tests to the VSTS based release CI/CD pipeline for my VSTS extensions and, as I noted, depending on how you want to run your tests (e.g. triggering sub-builds) you can end up with scheduling deadlocks where a single build agent is both scheduling the release and trying to run a new build. The answer is to use a second build agent in a different agent pool, e.g. if the release is running on the hosted build agent, use a private build agent for the sub-build; or of course just pay for more hosted build instances.

The problem with a private build agent is where to run it. As my extensions are a personal project I don’t have a corporate Hyper-V server to run any extra private agents on, as I would have for company projects. My MVP MSDN Azure benefits are the obvious answer, but I want any agents to be cheap to run, so I don’t burn through all my MSDN credits for a single build agent.

To this end I created a Windows Server 2016 VM in DevTest Labs (I prefer to create my VMs in DevTest Labs as it makes tidying up my Azure account easier) using an A0 sized VM. This is tiny, so cheap; I don’t intend to ever do a build on this agent, just schedule releases, so I need to install few if any tools and the size should not be an issue. To further reduce costs I used the auto start and stop features on the VM so it is only running during the hours I might be working. So I get an admittedly slow and limited private build agent, but for less than $10 a month.

As the VM is small it makes sense not to run a GUI. This means when you RDP to the new VM you just get a command prompt. So how do you get the agent onto the VM and set it up? You can’t just open a browser to VSTS or cut and paste a file via RDP, and I wanted to avoid the complexity of having to open up PowerShell remoting on the VM.

The process I used was as follows:

  1. In VSTS I created a new Agent Pool for my Azure hosted build agents
  2. In the Azure Portal, DevTest Labs, I created a new Windows Server 2016 (1709) VM
  3. I then RDP’d to my new Azure VM and, in the open Command Prompt, started PowerShell
    powershell
  4. As I was in my user's home directory, I cd’d into the downloads folder
    cd downloads
  5. I then ran the following PowerShell command to download the agent (you can get the current URI for the agent from your VSTS Agent Pool ‘Download Agent’ feature, but an old version will do as it will auto-update)
    invoke-webrequest -UseBasicParsing -uri https://github.com/Microsoft/vsts-agent/releases/download/v2.124.0/vsts-agent-win7-x64-2.124.0.zip -OutFile vsts-agent-win7-x64-2.124.0.zip
  6. You can then follow the standard agent setup instructions from the VSTS Agent Pool ‘Download Agent’ feature
    mkdir \agent ; cd \agent
    Add-Type -AssemblyName System.IO.Compression.FileSystem ; [System.IO.Compression.ZipFile]::ExtractToDirectory("$HOME\Downloads\vsts-agent-win7-x64-2.124.0.zip", "$PWD")
  7. I then configured the agent to run as a service; I exited back to the command prompt to do this, so the commands were
    exit
    config.cmd
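
As an aside, config.cmd prompts for everything it needs, but it can also be run unattended. A sketch with placeholder instance, PAT and pool values (check config.cmd --help for the exact flags in your agent version):

# Unattended agent configuration - the values shown are placeholders
.\config.cmd --unattended `
    --url https://myinstance.visualstudio.com `
    --auth pat --token <your PAT> `
    --pool "Azure Private Agents" `
    --agent $env:COMPUTERNAME `
    --runAsService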

I now had another build agent pool to use in my CI/CD pipelines at a reasonable cost, and the performance was not too bad either.

Major update to my CI/CD process for VSTS extensions

As time passes I have found there is a need for more cross platform VSTS extensions, as there is more uptake of VSTS beyond its historic Microsoft platform based roots.

Historically most of my extensions have been PowerShell based. This is not a fundamental problem for cross platform usage, due to the availability of PowerShell Core. However, TypeScript and Node.js are, I think, a better fit in many cases. This has caused me to revise the way I structure my repo and build my VSTS extensions to provide a consistent, understandable process. My old Gulp based process for TypeScript was too complex and inconsistent between tasks; it even confused me! My process revisions have been documented in the vNextBuild GitHub WIKI, so I don’t propose to repeat the bulk of the content here.

That said, it is worth touching on why I did not use YO VSTS. If I were starting this project again I would certainly look at this tool to build out my basic repo structure. I would also look at a separate repo per VSTS extension, as opposed to putting them all in one repo. However, this project pre-dates the availability of YO VSTS, hence the structure I have is different, and as people have forked the repo I don’t intend to introduce breaking changes by splitting it. All that said, my TypeScript/Node.js process is heavily influenced by the structure and NPM scripts used in YO VSTS, so people should find the core processes familiar i.e.

  1. Open a command prompt
  2. CD into the extension folder you wish to build (this contains the package.json file)
  3. Get all the NPM packages defined in the package.json: npm install
  4. Run TSLint and transpile the TypeScript to Node.js. There is a single command, npm run build, to do this, but the steps can be run individually using npm run lint and npm run transpile
  5. Run any unit tests you have: npm run test-no-logging (note the npm run test script is used by the CI/CD process to dump the test results for uploading to VSTS logging; I tend to find when working locally that just dumping the test results to the console is enough)
  6. Package the resultant .JS files: npm run package. This script does two jobs: it removes any NPM modules that are marked as dev-only (i.e. only used in development and not production) and it copies the required files to the correct locations to be packaged into a VSIX file. NOTE: after this command is run you need to rerun npm install to reinstall the development NPM packages before you can run tests etc. locally (the whole sequence is summarised below)
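
Putting those steps together, a typical local inner loop for a single extension looks something like this (the folder name is illustrative):

cd extensions/MyExtension        # the folder containing package.json
npm install                      # restore all NPM packages
npm run build                    # TSLint (npm run lint) + transpile (npm run transpile)
npm run test-no-logging          # run unit tests, results to the console
npm run package                  # prune dev dependencies and stage files for the VSIX
npm install                      # reinstall dev packages before further local work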

This is complemented by an enhanced VSTS CI/CD process that has a far greater focus on automated testing, as well as packaging the VSIX files and releasing them to the VSTS Marketplace.


Check the vNextBuild GitHub WIKI for more details.

New Cross Platform version of my Generate Release Notes VSTS Extension

My Generate Release Notes VSTS extension has been my most popular by a long way. I have enhanced it, with the help of others via pull requests, but there have been two repeated common questions that have not been resolved:

  1. Is it cross platform?
  2. Why does it show different work items and commit associations to the VSTS Release Status UI?

Well, the answer to the first is that the core of the logic for the extension came from a PowerShell script we used internally, so PowerShell was the obvious first platform, especially as, though my PowerShell skills are not great, my Node was weaker!

The second issue is due to my original extension and VSTS’s UI doing very different things. My old extension was based around inspecting build results, so when working in a release it finds all the builds between the current release and the last successful one and looks at the details of each build in turn, building a big list of changes. VSTS’s Release summary UI does not do this; it makes a few currently undocumented ‘compare this to that’ API calls to get the lists.

In an attempt to address both these questions, over the past few weeks I have created a new Cross Platform Generate Release Notes extension. Now don’t worry, the old one is still there and supported; they do different jobs. This new extension is cross platform and tries to use the same API calls the VSTS Release summary UI uses.

There are of course a few gotchas

  • I did have to adopt a workaround for TFVC changeset history, as Microsoft use an old internal API call, but that was the only place I had to do this. So apologies if there are any differences in the changesets returned.
  • The template format is very similar to that used in my original Generate Release Notes VSTS extension, but due to the change from PowerShell to Node I had to move from the $($widetail.fields.'System.Title') style to ${widetail.fields['System.Title']}

So I hope people find this new extension useful; I can now go off happily closing old issues in GitHub.

How you can keep using Lab Management after a move to VSTS (after a fashion)

I have posted previously on how we used TFS Lab Management to provision our test and development environments. With our move to VSTS, where Lab Management does not exist, we needed to look again at how to provision these labs. There are a few options…

Move to the Cloud – aka stop using Lab Management

Arguably the best option is to move all your lab VMs up to the cloud. Microsoft even has a specific service to help with this: Azure DevTest Labs. This service allows you to create single VMs or, for more complex scenarios, sets of VMs using ARM templates.

All good it seems, but the issue is that adoption of a cloud solution moves the cost of running the lab from a capital expenditure (buying the VM host server) to an operational cost (a monthly cloud usage bill). This can potentially be a not insignificant sum; in our case we have up to 100 test VMs of various types running at any given time. A sizeable bill.

We also need to consider that this is a different technology to Lab Management, so we would need to invest time to rebuild our test environments using newer technologies such as ARM, DSC etc. This is something we should be doing, but I would like to avoid doing it for all our projects today.

Now it is fair to say that we might not need all the VMs running all the time; better VM management could help alleviate the costs, and DevTest Labs has tools to help here, but it won’t remove all the costs.

So is there a non-cloud way?

Move to Systems Center

Microsoft’s current recommended on-premises solution is to use System Center, using tasks within your build and release pipeline to trigger events via SC-VMM.

Now, as Lab Management also makes use of System Center SC-VMM, this might initially sound a reasonable step. The problem is that the way Lab Management uses System Center is ‘special’; it does not really leverage any of the standard System Center tools. Chances are that anyone who invested time in using Lab Management makes little or no use of System Center’s own tools directly.

So if you want to use System Center without Lab Management you need to work in a very different way; you are into the land of System Center orchestrations etc.

So again you are looking at a new technology. This might be appealing for you, especially if you are using System Center to manage your on-premises IT estate, but it was not a route I wanted to take.

Keeping Lab Management running

So the short term answer for us was to keep our Lab Management system running: it does what we need (network isolation being the key factor for us), we have a library of ‘standard VMs’ built, and we have already paid for the Hyper-V hosts. So the question became how to bridge the gap to VSTS.

Step 1 – Leave Lab Management Running

When we moved to VSTS we made the conscious choice to leave our old TFS 2015.3 server running. We removed access for most users, only leaving access for those who needed to manage Lab Management. This provided us with a means to start, stop and deploy network isolated Lab Environments.

KEY POINT HERE – The only reason our on-premises TFS server is running is to allow a SC-VMM server and a Test Controller to connect to it to allow Lab Management operations.


Another important fact to remember is that network isolation in each lab is enabled by the Lab Test Agents running on the VMs in the lab; as well as communicating with the Test Controller, the agents in the environments also manage the reconfiguration of the VMs’ network adapters to provide the isolation. Anything we do at this point has to be careful not to ‘mess up’ this network configuration.

The problem is that you also use this Test Agent to run your tests; how do you make sure the Test Agent runs the right tests and sends the results to the right place?

We had already had to build some custom scripts to get these agents to work with TFS vNext builds against the on-premises TFS server. We were going to need something similar this time too. The key was that we needed to be able to trigger tests in the isolated environment and get the results back out and up to VSTS, all controlled within a build and release pipeline.

We came up with two options.

Option 1 Scripts

The first option is to do everything with PowerShell script tasks within the release process.


  1. Copy the needed files onto the VM using the built in (RoboCopy based) tasks
  2. Use PowerShell remoting to run MSTest (previously installed on the target VM); remember you have to delete any existing .TRX result file by hand, as it won’t be overwritten (see the sketch below)
  3. Copy the test results back from the VM (RoboCopy again)
  4. Publish the test results TRX file using the standard VSTS build task for that job

There is nothing too complex, just a couple of PowerShell scripts, and it certainly does not affect the network isolation.
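
A minimal sketch of the remoting part of step 2 (the VM name, credentials and paths are illustrative placeholders; Copy-Item is used over the session here rather than RoboCopy for brevity):

# Sketch only: run MSTest remotely on a lab VM and pull back the .TRX file
$session = New-PSSession -ComputerName "testvm01" -Credential (Get-Credential)

Invoke-Command -Session $session -ScriptBlock {
    # MSTest will not overwrite an existing results file, so delete it first
    Remove-Item "C:\Tests\results.trx" -ErrorAction SilentlyContinue

    & "C:\Program Files (x86)\Microsoft Visual Studio 14.0\Common7\IDE\MSTest.exe" `
        /testcontainer:"C:\Tests\MyTests.dll" `
        /resultsfile:"C:\Tests\results.trx"
}

# Copy the results back so the standard publish task can pick them up
Copy-Item -FromSession $session -Path "C:\Tests\results.trx" -Destination $env:SYSTEM_DEFAULTWORKINGDIRECTORY
Remove-PSSession $session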

However, there is a major issue if you want to run UX tests. MSTest is running on a background thread, so your test will fail as it cannot access the UI thread.

That said, this is a valid technique as long as either

  • Your tests are not UX based e.g. integration tests that hit an API
  • You can write your UX tests to use Selenium with PhantomJS

Option 2 do it the ‘proper’ VSTS way

VSTS has tasks built in to deploy a Test Agent to a machine and run tests remotely, including UX tests. The problem was I had assumed these tasks could not be used, as they would break the network isolation, but I thought I would give it a try anyway. That is what test labs are for!


Inside my release pipeline I added

  1. Copy the needed files onto the VM using the built in tasks, as before
  2. A Deploy Test Agent task
  3. A Run Functional Tests task, which also handles publishing the results

When this was run, the Deploy Test Agent task de-configures (and removes) the old TFS 2015 Test Agent put on by Lab Management and installs the current version. However, and this is important, it does not break the network isolation, as this is all set up during VM boot and/or repair. The lab will report itself as broken in the Lab Management UI, as the Test Agent is not reporting to the Test Controller, but it is still working.

Once the new agent is deployed, it can be used to run the tests, and the results get published back to VSTS, whether they are UX tests or not.

If you restart, redeploy or repair the network isolated environment, the 2015 Test Agent gets put back in place, making sure the network isolation is re-established.

Conclusion

So Option 2 seems to deliver what I need for now:

  • I can use the old tech to manage the deployment of the VMs
  • and use the new tech to run my tests and get the results published to the right place.

Now this does not mean I should not be looking at DevTest Labs to replace some of my test environments; Azure Stack might also provide an answer in the future.

But for now I have a workable solution that protects my past investments while I move to a longer term future plan.