But it works on my PC!

The random thoughts of Richard Fennell on technology and software development

You never know how people will use a tool

You never know how people will use a tool once it is out ‘in the wild’. I wrote my Generate Release Notes VSTS extension to generate markdown files, but people have attempted to use it in other ways.

I realised, via an issue raised on GitHub, that it can also be used, without any code changes, to generate other formats such as HTML. The only change required is to provide an HTML based template as opposed to a markdown one.
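
For example, a minimal HTML based template fragment might look like the following; this is just a sketch reusing the same substitution variables as the markdown sample shown later in this post, and the exact markup is illustrative:

<h1>Release notes for release $defname</h1>
<p><b>Release Number:</b> $($release.name)</p>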

I have added suitable samples to the wiki and repo.

How you can keep using Lab Management after a move to VSTS (after a fashion)

I have posted previously on how we used TFS Lab Management to provision our test and development environments. With our move to VSTS, where Lab Management does not exist, we needed to look again at how to provision these labs. There are a few options…

Move to the Cloud – aka stop using Lab Management

Arguably the best option is to move all your lab VMs up to the cloud. Microsoft even has a specific service to help with this: Azure DevTest Labs. This service allows you to create single VMs, or sets of VMs for more complex scenarios, using ARM templates.

All good it seems, but the issue is that adoption of a cloud solution moves the cost of running the lab from a capital expenditure (buying the VM host server) to an operational cost (monthly cloud usage bill). This can potentially be a not insignificant sum; in our case we have up to 100 test VMs of various types running at any given time. A sizeable bill.

Also we need to consider that this is a different technology to Lab Management, so we would need to invest time to rebuild our test environments using newer technologies such as ARM, DSC etc. This is something we should be doing anyway, but I would like to avoid doing it for all our projects today.

Now it is fair to say that we might not need all the VMs kept running all the time; better VM management could help alleviate the costs, and DevTest Labs has tools to help here, but it won’t remove all the costs.

So is there a non-cloud way?

Move to System Center

Microsoft’s current recommended on-premises solution is to use System Center, with tasks within your build and release pipeline triggering events via SC-VMM.

Now, as Lab Management also makes use of SC-VMM, this might initially sound a reasonable step. The problem is that the way Lab Management uses System Center is ‘special’; it does not really leverage any of the standard System Center tools. Chances are that anyone who invested time in using Lab Management makes little or no use of System Center’s own tools directly.

So if you want to use System Center without Lab Management you need to work in a very different way. You are into the land of System Center orchestrations etc.

So again you are looking at a new technology. This might be appealing, especially if you are already using System Center to manage your on-premises IT estate, but it was not a route I wanted to take.

Keeping Lab Management running

So the short term answer for us was to keep our Lab Management system running: it does what we need (network isolation being the key factor for us), we have a library of ‘standard VMs’ built, and we have already paid for the Hyper-V hosts. So the question became how to bridge the gap to VSTS?

Step 1 – Leave Lab Management Running

When we moved to VSTS we made the conscious choice to leave our old TFS 2015.3 server running. We removed access for most users, only leaving access for those who needed to manage Lab Management. This provided us with a means to start, stop and deploy network isolated Lab Environments.

KEY POINT HERE – The only reason our on-premises TFS server is running is to allow an SC-VMM server and a Test Controller to connect to it to enable Lab Management operations.

image

Another important fact to remember is that network isolation in each lab is enabled by the Lab Test Agents running on the VMs in the lab; as well as communicating with the Test Controller, the agents in the environments also manage the reconfiguration of the VMs’ network adapters to provide the isolation. Anything we do at this point has to be careful not to ‘mess up’ this network configuration.

The problem is that you also use this Test Agent to run your tests, so how do you make sure the Test Agent runs the right tests and sends the results to the right place?

We had already had to build some custom scripts to get these agents to work with TFS vNext builds against the on-premises TFS server. We were going to need something similar this time too. The key was that we needed to be able to trigger tests in the isolated environment, and get the results back out and up to VSTS, all controlled within a build and release pipeline.

We came up with two options.

Option 1 – Scripts

The first option is to do everything with PowerShell script tasks within the release process.

image

  1. Copy the needed files onto the VM using the built in tasks
  2. Use PowerShell remoting to run MSTest (previously installed on the target VM) – remember you have to delete any existing .TRX result file by hand, as it won’t be overwritten (see the sketch after this list)
  3. Copy the test results back from the VM (RoboCopy again)
  4. Publish the test results .TRX file using the standard VSTS build task for that job
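
As a minimal sketch, steps 2 and 3 might look something like the following PowerShell; the VM name, test container and file paths are all hypothetical and will vary per environment, and in a real pipeline the credential would come from a secure variable rather than a prompt:

# Run MSTest remotely on the isolated lab VM (all names/paths are illustrative)
$session = New-PSSession -ComputerName 'isolated-test-vm01' -Credential (Get-Credential)
Invoke-Command -Session $session -ScriptBlock {
    $trx = 'C:\Tests\Results.trx'
    # MSTest will not overwrite an existing .TRX file, so remove any old one first
    if (Test-Path $trx) { Remove-Item $trx }
    & 'C:\Program Files (x86)\Microsoft Visual Studio 12.0\Common7\IDE\MSTest.exe' `
        /testcontainer:C:\Tests\MyApp.Tests.dll /resultsfile:$trx
}
# Copy the results back so the standard publish task can pick them up (requires PowerShell 5+)
Copy-Item -FromSession $session -Path 'C:\Tests\Results.trx' -Destination '.\TestResults'
Remove-PSSession $session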

There is nothing too complex here, just a couple of PowerShell scripts, and it certainly does not affect the network isolation.

However, there is a major issue if you want to run UX tests: MSTest is running on a background thread, so your test will fail as it cannot access the UI thread.

That said, this is a valid technique as long as either:

  • Your tests are not UX based e.g. integration tests that hit an API
  • You can write your UX tests to use Selenium with PhantomJS

Option 2 – Do it the ‘proper’ VSTS way

VSTS has tasks built in to deploy a Test Agent to a machine and run tests remotely, including UX tests. The problem was that I had assumed these tasks could not be used, as they would break the network isolation, but I thought I would give it a try anyway. That is what test labs are for!

image

Inside my release pipeline I added:

  1. Copy the needed files onto the VM using the built in tasks, as before
  2. A Deploy Test Agent task
  3. A Run Functional Tests task, which also handles publishing the results

When this was run, the Deploy Test Agent task de-configures (and removes) the old TFS 2015 Test Agent put on by Lab Management and installs the current version. However, and this is important, it does not break the network isolation, as this is all set up during VM boot and/or repair. The lab will report itself as broken in the Lab Management UI, as the Test Agent will no longer be reporting to the Test Controller, but it is still working.

Once the new agent is deployed, it can be used to run the tests and the results get published back to VSTS, whether they are UX tests or not.

If you restart, redeploy or repair the network isolated environment, the 2015 Test Agent gets put back in place, making sure the network isolation is re-established.

Conclusion

So Option 2 seems to deliver what I needed for now:

  • I can use the old tech to manage the deployment of the VMs
  • and use the new tech to run my tests and get the results published to the right place.

Now, this does not mean I should not be looking at DevTest Labs to replace some of my test environments; Azure Stack might also provide an answer in the future.

But for now I have a workable solution that protects my past investments while I move to a longer term future plan.

New version of my Parameters.Xml Generator Visual Studio add-in now supports VS2017 too

I have just published Version 1.5 of my Parameters.Xml Generator Visual Studio add-in. After much fiddling this VSIX now supports VS2017 as well as VS2013 and VS2015.

The complexity was that VS2017 uses a new VSIX format, V3. You have to make changes to the project that generates the VSIX and to the VSIX manifest too. The FAQ says you can do this within VS2015 by hand, but I had no luck getting it right. The recommended option, and the method I used, is to upgrade your solution to VS2017 (or the RC at the time of writing, as the product has not RTM’d yet).

This upgrade process is a one-way migration, and you do have to check/edit some key items:

  • Get all your references right; as I was using the RC of VS2017, this meant enabling the use of preview packages from NuGet in the solution.
  • Make sure the install targets (the versions of Visual Studio) match what you want to install to.
  • Add prerequisites (this is the big new addition in the VSIX 3 format).
  • And the one that stalled me for ages – make sure you reference the right version of Microsoft.VisualStudio.Shell.<VERSION>.dll. You need to pick the one for the oldest version of Visual Studio you wish to target; in my case this was Microsoft.VisualStudio.Shell.12.0.dll. For some reason during the migration this got changed to Microsoft.VisualStudio.Shell.14.0.dll, which gave the strange effect that the VSIX installed on 2013, 2015 and 2017, but on 2013, though I could see the menu item, it did not work. This was fixed by referencing the 12.0 DLL.

Can’t add users to a VSTS instance backed by an Azure Directory

I have a VSTS instance that is backed by an Azure Directory. This is a great way to help secure a VSTS instance: only users in the Azure Directory can be added to VSTS, not just any old MSA (LiveID). This is a directory that can be shared with any other Azure based services such as O365, and centrally managed and linked to an on-premises Active Directory.

When I tried to add a user to VSTS, one that was a valid user in the Azure Directory, their account did not appear in the available users drop down.

 image

Turns out the problem was who I was logged in as. As you can see from the screenshot, I have three Richard accounts in the VSTS instance (and Azure Directory): a couple of MSAs and a guest work account from another Azure Directory. I was logged in as the guest work account.

All three IDs are administrators in VSTS, but it turned out I needed to be logged in as the MSA that owned the Azure subscription containing the Azure Directory. As soon as I used this account the dropdown populated as expected and I could add the users from the Azure Directory.

image

Version 2.0.x of my Generate Release Notes VSTS Task has been released with release rollup support

I have just released a major update to my Generate Release Notes VSTS Build extension. This V2 update adds support to look back into past releases to find when there was last a successful release to a given stage/environment, and creates a rollup set of build artifacts, and hence commits/changesets and work items, in the release notes.

This has been a long running request on GitHub for this extension, which I am pleased to have been able to address.

To aid backwards compatibility, the default behaviour of the build/release task is as it was before: it can be used in a build or in a release, and if in a release it only considers the artifacts in the current release that ran the task.

If you want to use the new features you need to enable them. This is all done via the advanced properties.

image


You get new properties to enable scanning past releases until the task finds a successful deployment to, by default, the same stage/environment that is currently being released to. You can override this stage name to allow more complex usage e.g. generating the release notes for everything that has changed since the last release to production whilst in a UAT environment.

This change also means there is a new variable that can be accessed in templates, $releases, which contains all the releases being used to get build artifacts. This can be used in release notes to show the releases being used e.g.


**Release notes for release $defname**
**Release Number**  : $($release.name)   
**Release completed** $("{0:dd/MM/yy HH:mm:ss}" -f [datetime]$release.modifiedOn) **Changes since last successful release to '$stagename'**  
**Including releases:**  
$(($releases | select-object -ExpandProperty name) -join ", " )  


Generating content such as:


Release notes for release Validate-ReleaseNotesTask.Master
Release Number : Release-69 
Release completed 05/01/17 12:40:19
Changes since last successful release to 'Environment 2' 
Including releases: 
Release-69, Release-68, Release-67, Release-66 


Hope you find this extension useful.

A nice relaxing Christmas break (and by the way I migrated our on-premises TFS to VSTS as well)

Over the Christmas break I migrated our on-premises TFS 2015 instance to VSTS. The reasons for the migration were threefold:

  • We were blocked on moving to TFS 2017 as we could not easily upgrade our SQL cluster to SQL 2014
  • We wanted to be on the latest, greatest and newest features of VSTS/TFS
  • We wanted to get away from having to perform on-premises updates every few months

To do the migration we used the public preview of the TFS to VSTS Migrator.

So what did we learn?

The actual import was fairly quick, around 3 hours for just short of 200GB of TPC data. However, getting the data from our on-premises system up to Azure was much slower, constrained by the need to copy backups around our LAN and by our Internet bandwidth for getting the files to Azure storage: a grand total of more like 16 hours. But remember, this was mostly spent watching various progress bars after running various commands, so I was free to enjoy the Christmas break; I was not a slave to a PC.

This all makes it sound easy, and to be honest the actual production migration was, but this was only due to doing the hard work prior to the Christmas break during the dry run phase. During the dry run we:

  • Addressed the TFS customisations that needed to be altered/removed
  • Sorted the AD > AAD sync mappings for user accounts
  • Worked out the backup/restore/copy process to get the TPC data to somewhere VSTS could import it from
  • Did the actual dry run migration
  • Tested the dry run instance after the migration to get a list of what else needed addressing, and anything our staff would have to do to access the new VSTS instance
  • Documented (and scripted where possible) all the steps
  • Made sure we had fall back processes in place if the migration failed.

And arguably most importantly, we discovered how long each step would take so we could set expectations. This was the prime reason for picking the Christmas break: we knew we could have a number of days when there should be no TFS activity (we close for an extended period), hence de-risking the process to a great degree. We knew we could get the migration done over a weekend, but a week’s break was easier and more relaxed; Christmas seemed a timely choice.

You might ask the question ‘what did not migrate?’

Well a better question might be ’what needed changing due to the migration?’

It was not so much that items did not migrate, just that they are handled a bit differently in VSTS. The areas we needed to address were:

  • User licensing – we needed to make sure our users’ MSDN subscriptions were mapped to their work IDs.
  • Build/release licensing – we needed to decide how many private build agents we really needed (not just spin up more on a whim as we had done with our on-premises TFS); they cost money on VSTS.
  • Release pipelines – these don’t migrate as of the time of writing, but I wrote a quick tool to get 95% of their content moved. After using this tool we did then need to edit the pipelines, re-entering ‘secrets’ (which are not exported) before retesting them.

And those were all the issues we had to address; everything else seems to be fine, with users just changing the URL they connect to from the on-premises one to VSTS.

So if you think migrating your TFS to VSTS seems like a good idea, why not have a look at the blog post and video on the Microsoft ALM Blog about the migration tool. Remember that this is a Microsoft Gold DevOps Partner led process, so please get in touch with us at Black Marble, or me directly via this blog, if you want a chat about the migration or other DevOps services we offer.

My TFSAlertsDSL project has moved to GitHub and become VSTSServiceHookDsl

Introduction

A while ago I created the TFSAlertsDSL project to provide a means to script responses to TFS Alert SOAP messages using Python. The SOAP Alert technology has been overtaken by time with the move to Service Hooks.

So I have taken the time to move this project over to the newer technology, which is supported both on TFS 2015 (onwards) and VSTS. I also took the chance to move from CodePlex to GitHub and renamed the project to VSTSServiceHookDsl.

Note: If you need the older SOAP alert based model stick with the project on CodePlex, I don’t intend to update it, but all the source is there if you need it.

What I learnt in the migration

Supporting WCF and Service Hooks

I had intended to keep support for both SOAP Alerts and Service Hooks in the new project, but I quickly realised there was little point. You cannot even register SOAP based alerts via the UI anymore, and supporting them added a lot of complexity. So I decided to remove all the WCF SOAP handling.

C# or REST TFS API

The SOAP Alert version used the older TFS C# API, hence you had to distribute these DLLs with the web site. Whilst refactoring I decided to swap all the TFS calls over to the new REST API. This provided a couple of advantages:

  • I did not need to distribute the TFS DLLs
  • Many of the newer functions of VSTS/TFS are only available via the REST API

Exposing JObjects to Python

I revised the way that TFS data is handled in the Python scripts. In the past I hand crafted data transfer objects for consumption within the Python scripts. The problem with this way of working is that it cannot handle custom objects; customised work items are a particular issue, as you don’t know their shape.

I found the best solution was to just return the Newtonsoft JObjects that I got from the C# based REST calls. These are easily consumed in Python in the general form

workitem["fields"]["System.State"]

The downside is that this change does mean that any scripts you had created for the old SOAP Alert version will need a bit of work when you transfer to the new Service Hook version.

Create a release pipeline

As per all good projects, I created a release pipeline for my internal test deployment. My process was as follows:

  • A VSTS build that builds the code from GitHub. This:
    • Compiles the code
    • Runs all the unit tests
    • Packages it as an MSDeploy package
  • Followed by a VSTS release that:
    • Sets the web.config entries
    • Deploys the MSDeploy package to Azure
    • Then uses FTP to upload the DSL DLL to Azure, as it is not part of the package (a sketch of this step follows)
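
As a rough sketch of that final FTP step in PowerShell, assuming a hypothetical endpoint, credentials and local path (the real release uses a pipeline step to do this):

# Upload the DSL DLL to the Azure web site over FTP (all values are illustrative)
$ftpUrl = 'ftp://example.ftp.azurewebsites.windows.net/site/wwwroot/bin/VSTSServiceHookDsl.dll'
$client = New-Object System.Net.WebClient
$client.Credentials = New-Object System.Net.NetworkCredential('deployUser', 'deployPassword')
$client.UploadFile($ftpUrl, 'C:\drop\VSTSServiceHookDsl.dll')
$client.Dispose()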

image

Future Steps

Add support for more triggers

At the moment the Service Hook project supports the same trigger events as the old SOAP project, with the addition of support for Git Push triggers.

I need to add in handlers for all the other supported triggers in VSTS/TFS, specifically the release related ones. I suspect these might be useful.

Create an ARM template

At the moment the deployment relies on the user creating the web site. It would be good to add an Azure Resource Manager (ARM) template to allow this site to be created automatically as part of the release process.

Summary

So we have a nice new Python and Service Hook based framework to help manage your responses to Service Hook triggers for TFS and VSTS.

If you think it might be useful to you, why not have a look at https://github.com/rfennell/VSTSServiceHookDsl.

I am interested to hear your feedback.

Transform tool for transferring TFS 2015.3 Release Templates to VSTS

If you are moving from on-premises TFS to VSTS you might hit the same problem I have just had. The structure of VSTS releases is changing; there is now the concept of multiple ‘Deployment Steps’ in an environment. This means you can use a number of different agents for a single environment – a good thing.

The downside is that if you export a TFS 2015.3 release process and try to import it to VSTS, it will fail, saying the JSON format is incorrect.

Of course you can get around this with some copy typing, but I am lazy, so….

I have written a quick transform tool that converts the basic structure of the JSON to the new format. You can see the code as a GitHub Gist.

It is a command line tool; usage is as follows:

  1. In VSTS create a new empty release, and save it.
  2. Use the drop down menu on the newly saved release in the release explorer and export the file. This is the template for the new format e.g. template.json.
  3. On your old TFS system export the release process in the same way to get your source file e.g. source.json.
  4. Run the command line tool providing the names of the template, source and output files

     RMTransform template.json source.json output.json

  5. On VSTS import the newly created JSON release file.
  6. A release process should be created, but it won’t be possible to save it until you have fixed a few things that are not transferred:
     1. Associate each Deployment Step with an Agent Pool.
     2. Set the user accounts who will do the pre- and post-approvals.
     3. Any secret variables will need to be re-entered.

IMPORTANT – Make sure you save the imported process as soon as you can (i.e. straight after fixing anything that is stopping it being saved). If you don’t save, and start clicking into artifacts or global variables, it seems to lose everything and you need to re-import.

image

It is not perfect, and you might find other issues that need fixing, but it saves a load of copy typing.