But it works on my PC!

The random thoughts of Richard Fennell on technology and software development

Migrating work items to VSTS with custom fields using TFS Integration Platform

If you wish to migrate work items from TFS to VSTS your options are limited. You can of course just pull over work items, without history, using Excel. If you have no work item customisation then OpsHub is an option, but if you have work item customisation then you are going to have to use TFS Integration Platform. And we all know what a lovely experience that is!

Note: TFS Integration Platform will cease to be supported by Microsoft at the end of May 2016; this does not mean the tool is going away, just that there will be no support via the forums.

In this post I will show how you can use TFS Integration Platform to move over custom fields to VSTS, including the original TFS work item ID, thus enabling migrations with history as detailed in my MSDN article.

TFS Integration Platform Setup

Reference Assemblies

TFS Integration Platform, being a somewhat old tool designed for TFS 2010, does not directly support TFS 2015 or VSTS. You have to select the Dev11 connection option (TFS 2012, by its internal code name). However, this will still cause problems as the tool fails to find all the assemblies it expects.

The solution to this problem is provided in this post, the key being to add dummy registry entries:

  1. Install either Visual Studio 2012 or the equivalent Team Explorer.
  2. Add the following registry key after you have installed Team Explorer or equivalent.
    Windows Registry Editor Version 5.00

    [HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\VisualStudio\11.0\InstalledProducts\Team System Tools for Developers]
    @="#101"
    "LogoID"="#100"
    "Package"="{97d9322b-672f-42ab-b3cb-ca27aaedf09d}"
    "ProductDetails"="#102"
    "UseVsProductID"=dword:00000001

MSI

Once this is done the TFS Integration Tools installation should work.

Accept the default options; you will need to select a SQL Server instance for the tool to use as a database to store its progress. The installer will create a DB called tfs_integrationplatform on the SQL instance.

Creating a Mappings File

TFS Integration platform needs a mapping file to work out which fields go where.

  1. We assume there is a local TFS server with the source to migrate from and a VSTS instance containing a team project using a reasonably compatible uncustomised process template
  2. Download the TFS Process Mapper and run it.
  3. You need to load into the process mapper the current work item configuration; the tool provides buttons to do this from XML files (exported with WITADMIN, see the sketch after this list) or directly from the TFS/VSTS server.
  4. You should see a list of fields in both the source and target server definitions of the given work item type.
  5. Use the automap button to match the fields
  6. Any unmatched fields will be left in the left-hand column

    image
  7. Some fields you may be able to match manually e.g. handling name changes from ‘Area ID’ to ‘AreaID’
  8. If you have local custom fields you can add matching fields on the VSTS instance; this is done using the process on MSDN.
  9. Once you have added your custom fields I have found it best to clear the mapping tool and re-import the VSTS work item definitions. The new fields appear in the list and can be mapped manually to their old equivalents.
  10. I then exported my mappings file.
  11. The process described above is equivalent to manually editing the mapping file, adding entries of the form
    <MappedField MapFromSide="Left" LeftName="BM.Custom1" RightName="BMCustom1" />

     There is a good chance one of the fields you want is the old TFS server’s work item ID. If you add a mapping as above for System.Id you would expect it to work. However, it does not; the field is empty on the target system. I don’t think this is a bug, just an unexpected behaviour in the way the unique WI IDs are handled by the tool. As a workaround I found I had to use an aggregated field to force the System.Id to be transferred. In my process customisation on VSTS I created an Integer OldId custom field. I then added the following to my mapping; it is important to note that I don’t use the MappedField line in the MappedFields block, I used an AggregatedField.
     <MappedFields>
              <!-- all the auto generated mapping stuff;
                   this is where you would expect a line like the one below:
                   <MappedField MapFromSide="Left" LeftName="System.Id" RightName="OldID" /> -->
     </MappedFields>
    <AggregatedFields>
           <FieldsAggregationGroup MapFromSide="Left" TargetFieldName="OldID" Format="{0}">
               <SourceField Index="0" SourceFieldName="System.Id" valueMap=""/>
           </FieldsAggregationGroup>
    </AggregatedFields>
  12. I could now use my edited mappings file
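As mentioned in step 3, the work item type definitions can be exported with WITADMIN if you would rather feed the mapper XML files. A minimal sketch, run from a Visual Studio command prompt (the collection URLs, project and work item type names are placeholders, not my actual values):

    # Export a work item type definition from the source TFS server
    witadmin exportwitd /collection:http://tfsserver:8080/tfs/DefaultCollection /p:MyProject /n:Bug /f:Bug-source.xml

    # Export the matching definition from the target VSTS account for comparison
    witadmin exportwitd /collection:https://myaccount.visualstudio.com/DefaultCollection /p:MyProject /n:Bug /f:Bug-vsts.xml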

Running TFS Integration Platform

I could now run the TFS Integration tools using the mappings file

  1. Load TFS Integration Platform
  2. Create a new configuration
  3. Select the option for work items with explicit mappings
  4. Select your source TFS server
  5. Select your target VSTS server
  6. Select the work item query that returns the items we wish to move
  7. Edit the mapping XML, cutting and pasting in the edited block from the previous section. Note that if you are moving multiple work item types then you will be combining a number of these mapping sections.
  8. Save the mapping file, you are now ready to use it in TFS Integration Platform

 

And hopefully the work item migration will progress as you hope. It might take some trial and error, but you should get there in the end.

But really……

This all said, I would still recommend just bringing over the active work item backlog and current source when moving to VSTS. It is easier, faster and gives you a chance to sort out structures without bringing in all your poor choices of the past.

Putting a release process around my VSTS extension development

I have been developing a few VSTS/TFS build related extensions and have published a few in the VSTS marketplace. This has all been a somewhat manual process; a mixture of Gulp and PowerShell has helped a bit, but I decided it was time to take a more formal approach. To do this I have used Jesse Houwing’s VSTS Extension Tasks.

Even with this set of tasks I am not sure what I have is ‘best practice’, but it does work. The doubt is due to the way the marketplace handles revisions and preview flags. What I have works for me, but ‘your mileage may differ’.

My Workflow

The core of my workflow is that I am building the VSIX package twice, once as a private package and once as a public one. They both contain the same code and have the same version number; they differ only in their visibility flags.

I am not using the preview flag option at all. I have found it does not really help me. My workflow is to build the private package, upload it and test it by sharing it with a test VSTS instance. If all is good, I publish the matching public package on the marketplace. In this model there is no need for a preview flag; it just adds complexity I don’t need.

This may not be true for everyone.

Build

The build’s job is to take the code, set the version number and package it into multiple VSIX packages.

  1. First I have the vNext build get my source from my GitHub repo.
  2. I add two build variables, $(Major) and $(Minor), that I use to manually manage my version number.
  3. I set my build number format to $(Major).$(Minor).$(rev:r), so the final .number is incremented until I choose to increment the major or minor version.
  4. I then use one of Jesse’s tasks to package the extension multiple times using the extension tag model parameter. Each packaging step uses different Visibility settings (circled in red). I also set the version, using the override option, to $(Build.BuildNumber) (circled in green). See the sketch after this list for what this does under the covers.

    image
  5. As I am using the VSTS hosted build agent I also need to make sure I check the ‘install Tfx-cli’ option in the global settings section.
  6. I then add a second identical packaging task, but this time there is no tag set and the visibility is set to public.
  7. Finally I use a ‘publish build artifacts’ task to copy the VSIX packages to a drop location
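For reference, Jesse’s packaging task is driving tfx-cli under the covers. A rough sketch of the equivalent from a PowerShell prompt, assuming a vss-extension.json manifest in the current folder and using the build number as the version; the overrides-file approach, the extension id suffix and the output path are my assumptions, not fixed values:

    # Private (tagged) package, version forced to the build number
    Set-Content -Path override-private.json -Value "{ ""version"": ""$env:BUILD_BUILDNUMBER"", ""public"": false }"
    tfx extension create --manifest-globs vss-extension.json --overrides-file override-private.json `
        --extension-id "mytask-private" --output-path "$env:BUILD_STAGINGDIRECTORY\vsix"

    # Public package with the same version number
    Set-Content -Path override-public.json -Value "{ ""version"": ""$env:BUILD_BUILDNUMBER"", ""public"": true }"
    tfx extension create --manifest-globs vss-extension.json --overrides-file override-public.json `
        --output-path "$env:BUILD_STAGINGDIRECTORY\vsix"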

Release

So now that I have multiple VSIX packages, I can use the same family of tasks to create a release pipeline.

I create a new release linked as a Continuous Deployment of the previously created build, and set its release name format to Release-$(Build.BuildNumber)

My first environment uses three tasks, all using the option to work from a VSIX package.

Note: In all cases I am using a VSIX path in the format $(System.DefaultWorkingDirectory)/GenerateReleaseNotes.Master/vsix/<package name>-<tag>-$(Build.BuildNumber).vsix. I am including the build number variable in the path because I chose to put all the packages in a single folder, so path wildcards are not an option; the task would not know which package to use unless I altered my build to put one VSIX package per folder.

My tasks for the first environment are

  1. Publish VSTS Extension – using my private package so it is added as a private package to the marketplace
  2. Share VSTS Extension – to my test VSTS account
  3. Install VSTS Extension – to my test VSTS account

For details on the usage of these tasks, and on setting up the link to the VSTS Marketplace, see Jesse’s wiki.

If I only intend an extension to ever be private this is enough. However, I want to make mine public, so I add a second environment that has manual pre-approval (so I have to confirm the public release).

This environment needs only a single task

  1. Publish VSTS Extension – using my public package so it is added as a public package to the marketplace

I can of course add other tasks to this environment, maybe to send a tweet or email to publicise the new version’s release.

Summary

So now I have a formal way to release my extensions. The dual packaging model means I can publish two different versions at the same time, one private and the other public.

image

It is now just a case of moving all my extensions over to the new model.

Though I am still interested to hear what other people’s views are. Does this seem a reasonable process flow?

Updates to my StyleCop task for VSTS/TFS 2015.2

Tracking the current version of StyleCop is a bit awkward. Last week I got an automated email from CodePlex saying 4.7.52.0 had been released. I thought this was the most up to date version, so upgraded my StyleCop command line wrapper and my VSTS StyleCop task from 4.7.47.0 to 4.7.52.0.

However, I was wrong about the current version. I had not realised that the StyleCop team had forked the code onto GitHub. GitHub is now the home of the Visual Studio 2015 and C# 6 development of StyleCop, while CodePlex remains the home of the legacy Visual Studio versions. I had only upgraded to a legacy patch version, not the current version.

So I upgraded my StyleCop command line tool and my VSTS StyleCop task to wrap 4.7.59.0, thus, I think, bringing me up to date.

How to build a connection string from other parameters within MSDeploy packages to avoid repeating yourself in Release Management variables

Whilst working with the new Release Management features in VSTS/TFS 2015.2 I found I needed to pass in configuration variables (server name, DB name, UID and password) to create a SQL server via an Azure Resource Manager template release step, and a connection string to the same SQL instance for a web site’s web.config, set using an MSDeploy release step with token replacement (as discussed in this post).

Now I could just create RM configuration variables for both the connection string and ARM settings,

image

 

However, this seems wrong for a couple of reasons:

  1. You should not repeat yourself; it is too easy to get the two values out of step
  2. I don’t really want to obfuscate the whole of a connection string in RM, when only a password really needs to be hidden (note the connection string variable is not set as secure in the above screenshot)

What did not work

I first considered nesting the RM variables, e.g. setting the connection string variable to be equal to ‘Server=tcp:$(DatabaseServer).database.windows.net,1433;Database=$(DatabaseName)….’, but this does not give the desired result; the $(DatabaseServer) and $(DatabaseName) variables are not expanded at runtime, you just get a string with the variable names in it.

How I got what I was after…

(In this post as a sample I am using the Fabrikam Fiber solution. This means I need to provide a value for the FabrikamFiber-Express connection string)

I wanted to build the connection string from the other variables in the MSDeploy package. So to get the behaviour I want…

  1. In Visual Studio load the Fabrikam web site solution.
  2. In the web project, use the publish option to create a publish profile using the ‘Web Deploy Package’ option.
  3. If you publish this package you end up with a setparameters.xml file containing the default connection string
    <setParameter name="FabrikamFiber-Express-Web.config Connection String" value="Your value"/>
    Where ‘your value’ is the value you set in the publish wizard. So to use this I would need to pass in a whole connection string, whereas I only want to pass in parts of it.
  4. To add bespoke parameters to an MSDeploy package you add a parameters.xml file to the project in Visual Studio (I wrote a Visual Studio extension that helps add this file, but you can create it by hand). My tool will create the parameters.xml file based on the appSettings block of the project’s web.config. So if you have a web.config containing the following
    <appSettings>
        <add key="Location" value="DEVPC" />
      </appSettings>
    It will create a parameters.xml file as follows
    <?xml version="1.0" encoding="utf-8"?>
    <parameters>
      <parameter defaultValue="__LOCATION__" description="Description for Location" name="Location" tags="">
        <parameterentry kind="XmlFile" match="/configuration/appSettings/add[@key='Location']/@value" scope="\\web.config$" />
      </parameter>
    </parameters>
  5. If we publish at this point we will get a setparameters.xml file containing
    <?xml version="1.0" encoding="utf-8"?>
    <parameters>
      <setParameter name="IIS Web Application Name" value="__Sitename__" />
      <setParameter name="Location" value="__LOCATION__" />
      <setParameter name="FabrikamFiber-Express-Web.config Connection String" value="__FabrikamFiberWebContext__" />
    </parameters>
    This is assuming I used the publish wizard to set the site name to __SiteName__ and the DB connection string to __FabrikamFiberWebContext__
  6. The next step is to add my DB related parameters to the parameters.xml file; this I do by hand, as my tool does not help here
    <?xml version="1.0" encoding="utf-8"?>
    <parameters>
      <parameter defaultValue="__LOCATION__" description="Description for Location" name="Location" tags="">
        <parameterentry kind="XmlFile" match="/configuration/appSettings/add[@key='Location']/@value" scope="\\web.config$" />
      </parameter>

      <parameter name="Database Server" defaultValue="__sqlservername__"></parameter>
      <parameter name="Database Name" defaultValue="__databasename__"></parameter>
      <parameter name="Database User" defaultValue="__SQLUser__"></parameter>
      <parameter name="Database Password" defaultValue="__SQLPassword__"></parameter>
    </parameters>
  7. If I publish again, this time the new variables also appear in the setparameters.xml file
  8. Now I need to suppress the auto generated connection string parameter, and replace it with a parameter that uses the other parameters to generate the connection string. You would think this was a case of adding more text to the parameters.xml file, but that does not work. If you add the block you would expect (making sure the name matches the auto generated connection string name) as below
    <parameter 
      defaultValue="Server=tcp:{Database Server}.database.windows.net,1433;Database={Database Name};User ID={Database User}@{Database Server};Password={Database Password};Encrypt=True;TrustServerCertificate=False;Connection Timeout=30;"
      description="Enter the value for FabrikamFiber-Express connection string"
      name="FabrikamFiber-Express-Web.config Connection String"
      tags="">
      <parameterentry
        kind="XmlFile"
        match="/configuration/connectionStrings/add[@name='FabrikamFiber-Express']/@connectionString"
        scope="\\web.config$" />
    </parameter>

     It does add the entry to setparameters.xml, but this blocks successful operation at deployment. It seems that if a value needs to be generated from other variables there can be no entry for it in the setparameters.xml file. Documentation hints that you can set the Tag to ‘Hidden’ but this does not appear to work.

     One option would be to let the setparameters.xml file be generated and then remove the offending line prior to deployment, but this feels wrong and prone to human error.
  9. To get around this you need to add a file named <projectname>.wpp.targets to the same folder as the project (and add it to the project). In this file place the following
    <?xml version="1.0" encoding="utf-8"?>
    <Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
    <Target Name="DeclareCustomParameters"
              BeforeTargets="Package">
        <ItemGroup>
          <MsDeployDeclareParameters Include="FabrikamFiber-Express">
            <Kind>XmlFile</Kind>
            <Scope>Web.config</Scope>
            <Match>/configuration/connectionStrings/add[@name='FabrikamFiber-Express']/@connectionString</Match>
            <Description>Enter the value for FabrikamFiber-Express connection string</Description>
            <DefaultValue>Server=tcp:{Database Server}.database.windows.net,1433;Database={Database Name};User ID={Database User}@{Database Server};Password={Database Password};Encrypt=True;TrustServerCertificate=False;Connection Timeout=30;</DefaultValue>
            <Tags></Tags>
            <ExcludeFromSetParameter>True</ExcludeFromSetParameter>
          </MsDeployDeclareParameters>
        </ItemGroup>
      </Target>
      <PropertyGroup>
        <AutoParameterizationWebConfigConnectionStrings>false</AutoParameterizationWebConfigConnectionStrings>
      </PropertyGroup>
    </Project>

     The first block declares the parameter I wish to use to build the connection string. Note the ‘ExcludeFromSetParameter’ setting, so this parameter is not in the setparameters.xml file. This is the setting you cannot provide via the parameters.xml file.

     The second block stops the auto generation of the connection string. (Thanks to Sayed Ibrahim Hashimi for various posts on getting this working)
  10. Once the edits are made, unload and reload the project, as the <project>.wpp.targets file is cached on loading by Visual Studio.
  11. Make sure the publish profile is not set to generate a connection string

    image
  12. Now when you publish the project, you should get a setparameters.xml file with only the four SQL variables, the AppSettings variables and the site name.
    (Note I have set the values for all of these to the format __NAME__; this is so I can use token replacement in my release pipeline)
    <?xml version="1.0" encoding="utf-8"?>
    <parameters>
      <setParameter name="IIS Web Application Name" value="__Sitename__" />
      <setParameter name="Location" value="__LOCATION__" />
      <setParameter name="Database Server" value="__sqlservername__" />
      <setParameter name="Database Name" value="__databasename__" />
      <setParameter name="Database User" value="__SQLUser__" />
      <setParameter name="Database Password" value="__SQLPassword__" />
    </parameters>
  13. If you deploy the web site, the web.config should have your values from the setparameters.xml file in it (a sketch of the deployment command follows this list)
    <appSettings>
       <add key="Location" value="__LOCATION__" />
    </appSettings>
    <connectionStrings>
         <add name="FabrikamFiber-Express" connectionString="Server=tcp:__sqlservername__.database.windows.net,1433;Database=__databasename__;User ID=__SQLUser__@__sqlservername__;Password=__SQLPassword__;Encrypt=True;TrustServerCertificate=False;Connection Timeout=30;" providerName="System.Data.SqlClient" />
    </connectionStrings>
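For completeness, this is roughly how that setparameters.xml file gets consumed at deployment time. A minimal sketch of an MSDeploy call against an on-premises IIS box (the msdeploy.exe path, server name, site name and credentials are all placeholders for your own values):

    # Deploy the package, applying the SetParameters file created above
    $msdeploy = "C:\Program Files\IIS\Microsoft Web Deploy V3\msdeploy.exe"
    & $msdeploy `
        "-verb:sync" `
        "-source:package='FabrikamFiber.Web.zip'" `
        "-dest:auto,computerName='https://targetserver:8172/msdeploy.axd?site=FabrikamFiber',userName='deployuser',password='xxxxx',authType='Basic'" `
        "-setParamFile:FabrikamFiber.Web.SetParameters.xml"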

You are now in a position to manage the values in the setparameters.xml file however you wish. My choice is to use the ‘Replace Tokens’ build/release tasks from Colin’s ALM Corner Build & Release Tools extension, as this task correctly handles secure/encrypted RM variables as long as you use the ‘Secret Tokens’ option on the advanced menu.
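If you wanted to roll the token replacement yourself rather than use the marketplace task, the core idea is simple. A minimal sketch, assuming the __NAME__ token convention used above and that the (non-secret) variables are available as environment variables; handling secret variables properly is exactly why I prefer the marketplace task:

    # Replace __TOKEN__ style entries in a SetParameters file with matching environment variables
    param([string]$fileName = "FabrikamFiber.Web.SetParameters.xml")

    $content = Get-Content -Path $fileName -Raw

    foreach ($match in [regex]::Matches($content, "__([A-Za-z0-9]+)__")) {
        # e.g. the token __SQLPassword__ is replaced by the value of the SQLPassword variable
        $value = [System.Environment]::GetEnvironmentVariable($match.Groups[1].Value)
        if ($value) { $content = $content.Replace($match.Value, $value) }
    }

    Set-Content -Path $fileName -Value $content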

image



 

Summary

So yes, it all seems a bit too complex, but it does work, and I think it makes for a cleaner deployment solution, less prone to human error. Which is what any DevOps solution must always strive for.

Depending on the values you put in the <project>.wpp.targets you can parameterise the connection string however you need.

In place upgrade times from TFS 2013 to 2015

There is no easy way to work out how long a TFS in place upgrade will take; there are just too many factors to make any calculation reasonable:

  • Start and end TFS version
  • Quality/Speed of hardware
  • Volume of source code
  • Volume of work items
  • Volume of work item attachments
  • The list goes on….

The best option I have found is to graph various upgrades I have done and try to make an estimate based on the shape of the curve. I did this for 2010 > 2013 upgrades, and now I think I have enough data from upgrades of sizable TFS instances to do the same for 2013 to 2015.

image

 

Note: I extracted this data from the TFS logs using the script in this blog post; it is also in my Git repo.

So as a rule of thumb, the upgrade process will pause around step 100 (the exact number varies depending on your starting 2013.x release); time this pause, and expect the upgrade to complete in about 10x this period (so if the pause lasts 30 minutes, budget around five hours in total).

It is not 100% accurate, but close enough so you know how long to go for a coffee/meal/pub or bed for the night

Using MSDeploy to deploy to nested virtual applications in Azure Web Apps

Azure provides many ways to scale and structure web sites and virtual applications. I recently needed to deploy the following structure, where each service endpoint was its own Visual Studio web application project built as an MSDeploy package

  • http://demo.azurewebsites.net/api/service1
  • http://demo.azurewebsites.net/api/service2
  • http://demo.azurewebsites.net/api/service3

To do this, in the Azure Portal I…

  1. Created a Web App for the site http://demo.azurewebsites.net. This pointed to the disk location site\wwwroot; I disabled the folder as an application as there is no application running at this level
  2. Created a virtual directory api pointing to \site\wwwroot\api, again disabling this folder as an application
  3. Created a virtual application for each of my services, each with their own folder

image

I knew from past experience I could use MSDeploy to deploy to the root site or the api virtual directory. However, I found that when I tried to deploy to any of the service virtual applications I got an error that the web site could not be created. Now, I would not expect MSDeploy to create a directory, so I knew something was wrong at the Azure end.

The fix in the end was simple: it seems the service folders e.g. \site\wwwroot\api\service1 had not been created by the Azure Portal when I created the virtual applications. I FTP’d onto the web application and created the folder \site\wwwroot\api\service1; once this was done MSDeploy worked perfectly, and I could build the structure I wanted.
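For reference, the sort of command the MSDeploy step ends up running for one of the services looks like this. A minimal sketch with placeholder values (the publish endpoint and $demo style user name come from the web app’s publish profile; the password is obviously not a real one):

    # Deploy the service1 package into the nested virtual application
    $msdeploy = "C:\Program Files\IIS\Microsoft Web Deploy V3\msdeploy.exe"
    & $msdeploy `
        "-verb:sync" `
        "-source:package='Service1.zip'" `
        "-dest:auto,computerName='https://demo.scm.azurewebsites.net:443/msdeploy.axd?site=demo',userName='`$demo',password='xxxxx',authType='Basic'" `
        "-setParam:name='IIS Web Application Name',value='demo/api/service1'"

Note the backtick before $demo, which stops PowerShell trying to expand it as a variable.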

Running Pester PowerShell tests in the VSTS hosted build service

Updated 22 Mar 2016 This task is available in the VSTS Marketplace

If you are using Pester to unit test your PowerShell code then there is a good chance you will want to include it in your automated build process. To do this, you need to get Pester installed on your build machine. The usual options would be

If you own the build agent VM then any of these options are good; you can even write the NuGet restore into your build process itself. However, there is a problem: the first two options need administrative access as they put the Pester module in the PowerShell modules folder (under ‘Program Files’), so these can’t be used on VSTS’s hosted build system, where you are not an administrator.

So this means you are left with copying the module (and associated functions folder) to some local working folder and running it manually; but do you really want to have to store the Pester module in your source repo?

My solution was to write a vNext build task to deploy the Pester files and run the Pester tests.

image

The task takes two parameters

  • The root folder to look for test scripts with the naming convention  *.tests.ps1. Defaults to $(Build.SourcesDirectory)\*
  • The results file name, defaults to $(Build.SourcesDirectory)\Test-Pester.XML
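Under the covers the task does little more than copy the Pester module onto the agent, import it and call Invoke-Pester. A simplified sketch of that core (the module path and the exact parameter handling are assumptions; the real task is in the repo linked below):

    param(
        [string]$scriptFolder = "$env:BUILD_SOURCESDIRECTORY\*",
        [string]$resultsFile  = "$env:BUILD_SOURCESDIRECTORY\Test-Pester.XML"
    )

    # Import the copy of Pester shipped alongside the task, so no admin install is needed
    Import-Module "$PSScriptRoot\Pester\Pester.psm1"

    # Run any *.tests.ps1 files found and write the results in nUnit format
    $result = Invoke-Pester -Script $scriptFolder -OutputFile $resultsFile -OutputFormat NUnitXml -PassThru

    # Throw if anything failed so this build step errors and stops the build
    if ($result.FailedCount -gt 0) {
        throw "$($result.FailedCount) Pester test(s) failed."
    }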

The Pester task does not in itself upload the test results; it just throws an error if any tests fail. It relies on the standard test results upload task. Add this task and set

  • it to look for nUnit format files
  • it already defaults to the correct file name pattern.
  • IMPORTANT: As the Pester task will stop the build on an error, you need to set the ‘Always run’ option to make sure the results are published.

image

Once all this is added to your build you can see your Pester test results in the build summary

image

image

You can find the task in my vNextBuild repo

A vNext build task to get artifacts from a different TFS server

With the advent of TFS 2015.2 RC (and the associated VSTS release) we have seen the short term removal of the ‘External TFS Build’ option for the Release Management artifacts source. This causes me a bit of a problem as I wanted to try out the new on premises vNext based Release Management features on 2015.2, but don’t want to place the RC on my production server (though there is go live support). Also, the ability to get artifacts from an on premises TFS instance when using VSTS opens up a number of scenarios, something I know some of my clients have been investigating.

To get around this blocker I have written a vNext build task that gets a build artifact from the UNC drop of a build on another server. It supports both XAML and vNext builds, thus replacing the built-in artifact linking features.

Usage

To use the new task

  • Get the task from my vNextBuild repo (build using the instructions on the repo’s wiki) and install it on your TFS 2015.2 instance (also use the notes on the repo’s wiki).
  • In your release, disable the automatic getting of the artifacts for the environment (though in some scenarios you might choose to use both the built-in linking and my custom task)

image

  • Add the new task to your environment’s release process, the parameters are
    • TFS Uri – the Uri of the TFS server, including the team project collection name
    • Team Project – the project containing the source build
    • Build Definition name – name of the build (can be XAML or vNext)
    • Artifact name – the name of the build artifact (seems to be ‘drop’ if a XAML build)
    • Build Number – default is to get the latest successful completed build, but you can pass a specific build number
    • Username/Password – if you don’t want to use default credentials (the user the build agent is running as), these are the ones used. These are passed as ‘basic auth’ so can be used against an on prem TFS (if basic auth is enabled in IIS)  or VSTS (with alternate credentials enabled).

image

 

When the task runs it should drop the artifacts in the same location as the standard mechanism, so they can be picked up by any other tasks in the release pipeline using a path similar to $(System.DefaultWorkingDirectory)\SABS.Master.CI\drop
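For the curious, the heart of the task is just a couple of calls to the build REST API followed by a copy from the UNC drop. A heavily simplified sketch for a vNext build (the URLs and names are placeholders; the real task also handles XAML builds, specific build numbers and the username/password option):

    # Placeholder values matching the task parameters described above
    $tfsUri  = "http://tfsserver:8080/tfs/DefaultCollection"
    $project = "MyProject"
    $defName = "SABS.Master.CI"
    $dest    = "$env:SYSTEM_DEFAULTWORKINGDIRECTORY\$defName\drop"

    # Find the build definition, then the latest successful build for it
    $definition = (Invoke-RestMethod -Uri "$tfsUri/$project/_apis/build/definitions?api-version=2.0&name=$defName" -UseDefaultCredentials).value[0]
    $build = (Invoke-RestMethod -Uri "$tfsUri/$project/_apis/build/builds?api-version=2.0&definitions=$($definition.id)&resultFilter=succeeded&`$top=1" -UseDefaultCredentials).value[0]

    # Get the named artifact and copy its UNC file share drop to the agent
    $artifacts = (Invoke-RestMethod -Uri "$tfsUri/$project/_apis/build/builds/$($build.id)/artifacts?api-version=2.0" -UseDefaultCredentials).value
    $dropPath = ($artifacts | Where-Object { $_.name -eq "drop" }).resource.data
    Copy-Item -Path $dropPath -Destination $dest -Recurse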

Limitations

The task in its current form does not provide any linking of artifacts to the build reports, or allow the selection of build versions when the release is created. This removes some audit trail features.

However, it does provide a means to get a pair of TFS servers working together, so can certainly enable some R&D scenarios while we await the 2015.2 RTM and/or the ‘official’ linking of external TFS builds as artifacts.

Repost: What I learnt extending my VSTS Release Process to on-premises Lab Management Network Isolated Environments

This is a repost of a guest article first posted on the Microsoft UK Developers Blog: How to extend a VSTS release process to on-premises

Note that since I wrote the original post there have been some changes on VSTS and the release of TFS 2015.2 RC1. These mean there is no longer an option to pull build artifacts from an external TFS server as part of a release, so invalidating some of the options this post discusses. I have struck out the outdated sections. The rest of the post is still valid, especially the section on where to update configuration settings. The release of TFS 2015.2 RC1 actually makes many of the options easier, as you don’t have to bridge between on premises TFS and VSTS; both build and release features are on the same server.


 

Background

Visual Studio Team Services (VSTS) provides a completely new version of Release Management, replacing the version shipped with TFS 2013/2015. This new system is based on the same cross platform agent model as the new vNext build system shipped with TFS 2015 (and also available on VSTS). At present this new Release Management system is only available on VSTS, but the features timeline suggests we should see it on-premises in the upcoming 2015.2 update.

You might immediately think that as this feature is only available in VSTS at present, that you cannot use this new release management system with on-premises services, but this would not be true. The Release Management team have provided an excellent blog post on running an agent connected to your VSTS instance inside your on-premises network to enable hybrid scenarios.

This works well for deploying to domain connected targets, especially if you are using Azure Active Directory Sync to sync your corporate domain and AAD to provide a directory backed VSTS instance. In this case you can use a single corporate domain account to connect to VSTS and to the domain services you wish to deploy to from the on-premises agent.

However, I make extensive use of TFS Lab Management to provide isolated dev/test environments (linked to an on-premises TFS 2015.1 instance). If I want to deploy to these VMs it adds complexity in how I need to manage authentication, as I don’t want to have to place a VSTS build agent in each transiently created dev/test lab: one, because it is complex, and two, because there is a cost to having more than one self provisioned vNext build agent.

It is fair to say that deploying to an on-premises Lab Management environment from a VSTS instance is an edge case, but the same basic process will be needed when the new Release Management features become available on-premises.

Now, I would be the first to say that there is a good case to look at a move away from Lab Management to using Azure Dev Labs which are currently in preview, but Dev Labs needs fuller Azure Resource Manager support before we can replicate the network isolated Lab Management environments I need.

The Example

So at this time, I still need to be able to use the new Release Management with my current Lab Management network isolated labs, but this raises some issues of authentication and just what is running where. So let us work through an example; say I want to deploy a SQL DB via a DACPAC and a web site via MSDeploy on the infrastructure shown below.

 

image

Both the target SQL and Web servers live inside the Lab Management isolated network on the proj.local domain, but have DHCP assigned addresses on the corporate LAN in the form vslm-[guid].corp.com (managed by Lab Management), so I can access them from the build agent with appropriate credentials (a login for the proj.local domain within the network isolated lab).

The first step is to install a VSTS build agent linked to my VSTS instance, once this is done we can start to create our release pipeline. The first stage is to get the artifacts we need to deploy i.e. the output of builds. These could be XAML or vNext build on the VSTS instance, or from the on-premises TFS instance or a Jenkins build. Remember a single release can deploy any number of artifacts (builds) e.g. the output of a number of builds. It is this fact that makes this setup not as strange as it initially appears. We are just using VSTS Release Management to orchestrate a deployment to on-premises systems.

The problem we have is that though our release now has artifacts, we need to run some commands on the VM running the vNext Build Agent to do the actual deployment. VSTS provides a number of deployment tasks to help in this area. Unfortunately, at the time of writing, the list of deployment tasks in VSTS is somewhat Azure focused, so not that much use to me.

image

This will change over time as more tasks get released, you can see what is being developed on the VSO Agent Task GitHub Repo (and of course you could install versions from this repo if you wish).

So for now I need to use my own scripts, as we are on a Windows based system (not Linux or Mac) this means some PowerShell scripts.

The next choice becomes ‘do I run the script on the Build Agent VM or remotely on the target VM’ (within the network isolated environment). The answer is the age-old consultant’s answer: ‘it depends’. In the case of both DACPAC and MSDeploy deployments, there is the option to do remote deployment i.e. run the deployment command on the Build Agent VM and have it remotely connect to the target VMs in the network isolated environment. The problem with this way of working is that I would need to open more ports on the SQL and Web VMs to allow the remote connections; I did not want to do this.

The alternative is to use PowerShell remoting; in this model I trigger the script on the Build Agent VM, but it uses PowerShell remoting to run the command on the target VM. For this I only need to enable remote PowerShell on the target VMs, which is done by running the following command and following the prompts on each target VM to set up the required services and open the correct ports in the target VM’s firewall.

winrm quickconfig

This is something we are starting to do as standard to allow remote management via PowerShell on all our VMs.

So at this point it all seems fairly straight forward, run a couple of remote PowerShell scripts and all is good, but no. There is a problem.

A key feature of Release Management is that you can provide different configurations for different environments e.g. the DB connection string is different for the QA lab as opposed to production. These values are stored securely in Release Management and applied as needed.

image

The way these variables are presented is as environment variables on the Build Agent VM, hence they can be accessed from PowerShell in the form $env:__DOMAIN__. IT IS IMPORTANT TO REMEMBER that they are not presented on any target VMs in the isolated lab network environment, or to these VMs via PowerShell remoting.

So if we are intending to use remote PowerShell execution for our deployments we can’t just access these settings as environment variables in the scripts being run remotely; we have to pass them in as PowerShell command line arguments.

This works OK for the DACPAC deployment as we only need to pass in a few fixed arguments. For example, the PowerShell script arguments for the package name, target DB name and target server, using the Release Management variables in their $(variable) form, become:

-DBPackage $(DBPACKAGE) -TargetDBName $(TARGETDBNAME) -TargetServer $(TARGETSERVERNAME)

However, for the MSDeploy deploy there is no simple fixed list of parameters. This is because as well as parameters like package names, we need to modify the setparameters.xml file at deployment time to inject values for our web.config from the release management system.

The solution I have adopted is not to try to pass this potentially long list of arguments into a script to be run remotely; the command line just becomes hard to edit without making errors, and needs to be updated each time we add an extra variable.

The alternative is to update the setparameters.xml file on the Build Agent VM before we attempt to run it remotely. To this end I have written a custom build task to handle the process which can be found on my GitHub repo. This updates a named setparameters.xml file using token replacement based on environment variables set by Release Management. If you would rather automatically find a number of setparameters.xml files using wildcards (because you are deploying many sites/services) and update them all with a single set of tokens, have a look at Colin Dembovsky’s build task which does just that.

So given this technique my release steps become:

1. Get the artifacts from the builds to the Build Agent VM.

2. Update the setparameters.xml file using environment variables on the Build Agent VM.

3. Copy the downloaded (and modified) artifacts to all the target machines in the environment.

4. On the SQL VM run the sqlpackage.exe command to deploy the DACPAC using remote PowerShell execution.

5. On the Web VM run the MSDeploy command using remote PowerShell execution.

image

The PowerShell scripts I run in the final two tasks are just simple wrappers around the underlying commands. The key fact is that because they are scripts it allows remote execution. The targeting of the execution is done by associating each task with a target machine group, and filtering either by name or, in my case, role, to target specific VMs.
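As an illustration, the DACPAC wrapper is really no more than the following, run on the SQL VM via the machine group task. The sqlpackage.exe path is an assumption; adjust it for the SQL Server/SSDT version installed in the lab:

    # Simple wrapper run remotely on the SQL VM to publish a DACPAC
    param(
        [string]$DBPackage,      # path to the copied .dacpac package
        [string]$TargetDBName,   # database to create or upgrade
        [string]$TargetServer    # SQL server name inside the isolated network
    )

    # Location depends on the SQL Server/SSDT version installed on the VM
    $sqlPackage = "C:\Program Files (x86)\Microsoft SQL Server\120\DAC\bin\SqlPackage.exe"

    & $sqlPackage "/Action:Publish" `
        "/SourceFile:$DBPackage" `
        "/TargetServerName:$TargetServer" `
        "/TargetDatabaseName:$TargetDBName"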

image

In my machine group I have defined both my SQL and Web VMs using the names on the corporate LAN. Assigning a role to each to make targeting easier. Note that it is here, in the machine group definition, that I provide the credentials required to access the VMs in my Network Isolated environment i.e. a proj.local set of credentials.

image

Once I get all these settings in place I am able to build a product on my VSTS build system (or my on-premises TFS instance) and, using this VSTS connected but on-premises located Build Agent, deploy my DB and web site to a Lab Management network isolated test environment.

There is no reason why I cannot add more tasks to this release pipeline to perform more actions such as run tests (remember the network isolated environment already has TFS Test Agents installed, but they are pointing to the on-premises TFS instance) or to deploy to other environments.

Summary

As I said before, this is an edge case, but I hope it shows how flexible the new build and release systems can be for both TFS and VSTS.

Release Manager 2015 stalls at the ‘uploading components’ step and error log shows XML load errors

Whilst setting up a Release Management 2015.1 server we came across a strange problem. The installation appeared to go OK. We were able to install the server and, from the client, create a simple vNext release pipeline and run it. However, the release stalled on the ‘Upload Components’ step.

Looking in the event log of the VM running the Release Management server we could see many, many errors, all complaining about invalid XML, all in the general form

 

Message: Object reference not set to an instance of an object.: \r\n\r\n   at Microsoft.TeamFoundation.Release.Data.Model.SystemSettings.LoadXml(Int32 id)

 

Note: The assembly it was complaining about varied, but all were Release Management Deployer related.

We tried a reinstall on a new server VM, but got the same results.

It turns out the issue was due to the service account that the Release Management server was running as; this was the only thing common between the two server VM instances. We swapped to use ‘Network Service’ and everything leapt into life. All we could assume was that some group policy or similar setting on the service account was placing some restriction on assembly or assembly config file loading.