Repost: What I learnt extending my VSTS Release Process to on-premises Lab Management Network Isolated Environments
This is a repost of a guest article first posted on the Microsoft UK Developers Blog: How to extend a VSTS release process to on-premises
Note that since I wrote the original post there have been some changes to VSTS and the release of TFS 2015.2 RC1. These mean there is no longer an option to pull build artifacts from an external TFS server as part of a release, invalidating some of the options this post discusses. I have struck out the outdated sections. The rest of the post is still valid, especially the section on where to update configuration settings. The release of TFS 2015.2 RC1 actually makes many of the options easier, as you don’t have to bridge between on-premises TFS and VSTS; both build and release features are on the same server.
Background
Visual Studio Team Services (VSTS) provides a completely new version of Release Management, replacing the version shipped with TFS 2013/2015. This new system is based on the same cross platform agent model as the new vNext build system shipped with TFS 2015 (and also available on VSTS). At present this new Release Management system is only available on VSTS, but the features timeline suggests we should see it on-premises in the upcoming 2015.2 update.
You might immediately think that, as this feature is only available in VSTS at present, you cannot use this new release management system with on-premises services, but that is not the case. The Release Management team have provided an excellent blog post on running an agent connected to your VSTS instance inside your on-premises network to enable hybrid scenarios.
This works well for deploying to domain connected targets, especially if you are using Azure Active Directory Sync to sync your corporate domain with AAD, giving you a directory-backed VSTS instance. In this case you can use a single corporate domain account to connect to VSTS and to the domain services you wish to deploy to from the on-premises agent.
However, I make extensive use of TFS Lab Management to provide isolated dev/test environments (linked to an on-premises TFS 2015.1 instance). If I want to deploy to these VMs it adds complexity in how I manage authentication, as I don’t want to have to place a VSTS build agent in each transiently created dev/test lab: firstly because it is complex, and secondly because there is a cost to having more than one self-provisioned vNext build agent.
It is fair to say that deploying to an on-premises Lab Management environment from a VSTS instance is an edge case, but the same basic process will be needed when the new Release Management features become available on-premises.
Now, I would be the first to say that there is a good case to look at a move away from Lab Management to using Azure Dev Labs which are currently in preview, but Dev Labs needs fuller Azure Resource Manager support before we can replicate the network isolated Lab Management environments I need.
The Example
So at this time, I still need to be able to use the new Release Management with my current Lab Management network isolated labs, but this raises some issues of authentication and just what is running where. So let us work through an example; say I want to deploy a SQL DB via a DACPAC and a web site via MSDeploy on the infrastructure shown below.
Both the target SQL and Web servers live inside the Lab Management isolated network on the proj.local domain, but have DHCP assigned addresses on the corporate LAN in the form vslm-[guid].corp.com (managed by Lab Management), so I can access them from the build agent with appropriate credentials (a login for the proj.local domain within the network isolated lab).
The first step is to install a VSTS build agent linked to my VSTS instance; once this is done we can start to create our release pipeline. The first stage is to get the artifacts we need to deploy, i.e. the output of builds. These could be XAML or vNext builds on the VSTS instance, builds from the on-premises TFS instance, or Jenkins builds. Remember that a single release can deploy any number of artifacts, i.e. the output of a number of builds. It is this fact that makes this setup not as strange as it initially appears; we are just using VSTS Release Management to orchestrate a deployment to on-premises systems.
The problem we have is that though our release now has artifacts, we now need to run some commands on the VM running the vNext Build Agent to do the actual deployment. VSTS provides a number of deployment tasks to help in this area. Unfortunately, at the time of writing, the list of deployment tasks in VSTS is somewhat Azure-focused, so not that much use to me.
This will change over time as more tasks get released, you can see what is being developed on the VSO Agent Task GitHub Repo (and of course you could install versions from this repo if you wish).
So for now I need to use my own scripts; as we are on a Windows-based system (not Linux or Mac) this means PowerShell scripts.
The next choice becomes ‘do I run the script on the Build Agent VM or remotely on the target VM’ (within the network isolated environment). The answer is the age-old consultant’s answer: ‘it depends’. In the case of both DACPAC and MSDeploy deployments, there is the option to do a remote deployment i.e. run the deployment command on the Build Agent VM and have it remotely connect to the target VMs in the network isolated environment. The problem with this way of working is that I would need to open more ports on the SQL and Web VMs to allow the remote connections; I did not want to do this.
The alternative is to use PowerShell remoting. In this model I trigger the script on the Build Agent VM, but it uses PowerShell remoting to run the command on the target VM. For this I only need to enable remote PowerShell on the target VMs, which is done by running the following command on each target VM and following the prompts to set up the required services and open the correct ports on the target VM’s firewall.
winrm quickconfig
This is something we are starting to do as standard to allow remote management via PowerShell on all our VMs.
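Once remoting is enabled, the model is simple to sketch. The example below is purely illustrative (the VM name, credential and script path are placeholders, not taken from my actual environment); it shows a script held on the Build Agent VM being executed remotely on a target VM in the isolated lab:

# Placeholder example of PowerShell remoting from the Build Agent VM
# Credentials are for the proj.local domain inside the network isolated lab
$cred = Get-Credential -Credential "proj\deployer"

# Run a local script on the target VM, addressed by its DHCP name on the corporate LAN
Invoke-Command -ComputerName "vslm-12345678.corp.com" -Credential $cred `
    -FilePath ".\Deploy-Database.ps1" -ArgumentList "C:\Drops\MyDb.dacpac", "MyTargetDb", "localhost"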
So at this point it all seems fairly straightforward: run a couple of remote PowerShell scripts and all is good. But no, there is a problem.
A key feature of Release Management is that you can provide different configurations for different environments e.g. the DB connection string is different for the QA lab as opposed to production. These values are stored securely in Release Management and applied as needed.
The way these variables are presented is as environment variables on the Build Agent VM, hence they can be accessed from PowerShell in the form $env:VARIABLENAME. IT IS IMPORTANT TO REMEMBER that they are not presented on any target VMs in the isolated lab network environment, nor to those VMs via PowerShell remoting.
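For example, in a script running on the Build Agent VM (the variable name here is just an illustration), a Release Management variable surfaces like this:

# On the Build Agent VM only - Release Management variables appear as environment variables
$targetDb = $env:TARGETDBNAME
Write-Verbose "Target database is $targetDb"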
So if we intend to use remote PowerShell execution for our deployments, we can’t just access settings as environment variables within the scripts being run remotely; we have to pass the values in as PowerShell command line arguments.
This works OK for the DACPAC deployment as we only need to pass in a few fixed arguments. For example, using the Release Management variables in their $(variable) form, the PowerShell script arguments for the package name, target DB name and target server become:
-DBPackage $(DBPACKAGE) -TargetDBName $(TARGETDBNAME) -TargetServer $(TARGETSERVERNAME)
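These arguments feed a thin wrapper script that is run remotely on the SQL VM. A minimal sketch of such a wrapper might look like the following (the SqlPackage.exe path is an assumption; adjust it for whatever is installed on the target VM):

# Deploy-Database.ps1 - minimal sketch of the wrapper run remotely on the SQL VM
param(
    [string]$DBPackage,     # path to the copied .dacpac on the target VM
    [string]$TargetDBName,  # database to create or upgrade
    [string]$TargetServer   # SQL Server instance to deploy to
)

# Assumed install location of the DAC framework; adjust as required
$sqlPackage = "C:\Program Files (x86)\Microsoft SQL Server\120\DAC\bin\SqlPackage.exe"

& $sqlPackage /Action:Publish /SourceFile:$DBPackage `
    /TargetServerName:$TargetServer /TargetDatabaseName:$TargetDBName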
However, for the MSDeploy deployment there is no simple fixed list of parameters. This is because, as well as parameters like package names, we need to modify the setparameters.xml file at deployment time to inject values for our web.config from the release management system.
The solution I have adopted is not to try to pass this potentially long list of arguments into a script to be run remotely; the command line just becomes hard to edit without making errors, and needs to be updated each time we add an extra variable.
The alternative is to update the setparameters.xml file on the Build Agent VM before we attempt to run the deployment remotely. To this end I have written a custom build task to handle the process, which can be found on my GitHub repo. This updates a named setparameters.xml file using token replacement based on environment variables set by Release Management. If you would rather automatically find a number of setparameters.xml files using wildcards (because you are deploying many sites/services) and update them all with a single set of tokens, have a look at Colin Dembovsky’s build task which does just that.
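The approach, in rough outline, is to swap each token in the file for the value of the matching environment variable. This is just a sketch of the idea, not the actual task code, and the __TOKEN__ format is an assumption:

# Sketch of setparameters.xml token replacement driven by environment variables
param([string]$SetParametersFile)

$content = Get-Content $SetParametersFile -Raw

# Replace each __TOKEN__ with the matching environment variable, leaving unknown tokens alone
$content = [regex]::Replace($content, '__(\w+)__', {
    param($match)
    $value = [Environment]::GetEnvironmentVariable($match.Groups[1].Value)
    if ($value) { $value } else { $match.Value }
})

Set-Content -Path $SetParametersFile -Value $content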
So given this technique my release steps become:
1. Get the artifacts from the builds to the Build Agent VM.
2. Update the setparameters.xml file using environment variables on the Build Agent VM.
3. Copy the downloaded (and modified) artifacts to all the target machines in the environment.
4. On the SQL VM run the sqlpackage.exe command to deploy the DACPAC using remote PowerShell execution.
5. On the Web VM run the MSDeploy command using remote PowerShell execution.
The PowerShell scripts I run in the final two tasks are just simple wrappers around the underlying commands. The key fact is that, because they are scripts, they allow remote execution. The targeting of the execution is done by associating each task with a target machine group, and filtering either by name or, in my case, by role to target specific VMs.
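For example, the web deployment wrapper run on the Web VM could be little more than the following sketch (the msdeploy.exe path and parameters are assumptions, not my exact script):

# Deploy-Website.ps1 - minimal sketch of the wrapper run remotely on the Web VM
param(
    [string]$PackagePath,        # path to the copied MSDeploy .zip package on the target VM
    [string]$SetParametersFile   # the setparameters.xml already updated on the Build Agent VM
)

# Assumed Web Deploy install location; adjust as required
$msdeploy = "C:\Program Files\IIS\Microsoft Web Deploy V3\msdeploy.exe"

& $msdeploy "-verb:sync" "-source:package=$PackagePath" `
    "-dest:auto" "-setParamFile:$SetParametersFile"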
In my machine group I have defined both my SQL and Web VMs using their names on the corporate LAN, assigning a role to each to make targeting easier. Note that it is here, in the machine group definition, that I provide the credentials required to access the VMs in my network isolated environment, i.e. a proj.local set of credentials.
Once I have all these settings in place I am able to build a product on my VSTS build system (or my on-premises TFS instance) and, using this VSTS-connected but on-premises Build Agent, deploy my DB and web site to a Lab Management network isolated test environment.
There is no reason why I cannot add more tasks to this release pipeline to perform more actions such as run tests (remember the network isolated environment already has TFS Test Agents installed, but they are pointing to the on-premises TFS instance) or to deploy to other environments.
Summary
As I said before, this is an edge case, but I hope it shows how flexible the new build and release systems can be for both TFS and VSTS.