Building VMs for use in TFS Lab Management environments

We have recently gone through an exercise to provide a consistent set of prebuilt configured VMs for use in our TFS Lab Management environments.

This is not an insignificant piece of work, as this post by Rik Hepworth discusses, detailing all the IT pro work he had to do to create them. And this is all before we even think about the work required to create TFS deployment builds and the like.

It is well worth a read if you are planning to provision a library of VMs for Lab Management, as it has some really useful tips and tricks.

Why is my TFS Lab build picking a really old build to deploy?

I was working on a TFS lab deployment today and, to speed up testing, I set it to pick the <Latest> build of the TFS build that actually builds the code, as opposed to queuing a new one each time. It took me ages to remember/realise why it kept trying to deploy an ancient build that had long since been deleted from my test system.

The reason was that <Latest> means the last successful build, not the last partially successful build. I had a problem with my code build that meant a test was failing (a custom build activity on my build agent was the root cause of the issue). Once I fixed my build box so the code build did not fail, the lab build was able to pick up the files I expected and deploy them.

Note that if you tell the lab deploy build to queue its own build it will attempt to deploy that build even if it is only partially successful.
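
If you want to check which build <Latest> will actually resolve to, something like the following PowerShell against the TFS client object model will show the newest fully successful build. Treat it as a sketch: the collection URL, team project and build definition names are placeholders for your own, and it needs to run on a machine with Team Explorer installed.

# load the TFS client object model
[Reflection.Assembly]::LoadWithPartialName("Microsoft.TeamFoundation.Client") | Out-Null
[Reflection.Assembly]::LoadWithPartialName("Microsoft.TeamFoundation.Build.Client") | Out-Null

# connect to the collection and get the build server service
$tpc = [Microsoft.TeamFoundation.Client.TfsTeamProjectCollectionFactory]::GetTeamProjectCollection([Uri]"http://tfsserver:8080/tfs/DefaultCollection")
$buildServer = $tpc.GetService([Microsoft.TeamFoundation.Build.Client.IBuildServer])

# ask for the newest build with a status of Succeeded, i.e. what <Latest> will pick
$spec = $buildServer.CreateBuildDetailSpec("MyTeamProject", "Main.CI")
$spec.Status = [Microsoft.TeamFoundation.Build.Client.BuildStatus]::Succeeded
$spec.QueryOrder = [Microsoft.TeamFoundation.Build.Client.BuildQueryOrder]::FinishTimeDescending
$spec.MaxBuildsPerDefinition = 1
$buildServer.QueryBuilds($spec).Builds | Select-Object BuildNumber, Status, DropLocation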

Fix for ‘Cannot install test agent on these machines because another environment is being created using the same machines’

I recently posted about adding extra VMs to existing environments in Lab Management. Whilst following this process I hit a problem: I could not create my new environment because it failed at the verify stage. It was fine adding the new VMs, but the one I was reusing gave the error ‘Microsoft test manager cannot install test agent on these machines because another environment is being created using the same machines’.

I had seen this issue before, so I tried a variety of things that had sorted it in the past: removing the TFS agent on the VM, manually installing and trying to configure it, reading through the Test Controller logs, all to no effect. I eventually got a solution today with the help of Microsoft.

The answer was to do the following on the VM showing the problem:

  1. Kill the TestAgentInstaller.exe process, if it is running on the failing machine
  2. Delete the “TestAgentInstaller” service using the sc delete testagentinstaller command (gotcha here: use a DOS style command prompt, not PowerShell, as sc has a different default meaning in PowerShell, where it is an alias for Set-Content; if using PowerShell you need the full path to sc.exe). See the PowerShell sketch after this list
  3. Delete the c:\Windows\VSTLM_RES folder
  4. Restart the machine and then try the Lab Environment creation again and all should be OK
  5. As usual, once the environment is created you might need to restart it to get all the test agents to link up to the controller OK
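
As a rough PowerShell equivalent of steps 1 to 3 (run from an elevated prompt on the failing VM; sc.exe is called by its full name to dodge the Set-Content alias mentioned above):

# kill the installer process if it is running
Stop-Process -Name TestAgentInstaller -Force -ErrorAction SilentlyContinue
# delete the service, being explicit that we want sc.exe not Set-Content
& "$env:windir\System32\sc.exe" delete TestAgentInstaller
# remove the Lab Management resource folder
Remove-Item "$env:windir\VSTLM_RES" -Recurse -Force
# restart to complete the cleanup
Restart-Computer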

So it seems that the removal of the VM from its old environment left some debris that was confusing the verify step. This seems to happen only rarely, but it can be a bit of a show stopper if you can’t get around it.

Adding another VM to a running Lab Management environment

If you are using a network isolated environment in TFS Lab Management there is no way to add another VM unless you rebuild and redeploy the environment. However, if you are not network isolated you can at least avoid the redeploy issues to a degree.

I had an SCVMM based environment that was not network isolated and contained a single non-domain joined server. This was used to host a backend simulation service for a project. In the next phase of the project we needed to test accessing this service via RDP/Terminal Server, so I wanted to add a VM to the environment to act in this role.

So firstly I deleted the environment in MTM; as the VMs in the environment are not network isolated they are not removed. The only change is that the XML metadata is removed from the properties description.

I now needed to create my new VM. I had thought I could create a new environment adding the existing deployed and running VM as well as a new one from the SCVMM library. However, you get the error ‘cannot create an environment consisting of both running and stored VMs’.

So here you have two options.

  1. Store the running VM in the library and redeploy (see the sketch after this list)
  2. Deploy out, via SCVMM, a new VM from some template or stored VM
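
For option 1 the store step can be done with SCVMM PowerShell; a sketch only, assuming made-up VM and library server names, and noting that storing removes the VM from its host:

# stop the VM, then store it in the SCVMM library
$vm = Get-SCVirtualMachine -Name "MyLabVM"
Stop-SCVirtualMachine -VM $vm | Out-Null
Store-SCVirtualMachine -VM $vm -LibraryServer "scvmmlibrary" -SharePath "\\scvmmlibrary\MSSCVMMLibrary"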

Once this is done you can create the new environment using the running VMs or stored images depending on the option chosen in the previous step.

So there is no huge saving in time or effort. I just wish there was a way to edit deployed environments.

Using SYSPREP’d VM images as opposed to Templates in a new TFS 2012 Lab Management Environment

An interesting change with Lab Management 2012 and SCVMM 2012 is that templates become a lot less useful. In the SCVMM 2008 versions you had a choice when you stored VMs in the SCVMM library:

  • You could store a fully configured VM
  • or a generalised template.

When you added a template to a new environment you could enter details such as the machine name, domain to join, product key etc. If you try this with SCVMM 2012 you just see the message ‘These properties cannot be edited from Microsoft Test Manager’.

So you are meant to use SCVMM to manage everything about the templates, which is not great if you want to do everything from MTM. However, is that the only solution?

An alternative is to store a SYSPREP’d VM as a Virtual Machine in the SCVMM library. This VM can be added to an environment as many times as required (though if added more than once you are asked if you are sure).

This method does, however, bring problems of its own. When the environment is started, assuming it is network isolated, the second network adaptor is added as expected. However, as there is no agent on the VM it cannot be configured; usually for a template Lab Management would sort all this out, but because the VM is SYSPREP’d it is left sitting at the mini setup ‘Pick your region’ screen.

You need to manually configure the VM. The best process I have found is:

  1. Create the environment with your standard VMs and the SYSPREP’d one
  2. Boot the environment; the standard ready to use VMs get configured OK
  3. Manually connect to the SYSPREP’d VM and complete the mini setup. You will now have a PC on a workgroup
  4. The PC will have two network adapters, neither connected to your corporate network; both are connected to the network isolated virtual LAN. You have a choice
    • Connect the legacy adaptor to your corporate LAN, to get at a network share via SCVMM
    • Mount the TFS Test Agent ISO
  5. Either way you need to manually install the Test Agent and run the configuration (just select the defaults; it should know where the test controller is). This will configure the network isolated adaptor onto the 192.168.23.x network
  6. Now you can manually join the isolated domain (see the sketch after this list)
  7. Reboot the VM (or the environment) and all should be OK
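
For steps 6 and 7, the domain join can be done from PowerShell on the VM; a minimal sketch, where the isolated domain name and account are made up, so substitute those for your own lab:

# join the network isolated domain, then reboot
Add-Computer -DomainName "lab.local" -Credential (Get-Credential lab\administrator)
Restart-Computer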

All a bit long winded, but it does mean it is easier to build generalised VMs from MTM without having to play around in SCVMM too much.

I think it would all be a good deal easier if the VM had the agents on it before the SYSPREP. I have not tried this yet, but in my opinion that is true of all VMs used for Lab Management: get the agents on as early as you can, it just speeds everything up.
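
If you do try that approach, the generalise step itself is just the standard SYSPREP call, run on the VM once the agents are installed:

# generalise the VM ready for storing in the SCVMM library
& "$env:windir\System32\Sysprep\sysprep.exe" /generalize /oobe /shutdown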

Thanks for attending my webinar on lab management

Thanks to everyone who attended my webinar session today on TFS Lab Management; I hope the audio issues were not too much of a problem and you found the session useful. We are looking into the causes of the audio problem, so hopefully at the next webinar people will not need to dial in via the phone unless they want to.

A number of people asked for the slides, you can find a copy of them here.

As I mentioned in the session, if you want a look at Lab Management you can have a go yourself using the HOL in Brian Keller’s TFS 2012 VM, or watch the video I did at Techday 2010, an end to end demo of Lab Management.

I also mentioned a couple of Microsoft case studies that might be of interest.

What machine name is being used when you compose an environment from running VMs in Lab Management?

This is a follow up to my older post on a similar subject.

When composing a new Lab Environment from running VMs the PC you are running MTM on needs to be able to connect to the running VMs. It does this using IP so at the most basic level you need to be able to resolve the name of the VM to an IP address.

If your VM is connected to the same LAN as your PC, but not in the same domain, the chances are that DNS name resolution will not work. I find the best option is to put a temporary entry in your local hosts file, keeping it for just as long as the creation process takes.

But what should this entry be? Should it be the name of the VM as it appears in the MTM new environment wizard?

It turns out the answer is no; it needs to be the name as it appears in the SCVMM console.

So the hosts file contains the correct entries for the FQDN (watch out for typos here, a mistyped IP address only adds to the confusion) e.g.

10.10.10.100 wyfrswin7.wyfrs.local
10.10.10.45 shamrockbay.wyfrs.local
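
If you would rather script the hosts file change, a minimal sketch using the example entries above (run elevated; the clean-up filter is specific to this example domain):

# add the temporary entries to the local hosts file
$hostsFile = "$env:windir\System32\drivers\etc\hosts"
Add-Content -Path $hostsFile -Value "10.10.10.100 wyfrswin7.wyfrs.local"
Add-Content -Path $hostsFile -Value "10.10.10.45 shamrockbay.wyfrs.local"

# once the environment is created, strip them out again
(Get-Content $hostsFile) | Where-Object { $_ -notmatch 'wyfrs\.local' } | Set-Content $hostsFile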

Once all this is set, just follow the process in my older post to enable the connection so the new environment wizard can verify OK.

Remember the firewall on the VMs may also be an issue. Just for the period of the environment creation I often disable it.
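
On the VM that is just the standard netsh calls (elevated prompt; works from PowerShell too):

# turn the firewall off for the creation period only...
netsh advfirewall set allprofiles state off
# ...then back on once the environment verifies
netsh advfirewall set allprofiles state on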

Also, Wireshark is your friend; it will show if the machine you think is responding is the one you really want.

Lab Management with SCVMM 2012 and /labenvironmentplacementpolicy:aggressive

I did a post a year or so ago about setting up TFS Labs and mentioned the command:

C:\Program Files\Microsoft Team Foundation Server 2010\Tools>tfsconfig lab /hostgroup /collectionName:myTpc /labenvironmentplacementpolicy:aggressive /edit /Name:"My hosts group"

This can be used to tell TFS Lab Management to place VMs using any memory that is assigned to stopped environments. This allows a degree of over commitment of resources.

As I discovered today, this command only works for SCVMM 2010 based systems. If you try it you just get a message saying it is not supported on SCVMM 2012. There appears to be no equivalent for 2012.

However, you can use features such as dynamic memory within SCVMM 2012, so all is not lost.
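
As an illustration, something like this SCVMM PowerShell should enable dynamic memory on a lab VM. Treat it as a sketch only: the VM name and memory figures are made up, and the VM should be stopped while its hardware settings are changed.

# enable dynamic memory on a stopped lab VM
Get-SCVirtualMachine -Name "MyLabVM" |
    Set-SCVirtualMachine -DynamicMemoryEnabled $true -MemoryMB 1024 -DynamicMemoryMaximumMB 4096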

Installing a DB from a DACPAC using PowerShell as part of TFS Lab Management deployment

I have been battling setting up a DB deployed via the SQL 2012 DAC tools and PowerShell. My environment was a network isolated pair of machines:

  • DC – the domain controller and SQL 2012 server
  • IIS – A web front end

As this is network isolated I could only run scripts on the IIS server, so my DB deploy needed to be remote. The script I ended up with was:

param(
    [string]$sqlserver = $( throw "Missing: parameter sqlserver"),
    [string]$dacpac = $( throw "Missing: parameter dacpac"),
    [string]$dbname = $( throw "Missing: parameter dbname") )

Write-Host "Deploying the DB with the following settings"
Write-Host "sqlserver:   $sqlserver"
Write-Host "dacpac: $dacpac"
Write-Host "dbname: $dbname"

# load in DAC DLL (requires config file to support .NET 4.0)
# change file location for a 32-bit OS
add-type -path "C:\Program Files (x86)\Microsoft SQL Server\110\DAC\bin\Microsoft.SqlServer.Dac.dll"

# make DacServices object, needs a connection string
$d = new-object Microsoft.SqlServer.Dac.DacServices "server=$sqlserver"

# register events, if you want ’em
register-objectevent -in $d -eventname Message -source "msg" -action { out-host -in $Event.SourceArgs[1].Message.Message } | Out-Null

# Load dacpac from file & deploy to the database named in $dbname
$dp = [Microsoft.SqlServer.Dac.DacPackage]::Load($dacpac)
$d.Deploy($dp, $dbname, $true) # the true is to allow an upgrade, could be parameterised, also can add further deploy params

# clean up event
unregister-event -source "msg"

Remember the SQL 2012 DAC tools only work with PowerShell 3.0 as they have a .NET 4 dependency.

This was called within the Lab Build using the command line:

cmd /c powershell $(BuildLocation)\SQLDeploy.ps1 dc $(BuildLocation)\Database.dacpac sabs

All my scripts worked correctly when I ran them locally on the command line; they were also starting from within the build, but failing with errors along the lines of:

Deployment Task Logs for Machine: IIS
Accessing the following location using the lab service account: blackmarble\tfslab, \\store\drops.
Deploying the DB with the following settings
sqlserver:   dc
dacpac:
\\store\drops\Main.CI\Main.CI_20130314.2\Debug\Database.dacpac
dbname: Database1
Initializing deployment (Start)
Exception calling "Deploy" with "3" argument(s): "Could not deploy package."
Initializing deployment (Failed)
At
\\store\drops\Main.CI\Main.CI_20130314.2\Debug\SQLDeploy.ps1:26
char:2
+  $d.Deploy($dp, $dbname, $true) # the true is to allow an upgrade
+  ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo          : NotSpecified: (:) [], MethodInvocationException
+ FullyQualifiedErrorId : DacServicesException
Stopped accessing the following location using the lab service account: blackmarble\tfslab, \\store\drops.

Though not obvious from the error message, the issue was who the script was running as. The TFS agent runs as a machine account, and this had no rights to access the SQL instance on the DC. Once I granted the computer account IIS$ suitable rights on the SQL box all was OK. The alternative would have been to enable mixed mode authentication and use a connection string in the form

“server=dc;User ID=sa;Password=mypassword”
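
For the route I took, the grant itself can be scripted; a hedged sketch using Invoke-Sqlcmd (the dbcreator role here is illustrative, your deployment may well need narrower rights, and it assumes the SQL PowerShell module is available):

# grant the IIS machine account access on the SQL instance on the DC
Invoke-Sqlcmd -ServerInstance "dc" -Query @'
CREATE LOGIN [BLACKMARBLE\IIS$] FROM WINDOWS;
ALTER SERVER ROLE [dbcreator] ADD MEMBER [BLACKMARBLE\IIS$];
'@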

So now I can deploy my DB on a new build.