But it works on my PC!

The random thoughts of Richard Fennell on technology and software development

You can’t edit a TFS 2010 build workflow template with just Team Explorer installed

I tried to open a TFS 2010 build template within the Visual Studio shell (the bit that gets installed when you put Team Explorer onto a PC) and saw the error “The document contains errors that must be fixed before the designer can be loaded”.


At the bottom of the screen it showed that all the underlying assemblies could not be found.

The solution is simple: install a 'real' version of Visual Studio (I put on Premium). It seems that the shell does not provide all the assemblies that are needed. Once I did this I could edit the XAML with no problems.

PDC 2010 thoughts - the next morning

I sat in the office yesterday with a beer in my hand watching the PDC2010 keynote. I have to say I preferred this to the option of a flight, jet lag and a less than comfortable seat in a usually overly cooled conference hall. With the Silverlight streaming the experience was excellent, especially as we connected an Acer 1420P to our projector/audio via a single HDMI cable and it just worked.

So what do you lose by not flying out? Well, the obvious is the ‘free’ Windows Phone 7 the attendees got; too many people IMHO get hung up on the swag at conferences, you go for knowledge not toys. They also forget that they (or their company) paid for the item anyway in their conference fee. More seriously, you miss out on the chats between the sessions and, as the conference is on campus, the easier access to the Microsoft staff. Also, the act of travelling to a conference isolates you from the day to day interruptions of the office; the online experience does not, and you will have to stay up late to view sessions live due to timezones. The whole travelling experience still cannot be replaced by the online experience, no matter how good the streaming.

However, even though I don’t get the ‘conference corridor experience’, it does not mean I cannot check out sessions. It is great to see they are all available free and live, or as recordings available immediately afterwards if I don’t want to stay up.

The keynote was pretty much as I had expected. There were new announcements but nothing that was ground breaking, just good vNext steps. I thought the best place to start for me was the session “Lessons learned from moving Team Foundation Server to the cloud”. This was on TFS, an obvious area of interest for me, but more importantly it was real world experience of moving a complex application to Azure. This is something that is going to affect all of us if Microsoft’s bet on the cloud is correct. It seems that, though there are many gotchas, the process was not as bad as you would expect. For me the most interesting point was that the port to Azure caused changes to the codebase that actually improved the original implementation, either in manageability or performance. Also that many of the major stumbling blocks were business/charging models, not technology. This is going to affect us all as we move to service platforms like Azure or even internally hosted equivalents like AppFabric.

So one session watched, what to watch next?

Common confusion I have seen with Visual Studio 2010 Lab Management

With any new product there can be some confusion over the exact range and scope of features, and this is just as true for VS2010 Lab Management as for any other. In fact, given the number of moving parts (the infrastructure you need in place to get it running), it can be more confusing than average. In this post I will cover the questions I have seen most often.

What does ‘Network Isolation’ really mean?

The biggest confusion I have seen is over the fact that Lab Management allows you to run a number of copies of a given test environment, with each instance of an environment ‘network isolated’ from the others. This means that each instance of the environment can have server VMs named the same without errors being generated. WHAT IT DOES NOT MEAN is that each of these environments is fully isolated from your corporate or test LAN. Think about it, how could this work? I am sad to say there is still no shipment date for Microsoft Magic Pixie Net (MMPN); until this is available we will still need a logical connection to any virtual machine under test, else we cannot control/monitor it.

So what does ‘Network Isolation’ actually mean? Well it basically means Lab Manager will add a second network card to each VM in your environment (with the exception of domain controllers; I will come back to that). These secondary connections are the way you usually manage the VMs in the environment, so you end up with something like the following:


Lab Manager creates the 192.168.23.x virtual LAN which all the VMs in the environment connect to. If you want to change the IP address range this is set in the TFS administration console, but I suspect needing to change this will be rare.

If the PCs in your environment are in a workgroup there is no more to do, but if you have a domain within your environment (i.e. you included a test domain controller VM in your environment, as shown above) you also need to tell the Lab Management environment which server is the domain controller. THIS IS VERY IMPORTANT. This is done in the Visual Studio 2010 Test Manager application where you set up the environment.

When all this is done and the environment is deployed, a second LAN card is added to all VMs in the environment (with the exception of the domain controller you told it about, if present). These LAN cards are connected to the corporate LAN; an IP address is provided by your corporate LAN DHCP server and a name is assigned in a form something like LAB[Guid].corpdomain.com (you can alter this domain name to something like LAB[Guid].test.corpdomain.com if you want in the TFS administration console). This second LAN connection is special in that the VMs’ NETBIOS names are not broadcast over it onto the corporate LAN, thus allowing multiple copies of the ‘network isolated’ environment to be run. Each VM has a unique name on the corporate LAN, but keeps its original name within the test (192.168.x.x) environment.

Other than blocking NETBIOS, this ‘special connection’ is not restricted in any other way. So any of the test VMs can use their own connection to the corporate LAN to access any corporate (or internet) resources such as patch update servers. The only requirement will be to log in to the corporate domain if authentication is required; remember, on the test environment you will be logged into the test domain or local workgroup.
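A quick way to see this from inside one of the test VMs is simply to list its network adapters; assuming a standard setup you should see one card with a 192.168.23.x address on the isolated LAN and a second with an address handed out by the corporate DHCP server:

      ipconfig /all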

I mentioned that the test domain controller is not connected to the corporate LAN. This is to make sure corporate users don’t try to authenticate against it by mistake and to stop different copies of the test domain controller trying to sync.

All clear? So ‘network isolated’ does not mean fully isolated; it means the ability to have multiple copies of the same environment running at the same time, with the magic done behind the scenes by Lab Management. Maybe not the best piece of feature naming in the world!

So how does a tester actually connect to the test VMs from their PC?

Well obviously they don’t use a magic MMPN connection; there has to be a valid logical connection. There are actually two possible answers here. I suspect the most common will be via remote desktop straight to the guest test VMs, using the LAB[Guid].corpdomain.com name. You might be thinking ‘how do I know these IDs?’; well, you can get them from the Test Manager application by looking at a VM’s system info in any running environment. Because you can look them up in this way, a tester can either use the Windows RDP client itself or, more probably, just connect to the VMs from within Test Manager, which uses RDP behind the scenes.
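If a tester does want to use the RDP client directly rather than going through Test Manager, it is just the normal command against that generated name (the [Guid] part below is a placeholder for whatever Test Manager shows for the VM):

      mstsc /v:LAB[Guid].corpdomain.com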

The other option is to use what is called a host connection. This is when Test Manager connects to the test VMs via the Hyper-V host. For this to work the tester needs suitable Hyper-V rights and the correct tools on their local PC, not just Test Manager. The same could also be achieved using the Hyper-V manager or SCVMM console. Host mode is the way to connect to a test domain controller that has no direct connection to the corporate LAN.

The choice of connection and tool will depend on what the tester is trying to do. I would expect Test Manager to be the tool of choice in most cases.

Do I need Network Isolation – is there another option?

This all depends on what you want to do; there are good descriptions of the possible architectures in the Lab Management documentation. If you don’t think ‘network isolation’ as described above is right for you, the only other option that can provide similar environment separation is to not run the environments ‘network isolated’, but to provide each environment with a single explicit connection to the corporate LAN via a firewall such as TMG to control that connection.

It goes without saying that this is more complex than using the standard ‘network isolated’ model built into Lab Management, so make sure it is really worth the effort before starting down this route.

What agents do I need to install?

There are a number of agents involved in Lab Management; these handle network isolation management, deployment and testing. The ones you need depend on what you are trying to do. If you want all the features then, not surprisingly, you need them all; if this is what you want to do then use the VMPrep tool, it makes life easier. If you don’t want them all (though it might be easier to just install all of them as standard) you can pick and choose.

If you want to gather test data you need the test agent, and if you want to deploy code you need the lab workflow agent. The less obvious one is that for ‘network isolation’ you need the Lab Agent installed; it is through this agent that the network isolation LAN is configured.

Any other limitations I might have missed?

The most obvious is that many companies will use failover clustering and a SAN to make a resilient Hyper-V cluster. Unfortunately this technology is not currently supported by Lab Management. This is easy to miss as, to my knowledge, it is only referred to once in the documentation, in an FAQ section.

The effect of this is that you cannot have shared SAN storage between any Hyper-V hosts or, more importantly, between the VMM Library and the Hyper-V hosts. This means that all deployment of environments has to be over the LAN; the faster SAN-to-SAN operations cannot be used as these need clustering.

I suppose the lack of clustering also means you cannot hot migrate environments between Hyper-V hosts, but I don’t see this as much of an issue; these are meant to be lab test environments, not live production high-resilience VMs.

This is a good reason to make sure that you separate your production Hyper-V hosts from your test ones. Make the production servers a failover cluster and the test ones just a host group. Let Lab Manager work out which server in the host group (assuming there is more than one) to place the environment on.



So I hope that helps a bit. I am sure I will find more common questions; I will post about them as they emerge.

Experiences running multiple instances of 2010 build service on a single VM

I think my biggest issue with TFS 2010 is that a build controller is tied to a single Team Project Collection (TPC). For a company like mine, where we run a TPC for each client, this means we have had to create a good number of virtualised build controllers/agents. It is especially irritating as I know that the volume of builds on any given controller is low.

A while ago Jim Lamb blogged about how you could define multiple build services on a single box, but the post was full of caveats about how it was not supported/recommended etc. Since that post there has been some discussion of this technique, and I think the general feeling is: yes, it is not supported, but there is no reason it will not function perfectly well as long as you consider some basic limitations:

  1. The two build controllers don’t know about each other, so you can easily have two builds running at the same time, which will have an unpredictable effect on performance.
  2. You have to make sure that the two instances don’t share any workspace disk locations, else they will potentially start overwriting each other.
  3. Remember building code is usually IO bound, not CPU bound, so when creating your build system think a lot about the disk; throwing memory and CPU at it will have little effect. The fact that we run our build services on VMs which use a SAN should mitigate much of this potential issue.
  4. The default when you install a controller/agent on a box is for one agent to be created for each core on the box. This rule is still a good idea, but if you are installing two controller/agent sets on a box make sure you don’t define more agents than cores (for me this means my build VM has to have 2 virtual CPUs as I am running 2 controller/agent pairs).

Jim’s instructions are straightforward, but I did hit a couple of snags:

  • When you enter the command line to create the instance, make sure there are spaces after the equals signs for the parameters, else you get an error:

sc.exe create buildMachine-collection2 binpath= "C:\Program Files\Microsoft Team Foundation Server 2010\Tools\TfsBuildServiceHost.exe /NamedInstance:buildMachine-collection2" DisplayName= "Visual Studio Team Foundation Build Service Host (Collection2)"
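If you want to confirm the new service host registered correctly before going any further, a quick query using the same instance name does the job:

      sc.exe query buildMachine-collection2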

  • I cannot stress enough how important it is to give the new instances sensible names, especially as their numbers grow. Jim suggested naming them after the TPC they service; for me this is a bad move as at any given time we are working for a fairly small number of clients, but the list changes as projects start and stop. It is therefore easier for me to name a controller for the machine it is hosted on, as controllers will be reassigned between TPCs based on need. So I settled on names in the form ‘build1-collection2’ rather than a TPC-based one. These are easy to associate with the VMs in use when you see them in VS2010.
  • When I first tried to get this all up and running and launched the admin console from the command prompt, I got the error shown below


After a bit of retyping this went away. I think it was down to stray spaces at the end of the SET variable, but I am not 100% sure about this. I would just make sure your strings match if you see this problem.

[Updated 26 Nov 2010] The batch file to start the management console is in the form

      set TFSBUILDSERVICEHOST=buildMachine-collection2 
      "C:\Program Files\Microsoft Team Foundation Server 2010\Tools\tfsmgmt.exe"

Make sure that you run this batch file as administrator (right click, run as admin); if you don't, the management console picks up the default instance.

  • Also it is a good idea to go into the PC’s services and make sure your new build service instance is set to auto start, to avoid surprises on a reboot (see the sketch after this list).
  • When you configure the new instance make sure you alter the port it runs on (red box below); I am just incrementing it for each new instance e.g. 9191 –> 9192. If you don’t alter this the service will not start as its endpoint will already be in use.
  • Also remember to set the identity the build service runs as (green box), usually [Domain]\TFSBuild; this is too easy to forget as you click through the create dialogs.
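For the auto start point mentioned above, a minimal sketch from an elevated command prompt (using the instance name from the earlier sc.exe create example; note that sc.exe again wants the space after the equals):

      sc.exe config buildMachine-collection2 start= auto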


Once this is set you can start the service and configure the controller and agent(s) exactly as normal.

You might want to consider how the workspaces are mapped for your multiple controllers, so that they use different root directories, but that is your call. Thus far, leaving it all as it was when I was using a separate VM for each build is working fine for me.

We shall see how many services I can put onto a single VM, but it is certainly something I don’t want to push too hard. That said, if you are like us, with a relatively low load on the build system, this has to be worth looking at to avoid a proliferation of build VMs.

What is an .xesc file?

Test Professional, after the Lab Management update, now uses Expression Encoder 4.0 to create its video of screen activity. This means that when you run a test and record a video you end up with an attachment called ScreenCapture.xesc.

Now my PC did not have Expression Encoder 4.0 installed, so it did not know what to do with an .xesc file created within our Lab Management environment. The answer is simple: on any PC that might need to view the video, either:

  1. Install Expression Encoder 4
  2. or install just the Screen Capture Codec

Once either of these is done, Media Player can play the .xesc file.

Cannot run CodeUI tests in Lab Management – getting a ‘Build directory of the test run is not specified or does not exist’

Interesting ‘user too stupid’ error today whilst adding some CodeUI tests to a Lab Management deployment scenario.

I added the Test Case and associated it with a Coded UI test in Visual Studio


I made sure my deployment build had the tests selected


I then ran my Lab Deployment build, but got the error

Build directory of the test run is not specified or does not exist.

This normally means the test VM cannot see the share containing the build. I checked that the agent login on the test VM could view the drop location, and that was OK, but when I looked, the assembly containing my coded UI tests was just not there.

Then I remembered……..

The Lab build can take loads of snapshots and do a sub-build of the actual product. This is all very good for production scenarios, but when you are learning about Lab Management or debugging scripts it can be really slow. To speed up the process I had told my deploy build not to take snapshots and to use the last compile/build drop it could find. I had just forgotten to rebuild my application on the build server after I had added the coded UI tests. So I rebuilt that and tried again, but I got the same problem.

It turns out that though I was missing the assembly, the error occurred before it was required. The real problem was not who the various agents were running as, but the account the test controller was running as. The key was to check the test run log. This can be accessed from the test run results (I seemed to have a blind spot looking for these results)


This showed the problem: I had selected the default ‘Network Service’ account for the test controller and had not granted it rights to the drop location.


I changed the account to my tfs210lab account as used by the agents and all was OK.


Don’t hardcode that build option

I have been using the ExternalTestRunner 2010 Build activity I wrote. I realised that at least one of the parameters I need to set, the ProjectCollection used to publish the test results, was hard coded in my sample. It was set in the form


This is not that sensible, as this value is available using the build API as


It makes no sense to hard code the name of the server if the build system already knows it.
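As a sketch of the kind of change I mean (the exact property depends on what type the activity's ProjectCollection argument expects, so treat the expression below as an assumption rather than the exact code from my template): instead of hard coding a collection URI string such as "http://myTfsServer:8080/tfs/MyCollection" (a made-up example), the value can be picked up from the build API with something like

BuildDetail.BuildServer.TeamProjectCollection.Uri.ToString()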

This simple change means that the build templates can be far more easily passed between Team Project Collections.

"Program too big to fit in memory" when installing a TFS 2010 Test Controller

Just spent a while battling a problem whilst installing the TFS 2010 Test Controller. When I launched the setup program off the .ISO I could select the Test Controller installer, but then a command prompt flashed up and exited with no obvious error. When I went into the TestControllers directory on the mounted .ISO and ran the setup from a command prompt, I saw the error "program too big to fit in memory".

As the box I was trying to use only had 1Gb of memory (below the recommended minimum), I upped it to 2Gb and then to 4Gb but still got the same error.

It turns out the problem was a corrupt .ISO; once I had downloaded it again (and dropped my target VM back to 2Gb of memory) all was fine.
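If you hit the same error it is worth checking the download before chasing memory settings. Assuming you have a published checksum to compare against (and allowing that the file path below is made up), something like this will give you the hash of the .ISO:

      certutil -hashfile D:\Downloads\VS2010_TestTools.iso SHA1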

Running MSDeploy to a remote box from inside a TFS 2010 Build (Part 2)

Another follow up post, this time to the one on MSDeploy. As I said in that post, a better way to trigger the MSDeploy PowerShell script would be as part of the build workflow, as opposed to a post-build action in the MSBuild phase. Doing it this way means that if the build fails testing after MSBuild completes, you can still choose not to run MSDeploy.

I have implemented this using an InvokeProcess call in my build workflow, which I have placed just before the gated check-in logic at the end of the process template.


The if statement is there so I only deploy if a deploy location is set and all the tests passed

BuildDetail.TestStatus = Microsoft.TeamFoundation.Build.Client.BuildPhaseStatus.Succeeded And
String.IsNullOrEmpty(DeployLocation) = False

The InvokeProcess filename property is

BuildDetail.DropLocation & "\_PublishedWebsites\" & WebSiteAssemblyName & "_Package\" & WebSiteAssemblyName & ".deploy.cmd"

Where “WebSiteAssemblyName” is a build argument holding the name of the project that has been published (I have not found a way to detect it automatically), e.g. BlackMarble.MyWebSite. This obviously has to be set as an argument for the build if the deploy is to work.

The arguments property is set to

"/M:http://" & DeployLocation & "/MSDEPLOYAGENTSERVICE /Y”

Again, “DeployLocation” is a build argument that is the name of the server to deploy to, e.g. MyServer.
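Putting those example values together (and with a purely made-up drop location, as the real one comes from the build), the command that InvokeProcess ends up running looks something like:

      \\buildserver\drops\MyWebSite\MyWebSite_20101126.1\_PublishedWebsites\BlackMarble.MyWebSite_Package\BlackMarble.MyWebSite.deploy.cmd /M:http://MyServer/MSDEPLOYAGENTSERVICE /Y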

The Result property is set to an Integer build variable, so any error code can be reported via WriteBuildError.

This seems to work for me and I think it is neater than the previous solution.