But it works on my PC!

The random thoughts of Richard Fennell on technology and software development

Renaming branches in TFS2010

I was recently asked why a client had experienced some unexpected results when merging a development branch back into the main trunk on a TFS 2010 installation.

It turns out that during some tests both the Main and Dev branches had been renamed, and new branches of the same names created. So they had a structure like this:

  Dev       Newly created after the rename
  Main      Newly created after the rename
  Old_Dev   Renamed from Dev
  Old_Main  Renamed from Main


In TFS 2010, behind the scenes, a rename is actually a branch and delete process. This meant we ended up with the new branch, but also a deleted branch of the old name. This is not obvious unless you have ‘show deleted items in source control explorer’ enabled in the Visual Studio options.


Once you recreate a new branch with the same name as one of the old named branches, this deleted branch is replaced by the newly created one; it has taken up the old ‘slot’ (see links at end). This means that when you try to do a merge, the merge tool sees this recreated branch as a potential target and so shows it in the merge dialog, with all the potential confusion that might cause.


So the simple answer is to try to avoid renames, and especially to avoid creating new branches in the same ‘slot’ as deleted/renamed ones.

For a more detailed explanation of what is going on here, have a look at this post on renaming branches in TFS and this one on ‘slot mode’ in the 2010 version.

Follow up to yesterday’s event on ‘enabling agile development with cool tools’

Thanks to everyone who attended yesterday’s Black Marble event ‘Enabling agile development with cool tools’; both Gary Short’s and my sessions seemed well received. I was asked if my slides would be available anywhere; the answer is no. The reason for this is that my session was mostly demo driven, so the slides just set the scene. After a bit of thought, a quick blog post seems a better option, so this post covers the same basic points as the session. If you are interested in any of the products I would urge you to download them and give them a go. Many are free and all have at least a free fully functional evaluation edition.

So the essence of my session was the project management/administrative side of agile projects. The key here is communication both inside and outside of the immediate project team. How do we capture and distribute information so that it assists the project rather than hampering it?

Traditionally the physical taskboard, with some form of postcards being moved around, has been the answer. This is a great solution as long as the team is co-located and there is no need for a detailed ongoing record of the historic state of the tasks (maybe a requirement for legal reasons, but then maybe a daily digital photo would do?). Anyway, many teams find they need to capture this information in some electronic form. In my session I looked at some of the options with TFS 2010.

What is built into TFS2010?

As TFS has a single work item store you can edit work items with a wide variety of clients. In the box you have tools to edit work items via Visual Studio, SharePoint and Team Web Access, as well as the ability to manage work items in Excel and Project.

What if I live in Outlook?

If you want to do all your work item management in Outlook then have a look at Ekobit’s TeamCompanion. This in effect allows you to treat work items in a similar manner to email, and cross between the two. So you can create a work item from an email and vice versa; it also allows managing work items in batches. This product strikes me as very well suited to an email-based support desk, or to a project manager who is meeting or email orientated, maybe dealing with people who do not themselves have access to TFS, just email.

How can I replicate my physical taskboard?

For many teams the capture of the physical taskboard information is the key. I have always found a good way to make sure TFS work items are up to date is to have all the work items associated with the tasks on the taskboard returned via a TFS query into Excel and then, as the daily stand-up is done, check that each task is up to date.

However, some people like to work more visually than that, so in the session I looked at a couple of desktop applications that allow work item management both in a form-editing manner and via taskboard-like drag and drop operations. These were Telerik’s Work Item Manager and EMC’s TFS Work Bench.

However, for many companies adding another desktop application to a controlled IT PC can be a problem, so I also had a look at Urban Turtle, an add-in to Team Web Access that allows a more visual taskboard approach within a browser by adding a couple of tabs to those in the standard Team Web Access product.

But what about outside the team?

All the products I showed in the first half of the session were in essence work item editors; a team could choose to use any or all of them. This does not, however, really help with getting information out to interested parties beyond the team; for this we need publicly accessible information radiators. The information on these needs to change over time and be easy to understand.

The output of the team-focused tools may be just what you need here; maybe a chart printed out and stuck to a notice board will do, but there are some other options.

The first is that there is a rich set of reports in TFS, available both as Reporting Services reports and Excel charts. Reporting Services is particularly interesting as it can deliver reports to interested parties on a schedule, e.g. the CTO gets the project burndown emailed as a PDF every Monday morning. There is also the option to deliver reports to central information sites such as intranet SharePoint servers for everyone to see.

But what do you do if you want something a bit more striking, something that does not require a person to look on a web site or open their email? Maybe a big screen showing what is going on in the project? I showed two products to do this: one was Telerik’s Project Dashboard and the other a version of our Black Marble internal BuildWallboard, written using the TFS API.

So in summary, in my opinion the key differentiator for TFS over ALM solutions built from a set of different vendors’ products is that there is a single store for all work items, so a wide range of editing and reporting tools can be brought to bear without having to worry about whether the information you are working with is going to be passed correctly between the various components of the system.

So again I would urge you: if you use TFS, have a look at these products, and the many others that are out there, give them a go and see which ones may assist your process. Remember, agile is all about continuous improvement isn’t it, so give it a try.

Preparing for my session next week on ‘enabling agile development with cool tools’

I have spent today preparing my presentation and demos for the Black Marble event next week, Enabling Agile Development with Cool Tools. I will be presenting with Gary Short of DevExpress. He is going to be talking about refactoring under the intriguing title ‘How to Eat an Elephant’.

My session will be on the tools to aid the project management side of the ALM process, specifically the tools available for TFS 2010, both those ‘out of the box’ and from third party vendors. I only have an hour slot, so I have had to be selective as there are many ‘cool tools’ to choose from. So after some thought I have chosen:

Urban Turtle 
Telerik Work Item Manager and Project Dashboard
Ekobit TeamCompanion
EMC TFS Work Bench

Should be a good session; there are certainly some great tools in this list.

TF215097 error when using a custom build activity

Whilst trying to make use of a custom build activity I got the error:

TF215097: An error occurred while initializing a build for build definition \Tfsdemo1\Candy: Cannot create unknown type '{clr-namespace:TfsBuildExtensions.Activities.CodeQuality;assembly=TfsBuildExtensions.Activities.StyleCop}StyleCop'

This occurred when the TFS 2010 build controller tried to parse the build process .XAML at the start of the build. A check of all the logs gave no information other than this error message; nothing else appeared to have occurred.

If I removed the custom activity from the build process all was OK and the build worked fine.

So my initial thought was that the required assembly was not loaded into source control, or that the ‘version control path to custom assemblies’ was not set. However, on checking, the file was there and the path was set.

What I had forgotten was that this custom activity assembly had a reference to a TfsBuildExtensions.Activities assembly that contained a base class. It was not that the named assembly was missing but that it could not be loaded because a required assembly was missing. Unfortunately there was no clue to this in the error message or logs.

So if you see this problem, check for references you might have forgotten and make sure ALL the required assemblies are loaded into source control on the custom assemblies path used by the build controller.
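As a minimal sketch of that final check, something like the following tf.exe commands would put both the activity assembly and the assembly it references onto the custom assemblies path. The local folder, output path and use of /noprompt are illustrative assumptions, not the actual values from this build; the folder must already be mapped in a workspace to the ‘version control path to custom assemblies’.

      rem Work in a local folder mapped to the 'version control path to custom assemblies' (example path)
      cd C:\workspaces\BuildProcess\CustomAssemblies

      rem Copy in the custom activity assembly AND every assembly it references (example source path)
      copy C:\build\output\TfsBuildExtensions.Activities.StyleCop.dll .
      copy C:\build\output\TfsBuildExtensions.Activities.dll .

      rem Pend the adds and check them in so the build controller can download them
      tf add *.dll
      tf checkin /comment:"Custom build activity assemblies plus dependencies" /noprompt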

Adding a Visual Basic 6 project to a TFS 2010 Build

Adding a Visual Basic 6 project to your TFS 2010 build process is not as hard as I had expected it to be. I had assumed I would have to write a custom build workflow template, but it turned out I was able to use the default template with just a few parameters changed from their defaults. This is the process I followed.

I created a basic ‘Hello world’ VB6 application. I had previously made sure that my copy of VB6 (SP6) could connect to my TFS 2010 server using the Team Foundation Server MSSCCI Provider, so I was able to check this project into source control.

Next I created an MSBuild script capable of building the VB6 project, as follows:

<Project ToolsVersion="4.0" DefaultTargets="Default" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <TPath>C:\Program Files\MSBuild\ExtensionPack\4.0\MSBuild.ExtensionPack.tasks</TPath>
    <TPath Condition="Exists('C:\Program Files (x86)\MSBuild\ExtensionPack\4.0\MSBuild.ExtensionPack.tasks')">C:\Program Files (x86)\MSBuild\ExtensionPack\4.0\MSBuild.ExtensionPack.tasks</TPath>
  </PropertyGroup>
  <Import Project="$(TPath)"/>
  <PropertyGroup>
    <VBPath>C:\Program Files\Microsoft Visual Studio\VB98\VB6.exe</VBPath>
    <VBPath Condition="Exists('C:\Program Files (x86)\Microsoft Visual Studio\VB98\VB6.exe')">C:\Program Files (x86)\Microsoft Visual Studio\VB98\VB6.exe</VBPath>
  </PropertyGroup>
  <ItemGroup>
    <ProjectsToBuild Include="Project1.vbp">
      <OutDir>$(OutDir)</OutDir>
      <!-- Note the special use of ChgPropVBP metadata to change project properties at Build Time -->
    </ProjectsToBuild>
  </ItemGroup>
  <Target Name="Default">
    <!-- Build a collection of VB6 projects -->
    <MSBuild.ExtensionPack.VisualStudio.VB6 TaskAction="Build" Projects="@(ProjectsToBuild)" VB6Path="$(VBPath)"/>
  </Target>
  <Target Name="Clean">
    <Message Text="Cleaning - this is where the deletes would go"/>
  </Target>
</Project>

This uses the MSBuild Extension Pack task to call VB6 from MSBuild; the Extension Pack MSI needed to be installed on the PC being used for development. Points to note about this script are:

  • I wanted this build to work on both 32bit and 64bit machines, so I had to check both the “Program Files” and “Program Files (x86)” directories; the Condition attribute is useful for this (I could have used an environment variable as an alternative method).
  • The output directory is set to $(OutDir). This is a parameter passed into the MSBuild process (and is in turn set from a Team Build variable by the workflow template, so that the build system can find the built files and copy them to the TFS drop directory).

This MSBuild script file can be tested locally on a development PC using MSBUILD.EXE from the .NET Framework directory, as sketched below. When I was happy with the build script, I stored it under source control in the same location as the VB project files (though any location in source control would have done).
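A quick local test might look something like this, assuming the script was saved as the build.xml file referred to below; the output directory is just an illustration:

      rem Run the Default target of the script with a local output directory
      "%WINDIR%\Microsoft.NET\Framework\v4.0.30319\MSBuild.exe" build.xml /t:Default /p:OutDir=C:\Builds\VB6Drop\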

The next step was to create a new Team Build using the default build template with a workspace containing my VB6 project.

The first thing to edit was the ‘Items to Build’. I deleted whatever was in the list (sorry, I can’t remember what was there by default). I then added the build.xml file I had just created and stored in source control.


I then tried to run the build; this of course failed as I needed to install VB6 (SP6) and the MSBuild Extension Pack on the build server. Once this was done I tried the build again and it worked. The only issue was I got a warning that there were no assemblies that Code Analysis could be run against. So I went into the build’s parameters and switched off code analysis and testing, as these were not required on this build.

So the process of building VB6 on TFS 2010 turned out to be much easier than I expected; it just goes to show how flexible the build system in TFS 2010 is. As long as you can express your build as an MSBUILD file it should just work.

You can’t edit a TFS 2010 build workflow template with just Team Explorer installed

I tried to open a TFS 2010 build template within the Visual Studio shell (the bit that gets installed when you put Team Explorer onto a PC) and saw the error “The document contains errors that must be fixed before the designer can be loaded”.


At the bottom of the screen it showed that all the underlying assemblies could not be found.

The solution is simple: install a ‘real’ version of Visual Studio (I put on Premium). It seems that the shell does not provide all the assemblies that are needed. Once I did this I could edit the XAML with no problems.

PDC 2010 thoughts - the next morning

I sat in the office yesterday with a beer in my hand watching the PDC2010 keynote. I have to say I preferred this to the option of a flight, jet lag and a less than comfortable seat in a usually overly cooled conference hall. With the Silverlight streaming the experience was excellent, especially as we connected an Acer 1420P to our projector/audio via a single HDMI cable and it just worked.

So what do you lose by not flying out? Well, the obvious is the ‘free’ Windows Phone 7 the attendees got; too many people IMHO get hooked up on the swag at conferences, when you go for knowledge not toys. They also forget they (or their company) paid for the item anyway in their conference fee. More seriously, you miss out on the chats between the sessions and, as the conference is on campus, the easier access to the Microsoft staff. Also, the act of travelling to a conference isolates you from the day to day interruptions of the office; the online experience does not, and you will have to stay up late to view sessions live due to timezones. The whole travelling experience still cannot be replaced by the online experience, no matter how good the streaming.

However, even though I don’t get the ‘conference corridor experience’, it does not mean I cannot check out sessions; it is great to see they are all available free and live, or as immediately available recordings if I don’t want to stay up.

The keynote was pretty much as I had expected. There were new announcements but nothing that was ground breaking, just good vNext steps. I thought the best place to start for me was the session “Lessons learned from moving team foundation server to the cloud”; this was on TFS, an obvious area of interest for me, but more importantly a real world experience of moving a complex application to Azure. This is something that is going to affect all of us if Microsoft’s bet on the cloud is correct. It seems that, though there are many gotchas, the process was not as bad as you would expect. For me the most interesting point was that the port to Azure caused changes to the codebase that actually improved the original implementation in either manageability or performance. Also, many of the major stumbling blocks were business/charging models, not technology. This is going to affect us all as we move to service platforms like Azure or even internally hosted equivalents like AppFabric.

So one session watched, what to watch next?

Common confusion I have seen with Visual Studio 2010 Lab Management

With any new product there can be some confusion over the exact range and scope of features; this is just as true for VS2010 Lab Management as for any other. In fact, given the number of moving parts (infrastructure you need in place to get it running), it can be more confusing than average. In this post I will cover the questions I have seen most often.

What does ‘Network Isolation’ really mean?

The biggest confusion I have seen is over the fact that Lab Management allows you to run a number of copies of a given test environment, each instance of which is ‘network isolated’ from the others. This means that each instance of the environment can have server VMs named the same without errors being generated. WHAT IT DOES NOT MEAN is that each of these environments is fully isolated from your corporate or test LAN. Think about it, how could this work? I am sad to say there is still no shipment date for Microsoft Magic Pixie Net (MMPN); until this is available we will still need a logical connection to any virtual machine under test, else we cannot control/monitor it.

So what does ‘Network Isolation’ actually mean? Well, it basically means Lab Manager will add a second network card to each VM in your environment (with the exception of domain controllers, I will come back to that). These secondary connections are the way you usually manage the VMs in the environment, so you end up with something like the following:


Lab Manager creates the 192.168.23.x virtual LAN which all the VMs in the environment connect to. If you want to change the IP address range this is set in the TFS administration console, but I suspect needing to change this will be rare.

If the PCs in your environment are in a workgroup there is no more to do, but if you have a domain within your environment (i.e. you included a test domain controller VM in your environment, as shown above) you also need to tell the Lab Management environment which server is the domain controller. THIS IS VERY IMPORTANT. This is done in the Visual Studio 2010 Test Manager application where you set up the environment.

When all this is done and the environment is deployed, a second LAN card is added to all VMs in the environment (with the exception of the domain controller you told it about, if present). These LAN cards are connected to the corporate LAN; an IP address is provided by your corporate LAN DHCP server and a name is assigned in the form LAB[Guid].corpdomain.com (you can alter this domain name to something like LAB[Guid].test.corpdomain.com if you want in the TFS administration console). This second LAN is a special connection in that the VMs’ NETBIOS names are not broadcast over it onto the corporate LAN, thus allowing multiple copies of the ‘network isolated’ environment to be run. Each VM will have a unique name on the corporate LAN, but keep its original name within the test (192.168.x.x) environment.

Other than blocking NETBIOS, this ‘special connection’ is not restricted in any other way. So any of the test VMs can use their own connection to the corporate LAN to access any corporate (or internet) resources such as patch update servers. The only requirement will be to log in to the corporate domain if authentication is required; remember, on the test environment you will be logged into the test domain or local workgroup.

I mentioned that the test domain controller is not connected to the corporate LAN. This is to make sure corporate users don’t try to authenticate against it by mistake and to stop different copies of the test domain controller trying to sync.

All clear? So ‘network isolated’ does not mean fully isolated, but rather the ability to have multiple copies of the same environment running at the same time, with the plumbing handled behind the scenes auto-magically by Lab Management. Maybe not the best piece of feature naming in the world!

So how does a tester actually connect to the test VMs from their PC?

Well, obviously they don’t use a magic MMPN connection; there has to be a valid logical connection. There are actually two possible answers here. I suspect the most common will be via remote desktop straight to the guest test VMs, using the LAB[Guid].corpdomain.com name. You might be thinking ‘how do I know these IDs?’; well, you can get them from the Test Manager application by looking at a VM’s system info in any running environment. Because you can look them up in this way, a tester can either use the Windows RDP application itself or, more probably, just connect to the VMs from within Test Manager, where it will use RDP behind the scenes.
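If you do use the Windows RDP client directly, it is just a normal remote desktop session to that generated name, something like the sketch below; the host name here is a made-up example, use the one shown in Test Manager’s system info:

      rem Hypothetical generated name - look up the real LAB[Guid] name in Test Manager
      mstsc /v:LAB4f2a9c1bd7e3.corpdomain.com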

The other option is to use what is called a host connection. This is when Test Manager connects to the test VMs via the Hyper-V host. For this to work the tester needs suitable Hyper-V rights and the correct tools on their local PC, not just Test Manager. This could also be achieved using the Hyper-V manager or SCVMM console. Host mode is the way to connect to a test domain controller that has no direct connection to a corporate LAN.

The choice of connection and tool will depend on what the tester is trying to do. I would expect Test Manager to be the tool of choice in most cases.

Do I need Network Isolation – is there another option?

This all depends on what you want to do; there are good descriptions of the possible architectures in the Lab Management documentation. If you don’t think ‘network isolation’ as described above is right for you, the only other option that can provide similar environment separation is not to run the environments ‘network isolated’ but to provide each environment with a single explicit connection to the corporate LAN via a firewall such as TMG to control the connection.

It goes without saying that this is more complex than using the standard ‘network isolated’ model built into Lab Management, so make sure it is really worth the effort before starting down this route.

What agents do I need to install?

There are a number of agents involved in Lab Management; these allow network isolation management, deployment and testing. The ones you need depend on what you are trying to do. If you want all the features, not surprisingly, you need them all. If this is what you want to do then use the VMPrep tool; it makes life easier. If you don’t want it all (and it might be easier to just install all of them as standard) you can choose.

If you want to gather test data you need the test agent, and if you want to deploy code you need the lab workflow agent. The less obvious one is that for ‘network isolation’ you need the Lab Agent installed; it is through this agent that the network isolation LAN is configured.

Any other limitations I might have missed?

The most obvious is that many companies will use failover clustering and a SAN to make a resilient Hyper-V cluster. Unfortunately this technology is not currently supported by Lab Management. This is easy to miss as, to my knowledge, it is referred to only once in the documentation, in an FAQ section.

The effect of this is that you cannot have shared SAN storage between any Hyper-V hosts or, more importantly, between the VMM Library and the Hyper-V hosts. This means that all deployment of environments has to be over the LAN; the faster SAN-to-SAN operations cannot be used as these need clustering.

I suppose there is also the limitation that, with no clustering, you cannot hot migrate environments between Hyper-V hosts, but I don’t see this as much of an issue; these are meant to be lab test environments, not live production high-resilience VMs.

This is a good reason to make sure that you separate your production Hyper-V hosts from your test ones. Make the production servers a failover cluster and the test ones just a host group. Let Lab Manager work out which server in the host group (assuming there is more than one) to place the environment on.



So I hope that helps a bit. I am sure I will find more common questions, and I will post about them as they emerge.

Experiences running multiple instances of 2010 build service on a single VM

I think my biggest issue with TFS 2010 is that a build controller is tied to a single Team Project Collection (TPC). For a company like mine, where we run a TPC for each client, this means we have had to create a good number of virtualised build controllers/agents. It is especially irritating as I know that the volume of builds on any given controller is low.

A while ago Jim Lamb blogged about how you could define multiple build services on a single box, but the post was full of caveats about how it was not supported/recommended etc. Since then there has been some discussion of this technique and I think the general feeling is: yes, it is not supported, but there is no reason it will not function perfectly well as long as you consider some basic limitations:

  1. The two build controllers don’t know about each other, so you can easily have two builds running at the same time, which will have an unpredictable effect on performance.
  2. You have to make sure that the two instances don’t share any workspace disk locations, else they will potentially start overwriting each other.
  3. Remember building code is usually IO bound not CPU bound, so when creating your build system think a lot about the disk; throwing memory and CPU at it will have little effect. The fact we run our build services on VMs and these use a SAN should mitigate much of this potential issue.
  4. The default when you install a controller/agent on a box is for one agent to be created for each core on the box. This rule is still a good idea, but if you are installing two controller/agent sets on a box make sure you don’t define more agents than cores (for me this means my build VM has to have 2 virtual CPUs as I am running 2 controller/agent pairs).

Jim’s instructions are straightforward, but I did hit a couple of snags:

  • When you enter the command line to create the instance, make sure there are spaces after the equals signs for the parameters, else you get an error:

sc.exe create buildMachine-collection2 binpath= "C:\Program Files\Microsoft Team Foundation Server 2010\Tools\TfsBuildServiceHost.exe /NamedInstance:buildMachine-collection2" DisplayName= "Visual Studio Team Foundation Build Service Host (Collection2)"

  • I cannot stress enough how important it is to give the new instances sensible names, especially as their numbers grow. Jim suggested naming after the TPC they service; for me this is a bad move as at any given time we are working for a fairly small number of clients, but the list changes as projects start and stop. It is therefore easier for me to name a controller for the machine it is hosted on, as controllers will be reassigned between TPCs based on need. So I settled on names in the form ‘build1-collection2’, not TPC-based ones. These are easy to associate with the VMs in use when you see them in VS2010.
  • When I first tried to get this all up and ran the admin console from the command prompt, I got the error shown below


After a bit of retyping this went away. I think it was down to stray spaces at the end of the SET variable, but I’m not 100% sure about this. I would just make sure your strings match if you see this problem.

[Updated 26 Nov 2010] The batch file to start the management console is of the form:

      set TFSBUILDSERVICEHOST=buildMachine-collection2 
      "C:\Program Files\Microsoft Team Foundation Server 2010\Tools\tfsmgmt.exe"

Make sure that you run this batch file as administrator (right click, run as admin); if you don’t, the management console picks up the default instance.

  • Also it is a good idea to go into the PC’s services and make sure your new build service instance is set to auto start, to avoid surprises on a reboot (see the sc.exe sketch after this list).
  • When you configure the new instance make sure you alter the port it runs on (red box below); I am just incrementing it for each new instance, e.g. 9191 –> 9192. If you don’t alter this the service will not start as its endpoint will already be in use.
  • Also remember to set the identity the build service runs as (green box), usually [Domain]\TFSBuild; it is too easy to forget as you click through the create dialogs.
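On the auto start point, a minimal sketch of doing it from the command line would be something like the following; the instance name matches the earlier sc.exe create example, so substitute the name you gave your own service:

      rem Set the second build service instance to start automatically on boot
      sc.exe config buildMachine-collection2 start= auto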


Once this is set you can start the service and configure the controller and agent(s) exactly as normal.

You might want to consider how the workspaces are mapped for your multiple controllers, so that they use different root directories, but that is your call. Thus far, leaving it all as it was when I was using a separate VM for each build is working fine for me.

We shall see how many services I can put onto a single VM, but it is certainly something I don’t want to push too hard. That said, if you are like us, with a relatively low load on the build system, this has to be worth looking at to avoid a proliferation of build VMs.