But it works on my PC!

The random thoughts of Richard Fennell on technology and software development

Experiences running multiple instances of the TFS 2010 build service on a single VM

I think my biggest issue with TFS 2010 is that a build controller is tied to a single Team Project Collection (TPC). For a company like mine, where we run a TPC for each client, this means we have had to create a good number of virtualised build controllers/agents. It is especially irritating as I know that the volume of builds on any given controller is low.

A while ago Jim Lamb blogged about how you could define multiple build services on a single box, but the post was full of caveats about how it was not supported/recommended etc. Since that post there has been some discussion of this technique, and I think the general feeling is: yes, it is not supported, but there is no reason it will not function perfectly well as long as you consider some basic limitations:

  1. The two build controllers don’t know about each other, so you can easily have two builds running at the same time, which will have an unpredictable effect on performance.
  2. You have to make sure that the two instances don’t share any workspace disk locations, or they will potentially start overwriting each other’s files.
  3. Remember building code is usually IO bound not CPU bound, so when creating your build system think a lot about the disk; throwing memory and CPU at it will have little effect. The fact we run our build services on VMs and these use a SAN should mitigate much of this potential issue.
  4. The default when you install a controller/agent on a box is for one agent to be created for each core on the box. This rule is still a good idea, but if you are installing two controller/agent sets on a box make sure you don’t define more agents than cores (for me this means my build VM has to have 2 virtual CPUs as I am running 2 controller/agent pairs).

Jim’s instructions are straightforward, but I did hit a couple of snags:

  • When you enter the command line to create the instance, make sure there are spaces after the equals signs for the parameters, or you will get an error

sc.exe create buildMachine-collection2 binpath= "C:\Program Files\Microsoft Team Foundation Server 2010\Tools\TfsBuildServiceHost.exe /NamedInstance:buildMachine-collection2" DisplayName= "Visual Studio Team Foundation Build Service Host (Collection2)"
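To sanity check that the new service host has registered correctly (not part of Jim’s instructions, just a quick check I find useful), you can query it with the same tool, using whatever name you gave the instance:

      sc.exe query buildMachine-collection2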

  • I cannot stress enough how important it is to give the new instances sensible names, especially as their numbers grow. Jim suggested naming them after the TPC they service; for me this is a bad move as at any given time we are working for a fairly small number of clients, but the list changes as projects start and stop. It is therefore easier for me to name a controller after the machine it is hosted on, as controllers will be reassigned between TPCs based on need. So I settled on names in the form ‘build1-collection2’ rather than TPC-based ones. These are easy to associate with the VMs in use when you see them in VS2010.
  • When I first tried to get this all up and running and launched the admin console from the command prompt I got the error shown below

image

After a bit of retyping this went away. I think it was down to stray spaces at the end of the SET variable, but I am not 100% sure about this. I would just make sure your strings match if you see this problem.

[Updated 26 Nov 2010] The batch file to start the management console is in the form

      set TFSBUILDSERVICEHOST=buildMachine-collection2 
      "C:\Program Files\Microsoft Team Foundation Server 2010\Tools\tfsmgmt.exe"

Make sure that you run this batch file as administrator (right click, Run as administrator); if you don’t, the management console picks up the default instance.

  • Also it is a good idea to go into the PC’s services and make sure your new build service instance is set to start automatically, to avoid surprises on a reboot (this can also be done from the command line, see the sc.exe example below).
  • When you configure the new instance make sure you alter the port it runs on (red box below); I am just incrementing it for each new instance e.g. 9191 –> 9192. If you don’t alter this the service will not start, as its endpoint will already be in use.
  • Also remember to set the identity the build service runs as (green box), usually [Domain]\TFSBuild; it is too easy to forget as you click through the configuration dialogs.

image

Once this is set you can start the service and configure the controller and agent(s) exactly as normal.
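If you prefer the command line to the services MMC for the auto start setting mentioned above, something like this should do it (my assumption, reusing the service name created earlier; note sc.exe again wants the space after the equals):

      sc.exe config buildMachine-collection2 start= auto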

You might want to consider how the workspace is mapped for your multiple controllers, so that they use different root directories, but that is your call. Thus far leaving it all as it was when I was using a separate VM for each build has worked fine for me.

We shall see how many services I can put onto a single VM, but it is certainly something I don’t want to push too hard. That said, if you are like us with a relatively low load on the build system, this has to be worth looking at to avoid a proliferation of build VMs.

What is an .xesc file?

Test Professional, after the Lab Management update, now uses Expression Encoder 4.0 to create its video of screen activity. This means that when you run a test and record a video you end up with an attachment called ScreenCapture.xesc.

Now my PC did not have Expression Encoder 4.0 installed, so it did not know what to do with an .xesc file created within our Lab Management environment. The answer is simple. On any PC that might need to view the video either:

  1. Install Expression Encoder 4
  2. or install just the Screen Capture Codec

Once either of these is done, Media Player can play the .xesc file.

Cannot run Coded UI tests in Lab Management – getting a ‘Build directory of the test run is not specified or does not exist’

Interesting ‘user too stupid’ error today whilst adding some Coded UI tests to a Lab Management deployment scenario.

I added the Test Case and associated it with a Coded UI test in Visual Studio

image

I made sure my deployment build had the tests selected

image

I then ran my Lab Deployment build, but got the error

Build directory of the test run is not specified or does not exist.

This normally means the test VM cannot see the share containing the build. I checked that the agent login on the test VM could view the drop location; that was OK, but when I looked, the assembly containing my Coded UI tests was just not there.

Then I remembered……..

The Lab build can take loads of snapshots and do a sub-build of the actual product. This is all very good for production scenarios, but when you are learning about Lab Management or debugging scripts it can be really slow. To speed up the process I had told my Deploy build not to take snapshots and to use the last compile/build drop it could find. I had just forgotten to rebuild my application on the build server after I had added the Coded UI tests. So I rebuilt that and tried again, but I got the same problem.

It turns out that though I was missing the assembly, the error occurred before it was required. The real issue was not who the various agents were running as, but the account the test controller was running as. The key was to check the test run log. This can be accessed from the Test Run results (I seemed to have a blind spot looking for these results).

image

This showed the problem: I had selected the default ‘Network Service’ account for the test controller and had not granted it rights to the drop location.

image

I changed the account to the tfs210lab account used by the agents and all was OK.

image

Don’t hardcode that build option

I have been using the ExternalTestRunner 2010 Build activity I wrote. I realised that at least one of the parameters I needed to set, the ProjectCollection used to publish the test results, was hard coded in my sample. It was set in the form

http://myserver:8080/tfs/MyCollection

This is not that sensible, as this value is available using the build API as

BuildDetail.BuildServer.TeamProjectCollection.Uri.ToString()

It makes no sense to hard code the name of the server if the build system already knows it.
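So in the build template the ProjectCollection argument on my ExternalTestRunner activity now uses that expression rather than a literal string; roughly, the change in the activity’s properties looks like this (the argument name is the one from my activity, yours may differ):

      ' before – hard coded server name
      "http://myserver:8080/tfs/MyCollection"
      ' after – picked up from the build's own details
      BuildDetail.BuildServer.TeamProjectCollection.Uri.ToString()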

This simple change means that the build templates can be far more easily passed between Team Project Collections.

"Program too big to fit in memory" when installing a TFS 2010 Test Controller

Just spent a while battling a problem whilst installing the TFS 2010 Test Controller. When I launched the setup program off the .ISO I could select the Test Controller installer, but then a command prompt flashed up and exited with no obvious error. If I went into the TestControllers directory on the mounted .ISO and ran the setup from a command prompt I saw the error "program too big to fit in memory".

As the box I was trying to use only had 1Gb of memory (below the recommended minimum), I upped it to 2Gb and then to 4Gb but still got the same error.

Turns out the problem was a corrupt .ISO; once I had downloaded it again, and dropped my target VM back to 2Gb of memory, all was fine.

Running MSDeploy to a remote box from inside a TFS 2010 Build (Part 2)

Another follow-up post, this time to the one on MSDeploy. As I said in that post, a better way to trigger the MSDeploy PowerShell script would be as part of the build workflow, as opposed to a post-build action in the MSBuild phase. Doing it this way means that if the build fails testing after MSBuild completes, you can still choose not to run MSDeploy.

I have implemented this using an InvokeProcess call in my build workflow, which I have placed just before the gated check-in logic at the end of the process template.

image

The If statement is there so that I only deploy if a deploy location is set and all the tests passed:

BuildDetail.TestStatus = Microsoft.TeamFoundation.Build.Client.BuildPhaseStatus.Succeeded And
String.IsNullOrEmpty(DeployLocation) = False

The InvokeProcess filename property is

BuildDetail.DropLocation & "\_PublishedWebsites\" & WebSiteAssemblyName & "_Package\" & WebSiteAssemblyName & ".deploy.cmd"

Where “WebSiteAssemblyName” is a build argument holding the name of the project that has been published (I have not found a way to detect it automatically) e.g. BlackMarble.MyWebSite. This obviously has to be set as an argument for the build if the deploy is to work.

The arguments property is set to

"/M:http://" & DeployLocation & "/MSDEPLOYAGENTSERVICE /Y"

Again the “DeployLocation” is a build argument that is the name of the server to deploy to e.g. MyServer.
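Put together, with illustrative values for the two build arguments and the drop location (these particular paths and names are just examples, not something the build produces for you), the command that InvokeProcess ends up running looks something like:

      \\tfsserver\drops\MyWebSite\MyWebSite_20101126.1\_PublishedWebsites\BlackMarble.MyWebSite_Package\BlackMarble.MyWebSite.deploy.cmd /M:http://MyServer/MSDEPLOYAGENTSERVICE /Y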

The Result property is set to an Integer build variable, so any error code can be picked up and reported via WriteBuildError.
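As a rough sketch of how that is used (the variable name ExitCode is just what I called it), the following If activity in the workflow has a condition and a WriteBuildError message along these lines:

      ' If activity condition
      ExitCode <> 0
      ' WriteBuildError Message property
      "MSDeploy failed with exit code " & ExitCode.ToString()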

This seems to work for me and I think it is neater than the previous solution.

How to edit a TFS 2010 build template when it contains custom activities.

I posted a while ago on using my Typemock TMockRunner Custom Activity for Team Build 2010. I left that post with the problem that if you wished to customise a template after you had added the custom activity you had to use the somewhat complex branching model to edit the XAML.

If you just followed the process in my post to put the build template in a new team project and tried to edit the XAML you got the following errors: an import namespace error and the associated inability to render part of the workflow.

image

The best answer I have been able to find is to put the custom activity into the GAC on the PC on which you wish to edit the template, just there and nowhere else; the method in the previous post is fine for build agents. So I gave the custom activity assembly a strong name, used GACUTIL to put it in my GAC and was then able to load the template without any other alterations. I was also able to add it to my Visual Studio toolbox so that I could drop new instances of the external test runner onto the workflow.
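For reference, getting the assembly into the GAC is just a case of running gacutil from a Visual Studio command prompt; the assembly name below is illustrative, use whatever your custom activity assembly is actually called (the second command simply confirms it is now listed):

      gacutil /i ExternalTestRunner.Activities.dll
      gacutil /l ExternalTestRunner.Activities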

Visual Studio 2010 Lab Management release announced

In the VSLive! keynote Microsoft made announcements about Lab Management: it will be RTM’d later this month and, best of all, it will be included as part of the benefits of the Visual Studio 2010 Ultimate with MSDN and Visual Studio Test Professional 2010 with MSDN SKUs. You can read more detail on Brian Keller’s blog.

I think this is a great move on licensing; we had expected it to be a purchasable addition to Visual Studio. With this change it is now consistent with TFS, i.e. if you have the right SKU of Visual Studio and MSDN you get the feature. This greatly lowers the barrier to entry for this technology.

I look forward to having a forthright discussion with our IT manager over Hyper-V cluster resources in the near future.