But it works on my PC!

The random thoughts of Richard Fennell on technology and software development

Experiences running multiple instances of 2010 build service on a single VM

I think my biggest issue with TFS 2010 is that a build controller is tied to a single Team Project Collection (TPC). For a company like mine, where we run a TPC for each client, this means we have had to create a good number of virtualised build controllers/agents. It is especially irritating as I know that the volume of builds on any given controller is low.

A while ago Jim Lamb blogged about how you could define multiple build services on a single box, but the post was full of caveats on how it was not supported/recommended etc. Since that post there has been some discussion of this technique, and I think the general feeling is: yes, it is not supported, but there is no reason it will not function perfectly well as long as you consider some basic limitations:

  1. The two build controllers don’t know about each other, so you can easily have two builds running at the same time; this will have an unpredictable effect on performance.
  2. You have to make sure that the two instances don’t share any workspace disk locations, else they will potentially start overwriting each other (see the example after this list).
  3. Remember building code is usually IO bound, not CPU bound, so when creating your build system think a lot about the disk; throwing memory and CPU at it will have little effect. The fact we run our build services on VMs and these use a SAN should mitigate much of this potential issue.
  4. The default when you install a controller/agent on a box is for one agent to be created for each core on the box. This rule is still a good idea, but if you are installing two controller/agent sets on a box make sure you don’t define more agents than cores (for me this means my build VM has to have 2 virtual CPUs as I am running 2 controller/agent pairs).
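
On the workspace point, it helps to root each instance’s agent working directories in a folder of its own. The agent working directory accepts the standard macros, so something along these lines keeps the two instances apart (the Builds1/Builds2 roots are just my illustrative choice, not a requirement):

      Instance 1 agent working directory: $(SystemDrive)\Builds1\$(BuildAgentId)\$(BuildDefinitionPath)
      Instance 2 agent working directory: $(SystemDrive)\Builds2\$(BuildAgentId)\$(BuildDefinitionPath)

As far as I can tell $(BuildAgentId) is only unique within a collection, so two controllers that know nothing of each other could hand out clashing paths if they shared a single root.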

Jim’s instructions are straightforward, but I did hit a couple of snags:

  • When you enter the command line to create the instance, make sure there are spaces after the equals signs for the parameters, else you get an error:

sc.exe create buildMachine-collection2 binpath= "C:\Program Files\Microsoft Team Foundation Server 2010\Tools\TfsBuildServiceHost.exe /NamedInstance:buildMachine-collection2" DisplayName= "Visual Studio Team Foundation Build Service Host (Collection2)"
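
If you mistype something and need to start again, the service registration can be checked or removed with the usual sc.exe verbs:

sc.exe query buildMachine-collection2
sc.exe delete buildMachine-collection2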

  • I cannot stress enough how important it is to give the new instances sensible names, especially as their numbers grow. Jim suggested naming after the TPC they service; for me this is a bad move, as at any given time we are working for a fairly small number of clients, but the list changes as projects start and stop. It is therefore easier for me to name a controller for the machine it is hosted on, as controllers will be reassigned between TPCs based on need. So I settled on names in the form ‘build1-collection2’, not TPC-based ones. These are easy to associate with the VMs in use when you see them in VS2010.
  • When I first tried to get this all up and running and launched the admin console from the command prompt, I got the error shown below

image

After a bit of retyping this went away. I think it was down to stray spaces at the end of the SET variable, but I am not 100% sure about this. I would just make sure your strings match if you see this problem.

[Updated 26 Nov 2010] The batch file to start the management console takes the form:

      set TFSBUILDSERVICEHOST=buildMachine-collection2 
      "C:\Program Files\Microsoft Team Foundation Server 2010\Tools\tfsmgmt.exe"

Make sure that you run this batch file as administrator (right click, Run as administrator); if you don't, the management console picks up the default instance.

  • Also it is a good idea to go into the PC’s services and make sure your new build service instance is set to auto start, to avoid surprises on a reboot.
  • When you configure the new instance make sure you alter the port it runs on (red box below); I am just incrementing it for each new instance, e.g. 9191 –> 9192. If you don’t alter this the service will not start, as its endpoint will already be in use.
  • Also remember to set the identity the build service runs as (green box), usually [Domain]\TFSBuild; it is too easy to forget as you click through the create dialogs.

image

Once this is set you can start the service and configure the controller and agent(s) exactly as normal.

You might want to consider how the workspace is mapped for your multiple controllers, so that you use different root directories, but that is your call. Thus far, leaving it all as it was when I was using a separate VM for each build is working fine for me.

We shall see how many services I can put onto a single VM, but it is certainly something I don’t want to push too hard. That said, if you are like us, with a relatively low load on the build system, this has to be worth looking at to avoid a proliferation of build VMs.

Stupid mistake over Javascript parameters

I have been using the Google Maps JavaScript API today. I lost too much time over a really stupid error. I was trying to set the zoom level on a map using the call

map.setZoom(<number>);

I had set my initial zoom level to 5 (the scale is 1–17 I think) in the map load. When I called setZoom with 11 all was fine, but if I set it to any other number it reverted to 5. This different effect for different numbers was a real red herring. The problem was down to how I was handling the variable containing the zoom level prior to passing it to the setZoom method. When it was set to 11 it was set explicitly e.g.

var zoomNumber = 11;

However when it was any other value it was being pulled from the value property of a combo box, so was actually a string. My problem was that setZoom does not raise an error if you pass in something it does not understand; it just reverts to its initial value.

The solution was simple: parse the string to an integer and it works as expected

map.setZoom(parseInt(zoomNumber, 10)); // radix 10 so strings with a leading zero are not read as octal

Problem faking multiple SPLists with Typemock Isolator in a single test

I have found a problem with repeated calls to indexed SharePoint lists with Typemock Isolator 6.0.3. This is what I am trying to do…

The Problem

I am using Typemock Isolator to allow me to develop a SharePoint Webpart outside of the SharePoint environment (there is a video about this on the Typemock site). My SharePoint Webpart uses data drawn from a pair of SharePoint lists to draw a map using the Google Maps API; so in my test harness web site page I have the following code in the constructor, which fakes out the two SPLists and populates them with test content.


public partial class TestPage : System.Web.UI.Page
{
    public TestPage()
    {
        var fakeWeb = Isolate.Fake.Instance<SPWeb>();
        Isolate.WhenCalled(() => SPControl.GetContextWeb(null)).WillReturn(fakeWeb);

        // return value for 1st call
        Isolate.WhenCalled(() => fakeWeb.Lists["Centre Locations"].Items).WillReturnCollectionValuesOf(CreateCentreList());
        // return value for all other calls
        Isolate.WhenCalled(() => fakeWeb.Lists["Map Zoom Areas"].Items).WillReturnCollectionValuesOf(CreateZoomAreaList());
    }

    private static List<SPListItem> CreateZoomAreaList()
    {
        var fakeZoomAreas = new List<SPListItem>();
        fakeZoomAreas.Add(CreateZoomAreaSPListItem("London", 51.49275, -0.137722222, 2, 14));
        return fakeZoomAreas;
    }

    private static List<SPListItem> CreateCentreList()
    {
        var fakeSites = new List<SPListItem>();
        fakeSites.Add(CreateCentreSPListItem("Aberdeen ", "1 The Road,  Aberdeen ", "Aberdeen@test.com", "www.Aberdeen.test.com", "1111", "2222", 57.13994444, -2.113333333));
        fakeSites.Add(CreateCentreSPListItem("Altrincham ", "1 The Road,  Altrincham ", "Altrincham@test.com", "www.Altrincham.test.com", "3333", "4444", 53.38977778, -2.349916667));
        return fakeSites;
    }

    private static SPListItem CreateCentreSPListItem(string title, string address, string email, string url, string telephone, string fax, double lat, double lng)
    {
        var fakeItem = Isolate.Fake.Instance<SPListItem>();
        Isolate.WhenCalled(() => fakeItem["Title"]).WillReturn(title);
        Isolate.WhenCalled(() => fakeItem["Address"]).WillReturn(address);
        Isolate.WhenCalled(() => fakeItem["Email Address"]).WillReturn(email);
        Isolate.WhenCalled(() => fakeItem["Site URL"]).WillReturn(url);
        Isolate.WhenCalled(() => fakeItem["Telephone"]).WillReturn(telephone);
        Isolate.WhenCalled(() => fakeItem["Fax"]).WillReturn(fax);
        Isolate.WhenCalled(() => fakeItem["Latitude"]).WillReturn(lat.ToString());
        Isolate.WhenCalled(() => fakeItem["Longitude"]).WillReturn(lng.ToString());
        return fakeItem;
    }

    private static SPListItem CreateZoomAreaSPListItem(string areaName, double lat, double lng, double radius, int zoom)
    {
        var fakeItem = Isolate.Fake.Instance<SPListItem>();
        Isolate.WhenCalled(() => fakeItem["Title"]).WillReturn(areaName);
        Isolate.WhenCalled(() => fakeItem["Latitude"]).WillReturn(lat.ToString());
        Isolate.WhenCalled(() => fakeItem["Longitude"]).WillReturn(lng.ToString());
        Isolate.WhenCalled(() => fakeItem["Radius"]).WillReturn(radius.ToString());
        Isolate.WhenCalled(() => fakeItem["Zoom"]).WillReturn(zoom.ToString());
        return fakeItem;
    }
}

The problem is that if I place the following logic in my Webpart

SPWeb web = SPControl.GetContextWeb(Context);
Debug.WriteLine(web.Lists["Centre Locations"].Items.Count);
Debug.WriteLine(web.Lists["Map Zoom Areas"].Items.Count);

I would expect this code to return

2
1

But I get

1
1

If I reverse two Isolate.WhenCalled lines in the constructor I get

2
2

So basically only the last Isolate.WhenCalled is being used; this is not what I expect from the Typemock documentation. This states that, worst case, the first Isolate.WhenCalled should be used for the first call and the second for all subsequent calls, and actually the index string should be used to differentiate the two lists anyway. This is obviously not working. I also tried using null in place of both the index strings and got the same result.
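
For reference, this is the sequencing behaviour I was expecting; a minimal sketch based on my reading of the documentation (using SPWeb.Title simply as a convenient property to fake):

var fake = Isolate.Fake.Instance<SPWeb>();
// The first WhenCalled should cover the first call...
Isolate.WhenCalled(() => fake.Title).WillReturn("first call");
// ...and the last should cover every call after that
Isolate.WhenCalled(() => fake.Title).WillReturn("all later calls");

Debug.WriteLine(fake.Title); // expect "first call"
Debug.WriteLine(fake.Title); // expect "all later calls"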

A Workaround

I have managed to work around this problem with a refactor of my code. In my web part I moved all the SPList logic into a pair of methods:

private List<GISPoint> LoadFixedMarkersFromSharepoint(SPWeb web, string listName)
{
    var points = new List<GISPoint>();

    foreach (SPListItem listItem in web.Lists[listName].Items)
    {
        points.Add(new GISPoint(
            listItem["title"],
            listItem["address"],
            listItem["email address"],
            listItem["site Url"],
            listItem["telephone"],
            listItem["fax"],
            listItem["latitude"],
            listItem["longitude"]));
    }
    return points;
}

private List<ZoomArea> LoadZoomAreasFromSharepoint(SPWeb web, string listName)
{
    var points = new List<ZoomArea>();

    foreach (SPListItem listItem in web.Lists[listName].Items)
    {
        points.Add(new ZoomArea(
            listItem["title"],
            listItem["latitude"],
            listItem["longitude"],
            listItem["radius"],
            listItem["zoom"]));
    }
    return points;
}

I then used Isolator to intercept the calls to these methods; this can be done by using the Members.CallOriginal flag to wrap the actual class and intercept the calls to the private methods. Note that I am using different helper methods, which create lists of my own data objects as opposed to List<SPListItem>:

var controlWrapper = Isolate.Fake.Instance<LocationMap>(Members.CallOriginal);
Isolate.Swap.NextInstance<LocationMap>().With(controlWrapper);

Isolate.NonPublic.WhenCalled(controlWrapper, "LoadFixedMarkersFromSharepoint").WillReturn(CreateCentreListAsGISPoint());
Isolate.NonPublic.WhenCalled(controlWrapper, "LoadZoomAreasFromSharepoint").WillReturn(CreateZoomAreaListAsZoomItems());

My workaround, in my opinion, is a weaker test as I am not testing my conversion of SPListItems to my internal data types, but at least it works.

I have had to go down this route due to a bug in Typemock Isolator (which has been logged and recreated by Typemock, so I am sure we can expect a fix soon). However it does show how powerful Isolator can be when you have restrictions on the changes you can make to a code base. Wrapping a class with Isolator can open up a whole range of options.

What is an .xesc file?

Test Professional, after the Lab Management update, now uses Expression Encoder 4.0 to create its videos of screen activity. This means that when you run a test and record a video you end up with an attachment called ScreenCapture.xesc.

Now my PC did not have Expression Encoder 4.0 installed, so it did not know what to do with an .xesc file created within our Lab Management environment. The answer is simple: on any PC that might want to view the video, either:

  1. install Expression Encoder 4,
  2. or install just the Screen Capture Codec.

Once either of these is done, Media Player can play the .xesc file.

Cannot run Coded UI tests in Lab Management – getting a ‘Build directory of the test run is not specified or does not exist’ error

Interesting ‘user too stupid’ error today whilst adding some Coded UI tests to a Lab Management deployment scenario.

I added the Test Case and associated it with a Coded UI test in Visual Studio

image

I made sure my deployment build had the tests selected

image

I then ran my Lab Deployment build, but got the error

Build directory of the test run is not specified or does not exist.

This normally means the test VM cannot see the share containing the build. I checked that the agent login on the test VM could view the drop location, and that was OK, but when I looked, the assembly containing my Coded UI tests was simply not there.

Then I remembered…

The Lab build can take loads of snapshots and do a sub-build of the actual product. This is all very good for production scenarios, but when you are learning about Lab Management or debugging scripts it can be really slow. To speed up the process I had told my Deploy build not to take snapshots and to use the last compile/build drop it could find. I had just forgotten to rebuild my application on the build server after I had added the Coded UI tests. So I rebuilt that and tried again, but I got the same problem.

It turns out that though I was missing the assembly, the error occurred before it was required. The real error was not about who the various agents were running as, but the account the test controller was running as. The key was to check the test run log, which can be accessed from the Test Run results (I seemed to have a blind spot looking for these results).

image

This showed the problem: I had selected the default ‘Network Service’ account for the test controller and had not granted it rights to the drop location.

image

I changed the account to my tfs210lab account, as used by the agents, and all was OK.

image

Don’t hardcode that build option

I have been using the ExternalTestRunner 2010 Build activity I wrote. I realised that at least one of the parameters I need to set, the ProjectCollection used to publish the test results, was hard coded in my sample. It was set in the form

http://myserver:8080/tfs/MyCollection

This is not that sensible, as this value is available using the build API as

BuildDetail.BuildServer.TeamProjectCollection.Uri.ToString()

It makes no sense to hard code the name of the server if the build system already knows it.

This simple change means that build templates can be far more easily passed between Team Project Collections.
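
If you are doing this inside a custom build activity, the pattern is only a few lines. A minimal sketch (ResolveCollectionUri is just an illustrative name; IBuildDetail is published into the workflow by Team Build itself):

using System.Activities;
using Microsoft.TeamFoundation.Build.Client;

[BuildActivity(HostEnvironmentOption.All)]
public sealed class ResolveCollectionUri : CodeActivity<string>
{
    protected override string Execute(CodeActivityContext context)
    {
        // Ask the build API for the collection rather than hard coding a URL
        IBuildDetail buildDetail = context.GetExtension<IBuildDetail>();
        return buildDetail.BuildServer.TeamProjectCollection.Uri.ToString();
    }
}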

"Program too big to fit in memory" when installing a TFS 2010 Test Controller

Just spent a while battling a problem whilst installing the TFS 2010 Test Controller. When I launched the setup program off the .ISO I could select the Test Controller installer, but then a command prompt flashed up and exited with no obvious error. If I went into the TestControllers directory on the mounted .ISO and ran the setup from a command prompt I saw the error "program too big to fit in memory".

As the box I was trying to use only had 1GB of memory (below the recommended minimum), I upped it to 2GB and then to 4GB, but still got the same error.

Turns out the problem was a corrupt .ISO; once I had downloaded it again, and dropped my target VM back to 2GB of memory, all was fine.

Getting a .NET 4 WCF service running on IIS7

Now I know this should be simple and obvious but I had a few problems today publishing a web service to a new IIS7 host. These are the steps I had to follow to get around all my errors:

  1. Take a patched Windows Server 2008 R2; this had the File Server and IIS roles installed.
  2. I installed MSDeploy (http://blog.iis.net/msdeploy) onto the server to manage my deployment; this is a tool I am becoming a big fan of lately.
  3. Make sure the MSDeploy service has started; it doesn’t start by default.
  4. In IIS manager
    1. Create a new AppPool (I needed to set it to .NET 4 for my application)
    2. Create a new Web Site, pointing at the new AppPool
  5. In Visual Studio 2010 create an MSDeploy profile to send to the new server and web site. This deployed OK
  6. AND THIS IS WHERE THE PROBLEMS STARTED
  7. When I browsed to my WCF web service, e.g. http://mysite:8080/myservice.svc, whilst on the server I got a ‘500.24 Integrated Pipeline Issue’ error. This was fixed by swapping my AppPool’s pipeline mode to Classic, as I did need to use impersonation for this service.
  8. Next I got a ‘404.3 Not Found’ error. This was because the WCF Activation feature was not installed on the box. This is added via Server 2008: Server Manager -> Add Features -> .NET Framework 3.x Features -> WCF Activation.
  9. Next it was a ‘404.17 Not Found Static Handler’. If I looked in IIS Manager, Features View, Handler Mappings, I only saw mention of 2.0 versions of files. So I ran aspnet_regiis.exe -i from the 4.0 framework directory and both 2.0 and 4.0 versions were then shown in the Handler list.
  10. Next it was a ‘404.2 Not Found. Description: The page you are requesting cannot be served because of the ISAPI and CGI Restriction list settings on the Web server’. In IIS Manager at the server level I had to enable the .NET 4 handlers in the ISAPI and CGI Restrictions section.
  11. I could then get to see the WSDL for the WCF service
  12. And finally I had to open port 8080 on the box to allow my clients to see it (see the firewall rule sketch after this list).
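
For that last step, a Windows firewall rule along these lines does the job (the rule name is just my illustrative choice):

netsh advfirewall firewall add rule name="WCF on 8080" dir=in action=allow protocol=TCP localport=8080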

Now that was straightforward, wasn’t it?

Getting code coverage working on Team Build 2010

If you have VS2010 Premium or Ultimate [Professional was an error in the original post, now corrected] you have code coverage built into the test system. When you look at your test results there is a button to see the code coverage

image

You would think that there is an easy way to use code coverage in your automated build process using Team Build 2010; well, it can be done, but you have to do a bit of work.

What’s on the build box?

Firstly if your build PC has only an operating system and the Team Build Agent (with or without the Build Controller service) then stop here. This is enough to build many things but not to get code coverage. The only way to get code coverage to work is to have VS2010 Premium or Ultimate also installed on the build box.

Now there is some confusion in blog posts over whether installing the Visual Studio 2010 Test Agents gives you code coverage; the answer for our purposes is no. The agents will allow remote code coverage in a Lab Environment via a Test Controller, but they do not provide the bits needed to allow code coverage to be run locally during a build/unit test cycle.

Do I have a .TestSettings file?

Code coverage is managed using your solution’s .TestSettings file. My project did not have one of these, so I had to add one via ‘Add New Item’ on a right click on the solution items folder.

The reason I had no .TestSettings file was because I started with an empty solution and added projects to it. If you start with a project, such as a web application, and let the solution be created for you automatically, then a .TestSettings file should be created.

In the test settings you need to look at the Data & Diagnostics tab, enable code coverage, and then press the Configure button; this is important.

image

On the configuration dialog you will see a list of your projects and assemblies. In my case I initially only saw the first and last rows in the graphic below. I selected the first row, the project containing my production code, and tried a build.

THIS DID NOT WORK – I had to add the actual production assembly as opposed to the web site project (the middle row shown below). I think this was the key step to getting it going.

The error I got before I did this was ‘Empty results generated: none of the instrumented binary was used. Look at test run details for any instrumentation problems.’ So if you see this message in the build report, check which assemblies are flagged for code coverage.

image

Does my build definition know about the .TestSettings file?

You now need to make sure that the build knows the .TestSettings file exists. Again, this should be done automatically when you create a build (if the file exists), but on my build I had to add it manually as I created the file after the build definition.

image
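
As far as I can tell the build simply hands this file to MSTest, so you can sanity check the same settings locally with something along these lines (the file names are illustrative):

mstest /testcontainer:MyTests.dll /testsettings:local.testsettings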


So when all this is done you get to see a build with test results and code coverage.

image

Easy, wasn’t it!

All LEDs flashing on a Netgear GS108 Switch

I came back from holiday to find a Netgear GS108 switch with all its LEDs flashing and passing no data. These are exactly the same symptoms as in this post. My fix did not involve the use of a soldering iron as detailed in that post; I just swapped the PSU and it started working fine. I have seen this before: the PSU shows its age before the device itself. Good job I have a big box of misc PSUs from devices down the years.

Update 22 July 2010: I spoke too soon; came in today to the same problem. Time to swap the capacitors.