But it works on my PC!

The random thoughts of Richard Fennell on technology and software development

TF900546 error on a TFS 2012 build

Whilst moving over to our new TFS 2012 installation I got the following error when a build tried to run tests

TF900546: An unexpected error occurred while running the RunTests activity: 'Unable to load one or more of the requested types. Retrieve the LoaderExceptions property for more information.'.

This was a new one on me, and there was nothing of much use on the web other than a basic MSDN page.

Turns out the immediate fix is to just restart the build controller. Initially I did this after switching to the default build process template and setting it to NOT load any custom activities, but it seems a simple restart would have been enough, as once I re-enabled all the custom activities it still worked.

As to the root cause I have no idea; one to keep an eye on, especially as I am currently on the RC. Let's see what the RTM build does.

Getting Typemock Isolator running within a TFS 2012 build

Update 23rd Aug 2012: This blog post was produced testing against the 2012 RC; it seems it does not work against the 2012 RTM release. I am seeing this error in my build logs

 TF900546: An unexpected error occurred while running the RunTests activity: 'Executor process exited.

I am looking into this, expect to see a new blog post soon

Update 24th Aug 2012: I have fixed the issues with TFS 2012 RTM; the links to the ZIP containing a working version of the activity in this post should now work. For more details see my follow up part 2 post

I have posted in the past about getting Typemock Isolator to function within the TFS build process. In TFS 2008 it was easy: you just ran a couple of MSBuild tasks that started/stopped the Typemock Isolator interception process (the bit that does the magic other mocking frameworks cannot do). However, with TFS 2010's move to a Windows Workflow based build model it became more difficult. This was due to the parallel processing nature of the 2010 build process; running a single task to enable interception cannot be guaranteed to occur on the correct thread (or maybe even on the correct build agent). So I wrote a wrapper build activity for MSTest to get around this problem. However, with the release of Typemock Isolator 6.2 direct support for TFS 2010 was added, and these TFS build activities have been refined in later releases. In the current beta (7.0.8) you get a pre-created TFS build process template to get you going and some great auto deploy features, but more of that later.

The problem was that I wanted to put Isolator based tests within a TFS 2012 build process. I posted before about my initial thoughts on this. The main issue is that TFS build activities have to be built against the correct version of the TFS API assemblies (this is the reason the community custom activities have two sets of DLLs in the release ZIP file). So out of the box you can't use the Typemock.TFS2010.DLL with TFS 2012, as it is built against the 2010 API.

Also, you cannot just use the Typemock provided sample build process template. This is built against 2010 too, so it is full of 2010 activities, which all fail.

What I tried that did not work (so don’t waste your time)

So I took a copy of the default TFS 2012 build process template and followed the process to add the Typemock.TFS2010.DLL containing the Typemock activities to the Visual Studio 2012 toolbox (the community activity documentation provides a good overview of this strangely complex process; also see the ALM Rangers guidance). I then added the TypemockRegister and TypemockStart activities at the start of the testing block. For the initial tests I did not bother adding the TypemockStop activity.

image

I then made sure that

  • Typemock was installed on the build agent PC
  • The Typemock.TFS2010.dll was in the correct CustomActivities folder in source control
  • The build controller was set to load activities from the CustomActivities folder.

However, when I tried to queue this build I got an error 

Exception Message: Object reference not set to an instance of an object. (type NullReferenceException)
Exception Stack Trace:    at TypeMock.CLI.Common.TypeMockRegisterInfo.Execute()

image 

The issue was that though Typemock was installed, the required DLLs could not be found. Checking in a bit more detail (by running the build with the diagnostic level of logging and using Fuslogvw) I saw it was trying to load the wrong versions of the DLLs, as expected. So the first thing I tried was binding redirection (a technique I have used before with a similar Typemock issue). This in effect tells the Typemock activity to use the 2012 DLLs when it asks for the 2010 ones. It is done by using an XML config file (Typemock.TFS2010.DLL.config) in the same folder as the DLL file.

<configuration>
   <runtime>
      <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
       <dependentAssembly>
         <assemblyIdentity name="Microsoft.TeamFoundation.Build.Workflow"
                           publicKeyToken="b03f5f7f11d50a3a"
                           culture="neutral" />
         <bindingRedirect oldVersion="10.0.0.0"
                          newVersion="11.0.0.0"/>
       </dependentAssembly>
       <dependentAssembly>
         <assemblyIdentity name="Microsoft.TeamFoundation.Build.Client"
                           publicKeyToken="b03f5f7f11d50a3a"
                           culture="neutral" />
         <bindingRedirect oldVersion="10.0.0.0"
                          newVersion="11.0.0.0"/>
       </dependentAssembly>
       <publisherPolicy apply="no" />
      </assemblyBinding>
   </runtime>
</configuration>

I first tried to add this file to the CustomActivities source control folder, where the custom activities are loaded from by the build agent, but that did not work. I could only get it to work if I put both the DLL and the config file in the C:\Program Files\Microsoft Team Foundation Server 11.0\Tools folder on the build agent. This is not a way I like to work; it is too messy having to fiddle with the build agent file system.

Once this setting was made I tried a build again and got the build process to load, but the TypemockRegister activity failed as the Typemock settings argument was not set. Strangely, Typemock have chosen to pass in their parameters as a complex type (of the type TypemockSettings) as opposed to four strings. Also, you would expect this argument to be passed directly into their custom activities by binding activity properties to argument values, but this is not how it is done. The Typemock activities know to look directly for an argument called Typemock. This does make adding the activities easier, but it is not obvious if you are not expecting it. So I added this argument to the build process template in Visual Studio 2012 and checked it in, but when I tried to set the argument value for a specific build it gave the error that the DLL containing the type Typemock.TFS2010.TypemockSettings could not be loaded: again the TFS 2010/2012 API issue, this time within Visual Studio 2012.
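To make this a little more concrete, below is a rough sketch of the sort of complex type involved and of the convention the activities rely on. The class shape and property names are my assumptions, based on the four values the build needs (auto deploy folder, auto deploy switch and licence details), not Typemock's actual TypemockSettings definition.

// A rough sketch (my guesses, not Typemock's real code) of the complex settings
// type passed into the build, and the convention used to locate it.
public class TypemockSettingsSketch
{
    public string AutoDeployFolder { get; set; } // source control path of the AutoDeploy folder
    public bool AutoDeploy { get; set; }         // switch to enable auto deployment
    public string Company { get; set; }          // Typemock licence company name
    public string LicenseKey { get; set; }       // Typemock licence key
}

// Rather than exposing a bound InArgument<TypemockSettingsSketch> property on each
// activity (the usual WF4 pattern), the Typemock activities look for an in-scope
// workflow argument literally named "Typemock" of this type, which is why the only
// thing you add to the process template is the argument itself.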

 

clip_image002

 

At this point I gave up on binding redirection; I had wasted a lot more time on it than this post makes it sound. So I removed all the work I had previously done and thought again.

What did work

I decided that the only sensible option was to recreate the functionality of the Typemock activity against the 2012 API. So I used Telerik JustDecompile to open up the Typemock.TFS2010.dll assembly and had a look inside. In Visual Studio 2012 I then created a new C# class library project called Typemock.BM.TFS2012, targeting .NET 4. I then basically cut and pasted the classes read from JustDecompile into classes of the same name in the new project. I then added references to the TFS 2012 API assemblies and any other assemblies needed, and compiled the project. The one class I had problems with was TypemockStart, specifically the unpacking of the properties in the InternalExecute method. The code reflected by JustDecompile was full of what looked to be duplicated array copying which did not compile, so I simplified this to map the properties to the right names.
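If you fancy doing the port yourself, the basic skeleton of a TFS 2012 custom build activity is shown below. This is only a sketch: the real method bodies came from the decompiled Typemock classes, which I am not reproducing here. The important points are the BuildActivity attribute and that the project references the version 11 (2012) Microsoft.TeamFoundation.Build.* assemblies rather than the version 10 ones.

// Minimal sketch of one of the rebuilt activities, compiled against the TFS 2012
// (v11) API assemblies. The Execute body is a placeholder, not Typemock's code.
using System.Activities;
using Microsoft.TeamFoundation.Build.Client;
using Microsoft.TeamFoundation.Build.Workflow.Activities;

[BuildActivity(HostEnvironmentOption.Agent)] // so the build controller/agent will load it
public sealed class TypemockStart : CodeActivity
{
    protected override void Execute(CodeActivityContext context)
    {
        // Real version: find the "Typemock" settings argument, map its properties
        // to the names the Typemock start-up code expects, then start interception.
        context.TrackBuildMessage("TypemockStart called", BuildMessageImportance.Low);
    }
}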

You can download a copy of my Typemock.BM.TFS2012.Dll from here, so you don’t have to go through the process yourself.

I now had a TFS 2012 custom build activity. I took this new activity and put it in the CustomActivities folder. Next I took an unedited version of the default 2012 build process template and added the new TypemockRegister, TypemockStart (at the start of the test block) and TypemockStop (at the end of the test block) activities, as well as a Typemock argument (of TypemockSettings type). I checked this new template into TFS, and then created a build definition using it, setting the Typemock argument values.

 

image

Now at this point it is worth mentioning the nice feature of AutoDeploy. This allows you to use Typemock without having it installed on the build agent, thus making build agent management easier. You copy the AutoDeploy folder from the Typemock installation folder into source control (though a rename might be sensible so you remember it is for Typemock auto deployment and not anything else). You can then set the four argument properties

  • The location of the auto deployment folder in source control
  • A switch to enable auto deployment
  • Your Typemock license settings.

By using the auto deployment feature I was able to uninstall Typemock on the build agent.

So I tried a build using these settings; all the build activities loaded OK and the TypemockSettings argument was read, but my project compilation failed. As I had uninstalled Typemock on my build agent, all the references to Typemock assemblies in the GAC failed. These references were fine on a development PC, which had Typemock installed, but not on the build agent, which did not.

So I needed to point the references in my project to another location. Typemock have thought of this too and provide a tool to remap the references, which you can find on the Typemock menu.

image

You can use this tool, or do it manually.

You could re-point the references to the same location you used for the AutoDeploy feature. However, I prefer to keep my project references separate from my infrastructure (build activities etc.) as I use the same build templates across projects. For our projects we arrange source control so we have a structure in the general form (ignoring branches for simplicity)

/$
A team project
      BuildProcessTemplate
             CustomActivities
             AutoDeploy
      MyProject-1
             Src 
                   Solution1A.sln
             Lib 
                   [Nuget packages]
                   AutoDeploy
                   other assemblies
      MyProject-2
             Src 
                   Solution2a.sln
             Lib 
                   [Nuget packages]
                   AutoDeploy
                   other assemblies

I make sure we put all referenced assemblies in the Lib folder, including those from NuGet, by using a nuget.config file in the Src folder alongside the SLN file e.g.

<settings>
  <repositoryPath>..\Lib\</repositoryPath>
</settings>

This structure might not be to your taste, but I like it as it means all projects are independent, and so is the build process. The downside is you have to manage the references for the projects and the build separately, but I see this as good practice. You probably don't want to share references and NuGet packages between separate projects/solutions.

So now we have a 2012 build process that can start Typemock Isolator, and a sample project that contains Typemock based tests, some using MSTest and some using xUnit (remember Visual Studio 2012 supports multiple unit testing frameworks, not just MSTest; see here for how to set this up for TFS build). When the build is run I can see all my unit tests pass, so Typemock Isolator must be starting correctly.
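As an aside, the tests themselves are nothing special. Something along these lines (the classic fake-DateTime Isolator sample, in MSTest form here) makes a good smoke test: if the interception process has not been started on the agent, this style of test will not pass.

// A simple MSTest-based Typemock Isolator test, along the lines of those in the
// sample project; a green run is a good sign that TypemockStart did its job.
using System;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using TypeMock.ArrangeActAssert;

[TestClass]
public class IsolatorSmokeTests
{
    [TestMethod, Isolated]
    public void FakedDateTimeNow_ReturnsTheArrangedValue()
    {
        // Faking a static framework property is the bit other mocking frameworks cannot do
        Isolate.WhenCalled(() => DateTime.Now).WillReturn(new DateTime(2012, 8, 24));

        Assert.AreEqual(2012, DateTime.Now.Year);
    }
}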

 

image

So for me this is a reasonable workaround until Typemock ship a TFS 2012 specific version. Hope this saves you some time if you use Typemock and TFS 2012.

Two problems editing TFS2012 build workflows with the same solution

Updated 30th Aug 2012: This post is specific to TFS/VS 2012 RC - for details on the RTM see this updated post

Whilst moving over to our new TFS 2012 system I have been editing build templates, pulling the best bits from the selection of templates we used in 2010 into one master build process to be used for most future projects. Doing this I have hit a couple of problems; it turns out the cure is the same for both.

Problem 1 : When adding custom activities to the toolbox Visual Studio crashes

See the community activities documentation for the process to add items to the toolbox; when you get to the step to browse for the custom assembly you get a crash.

clip_image002

Problem 2: When editing a process template in any way the process is corrupted and the build fails

When the build runs you get the error (amongst others)

The build process failed validation. Details:
Validation Error: The private implementation of activity '1: DynamicActivity' has the following validation error:   Compiler error(s) encountered processing expression "BuildDetail.BuildNumber".
Type 'IBuildDetail' is not defined.

 

image

 

 

The Solution

Turns out the issue that caused both these problems was that the Visual Studio class library project I was using to host the XAML workflow for editing was targeting .NET 4.5, the default for VS2012. I changed the project to target .NET 4.0, rolled the XAML file back to an unedited version and reapplied my changes, and all was OK.

Yes I know it is strange, as you never build the containing project, but the targeted .NET version is passed around VS for building lists and the like, hence the problem.

Problems I had to address when setting up TFS 2012 Lab Environments with existing Hyper-V VMs

Whilst moving all our older test Hyper-V VMs into a new TFS 2012 Lab Management instance I have had to address a few problems. I have already posted about the main one of cross domain communications. This post aims to list the other workarounds I have used.

MTM can’t communicate with the VMs

When setting up an environment that includes existing VMs it is vital that the PC running MTM (Lab Center) can communicate with all the VMs involved. The best indication I have found that you will not have problems is to use a simple Ping. If you are creating an SCVMM environment you need to be able to Ping the fully qualified machine name as it has been picked up by Hyper-V, e.g. server1.test.local. If creating a standard environment you only need to be able to Ping the name you specify for the machine, e.g. server1 or maybe server.corp.com.
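If you would rather script this check than ping by hand, a quick console sketch like the one below does the same job. It is nothing to do with MTM itself, just a DNS (or hosts file) lookup followed by an ICMP echo against whatever name Lab Management will be using; the server name shown is only an example.

// Throwaway pre-flight check: can this PC resolve and ping the name MTM will use?
using System;
using System.Net;
using System.Net.NetworkInformation;

class LabPingCheck
{
    static void Main(string[] args)
    {
        string name = args.Length > 0 ? args[0] : "server1.test.local"; // name MTM/SCVMM will use

        try
        {
            var address = Dns.GetHostEntry(name).AddressList[0]; // DNS or hosts file lookup
            var reply = new Ping().Send(address, 2000);           // ICMP echo, 2 second timeout
            Console.WriteLine("{0} -> {1} : {2}", name, address, reply.Status);
        }
        catch (Exception ex)
        {
            Console.WriteLine("{0} : lookup/ping failed - {1}", name, ex.Message);
        }
    }
}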

If Ping fails then you can be sure that the MTM create environment verify step will also fail. The most likely reasons both are failing are

  • There are DNS issues: the VM names are missing, leases have expired, they are not in the domains expected or are just plain wrong. I found the best solution for me is to edit the local hosts file on the PC running MTM. Just add the name and the fully qualified name along with the correct IP address (e.g. a line like 192.168.10.50 server1 server1.test.local, using the VM's actual IP address). You should then be able to Ping the VM (unless there is a firewall issue, see below). The hosts file entry is only needed on the MTM PC whilst the environment is created; once the environment is set up it is not needed.
  • File and print sharing needs to be opened through the firewall on the VM (control panel > firewall > allow applications through firewall)
  • Missing/out of date Hyper-V integration services on the VM. This only matters if it is an SCVMM environment being created, as this is how the fully qualified name is found. This is best spotted in MTM as you get an error on the Machine properties tab. The fix is to reinstall the integration services via the Hyper-V Manager (Actions > Insert Integration Services Setup Disk, and maybe run the setup on the VM if it does not start automatically)

Can’t see a running VM in the list of available VMs

When composing an environment from running VMs, one problem I had was that though a VM was running it did not appear in the list in MTM. This turned out to be because the VM had metadata associating it with a different environment (in my case one dating back to our TFS 2010 instance).

This is easy to fix: in SCVMM or Hyper-V Manager open the VM settings and make sure the name/note field (red box below) is empty.

 image

Once the settings are saved you will have to wait a little while before SCVMM picks up the changes and lets your copy of MTM know the VM is available.

Getting TFS 2012 Agents to communicate cross domain

I don’t know about your systems, but historically we have had VMs running in test domains that are connected to our corporate LAN, thus allowing our staff and external testers to access them from their development PCs or through our firewall after providing suitable test domain credentials. These test setups are great candidates for the new TFS Lab Management 2012 feature of Standard environments. It does not matter if they are hosted as physical devices, on Hyper-V or on VMware.

However, the use of separate domains raises issues of cross domain authentication, irrespective of the virtualisation technology; it is always a potentially confusing area. If we want the ability to use the deployment and testing features of Lab Management, what we need to achieve is a Test Agent on each VM that talks to a Test Controller, which is registered to a TFS Team Project Collection. Not too easy when spread across multiple domains.

With TFS 2012 the whole process of getting agents to talk to their controller was greatly eased. Lab Management does it for you much of the time if you provide it with a corp\tfslab domain account that is a member of the Project Collection Test Service Accounts group in TFS.

The summary of the scenarios is as follows

  • If your test VMs are in either an SCVMM managed or standard environment but are joined to your corp domain: Lab Management wires it all up automatically using your corp\tfslab account
  • If your test VMs are in either an SCVMM managed or standard environment that is not domain joined (i.e. just in a workgroup): Lab Management wires it all up automatically using your corp\tfslab account
  • If your test VMs are in an SCVMM managed network isolated environment: Lab Management wires it all up automatically using your corp\tfslab account
  • If your test VMs are in either an SCVMM managed (not network isolated) or standard environment and are in their own test domain: you have to do some work

If, like me, you end up with the fourth scenario, the key is to provide a test controller within the test domain. This must be configured to talk back to TFS on the corp domain. This can all be done with local machine accounts on the test controller and the TFS server with matching names and passwords, what I think of as shadow accounts.

So for example, we have the following scenario of a corp domain with a DC and various TFS servers and controllers and a test domain containing three servers.

 

image

So the process to get the test agents on the test domain talking to TFS on the corp domain is as follows:

  1. On the TFS server (called tfsserver.corp.com in the above graphic)
    1. Open the Control Panel > Computer Management and create a new local user called tfslabshadow. Set the password, make sure the user does not have to change it at first login, and set the password to never expire
    2. In the TFS administration console add the new user tfsserver\tfslabshadow to the Project Collection Test Service Accounts group
  2. On a machine (called server.test.local in the above graphic) within the test domain (this can be any VM in the domain running Windows other than the DC)
    1. Open the Control Panel > Computer Management and create a new local user called tfslabshadow with the same password as the matching account on the TFS server
    2. Add this user to the local administrators group for that server.
    3. Login as this user
    4. Install the Visual Studio 2012 Test controller
    5. When the installation is complete the configuration tool will launch. Set the service to run as the tfslabshadow account and register it to connect to the TFS server with this account too.
      Note - When you first load the configuration tool you need to browse for the TFS server and enter its URL. If you have your shadow accounts working correctly you should not need to enter any other credentials at this point.
      Note - You can enter the local user name in either the .\tfslabshadow or server\tfslabshadow format

      image
    6. If you have all the settings correct then you should be able to apply the changes without any errors and the new test controller should be registered. If you get any errors they are usually fairly clear at this point when you look in the log; you probably forgot to place a user in some group somewhere.
  3. From a PC running Test Manager 2012 (MTM) on the corp domain
    1. Go into the Lab Center
    2. Create a new environment (can be SCVMM or Standard) containing the machines in the test domain (or open an existing environment if you have one that was not correctly configured)
    3. On the Advanced tab you should be able to select the new test controller server that is hosted within the test domain
    4. You can make any other setting changes you require (remember on the machines tab to enter the test domain login credentials, as they will have defaulted to your current ones). When you are done you can select Verify. I had a problem here due to DNS entries: from the PC running MTM I could ping server, but MTM was trying to communicate using the name server.test.local. To get around this I added an entry to my local hosts file. I have also seen VMs that are not registered in DNS at all; again a local hosts file entry fixes the problem. This is only required for the initial verification and deployment/configuration; once this is done the hosts entries can be removed if you want.
    5. Once verification has passed save the changes and after a short wait the environment should finish configuring itself showing no errors

So I hope I have provided a step by step guide to help you get around issues with cross domain testing in Lab Management. However, it is still important to remember the exceptions

  1. As we are using local machine accounts you cannot have the TFS server or the test controller running on a domain controller (as a DC cannot have local machine accounts). If your environment is a single box that is a DC then you either have to set up a cross domain two way trust between test and corp, or rebuild the environment as a workgroup or network isolated environment.
  2. The shadow account cannot have the same name as the corp\tfslab account i.e. tfslab. If you try to use the same name for the local machine and domain accounts the matching of the two local machine accounts will fail, as on the TFS server end it will not be able to decide whether to use corp\tfslab or tfsserver\tfslab

For more details on this general area see MSDN

_atomic_fetch_sub_4 error running VS2012RC after Office 2013 Customer Preview is installed

When I installed the Office 2013 Customer Preview all seemed good: loads of new Metro look Office features. However, when I tried to load my previously working Visual Studio 2012 RC I got the error

“The procedure entry point _atomic_fetch_sub_4 could not be located in the dynamic link library devenv.exe”.

 

image

This is a known issue with the C++ runtime and a patch was released last week; install this and all should be OK.

TFS build service cannot connect to a TFS 2012 server - seeing EventID 206 MessageQueue in error log

Whilst setting up our new TFS 2012 instance I had a problem getting the build box to connect to the TFS server.

  1. When I started the build service (on a dedicated VM configured as a controller and single agent, connected to a TPC on the server on another VM) all appeared OK; the controller and agent said they were running and the state icons went green
  2. About 5 seconds later the state icons went red, but the message still said the controller and agent were running; from past experience I know this means it is all dead.
  3. On the build service section a new ‘details’ link appears, but if you try to click it you get a 404 error (see below)

image

In the Windows event log (TFS/Build-Service/Operational section) I got the error

Build machine build2012 lost connectivity to message queue tfsmq://buildservicehost-2/.
Reason: HTTP code 404: Not Found

It is recorded as EventID 206 in the category MessageQueue

I tried reinstalling the build VM and checked the firewalls on the build VM and the TFS Server VM, all to no effect.

The issue turned out to be the TFS URL I had used. On the build service VM I had used a URL to connect to the TFS server that used HTTPS/SSL. As soon as I changed it to an HTTP URL the build service started to work. This was OK for me as the build VM and server VM were in the same machine room, so I did not really need SSL; I had just used it out of habit as this is what our developer PCs use.

However, if you do want to keep using SSL you need to do the following

  • Open the following configuration file: C:\Program Files\Microsoft Team Foundation Server 11.0\Application Tier\Message Queue\web.config
  • Find a section like the bindings section below
  • Alter httpTransport to say httpsTransport

    <bindings>
      <customBinding>
        <binding name="TfsSoapBinding">
          <textMessageEncoding messageVersion="Soap12WSAddressing10" />
          <!-- was: <httpTransport authenticationScheme="Ntlm" manualAddressing="true" /> -->
          <httpsTransport authenticationScheme="Ntlm" manualAddressing="true" />
        </binding>
      </customBinding>
    </bindings>

  • Save the file
  • Recycle the IIS app pool
  • Restart the build service on the build VM

Thanks to Patrick on the TFS team for helping me get to the bottom of this.

Where did my Visual Studio 2010 link go from my Windows 8 desktop after I installed SSDT?

I have been finding the ‘hit the Windows key and type’ means of launching desktop applications in Windows 8 quite nice. It means I get used to the same behaviour in Windows or Ubuntu to launch things: no need to remember menu locations, just type a name, all very SlickRun. However, I hit a problem today: I hit the Windows key, typed Visual and expected to see Visual Studio 2012 and 2010, but I only saw Visual Studio 2012.

image

But both were there yesterday!

The issue was I had installed SSDT (SQL Server Data Tools). This is hosted within the Visual Studio 2010 shell, and it had renamed my Metro desktop Visual Studio 2010 app to Microsoft SQL Server Data Tools. If I typed this the app was found and it launched Visual Studio 2010; I could then choose whether to use SSDT or Ultimate features as you would expect. This is the same behaviour as on Windows 7, it is just that on Windows 7 you would have two menu items, one for SSDT and one for VS2010, both pointing to the same place.

Now I am a creature of habit, even if it is a newly formed one, and I like to just type Vis, so this is how I got the link back onto the Metro desktop. There might be other ways, but this is the one that worked for me

  1. Found the Visual Studio 2010 devenv.exe file in C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE
  2. Right clicked and created a shortcut on the desktop
  3. Renamed the new desktop shortcut to ‘Visual Studio 2010’
  4. Right clicked on the renamed shortcut and selected ‘pin to start’
  5. Deleted the desktop shortcut; it is no longer needed as it has been copied (yes, I found this a bit strange too, but I do like a clean desktop so delete it I did). You don’t have to delete it if you want a desktop shortcut.

I can now press the Windows key, type Vis and see both VS 2010 and 2012, and I can still type SQL and get to SSDT

image