But it works on my PC!

The random thoughts of Richard Fennell on technology and software development

Could not load file or assembly 'Microsoft.TeamFoundation.WorkItemTracking.Common, Version=12.0.0.0' when running a build on a new build agent on TFS 2013.2

I am currently rebuilding our TFS build infrastructure; we have too many build agents that are just too different, and they don't need to be. So I am looking at a standard set of features for a build agent, and the ability to auto-provision new instances to make scaling easier. More on this in a future post…

Anyway, whilst testing a new agent I hit a problem. A build that had worked on a previous test agent failed with the error

Could not load file or assembly 'Microsoft.TeamFoundation.WorkItemTracking.Common, Version=12.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a' or one of its dependencies. The located assembly's manifest definition does not match the assembly reference. (Exception from HRESULT: 0x80131040)

The log showed it was failing before it even did a 'get latest' of the files to build, or did anything else on the build agent.


It turned out the issue was that the PowerShell script which installs all the TFS build components and SDKs had failed when trying to install the Azure SDK for VS2013: the Web Platform Installer was not present, so when the script tried to use its command-line installer to add the SDK package it failed.

I fixed the issue with the Web PI tools and re-ran the command line to install the Azure SDK, and all was OK.

Not sure why this happened; maybe a missing pre-requisite normally put on by Web PI itself was the issue. I know older versions did have a .NET 3.5 dependency. One to keep an eye on.
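
For anyone scripting the same setup, it is worth having the install script fail fast if the pre-requisite is missing rather than letting the SDK install fail later. A minimal PowerShell sketch, assuming WebpiCmd.exe is in its default location (the product ID shown is an assumption; list the real IDs with WebpiCmd.exe /List /ListOption:Available):

# Fail fast if the Web Platform Installer is missing on this build agent
$webPiCmd = Join-Path ${env:ProgramFiles} "Microsoft\Web Platform Installer\WebpiCmd.exe"
if (-not (Test-Path $webPiCmd)) {
    throw "Web Platform Installer is not installed on this build agent"
}

# Install the Azure SDK package (the product ID is an assumption - check with /List)
& $webPiCmd /Install /Products:"WindowsAzureSDK_2_3_VS2013" /AcceptEula
if ($LASTEXITCODE -ne 0) {
    throw "WebPI install failed with exit code $LASTEXITCODE"
}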

MSBuild targeting a project in a solution folder

Whilst working on an automated build where I needed to target a specific project I hit a problem. I would normally expect the MSBuild argument to be

/t:MyProject:Build

Where I want to build the project MyProject in my solution and perform the Build target (which is probably the default anyway).

However, my project was in a solution folder. The documentation says that you should be able to use the form

/t:TheSolutionFolder\MyProject:Build

but I kept getting the error the project did not exist.

Once I changed to

/t:TheSolutionFolder\MyProject

it worked; the default build target was run, which was fine as this was Build, the one I wanted.
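
So a full invocation looks something like this (the solution name here is illustrative):

msbuild MySolution.sln /t:TheSolutionFolder\MyProject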

Not sure why this occurred; maybe I should steer clear of solution folders?

Building Azure Cloud Applications on TFS

If you are doing any work with Azure Cloud Applications there is a very good chance you will want your automated build process to produce the .CSPKG deployment file; you might even want it to do the deployment too.

On our TFS build system, it turns out this is not as straightforward as you might hope. The problem is that the MSBuild Publish target creates the files in the $(build agent working folder)\source\myproject\bin\debug folder, unlike the output of the Build target, which goes to the $(build agent working folder)\binaries\ folder that gets copied to the build drops location. Hence, though the files are created, they are not accessible to the team with the rest of the built items.

I have battled to sort this for a while, trying to avoid the need to edit our customised TFS build process template. This is something we try to avoid where possible, favouring environment variables and MSBuild arguments where we can get away with it. There is no point denying that editing build process templates is a pain point on TFS.

The solution – editing the process template

Turns out a colleague had fixed the same problem a few projects ago and the functionality was already hidden in our standard TFS build process template. The problem was that it was not documented; a lesson for all of us that it is a very good idea to put customisation information in a searchable location, so others can find customisations that are not immediately obvious. Frankly, this is one of the main purposes of this blog: somewhere I can find what I did years later, as I won't remember the details.

Anyway, the key is to make sure the MSBuild Publish target uses the correct location to create the files. This is done using a pair of MSBuild arguments in the advanced section of the build configuration:

  • /t:MyCloudApp:Publish – this tells MSBuild to perform the Publish target for just the project MyCloudApp. You might be able to just use /t:Publish if only one project in your solution has a Publish target.
  • /p:PublishDir=$(OutDir) – this is the magic. We pass in the placeholder value $(OutDir); at this point we don't know the target binary location, as it is build agent/instance specific. Customisation in the TFS build process template converts this placeholder to the correct path.

In the build process template, in the Initialize Variable sequence within Run on Agent, add an If activity.


  • Set the condition to MSBuildArguments.Contains("$(OutDir)")
  • Within the true branch add an Assign activity that sets the MSBuildArguments variable to MSBuildArguments.Replace("$(OutDir)", String.Format("{0}\{1}\\", BinariesDirectory, "Packages"))

This will swap the $(OutDir) for the correct TFS binaries location within that build.
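
For reference, the resulting activities in the template's XAML look roughly like this (a sketch of the markup the workflow designer generates, not a drop-in fragment):

<If Condition="[MSBuildArguments.Contains(&quot;$(OutDir)&quot;)]">
  <If.Then>
    <!-- Swap the placeholder for the real binaries folder for this build -->
    <Assign>
      <Assign.To>
        <OutArgument x:TypeArguments="x:String">[MSBuildArguments]</OutArgument>
      </Assign.To>
      <Assign.Value>
        <InArgument x:TypeArguments="x:String">[MSBuildArguments.Replace("$(OutDir)", String.Format("{0}\{1}\\", BinariesDirectory, "Packages"))]</InArgument>
      </Assign.Value>
    </Assign>
  </If.Then>
</If>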

After that it all just works as expected. The CSPKG file etc. ends up in the drops location.

Other things that did not work (prior to TFS 2013)

I had also looked at running a PowerShell script at the end of the build process, and at adding an AfterPublish target within the MSBuild process (by adding it to the project file manually) to do a file copy. Both these methods suffered from the problem that when the MSBuild command ran it did not know the drops location to copy the files into. Hence the need for the customisation above.

Now I should point out that though we are running TFS 2013 this project was targeting the TFS 2012 build tools, so I had to use the solution outlined above, a process template edit. However, if we had been using the TFS 2013 process template as our base for customisation then we would have had another way to get around the problem.

TFS 2013 exposes the current build settings as environment variables. This would allow us to use an AfterPublish MSBuild target, something like:

<Target Name="CustomPostPublishActions" AfterTargets="AfterPublish" Condition="'$(TF_BUILD_DROPLOCATION)' != ''">
  <Exec Command="echo Post-PUBLISH event: Copying published files to: $(TF_BUILD_DROPLOCATION)" />
  <Exec Command="xcopy &quot;$(ProjectDir)bin\$(ConfigurationName)\app.publish&quot; &quot;$(TF_BUILD_DROPLOCATION)\app.publish&quot; /y " />
</Target>

So maybe a simpler option for the future?

The moral of the story: document your customisations and let your whole team know they exist.

Interesting license change for VS Online for ‘Stakeholder’ users

All teams have 'Stakeholders': the people who are driving a project forward, who want the new system to be able to do their job, but are often not directly involved in the production/testing of the system. In the past this has been an awkward group to provide TFS access for: if they want to see any detail of the project they need a TFS CAL, which is expensive for the occasional casual viewer.

Last week Brian Harry announced a licensing change in VSO: a 'Stakeholder' license. It has limitations, but provides the key features this group will need:

  • Full read/write/create on all work items
  • Create, run and save (to “My Queries”) work item queries
  • View project and team home pages
  • Access to the backlog, including add and update (but no ability to reprioritize the work)
  • Ability to receive work item alerts

The best news is that it will be a free license, so there is no monthly cost to have as many 'Stakeholders' as you like on your VSO account.

Now most of my clients are using on-premise TFS, so this change does not affect them. However, the same post mentions that the "Work Item Web Access" TFS CAL exemption will be changed in future releases of TFS to bring it in line with the 'Stakeholder' license.

So good news all round: making TFS adoption easier and adding more ways for clients to access their ALM information.

How long is my TFS 2010 to 2013 upgrade going to take – Part 2

Back in January I did a post How long is my TFS 2010 to 2013 upgrade going to take? I have now done some more work with one of the clients and have more data. Specifically, the initial trial was 2010 > 2013 RTM on a single-tier test VM; we have now done a test upgrade from 2010 > 2013.2 on the same VM, and also one to a production-quality dual-tier system.


The key lessons are

  • There are 150 more steps to go from 2013 RTM to 2013.2, so it takes a good deal longer.
  • The dual-tier production hardware does the upgrade nearly twice as fast, though the initial step (step 31, moving the source code) is not that much faster; it is the steps after this that speed up. We put it down to far better SQL throughput.

Cloning a TFS repository with git-tf gives "A server path must be absolute"

I am currently involved in moving some TFVC-hosted source to a TFS Git repository. The first step was to clone the source for a team project from TFS using the command

git tf clone --deep http://tfsserver01:8080/tfs/defaultcollection ‘$My Project’ localrepo1

and it worked fine. However, the next project I tried to move had no space in its source path

git tf clone --deep http://tfsserver01:8080/tfs/defaultcollection ‘$MyProject’ localrepo2

This gave the error

git-tf: A server path must be absolute.

It turned out the problem was the single quotes. Remove these and the command worked as expected:

git tf clone --deep http://tfsserver01:8080/tfs/defaultcollection $MyProject localrepo2

Seems you should only use the quotes when there are spaces in a path name.

Updated 11 June – After a bit more thought I think I have tracked down the true cause. It is not actually the single quotes, but the fact that the command line had been cut and pasted from Word. This meant the quote was a curly ‘ rather than a straight '. Cutting and pasting from Word can always lead to problems like this, but it is still a strange error message; I would have expected an invalid character message.

Our TFS Lab Management Infrastructure

After my session at Techorama last week I have been asked some questions about how we built our TFS Lab Management infrastructure. Well, here is a bit more detail; thanks to Rik for helping correct what I had misremembered and providing much of the detail.


For SQL we have two physical servers with Intel processors. Each has a pair of mirrored disks for the OS and a RAID5 group of disks for data. We use SQL 2012 Enterprise Always On for replication to keep the DBs in sync. The servers are part of a Windows cluster (needed for Always On), and we use a VM to provide a third server in the witness role; this is hosted on a production Hyper-V cloud. We have a number of availability groups on this platform, basically one per service we run. This allows us to split the read/write load between the two servers (unless they have failed over to a single box). If we had only one availability group for all the DBs, one node would be doing all the read/write work and the other read only, so not that balanced.

SCVMM runs on a physical server with a pair of hardware-mirrored 2TB disks for 2TB of storage. That's split into two partitions, as you can't use data de-duplication on the OS volume of Windows. This allows us to have something like 5TB of Lab VM images stored on the SCVMM library share that's hosted on the SCVMM server. This share is for lab management use only.

We also have two physical servers that make up a Windows cluster with a Cluster Shared Volume on an iSCSI SAN. This hosts a number of SCVMM libraries for ISO images, production VM images and test stuff. Data de-duplication is again giving us an 80% space saving on the SAN (ISO images of OS' and VHDs of installed OS' dedupe _really_ well).
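
If you want to try the same space saving, de-duplication is enabled per volume with the built-in Windows Server cmdlets. A quick sketch, assuming the library lives on a D: volume (remember de-duplication cannot be used on the OS volume):

# Add the feature, then enable de-duplication on the library volume
Install-WindowsFeature -Name FS-Data-Deduplication
Enable-DedupVolume -Volume "D:"
# Optimise all files regardless of age (the default is to wait a few days)
Set-DedupVolume -Volume "D:" -MinimumFileAgeDays 0
# Check the space saving once the optimisation jobs have run
Get-DedupStatus -Volume "D:"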

Our Lab cloud currently has three AMD-based servers. They use the same disk setup as the SQL boxes, with a mirrored pair for the OS and RAID5 for VM storage.

Our production Hyper-V cloud also has three servers, but this time in a Windows cluster using a Cluster Shared Volume on our other iSCSI SAN for VM storage, so it can do automated failover of VMs.

Each of the SQL servers, SCVMM servers and Lab Hyper-V servers uses Windows Server 2012 R2 NIC teaming to combine 2 x 1Gbit NICs, which gives us better throughput and failover. The lab servers have one team for VM traffic and one team for the Hyper-V management traffic that is used when deploying VMs. That means we can push VMs around pretty much as fast as the disks will move data in either direction, without needing expensive 10Gbit Ethernet.
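
For reference, this sort of team takes one line with the built-in 2012 R2 cmdlets. A sketch, where the team name and NIC names are assumptions for your own hardware:

# Create a switch-independent team from two 1Gbit NICs for VM traffic
New-NetLbfoTeam -Name "VMTeam" -TeamMembers "NIC1","NIC2" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic
# Confirm the team and its members are up
Get-NetLbfoTeam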

So I hope that answers any questions.

What's new in TFS from TechEd 2014?

If you use TFS then it is well worth a look at Brian Harry's TechEd 2014 session 'Modern Application Lifecycle Management'. It goes through the changes and new features in TFS, both on-premise and in the cloud.

Not all these features are in 2013.2 (which was released during the conference). However, in the session they said the Visual Studio 2013.3 CTP is going to be available next week, so there is not long to wait if you want a look at the latest features.

New release of TFS Alerts DSL that allows work item state rollup

A very common question I am asked at clients is

“Is it possible for a parent TFS work item to be automatically be set to ‘done’ when all the child work items are ‘done’?”.

The answer is 'not out of the box'; there is no work item state rollup in TFS.

However, it is possible via the API. I have modified my TFS Alerts DSL CodePlex project to expose this functionality, adding a couple of methods that allow you to find the parent and children of a work item, and hence create your own rollup script.

To make use of this, all you need to do is create a TFS alert that calls a SOAP endpoint where the Alerts DSL is installed. The endpoint should be called whenever a work item changes state. It will in turn run a Python script similar to the following to perform the rollup:

import sys
# Expect 2 args: the event type and the unique ID of the work item.
# GetWorkItem, GetParentWorkItem etc. are helpers provided by the Alerts DSL host.
if sys.argv[0] == "WorkItemEvent":
    wi = GetWorkItem(int(sys.argv[1]))
    parentwi = GetParentWorkItem(wi)
    if parentwi is None:
        LogInfoMessage("Work item '" + str(wi.Id) + "' has no parent")
    else:
        LogInfoMessage("Work item '" + str(wi.Id) + "' has parent '" + str(parentwi.Id) + "'")

        results = [c for c in GetChildWorkItems(parentwi) if c.State != "Done"]
        if len(results) == 0:
            LogInfoMessage("All child work items are 'Done'")
            parentwi.State = "Done"
            UpdateWorkItem(parentwi)
            msg = "Work item '" + str(parentwi.Id) + "' has been set as 'Done' as all its child work items are done"
            SendEmail("richard@typhoontfs","Work item '" + str(parentwi.Id) + "' has been updated", msg)
            LogInfoMessage(msg)
        else:
            LogInfoMessage("Not all child work items are 'Done'")
else:
    LogErrorMessage("Was not expecting to get here")
    LogErrorMessage(str(sys.argv))

So now there is a fairly easy way to create your own rollups, based on your own rules.