But it works on my PC!

The random thoughts of Richard Fennell on technology and software development

Getting the Typemock TFS build activities to work on a TFS build agent running in interactive mode

Windows 8 store applications need to be built on a TFS build agent running in interactive mode if you wish to run any tests. So whilst rebuilding all our build systems I decided to try to have all the agents running interactively. As we tend to run one agent per VM, I thought this was not going to be a major issue.

However, whilst testing we found that any of our builds that used the Typemock build activities failed when the build agent was running interactively, but worked perfectly when it was running as a service. The error was


Exception Message: Access to the registry key 'HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\TypeMock' is denied. (type UnauthorizedAccessException)
Exception Stack Trace:    at Microsoft.Win32.RegistryKey.Win32Error(Int32 errorCode, String str)
   at Microsoft.Win32.RegistryKey.CreateSubKeyInternal(String subkey, RegistryKeyPermissionCheck permissionCheck, Object registrySecurityObj, RegistryOptions registryOptions)
   at Microsoft.Win32.RegistryKey.CreateSubKey(String subkey, RegistryKeyPermissionCheck permissionCheck)
   at Configuration.RegistryAccess.CreateSubKey(RegistryKey reg, String subkey)
   at TypeMock.Configuration.IsolatorRegistryManager.CreateTypemockKey()
   at TypeMock.Deploy.AutoDeployTypeMock.Deploy(String rootDirectory)
   at TypeMock.CLI.Common.TypeMockRegisterInfo.Execute()
   at TypeMock.CLI.Common.TypeMockRegisterInfo..ctor()
   at System.Activities.Statements.Throw.Execute(CodeActivityContext context)
   at System.Activities.CodeActivity.InternalExecute(ActivityInstance instance, ActivityExecutor executor, BookmarkManager bookmarkManager)
   at System.Activities.Runtime.ActivityExecutor.ExecuteActivityWorkItem.ExecuteBody(ActivityExecutor executor, BookmarkManager bookmarkManager, Location resultLocation)


So the issue was registry access. Irrespective of whether it was running interactively or as a service, I used the same domain service account, which was a local admin on the build agent. The only thing that changed was the mode of running.

After some thought I focused on UAC being the problem, but disabling it did not seem to fix the issue. I was stuck, or so I thought.

However, Robert Hancock, unknown to me, was suffering a similar problem with a TFS build that included a post-build event failing to xcopy a BizTalk custom functoid DLL to ‘Program Files’. He kept getting an ‘exit code 4 access denied’ error when the build agent was running interactively. It turned out the solution he found on Daniel Petri’s blog also fixed my issues, as they were both UAC/desktop interaction related.

The solution was to create a group policy for the build agent VMs that set the following:

  • User Account Control: Behavior of the elevation prompt for administrators in Admin Approval Mode - Set its value to Elevate without prompting.
  • User Account Control: Detect application installations and prompt for elevation - Set its value to Disabled.
  • User Account Control: Only elevate UIAccess applications that are installed in secure locations - Set its value to Disabled.
  • User Account Control: Run all administrators in Admin Approval Mode - Set its value to Disabled.
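If you want to test the effect on a single VM before pushing out a GPO, these four policies map to well-known registry values; a minimal PowerShell sketch (run elevated, and note a reboot is needed, particularly for EnableLUA):

# Sketch only: sets the same four UAC policies directly in the registry on one VM.
# Verify the value names and data against your GPO before relying on them.
$key = "HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System"
Set-ItemProperty -Path $key -Name ConsentPromptBehaviorAdmin -Value 0 # Elevate without prompting
Set-ItemProperty -Path $key -Name EnableInstallerDetection -Value 0   # Installer detection - Disabled
Set-ItemProperty -Path $key -Name EnableSecureUIAPaths -Value 0       # UIAccess secure locations - Disabled
Set-ItemProperty -Path $key -Name EnableLUA -Value 0                  # Admin Approval Mode - Disabled
Restart-Computer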

Once this GPO was pushed out to the build agent VMs and they were rebooted, my Typemock based builds and Robert’s BizTalk builds all worked as expected.

AddBizTalkHiddenReferences error in TFS build when installing ProjectBuildComponent via a command line setup

I have been trying to script the installation of all the tools and SDKs we need on our TFS build agent VMs. This included BizTalk. A quick check on MSDN showed the setup command line parameter I needed to install the build components was


/ADDLOCAL ProjectBuildComponent

So I ran this via my VM’s setup PowerShell script and all appeared OK, but when I tried a build I got the error


C:\Program Files (x86)\MSBuild\Microsoft\BizTalk\BizTalkCommon.targets (189): The "AddBizTalkHiddenReferences" task failed unexpectedly.
System.ArgumentNullException: Value cannot be null.
Parameter name: path1
   at System.IO.Path.Combine(String path1, String path2)
   at Microsoft.VisualStudio.BizTalkProject.Base.HiddenReferencesHelper.InitializeHiddenReferences()
   at Microsoft.VisualStudio.BizTalkProject.Base.HiddenReferencesHelper.get_HiddenReferences()
   at Microsoft.VisualStudio.BizTalkProject.Base.HiddenReferencesHelper.GetHiddenReferencesNotAdded(IList`1 projectReferences)
   at Microsoft.VisualStudio.BizTalkProject.BuildTasks.AddBizTalkHiddenReferences.Execute()
   at Microsoft.Build.BackEnd.TaskExecutionHost.Microsoft.Build.BackEnd.ITaskExecutionHost.Execute()
   at Microsoft.Build.BackEnd.TaskBuilder.<ExecuteInstantiatedTask>d__20.MoveNext()

The strange thing is that if I ran the BizTalk installer via the UI and selected just the ‘Project Build Components’, my build did not give this error.

On checking the BizTalk setup logs I saw that the UI based install does not run


/ADDLOCAL ProjectBuildComponent

but


/ADDLOCAL WMI,BizTalk,AdditionalApps,ProjectBuildComponent

Once this change was made to my PowerShell script the TFS build worked OK.
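For reference, the relevant line of the script ended up something like the sketch below; the installer path is hypothetical and the /quiet switch is an assumption, so check the setup command-line documentation for your BizTalk version:

# Silent install of the BizTalk project build components with the full feature list
& "D:\BizTalk2013\Setup.exe" /ADDLOCAL WMI,BizTalk,AdditionalApps,ProjectBuildComponent /quiet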

Automating TFS Build Server deployment with SCVMM and PowerShell

Rik recently posted about the work we have done to automatically provision TFS build agent VMs. This has come out of us having about 10 build agents on our TFS server, all doing different jobs with different SDKs etc. When we needed to increase capacity for a given build type we had problems: could another agent run the build? What exactly was on the agent anyway? An audit of the boxes made for horrible reading; they were very inconsistent.

So Rik automated the provisioning of new VMs and I looked at providing a PowerShell script to install the base tools we needed on our build agents, knowing this list is going to change a good deal over time. After some thought, for our first attempt we picked:

  • TFS itself (to provide the 2013.2 agent)
  • Visual Studio 2013.2 – you know you always end up installing it in the end to get SSDT, SDK and MSBuild targets etc.
  • WIX 3.8
  • Azure SDK 2.3 for Visual Studio 2013.2 – Virtually all our current projects need this. This is actually why we have had capacity issues on the old build agents, as it was only installed on one.

Given this basic set of tools we can build probably 70-80% of our solutions. If we use this as the base for all build boxes we can then add extra tools if required manually, but we expect we will just end up adding to the list of items installed on all our build boxes, assuming the cost of installing the extra tools/SDKs is not too high. Also we will try to auto deploy tools as part of our build templates where possible, again reducing what needs to be placed on any given build agent.

Now the script I ended up with is a bit rough and ready, but it does the job. I think in the future a move to DSC might help in this process, but I did not have time to write the custom resources now. I am assuming this script is going to be a constant work in progress as it is modified for new releases and tools. I did make the effort to have all the steps check to see if they need to be done, thus allowing the script to be re-run to ‘repair’ a build agent. All the writing to the event log is to make life easier for Rik when working out what is going on with the script, which is especially useful as the installs from ISOs are a bit slow to run.


# make sure we have a working event logger with a suitable source
Create-EventLogSource -logname "Setup" -source "Create-TfsBuild"
write-eventlog -logname Setup -source Create-TfsBuild -eventID 6 -entrytype Information -message "Create-Tfsbuild started"

# add the build service account as a local admin; not essential, but it makes life easier for some projects
Add-LocalAdmin -domain "ourdomain" -user "Tfsbuilder"

# Install TFS by mounting the ISO over the network and running the installer.
# The command & "$($isodrive):\tfs_server.exe" /quiet is run.
# The function uses a while loop to see when the tfsconfig.exe file appears and assumes
# the installer is then done - dirty but it works, and it allows the use of write-progress
# to give some indication the install is done.
Write-Output "Installing TFS server"
Add-Tfs "\\store\ISO Images\Visual Studio\2013\2013.2\en_visual_studio_team_foundation_server_2013_with_update_2_x86_x64_dvd_4092433.iso"

Write-Output "Configuring TFS Build"
# Clear out any old configuration - I found this helped avoid errors when re-running the script.
# A System.Diagnostics.ProcessStartInfo object is used to run the tfsconfig command with the
# argument "setup /uninstall:All"; ProcessStartInfo is used so we can capture the error output
# and log it to the event log if required.
Unconfigure-Tfs

# And reconfigure, again using tfsconfig, this time with the argument
# "unattend /configure /unattendfile:config.ini", where the config.ini has been created
# with the tfsconfig unattend /create flag (check MSDN for the details).
Configure-Tfs "\\store\ApplicationInstallers\TFSBuild\configsbuild.ini"

# Install VS2013, again by mounting the ISO and running the installer, with a loop to check for a file appearing.
Write-Output "Installing Visual Studio"
Add-VisualStudio "\\store\ISO Images\Visual Studio\2013\2013.2\en_visual_studio_premium_2013_with_update_2_x86_dvd_4238022.iso"

# Install WiX by running the exe with the -q option, via ProcessStartInfo again.
Write-Output "Installing Wix"
Add-Wix "\\store\ApplicationInstallers\wix\wix38.exe"

# Install the Azure SDK using the Web Platform Installer, checking if the Web PI is present
# first and installing it if needed. The Web PI lets you ask to reinstall a package; if it is
# already installed the request is just ignored, so there is no need to check if the Azure SDK is present.
Write-Output "Installing Azure SDK"
Add-WebPIPackage "VWDOrVs2013AzurePack"

write-eventlog -logname Setup -source Create-TfsBuild -eventID 7 -entrytype Information -message "Create-Tfsbuild ended"
Write-Output "End of script"

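For completeness, this is roughly the shape of a helper like Add-Tfs; a minimal sketch, assuming Mount-DiskImage is available (Windows 8/Server 2012 or later) and that a default TFS 2013 install path is a reasonable ‘is it installed yet’ marker:

# Sketch of an Add-Tfs helper: mount the ISO, run the installer silently, then
# poll for TfsConfig.exe to appear as a crude 'install finished' check.
function Add-Tfs
{
    param([string]$isoPath)

    # Path is an assumption based on a default TFS 2013 (version 12.0) install
    $configExe = "C:\Program Files\Microsoft Team Foundation Server 12.0\Tools\TfsConfig.exe"
    if (Test-Path $configExe) { return } # already installed, so script re-runs are safe

    Mount-DiskImage -ImagePath $isoPath
    $drive = (Get-DiskImage -ImagePath $isoPath | Get-Volume).DriveLetter
    & "$($drive):\tfs_server.exe" /quiet

    # Dirty but effective: wait for the installer to drop TfsConfig.exe
    while (-not (Test-Path $configExe))
    {
        Write-Progress -Activity "Installing TFS" -Status "Waiting for the installer to complete"
        Start-Sleep -Seconds 30
    }
    Dismount-DiskImage -ImagePath $isoPath
}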

So for a first pass this seems to work. I now need to make sure all our builds can use this cut-down build agent; if they can’t, do I need to modify the build template? Do I need to add more tools to our standard install? Or decide if it is going to need a special agent definition?

Once this is all done the hope is that when all the TFS build agents need patching for TFS 2013.x we will just redeploy new VMs or run a modified script to silently do the update. We shall see if this delivers on that promise.

MSBuild targeting a project in a solution folder

Whilst working on an automated build, I needed to target a specific project and hit a problem. I would normally expect the MSBuild argument to be

/t:MyProject:Build

Where I want to build the project MyProject in my solution and perform the Build target (which is probably the default anyway).

However, my project was in a solution folder. The documentation says that you should be able to use the form

/t:TheSolutionFolder\MyProject:Build

but I kept getting an error saying the project did not exist.

Once I changed to

/t:TheSolutionFolder\MyProject

it worked; the default build target was run, which was OK as this was Build, the one I wanted.
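For reference, the full command line (solution name hypothetical) was of this shape:

msbuild MySolution.sln /t:TheSolutionFolder\MyProject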

Not sure why this occurred; maybe I should steer clear of solution folders?

Building Azure Cloud Applications on TFS

If you are doing any work with Azure Cloud Applications there is a very good chance you will want your automated build process to produce the .CSPKG deployment file, you might even want it to do the deployment too.

On our TFS build system, it turns out this is not as straightforward as you might hope. The problem is that the MSBuild publish target that creates the files creates them in the $(build agent working folder)\source\myproject\bin\debug folder, unlike the output of the build target, which puts them in the $(build agent working folder)\binaries\ folder that gets copied to the build drops location. Hence, though the files are created, they are not accessible to the team with the rest of the built items.

I have battled to sort this for a while, trying to avoid the need to edit our customised TFS build process template. This is something we try to avoid where possible, favouring environment variables and MSbuild arguments where we can get away with it. There is no point denying that editing build process templates is a pain point on TFS.

The solution – editing the process template

Turns out a colleague had fixed the same problem a few projects ago and the functionality was already hidden in our standard TFS build process template. The problem was that it was not documented; a lesson for all of us that it is a very good idea to put customisation information in a searchable location, so others can find customisations that are not immediately obvious. Frankly this is one of the main purposes of this blog: somewhere I can find what I did years ago, as I won’t remember the details.

Anyway, the key is to make sure the publish target for MSBuild uses the correct location to create the files. This is done using a pair of MSBuild arguments in the advanced section of the build configuration:

  • /t:MyCloudApp:Publish - this tells MSBuild to perform the publish action for just the project MyCloudApp. You might be able to just use /t:Publish if only one project in your solution has a Publish target.
  • /p:PublishDir=$(OutDir) - this is the magic. We pass in the temporary value $(OutDir). At this point we don’t know the target binary location, as it is build agent/instance specific; customisation in the TFS build process template converts this temporary value to the correct path.
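Together, the MSBuild arguments entry in the build definition ends up reading:

/t:MyCloudApp:Publish /p:PublishDir=$(OutDir)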

In the build process template, in the Initialize Variable sequence within Run on Agent, add an If activity.


  • Set the condition to MSBuildArguments.Contains("$(OutDir)")
  • Within the true branch add an Assign activity that sets the MSBuildArguments variable to MSBuildArguments.Replace("$(OutDir)", String.Format("{0}\{1}\\", BinariesDirectory, "Packages"))

This will swap the $(OutDir) for the correct TFS binaries location within that build.

After that it all just works as expected. The CSPKG file etc. ends up in the drops location.

Other things that did not work (prior to TFS 2013)

I had also looked at running a PowerShell script at the end of the build process, and at adding an AfterPublish target within the MSBuild process (by adding it to the project file manually) that did a file copy. Both these methods suffered the problem that when the MSBuild command ran it did not know the location to drop the files into. Hence the need for the customisation above.

Now I should point out that though we are running TFS 2013 this project was targeting the TFS 2012 build tools, so I had to use the solution outlined above, a process template edit. However, if we had been using the TFS 2013 process template as our base for customisation then we would have had another way to get around the problem.

TFS 2013 exposes the current build settings as environment variables. This would allow us to use an AfterPublish MSBuild target, something like

<Target Name="CustomPostPublishActions" AfterTargets="AfterPublish" Condition="'$(TF_BUILD_DROPLOCATION)' != ''">
  <Exec Command="echo Post-PUBLISH event: Copying published files to: $(TF_BUILD_DROPLOCATION)" />
  <Exec Command="xcopy &quot;$(ProjectDir)bin\$(ConfigurationName)\app.publish&quot; &quot;$(TF_BUILD_DROPLOCATION)\app.publish&quot; /y " />
</Target>

So maybe a simpler option for the future?

The moral of the story: document your customisations and let your whole team know they exist.

Handling .pubxml files with TFS MSBuild arguments

With Visual Studio 2012 there were changes in the way Web Publishing worked; the key fact being that the configuration was moved from the .csproj to a .pubxml in the properties folder. This allows them to be more easily managed under source control by a team. This does have some knock on effects though, especially when you start to consider automated build and deployment.

Up to now we have not seen issues in this area; most of our active projects that needed web deployment packages had started in the Visual Studio 2010 era, so had all the publish details in the project file, and this is still supported by later versions of Visual Studio. This meant that if we had three configurations, debug, lab and release, then there were three different sets of settings stored in different blocks of the project file. So if you used the /p:DeployOnBuild=True MSBuild argument for your TFS build and built all three configurations, you got the settings related to the respective configuration in each drop location.

This seems a good system, until you consider you have built the assemblies three times, in a world of continuous deployment by binary promotion is this what you want? Better to build the assemblies once, but have different (or transformed) configuration files for each environment/stage in the release pipeline. This is where a swap to a .pubxml file helps.

You create a .pubxml file by running the wizard in Visual Studio, via a right click on the project and selecting Publish.


To get TFS build to use a .pubxml file you need to pass its name as an MSBuild argument. So in the past we would have used the argument /p:DeployOnBuild=True; now we would use /p:DeployOnBuild=True;PublishProfile=MyProfile, where there is a .pubxml file in the path

[Project]/properties/PublishProfiles/MyProfile.pubxml

Once this is done your package will be built (assuming that this is a Web Deployment Package and not some other form of deploy) and available in your drops location. The values you may wish to alter are probably in the [your package name].SetParameters.xml file, which you can alter with whichever transform technology you wish to use, e.g. SlowCheetah or Release Management workflows.

One potential gotcha I had whilst testing with MSBuild from the command line is that the .pubxml file contains a value for the property <DesktopBuildPackageLocation>. This will be the output path you used when you created the publish profile using the wizard in Visual Studio.

If you are testing your arguments with MSBuild.exe from the command line this is where the output gets built to. If you want the build to behave more like TFS build (using the obj/bin folders) you can clear this value by passing the MSBuild argument /p:DesktopBuildPackageLocation="".
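Putting the pieces together, a command-line test (project and profile names hypothetical) looks like:

msbuild MyWebApp.csproj /p:DeployOnBuild=True /p:PublishProfile=MyProfile /p:DesktopBuildPackageLocation=""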

You don’t need to worry about this for the TFS build definitions as it seems to be able to work it out and get the correctly packaged files to the drops location.

Fix for ‘Web deployment task failed. (Unknown ProviderOption:DefiningProjectFullPath. Known ProviderOptions are:skipInvalid’ errors on TFS 2013.2 build

When working with web applications we tend to use MSDeploy for distribution. Our TFS build box, as well as producing a _PublishedWebsite copy of the site, produces the ZIP packaged version we use to deploy to test and production servers via PowerShell or IIS Manager.

To create this package we add the MSBuild Arguments /p:CreatePackageOnPublish=True /p:DeployOnBuild=true /p:IsPackaging=True 


This had been working fine until I upgraded our TFS build system to 2013.2. Any build queued after this upgrade that builds MSDeploy packages gives the error

C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v12.0\Web\Microsoft.Web.Publishing.targets (3883): Web deployment task failed. (Unknown ProviderOption:DefiningProjectFullPath. Known ProviderOptions are:skipInvalid.)

If I removed the /p:DeployOnBuild=true argument, the build was fine, just no ZIP package was created.

After a bit of thought I realised that I had also upgraded my PC to 2013.2 RC, where the publish options for a web project are more extensive, giving more options for Azure.

So I assumed the issue was a mismatch between MSBuild and the targets files, which were missing these new options. I therefore replaced the contents of C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v12.0\Web on my build box with the version from my upgraded development PC, and my build started working again.

Seems there are some extra parameters set in the newer version of the build targets. Let’s see if it changes again when Visual Studio 2013.2 RTMs.

A better way of using TFS Community Build Extensions StyleCop activity so it can use multiple rulesets

Background

The TFS Community Build Extensions provide many activities to enhance your build. One we use a lot is the StyleCop activity, used to enforce code consistency in projects as part of our check-in & build process.

In most projects you will not want a single set of StyleCop rules to be applied across the whole solution. Most teams will require a higher level of ‘rule adherence’ for production code as opposed to unit test code. By this I don’t mean the test code is ‘lower quality’, just that rules will differ e.g. we don’t require XML documentation blocks on unit test methods as the unit test method names should be documentation enough.

This means each of the projects in a solution may have its own StyleCop settings file. With Visual Studio these are found and used by the StyleCop runner without an issue.

However, on our TFS build boxes what we found was that when we told them to build a solution, the StyleCop settings file in the same folder as the solution file was used for the whole solution. This meant we saw a lot of false violations, such as unit tests with no documentation headers.

The workaround we have used is to not tell the TFS build to build a solution, but to build each project individually (in the correct order). By doing this the StyleCop settings file in each project folder is picked up. This is an OK solution, but it does mean you need to remember to add new projects and remove old ones as the solution matures. Easy to forget.

Why is it like this?

Because of this, on our engineering backlog we have had a task to update the StyleCop activity so it did not use the settings file from the root solution/project folder (or any single named settings file you specified).

I eventually got around to this, mostly due to new solutions being started that I knew would contain many projects and potentially have a more complex structure than I wanted to manage by hand within the build process.

The issue is that in the activity a StyleCop console application object is created and run. This takes a single settings file and a list of .CS files as parameters, so if you want multiple settings files you need to create multiple StyleCop console application objects.

Not a problem I thought; nothing that adding a couple of activity arguments and a foreach loop can’t fix. I even got as far as testing the logic in a unit test harness, far easier than debugging in a TFS build itself.

It was then I realised the real problem: it was the StyleCop build activity documentation, and I only have myself to blame here as I wrote it!

The documentation suggests one way to use the activity:

  1. Find the .sln or .csproj folder
  2. From this folder load a settings.stylecop file
  3. Find all the .CS files under this location
  4. Run StyleCop

It does not have to be this way: you don’t need to edit the StyleCop activity, just put in a different workflow.

A better workflow?

The key is finding the settings files, not the solution or project files. So if we assume we are building a single solution, we can use the following workflow:

  1. Using the base path of the .sln file, do a recursive search for all *.stylecop files
  2. Loop over this set of .stylecop files
  3. For each one, do a recursive search for .cs files under its location
  4. Run StyleCop for this settings file against the source files below it
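The same logic sketched in PowerShell, just to make the shape of the workflow clear; Invoke-StyleCop is a hypothetical wrapper for the StyleCop console application object the activity creates, not a real cmdlet:

# One StyleCop run per settings file found under the solution root
$solutionDir = Split-Path -Parent $solutionFile # $solutionFile supplied by the build
foreach ($settings in (Get-ChildItem -Path $solutionDir -Recurse -Filter *.stylecop))
{
    # All the .cs files at or below the folder holding this settings file
    $sources = Get-ChildItem -Path $settings.DirectoryName -Recurse -Filter *.cs
    # Hypothetical wrapper: the activity actually creates a StyleCop console
    # application object with this settings file and list of source files
    Invoke-StyleCop -SettingsFile $settings.FullName -SourceFiles $sources.FullName
}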

This solution seems to work; you might get some files scanned twice if you have nested settings files, but that is not an issue for us as we place a StyleCop settings file with each project. We alter the rules in each of these files as needed, from full sets to empty rulesets if we want StyleCop to skip the project.

So now I have it working internally, it is time to go and update the TFS Community Build Extensions documentation.

Upgraded older Build and Test Controllers to TFS 2013

All has been going well since our upgrade from TFS 2012 to 2013, no nasty surprises.

As I had a bit of time I thought it a good idea to start the updates of our build and lab/test systems. We had only upgraded our TFS 2012.3 server to 2013; we had not touched our build system (one 2012 controller and 7 agents on various VMs) or our Lab Management/test controller. Our plan, after a bit of thought, was to do a slow migration, putting in new 2013 generation build and test controllers in addition to our 2012 ones. We would then decide on an individual build agent VM basis what to do, probably upgrading the build agents and connecting them to the new controller. There seemed to be no good reason to rebuild the whole build agent VMs with the specific SDKs and tools they each need.

So we created a new pair of Windows 2012R2 domain joined server VMs; on one we installed a test controller, and on the other the build controller and a single build agent.

Note: I always tend to favour a single build agent per VM, usually using a single core VM. I tend to find most builds are IO bound, not CPU bound, so having more, smaller VMs tends, I think, to be easier to manage at the VM hosting resource level.

Test Controller

Most of the use of our Test Controller is as part of our TFS Lab Management environments. If you load MTM 2013 you will see that it cannot manage 2012 Test Controllers; they appear as offline. Lab Management is meant to keep the test agents upgraded, so it should upgrade an agent from one point release to another, e.g. 2012.3 to 2012.4. However, this upgrade feature does not extend to major release upgrades such as 2012 to 2013. Also, I have always found the automated deployment/upgrade of test agents as part of an environment deployment problematic at best; you often seem to suffer DNS and timeout issues. Easily the most reliable method is to make sure the correct (or at least compatible) test agents are installed on all the environment VMs prior to their configuration at the deployment/restart stage.

Given this, the process that seems to work for getting an environment’s test agents talking to the new 2013 Test Controller is:

  1. In MTM stop the environment
  2. Open the environment settings and change the controller to the new one
  3. Restart the environment; you will see the VMs show as not ready and the test agents won’t configure
  4. Connect to each VM
    1. Uninstall the 2012 Test Agent
    2. Install the 2013 Test Agent
  5. Stop and restart the environment and all should work – with luck the VMs will configure properly and show as ready
  6. If they don’t:
    1. Try a second restart; this sometimes sorts it
    2. You can try a repair, re-entering the various passwords
    3. Updated 5 Feb 2014: I found I always need to do this. If problems really persist, run the Test Agent Configuration tool on each VM, press next, next, next etc. and it will try to configure. It will probably fail, but hopefully it will have done enough port opening etc. to allow the next environment restart to work correctly
    4. If it still fails you need to check the logs, but suspect a DNS issue

Obviously you could move step 4 to the start if you make the fair assumption that it is going to need manual intervention.

Build Controller

Swapping your build over from 2012 to 2013 will have site specific issues; it all depends what build activities you are using. If they are bound to the TFS 2012 API they may not work unless you rebuild them. However, from my first tests I have found my 2012 build process templates seem to work whether I set my build controller’s ‘custom assemblies path’ to my 2012 DLL versions or their 2013 equivalents. So .NET is managing to resolve usable DLLs to get the build working.

Obviously there is still more to do here: checking all my custom build assemblies, and maybe revising the whole build scripts to make use of 2013 features, but that can wait.

What I have now allows me to upgrade our Windows 8.1 build agent VM so it can connect to our 2013 Build Controller, thus allowing us to run fully automated builds and tests of Windows 8.1 applications. Up to now, with TFS 2012, we had only been able to get a basic build working, due to having to hack the build process, as you need Visual Studio 2013 generation tools to fully build and test Windows 8.1 applications.


So we are going to have 2012 build and test controllers around for a while, but we have proved the migration is not going to be too bad. Maybe it just needs a bit of thought over some custom build assemblies.

More on TF215106: Access denied from the TFS API after upgrade from 2012 to 2013

In my previous post I thought I had fixed my problems with TF215106 errors

"TF215106: Access denied. TYPHOONTFS\\TFSService needs Update build information permissions for build definition ClassLibrary1.Main.Manual in team project Scrum to perform the action. For more information, contact the Team Foundation Server administrator."}

Turns out I had not; actually I have no idea why it worked for a while! There could well be an API version issue, but I had also missed that I needed to do what the error message said!

If you check MSDN, it tells you how to check the permissions for a given build; on checking I saw that the update build information permission was not set for the build in question.


Once I set it for the domain account my service was running as, everything worked as expected.

All I can assume is that there is a change from TFS 2012 to 2013 over defaulting this permission, as I have not needed to set it explicitly in the past.