But it works on my PC!

The random thoughts of Richard Fennell on technology and software development

Handling .pubxml files with TFS MSBuild arguments

With Visual Studio 2012 there were changes in the way web publishing worked; the key change being that the publish configuration moved from the .csproj file into .pubxml files under the project's Properties folder. This allows the publish profiles to be more easily managed under source control by a team. It does have some knock-on effects though, especially when you start to consider automated build and deployment.

Up to now we had not seen issues in this area. Most of our active projects that needed web deployment packages were started in the Visual Studio 2010 era, so had all the publish details in the project file, a format still supported by later versions of Visual Studio. This meant that if we had three configurations (debug, lab and release) there were three different sets of settings stored in different blocks of the project file. So if you used the /p:DeployOnBuild=True MSBuild argument for your TFS build and built all three configurations, you got the settings related to the respective configuration in each drop location.

This seems a good system until you consider that you have built the assemblies three times; in a world of continuous deployment by binary promotion is this what you want? Better to build the assemblies once, but have different (or transformed) configuration files for each environment/stage in the release pipeline. This is where a swap to a .pubxml file helps.

You create a .pubxml file by running the publish wizard in Visual Studio: right-click on the web project and select Publish.

image

To get TFS build to use a .pubxml file you need to pass its name as an MSBuild argument. So where in the past we would have used the argument /p:DeployOnBuild=True, now we would use /p:DeployOnBuild=True;PublishProfile=MyProfile, where there is a .pubxml file in the path

[Project]/properties/PublishProfiles/MyProfile.pubxml
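
For example, when testing locally from a Visual Studio developer command prompt, the full call would look something like this (the project name and configuration here are purely illustrative):

    msbuild MyWebApp.csproj /p:Configuration=Release /p:DeployOnBuild=True;PublishProfile=MyProfile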

Once this is done your package will be built (assuming the profile is a Web Deployment Package and not some other form of deployment) and will be available in your drops location. The values you may wish to alter are probably in the [your package name].SetParameters.xml file, which you can alter with whichever transform technology you wish to use e.g. SlowCheetah or Release Management workflows.
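
As an illustration of the sort of transform you might apply, the following PowerShell sketch updates a value in the SetParameters.xml file before deployment; the file name, parameter name and connection string are assumptions for the example, not from any real project:

    # Rough sketch only: tweak a setParameter value in the generated SetParameters.xml
    # prior to running the deployment. File and parameter names are illustrative.
    $path = ".\MyWebApp.SetParameters.xml"
    $xml = [xml](Get-Content $path)
    $param = $xml.parameters.setParameter | Where-Object { $_.name -eq "MyDb-Web.config Connection String" }
    $param.value = "Server=TESTSQL01;Database=MyDb;Integrated Security=True"
    $xml.Save((Resolve-Path $path).Path)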

One potential gotcha I hit whilst testing with MSBuild from the command line is that the .pubxml file contains a value for the property <DesktopBuildPackageLocation>. This will be the output path you used when you created the publish profile using the wizard in Visual Studio.

If you are testing your arguments with MSBuild.exe from the command line this is where the output gets built to. If you want the build to behave more like TFS build (using the obj/bin folders) you can do so by clearing this value, passing the MSBuild argument /p:DesktopBuildPackageLocation="".
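
So for command-line testing, something like this (again, names are illustrative) makes the output go to the default obj folder location instead:

    msbuild MyWebApp.csproj /p:Configuration=Release /p:DeployOnBuild=True;PublishProfile=MyProfile /p:DesktopBuildPackageLocation=""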

You don’t need to worry about this for the TFS build definitions as it seems to be able to work it out and get the correctly packaged files to the drops location.

Fix for ‘Web deployment task failed. (Unknown ProviderOption:DefiningProjectFullPath. Known ProviderOptions are:skipInvalid’ errors on TFS 2013.2 build

When working with web applications we tend to use MSDeploy for distribution. Our TFS build box, as well as producing a _PublishedWebsite copy of the site, produces the ZIP packaged version we use to deploy to test and production servers via PowerShell or IIS Manager.

To create this package we add the MSBuild Arguments /p:CreatePackageOnPublish=True /p:DeployOnBuild=true /p:IsPackaging=True 

image

This had been working fine until I upgraded our TFS build system to 2013.2. Any build queued after this upgrade that built MSDeploy packages gave the error

C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v12.0\Web\Microsoft.Web.Publishing.targets (3883): Web deployment task failed. (Unknown ProviderOption:DefiningProjectFullPath. Known ProviderOptions are:skipInvalid.)

If I removed the /p:DeployOnBuild=true argument the build was fine; just no ZIP package was created.

After a bit of thought I realised that I had also upgraded my PC to 2013.2 RC, where the publish options for a web project are more extensive, giving more options for Azure.

I assumed the issue was a mismatch between MSBuild and the targets files, which were missing these new options. So I replaced the contents of C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v12.0\Web on my build box with the version from my upgraded development PC, and my builds started working again.
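
For reference, the copy itself was nothing more sophisticated than something like the following PowerShell; the dev PC share name is illustrative, and I would back up the existing folder on the build box first:

    # Rough sketch: copy the newer web publishing targets from an upgraded dev PC to the
    # build box. Run on the build box; the source machine name is illustrative.
    $source = "\\MYDEVPC\c$\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v12.0\Web"
    $target = "C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v12.0\Web"
    Copy-Item -Path "$source\*" -Destination $target -Recurse -Force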

It seems there are some extra parameters set in the newer version of the build targets. Let's see if it changes again when Visual Studio 2013.2 RTMs.

A better way of using TFS Community Build Extensions StyleCop activity so it can use multiple rulesets

Background

The TFS Community Build Extensions provide many activities to enhance your build. One we use a lot is the StyleCop activity, to enforce code consistency in projects as part of our check-in and build process.

In most projects you will not want a single set of StyleCop rules to be applied across the whole solution. Most teams will require a higher level of ‘rule adherence’ for production code as opposed to unit test code. By this I don’t mean the test code is ‘lower quality’, just that rules will differ e.g. we don’t require XML documentation blocks on unit test methods as the unit test method names should be documentation enough.

This means each project in a solution may have its own StyleCop settings file. Within Visual Studio these are found and used by the StyleCop runner without an issue.

However, on our TFS build boxes we found that when we told the build to build a solution, the StyleCop settings file in the same folder as the solution file was used for the whole solution. This meant we saw a lot of false violations, such as unit tests with no documentation headers.

The workaround we have used is not to tell the TFS build to build a solution, but to build each project individually (in the correct order). By doing this the StyleCop settings file in each project folder is picked up. This is an OK solution, but it does mean you need to remember to add new projects and remove old ones as the solution matures. Easy to forget.

Why is it like this?

Because of this we have had a task on our engineering backlog to update the StyleCop activity so it did not just use the settings file from the root solution/project folder (or any single named settings file you specified).

I eventually got around to this, mostly due to new solutions being started that I knew would contain many projects and potentially have a more complex structure than I wanted to manage by hand within the build process.

The issue is that in the activity a StyleCop console application object is created and run. This takes a single settings file and a list of .CS files as parameters. So if you want multiple settings files you need to create multiple StyleCop console application objects.

Not a problem, I thought; nothing that adding a couple of activity arguments and a foreach loop can’t fix. I even got as far as testing the logic in a unit test harness, which is far easier than debugging in a TFS build itself.

It was then I realised the real problem: it was the StyleCop build activity documentation, and I only have myself to blame here as I wrote it!

The documentation suggests a way to use the activity:

  1. Find the .sln or .csproj folder
  2. From this folder load a settings.stylecop file
  3. Find all the .CS files under this location
  4. Run StyleCop

It does not have to be this way; you don’t need to edit the StyleCop activity, just call it from a different workflow.

A better workflow?

The key is finding the settings files, not the solution or project files. So if we assume we are building a single solution we can use the following workflow:

image

  1. Using the base path of the .sln file, do a recursive search for all *.stylecop files
  2. Loop on this set of .stylecop files
  3. For each one, do a recursive search for .cs files under its location
  4. Run StyleCop for this settings file against the source files below it.

This solution seems to work. You might get some files scanned twice if you have nested settings files, but that is not an issue for us as we place a StyleCop settings file with each project. We alter the rules in each of these files as needed, from full sets to empty rulesets if we want StyleCop to skip the project.
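
As an illustration of the search logic only (the real implementation is workflow activities, not a script), the equivalent in PowerShell is roughly:

    # Rough sketch of the discovery logic: pair each settings file with the .cs files
    # beneath it. In the build, this pairing drives one run of the StyleCop activity
    # per settings file. The solution folder path is illustrative.
    $solutionFolder = "C:\Builds\1\MyProduct\src"
    Get-ChildItem -Path $solutionFolder -Recurse -Filter *.stylecop | ForEach-Object {
        $settingsFile = $_.FullName
        $sourceFiles = Get-ChildItem -Path $_.DirectoryName -Recurse -Filter *.cs |
            Select-Object -ExpandProperty FullName
        Write-Output "Would run StyleCop with '$settingsFile' against $($sourceFiles.Count) source files"
    }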

So now I have it working internally it is time to go and update the TFS Community Build Extensions documentation.

Upgraded older Build and Test Controllers to TFS 2013

All has been going well since our upgrade from TFS 2012 to 2013, no nasty surprises.

As I had a bit of time I thought it a good idea to start the updates of our build and lab/test systems. We had only upgraded our TFS 2012.3 server to 2013; we had not touched our build system (one 2012 controller and 7 agents on various VMs) or our Lab Management/test controller. Our plan, after a bit of thought, was to do a slow migration, putting in new 2013 generation build and test controllers alongside our 2012 ones. We would then decide on an individual build agent VM basis what to do, probably upgrading the build agents and connecting them to the new controller. There seems to be no good reason to rebuild the build agent VMs with the specific SDKs and tools they each need.

So we created a new pair of Windows 2012R2 domain-joined server VMs; on one we installed a test controller, and on the other a build controller and a single build agent.

Note: I always tend to favour a single build agent per VM, usually using a single-core VM. I tend to find most builds are IO bound not CPU bound, so having more, smaller VMs tends, I think, to be easier to manage at the VM hosting resource level.

Test Controller

Most of the use of our Test Controller is as part of our TFS Lab Management environments. If you load MTM 2013 you will see that it cannot manage 2012 Test Controllers; they appear as offline. Lab Management is meant to keep the test agents upgraded, so it should upgrade an agent from one point release to another e.g. 2012.3 to 2012.4. However, this upgrade feature does not extend to major release upgrades from 2012 to 2013. Also I have always found the automated deployment/upgrade of test agents as part of an environment deployment problematic at best; you often seem to suffer DNS and timeout issues. Easily the most reliable method is to make sure the correct (or at least compatible) test agents are installed on all the environment VMs prior to their configuration at the deployment/restart stage.

Given this, the process that seems to work for getting an environment’s test agents talking to the new 2013 Test Controller is:

  1. In MTM stop the environment
  2. Open the environment settings and change the controller to the new one
  3. Restart the environment; you will see the VMs show as not ready and the test agents won’t configure.
  4. Connect to each VM
    1. Uninstall the 2012 Test Agent
    2. Install the 2013 Test Agent
  5. Stop and restart the environment and all should work – with luck the VMs will configure properly and show as ready
  6. If they don’t:
    1. Try a second restart; this sometimes sorts it.
    2. You can try a repair, re-entering the various passwords.
    3. Updated 5 Feb 2014: I have found I always need to do this. If problems really persist, run the Test Agent Configuration tool on each VM, press next, next, next etc.; it will try to configure and will probably fail, but hopefully it will have done enough port opening etc. to allow the next environment restart to work correctly.
    4. If it still fails you need to check the logs, but suspect a DNS issue.

Obviously you could move step 4 to the start if you make the fair assumption that manual intervention is going to be needed.

Build Controller

Swapping your build over from 2012 to 2013 will have site-specific issues. It all depends what build activities you are using: if they are bound to the TFS 2012 API they may not work unless you rebuild them. However, from my first tests I have found my 2012 build process templates seem to work, whether I set my build controller’s ‘custom assemblies path’ to my 2012 DLL versions or their 2013 equivalents. So .NET is managing to resolve usable DLLs to get the build working.

Obviously there is still more to do here: checking all my custom build assemblies, and maybe revising the whole build process to make use of 2013 features, but that can wait.

What I have now allows me to upgrade our Windows 8.1 build agent VM so it can connect to our 2013 Build Controller, thus allowing us to run full automated builds and tests of Windows 8.1 applications. Up to now with TFS 2012 we had only been able to get a basic build working, due to having to hack the build process, as you need Visual Studio 2013 generation tools to fully build and test Windows 8.1 applications.

So we are going to have 2012 build and test controllers around for a while, but we have proved the migration is not going to be too bad. It maybe just needs a bit of thought over some custom build assemblies.

More on TF215106: Access denied from the TFS API after upgrade from 2012 to 2013

In my previous post I thought I had fixed my problems with TF215106 errors

"TF215106: Access denied. TYPHOONTFS\\TFSService needs Update build information permissions for build definition ClassLibrary1.Main.Manual in team project Scrum to perform the action. For more information, contact the Team Foundation Server administrator."}

Turns out I had not; actually I have no idea why it worked for a while! There could well be an API version issue, but I had also missed that I needed to do what the error message said!

If you check MSDN, it tells you how to check the permissions for a given build; on checking I saw that the update build information permission was not set for the build in question.

image

Once I set it for the domain account my service was running as, everything worked as expected.

All I can assume is that there is a change from TFS 2012 to 2013 over how this permission is defaulted, as I have not needed to set it explicitly in the past.

Links from my DDDNorth session ‘Automated Build Is Not The End Of The Story’

Thanks to everyone who came to my DDDNorth session ‘Automated Build Is Not The End Of The Story’. The links to the tools I discussed are as follows:

Get rid of that zombie build

Whilst upgrading a TFS 2010 server to 2012 I had a problem where a build was showing in the queue as active after the upgrade. This build had been queued in January, 10 months ago, so should have finished a long, long time ago. It had the effect of blocking any newly queued builds, but it did not appear to be running on any agent – a zombie build.

I tried to stop it, delete it, everything I could think of, all to no effect. It would not go away.

In the end I had to use the brute force solution of deleting the rows for the build from the TPC’s SQL DB. I did this in both the tbl_BuildQueue (use the QueueID) and tbl_Build (use the BuildID) tables.
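
For what it is worth, the deletes were of this form; the server, database and ID values below are illustrative, and you should only consider this with the TFS services stopped and a database backup taken:

    # Rough sketch only - editing the TPC database directly is unsupported, do it at your own risk.
    Invoke-Sqlcmd -ServerInstance "TFSSQL01" -Database "Tfs_DefaultCollection" `
        -Query "DELETE FROM tbl_BuildQueue WHERE QueueID = 1234; DELETE FROM tbl_Build WHERE BuildID = 5678"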

Fix for - Could not load file or assembly 'Microsoft.VisualStudio.Shell’ on TFS 2010 Build Controller

I have previously posted about when TFS build controllers don’t start properly. Well, I saw the same problem today whilst upgrading a TFS 2010 server to TFS 2012.3. The client did not want to immediately upgrade their build processes and decided to keep their 2010 build VMs, just pointing them at the updated server (remember TFS 2012.2 and later servers can support either 2012 or 2010 build controllers).

The problem was that when we restarted the build service the controller and agents appeared to start, but then we got a burst of errors in the event log and we saw the controller say it was ready but show the stopped icon.

On checking the Windows event log we saw the issue was that it could not load the assembly:

Exception Message: Problem with loading custom assemblies: Could not load file or assembly 'Microsoft.VisualStudio.Shell, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a' or one of its dependencies. The system cannot find the file specified. (type Exception)

It turns out this was because StyleCop.VSPackage.dll had been checked into the build controller’s CustomAssemblies folder (how and why we never found out; also why it had not failed before was unclear, as it was checked in about 6 weeks ago!). Anyway, as soon as the DLL was removed from the custom assemblies folder, leaving just the StyleCop files from the C:\Program Files (x86)\StyleCop\4.7 folder, all was OK.

Where did that email go?

We use the TFS Alerts system to signal to our teams what state project builds are at. So when a developer changes a build quality to ‘ready for test’ an email is sent to everyone in the team and we make sure the build retention policy is set to keep. Now this is not the standard behaviour of the TFS build alerts system, so we do all this by calling a SOAP-based web service which in turn uses the TFS API.

This had all been working well until we did some tidying up and patching on our Exchange server. The new behaviour was:

  • Email sent directly via SMTP by the TFS Alert system worked
  • Email sent via our web service called by the TFS Alert system disappeared, but no errors were shown

As far as we could see, emails were leaving our web service (which was running as the same domain service account as TFS, but in its own AppPool) and dying inside our email system, presumably due to some spam filter rule.

After a bit of digging we spotted the real problem.

If you look at the advanced settings of the TFS Alerts email configuration it points out that if you don’t supply credentials for the SMTP server it passes those for the TFS Service process

image

Historically our internal SMTP server had allowed anonymous posting, so this was not an issue, but after our tidy-up it now required authentication, so this setting became important.

We thought this should not be an issue as the TFS service account was correctly registered in Exchange, and it was working for the TFS-generated alert emails. But on checking the code of the web service we noticed a vital missing line: we were not setting the credentials on the message, we were sending as anonymous, so the email was being blocked.

    using (var msg = new MailMessage())
    {
        msg.To.Add(to);
        msg.From = new MailAddress(this.fromAddress);
        msg.Subject = subject;
        msg.IsBodyHtml = true;
        msg.Body = body;
        using (var client = new SmtpClient(this.smptServer))
        {
            // The vital missing line: send with the AppPool's (TFS service account)
            // credentials rather than anonymously, so the SMTP server accepts the message.
            client.Credentials = CredentialCache.DefaultNetworkCredentials;
            client.Send(msg);
        }
    }

Once this line was added and the web service redeployed it worked as expected again.

Making the drops location for a TFS build match the assembly version number

A couple of years ago I wrote about using the TFSVersion build activity to try to sync the assembly version and build number. I did not want to see build names/drop locations in the format 'BuildCustomisation_20110927.17'; I wanted to see the version number in the build name, something like 'BuildCustomisation 4.5.269.17'. The problem, as I outlined in that post, was that by fiddling with the BuildNumberFormat you could easily cause an error where duplicated drop folder names were generated, such as

TF42064: The build number 'BuildCustomisation_20110927.17 (4.5.269.17)' already exists for build definition '\MSF Agile\BuildCustomisation'.

I had put this problem aside, thinking there was no way around the issue, until I was recently reviewing the new ALM Rangers ‘Test Infrastructure Guidance’. This had a solution to the problem included in the first hands-on lab. The trick is that you need to use the TFSVersion community extension twice in your build.

  • You use it as normal to set the version of your assemblies after you have got the files into the build workspace, just as the wiki documentation shows
  • But you also call it in ‘get mode’ at the start of the build process, prior to calling the ‘Update Build Number’ activity. The core issue is that you cannot call ‘Update Build Number’ more than once, else you tend to see the TF42064 issues. By using it in this manner you set the BuildNumberFormat to the actual version number you want, which will be used for the drops folder and any assembly versioning.

So what do you need to do?

  1. Open your process template for editing (see the custom build activities documentation if you don’t know how to do this)
  2. Find the sequence ‘Update Build Number for Triggered Builds’ at the top of the process template

    image
    • Add TFSVersion activity – I called mine ‘Generate Version number for drop’
    • Add an Assign activity – I called mine ‘Set new BuildNumberFormat’
    • Add a WriteBuildMessage activity – this is optional but I do like to see what is generated
  3. Add a string variable GeneratedBuildNumber with the scope of ‘Update Build Number for Triggered Builds’

    image
  4. The properties for the TFSVersion activity should be set as shown below

    image
    • The Action is the key setting; this needs to be set to GetVersion, as we only need to generate a version number, not set any file versions
    • You need to set the Major, Minor and StartDate settings to match the other copy of the activity in your build process. A good tip is to just cut and paste from the other instance to create this one, so that the bulk of the properties are correct
    • The Version needs to be set to your variable GeneratedBuildNumber; this is the output version value
  5. The properties for the Assign activity are as follows

    image
    • Set To to BuildNumberFormat
    • Set Value to String.Format("$(BuildDefinitionName)_{0}", GeneratedBuildNumber), you can vary this format to meet your own needs [updated 31 Jul 13 – better to use an _ rather than a space as this will be used in the drop path]
  6. I also added a WriteBuildMessage activity that outputs the generated build number, but that is optional

Once all this was done and saved back to TFS it could be used for a build. You now see that the build name and drops location are in the form

[Build name] [Major].[Minor].[Days since start date].[TFS build number]

image

This is a slight change from what I previously attempted, where the 4th block was the count of builds of a given type on a day; now it is the unique TFS-generated build number, the number assigned before the build name is generated. I am happy with that. My key aim is achieved: the drops location contains the product version number, so it is easy to relate a build to a given version without digging into the build reports.