But it works on my PC!

The random thoughts of Richard Fennell on technology and software development

Listing all the PBIs that have no acceptance criteria

The TFS Scrum process template’s Product Backlog Item work item type has an acceptance criteria field. It is good practice to make sure any PBI has this field completed; however, it is not always possible to enter this content when the work item is initially created, i.e. before it is approved. We often find we add a PBI that is basically just a title, and add the summary and acceptance criteria as the product is planned.

It would be really nice to have a TFS work item query that listed all the PBIs that did not have the acceptance criteria field completed. Unfortunately there is no way to check that a rich text or HTML field is empty in TFS queries. It has been requested via UserVoice, but there is no sign of it appearing in the near future.

So we are left with the TFS API to save the day. The following PowerShell function does the job, returning a list of non-completed PBI work items that have empty Acceptance Criteria.

# Load the assemblies we need; this may be more than we truly need for this single function,
# but I usually keep all these functions in a single module so they share the references
$ReferenceDllLocation = "C:\Program Files (x86)\Microsoft Visual Studio 12.0\Common7\IDE\ReferenceAssemblies\v2.0\"
Add-Type -Path ($ReferenceDllLocation + "Microsoft.TeamFoundation.Client.dll") -ErrorAction Stop -Verbose
Add-Type -Path ($ReferenceDllLocation + "Microsoft.TeamFoundation.Common.dll") -ErrorAction Stop -Verbose
Add-Type -Path ($ReferenceDllLocation + "Microsoft.TeamFoundation.WorkItemTracking.Client.dll") -ErrorAction Stop -Verbose

function Get-TfsPBIWIthNoAcceptanceCriteria {
    <#
    .SYNOPSIS
    This function gets the list of PBI work items that have no acceptance criteria
    .DESCRIPTION
    This function allows a check to be made that all PBIs have a set of acceptance criteria
    .PARAMETER CollectionUri
    TFS Collection URI
    .PARAMETER TeamProject
    Team Project Name
    .EXAMPLE
    Get-TfsPBIWIthNoAcceptanceCriteria -CollectionUri "http://server1:8080/tfs/defaultcollection" -TeamProject "My Project"
    #>
    Param
    (
        [Parameter(Mandatory=$true)]
        [uri] $CollectionUri,

        [Parameter(Mandatory=$true)]
        [string] $TeamProject
    )

    # get the source TPC
    $teamProjectCollection = New-Object Microsoft.TeamFoundation.Client.TfsTeamProjectCollection($CollectionUri)
    try
    {
        $teamProjectCollection.EnsureAuthenticated()
    }
    catch
    {
        Write-Error "Error occurred trying to connect to project collection: $_ "
        exit 1
    }

    # get the work item store
    $wiService = $teamProjectCollection.GetService([Microsoft.TeamFoundation.WorkItemTracking.Client.WorkItemStore])

    # find each candidate work item; we can't check the acceptance criteria state in the query itself
    $pbi = $wiService.Query("SELECT [System.Id] FROM WorkItems WHERE [System.TeamProject] = '{0}' AND [System.WorkItemType] = 'Product Backlog Item' AND [System.State] <> 'Done' ORDER BY [System.Id]" -f $TeamProject)

    # Using a pair of nested loops as the pbi[].fields[] structure defeated me doing it all via piping;
    # would love to find out if it can be done in one line!
    $results = @()
    foreach ($wi in $pbi)
    {
        foreach ($field in $wi.Fields)
        {
            if ($field.ReferenceName -eq 'Microsoft.VSTS.Common.AcceptanceCriteria' -and $field.Value -eq "")
            {
                $results += $wi
            }
        }
    }
    $results
}
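
A quick usage sketch; Id and Title here are standard properties of the returned work item objects:

Get-TfsPBIWIthNoAcceptanceCriteria -CollectionUri "http://server1:8080/tfs/defaultcollection" -TeamProject "My Project" |
    Format-Table Id, Title -AutoSize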

Why is my TFS report not failing when I really think it should?

Whilst creating some custom reports for a client we hit a problem: though the reports worked on my development system and their old TFS server, they failed on their new one. The error was that Microsoft_VSTS_Scheduling_CompletedWork was an invalid column name.


Initially I suspected the problem was a warehouse reprocessing issue, but other reports worked, so it could not have been that.

It must really be that the column was missing, and that sort of makes sense. On the new server the team was using the Scrum process template; the Microsoft_VSTS_Scheduling_CompletedWork and Microsoft_VSTS_Scheduling_OriginalEstimate fields are not included in this template. The plan had been to add them to allow some analysis of estimate accuracy. This had been done on my development system, but not on the client's new server. Once these fields were added to the Task work item the report leapt into life.
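
For reference, adding missing fields like these is a witadmin export/edit/import round trip; a sketch, with an illustrative collection URL:

witadmin exportwitd /collection:http://server:8080/tfs/DefaultCollection /p:"My Project" /n:Task /f:Task.xml
# edit Task.xml, adding e.g. <FIELD name="Completed Work" refname="Microsoft.VSTS.Scheduling.CompletedWork" type="Double" reportable="measure" /> to the FIELDS section
witadmin importwitd /collection:http://server:8080/tfs/DefaultCollection /p:"My Project" /f:Task.xml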

The question then is, why did this work on the old TFS server? The team project on the old server being used to test the reports did not have the customisation either. However, remember the OLAP cube for the TFS warehouse is shared between ALL team projects on a server; as one of those other team projects was using the MSF Agile template, the fields were present, hence the report worked.

Remember that shared OLAP cube; it can trip you up over and over again.

Where have my Freeview tuners gone?

I have been a long time happy user of Windows Media Center since its XP days. My current system is an ATOM-based Acer Revo running Windows 8.1, with a pair of USB PCTV Nanostick T2 Freeview HD tuners. For media storage I used a USB-attached StarTech RAID disk subsystem. This has been working well for a good couple of years, sitting in a cupboard under the stairs. However, I am about to move house and all the kit is going to have to go under the TV. The Revo is virtually silent, but the RAID crate was going to be an issue. It sounds like an aircraft taking off as the disks spin up.

A change of kit was needed…

I decided the best option was to move to a NAS, thus allowing the potentially noisy disks to be anywhere in the house. So I purchased a Netgear ReadyNAS 104. It shows how prices have dropped over the past few years: this was about half the price of my StarTech RAID, holds well over twice as much and provides much more functionality. I wait to see if it is reliable; only time will tell!

So I popped the NAS on the LAN and started to copy over content from the RAID crate, at the same time (and this, it seems, was the mistake) reconfiguring MCE to point at the NAS. All seemed OK, MCE reconfigured and background copies running, until I tried to watch live TV. MCE said it was trying to find a tuner; I waited. In the end I gave up and went to bed, assuming all would be OK in the morning when the media copy was finished and I could reboot the PC.

Unfortunately it was not; after a reboot it still said it could find no tuner. If I tried to rescan for TV channels it just hung (for well over 48 hours, I left it while I went away). All the other functions of MCE seemed fine. I tried removing the USB tuners, both physically and by uninstalling the drivers; it had no effect. It seemed I had corrupted the MCE DB, something I had done before, looking back at older posts.

In the end I had to reset MCE as detailed on Ben Drawbaugh’s blog. Basically I deleted the contents of c:\programdata\microsoft\ehome and reran the MCE Live TV setup wizard. I was not bothered about my channel list order, or series recording settings, so I did not bother with mcbackup for the backup and restore steps.
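
In effect the reset boils down to one deletion (a sketch; make sure Media Center and its background services are stopped first):

# clear the MCE database and settings; the Live TV setup wizard rebuilds them
Remove-Item "$env:ProgramData\Microsoft\eHome\*" -Recurse -Force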

Once this was done the tuners both worked again, though the channel scan took a good hour.

Interestingly I had assumed clearing out the ehome folder would mean I lost all my MCE settings, including the media library settings, but I didn’t; my MCE was still pointing at the new NAS shares, so a small win.

One point I had not considered over the move to a NAS is that MCE cannot record TV to a network share. Previously I had written all media to the locally attached RAID crate. The solution was to let MCE save TV to the local C: drive, but use a scheduled job to run ROBOCOPY to move the files to the NAS overnight. Can’t see why it shouldn’t work; again only time will tell.
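
The overnight job is little more than a one-liner; a sketch with illustrative paths (/MOV deletes each file from C: once it has copied, /MINAGE:1 skips anything recorded today in case it is still being written to):

robocopy "C:\Users\Public\Recorded TV" "\\readynas\media\RecordedTV" *.wtv /MOV /MINAGE:1 /R:2 /W:30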

Update:

Forgot to mention another advantage of moving to the NAS. Previously I had to use the Logitech media server to serve music to my old Roku 1000 unit connected to my even older HiFi; now the Roku can use the NAS directly, thus making the system setup far easier.

Getting the Typemock TFS build activities to work on a TFS build agent running in interactive mode

Windows 8 store applications need to be built on a TFS build agent running in interactive mode if you wish to run any tests. So whilst rebuilding all our build systems I decided to try to have all the agents running interactively. As we tend to run one agent per VM, this was not going to be a major issue, I thought.

However, whilst testing we found that any of our builds that use the Typemock build activities failed when the build agent was running interactively, but worked perfectly when it was running as a service. The error was

Exception Message: Access to the registry key 'HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\TypeMock' is denied. (type UnauthorizedAccessException)
Exception Stack Trace:    at Microsoft.Win32.RegistryKey.Win32Error(Int32 errorCode, String str)
   at Microsoft.Win32.RegistryKey.CreateSubKeyInternal(String subkey, RegistryKeyPermissionCheck permissionCheck, Object registrySecurityObj, RegistryOptions registryOptions)
   at Microsoft.Win32.RegistryKey.CreateSubKey(String subkey, RegistryKeyPermissionCheck permissionCheck)
   at Configuration.RegistryAccess.CreateSubKey(RegistryKey reg, String subkey)
   at TypeMock.Configuration.IsolatorRegistryManager.CreateTypemockKey()
   at TypeMock.Deploy.AutoDeployTypeMock.Deploy(String rootDirectory)
   at TypeMock.CLI.Common.TypeMockRegisterInfo.Execute()
   at TypeMock.CLI.Common.TypeMockRegisterInfo..ctor()   at System.Activities.Statements.Throw.Execute(CodeActivityContext context)
   at System.Activities.CodeActivity.InternalExecute(ActivityInstance instance, ActivityExecutor executor, BookmarkManager bookmarkManager)
   at System.Activities.Runtime.ActivityExecutor.ExecuteActivityWorkItem.ExecuteBody(ActivityExecutor executor, BookmarkManager bookmarkManager, Location resultLocation)

 

So the issue was registry access. Irrespective of whether running interactively or as a service I used the same domain service account, which was a local admin on the build agent. The only thing that changed was the mode of running.

After some thought I focused on UAC being the problem, but disabling this did not seem to fix the issue. I was stuck, or so I thought.

However, Robert Hancock, unknown to me, was suffering a similar problem with a TFS build that included a post build event that was failing to xcopy a BizTalk custom functoid DLL to ‘Program Files’. He kept getting an ‘exit code 4 access denied’ error when the build agent was running interactively. It turns out the solution he found on Daniel Petri’s blog also fixed my issue, as they were both UAC/desktop interaction related.

The solution was to create a group policy for the build agent VMs that set the following (the equivalent registry values are sketched after the list):

  • User Account Control: Behavior of the elevation prompt for administrators in Admin Approval Mode - Set its value to Elevate without prompting.
  • User Account Control: Detect application installations and prompt for elevation - Set its value to Disabled.
  • User Account Control: Only elevate UIAccess applications that are installed in secure locations - Set its value to Disabled.
  • User Account Control: Run all administrators in Admin Approval Mode - Set its value to Disabled.
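
If you want to test the effect on a single VM before building the GPO, the same settings can be applied directly; a sketch using the standard UAC policy registry value names (a reboot is needed for EnableLUA to take effect):

$key = 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System'
Set-ItemProperty $key -Name ConsentPromptBehaviorAdmin -Value 0  # elevate without prompting
Set-ItemProperty $key -Name EnableInstallerDetection -Value 0    # don't detect installs and prompt
Set-ItemProperty $key -Name EnableSecureUIAPaths -Value 0        # allow UIAccess apps from any location
Set-ItemProperty $key -Name EnableLUA -Value 0                   # turn off Admin Approval Mode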

Once this GPO was pushed out to the build agent VMs and they were rebooted, my Typemock based builds and Robert’s BizTalk builds all worked as expected.

AddBizTalkHiddenReferences error in TFS build when installing ProjectBuildComponent via a command line setup

I have been trying to script the installation of all the tools and SDKs we need on our TFS build agent VMs. This included BizTalk. A quick check on MSDN showed the setup command line parameter I needed to install the build components was

/ADDLOCAL ProjectBuildComponent

So I ran this via my VMs’ setup PowerShell script; all appeared OK, but when I tried a build I got the error

C:\Program Files (x86)\MSBuild\Microsoft\BizTalk\BizTalkCommon.targets (189): The "AddBizTalkHiddenReferences" task failed unexpectedly.
System.ArgumentNullException: Value cannot be null.
Parameter name: path1
   at System.IO.Path.Combine(String path1, String path2)
   at Microsoft.VisualStudio.BizTalkProject.Base.HiddenReferencesHelper.InitializeHiddenReferences()
   at Microsoft.VisualStudio.BizTalkProject.Base.HiddenReferencesHelper.get_HiddenReferences()
   at Microsoft.VisualStudio.BizTalkProject.Base.HiddenReferencesHelper.GetHiddenReferencesNotAdded(IList`1 projectReferences)
   at Microsoft.VisualStudio.BizTalkProject.BuildTasks.AddBizTalkHiddenReferences.Execute()
   at Microsoft.Build.BackEnd.TaskExecutionHost.Microsoft.Build.BackEnd.ITaskExecutionHost.Execute()
   at Microsoft.Build.BackEnd.TaskBuilder.<ExecuteInstantiatedTask>d__20.MoveNext()

The strange thing is, if I ran the BizTalk installer via the UI and selected just the ‘Project Build Components’, my build did not give this error.

On checking the BizTalk setup logs I saw that the UI based install does not run

/ADDLOCAL ProjectBuildComponent

but

 

/ADDLOCAL WMI,BizTalk,AdditionalApps,ProjectBuildComponent

Once this change was made to my PowerShell script the TFS build worked OK.
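
For reference, the working call in the script ended up looking something like this sketch, with the other setup switches elided and $isoDrive an illustrative variable holding the mounted BizTalk media's drive letter:

& "$($isoDrive):\Setup.exe" /ADDLOCAL WMI,BizTalk,AdditionalApps,ProjectBuildComponent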

TFS 2013 wizard allows you to proceed to verification even if you have no SQL admin access

Had an interesting issue during an upgrade from TFS 2012 to 2013.2 today. The upgrade of the files proceeded as expected and the wizard ran. It picked up the correct data tier, found the tfs_configuration DB and I was able to fill in the service account details.

However, when I got to the reporting section it found the report server URLs, but when it tried to find the tfs_warehouse DB it seemed to lock up, though the test of the SQL instance on the same page worked OK.

In the end I used task manager to kill the config wizard.

I then re-ran the wizard, switching off the reporting. This time it got to the verification step, but seemed to hang again. After a very long wait it came back with an error that the account being used to do the upgrade did not have SysAdmin rights on the SQL instance.

On checking, this turned out to be true: the user’s rights had been removed by a DBA since the system was originally installed. Once the rights were re-added the upgrade proceeded perfectly; though interestingly the first page, where you confirm the tfs_configuration DB, now also had a check box about Always On, which it had not before.

So the strange thing was not that it failed, I would expect that, but that any of the wizard worked at all. I would have expected a failure to even find the tfs_configuration DB at the start of the wizard, not having to wait until the verification (or reporting) step.
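
A quick pre-flight check before running the wizard would have saved the wait; IS_SRVROLEMEMBER returns 1 if the current login is a sysadmin (the instance name here is illustrative):

sqlcmd -S MyTfsSqlInstance -Q "SELECT IS_SRVROLEMEMBER('sysadmin')"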

Why is the Team Project drop down in Release Management empty?

The problem

Today I found I had a problem when trying to associate a Release Management 2013.2 release pipeline with a TFS build. When I tried to select a team project, the drop down in the release properties was empty.


The strange thing was this installation of Release Management had been working OK the week before. What had changed?

I suspected an issue connecting to TFS, so in the Release Management client’s ‘Managing TFS’ tab I tried to verify the active TFS server linked to Release Management. As soon as I tried this I got an error saying the TFS server was not available.


I switched the TFS URL from HTTPS to HTTP, retried the verification, and it worked. Going back to my release properties I could now see the build definitions again in the drop down. So I knew I had an SSL issue.

The strange thing was we use SSL as our default connection, and none of our developers were complaining they could not connect via HTTPS.

However, on checking I found there was an issue on some of our build VMs. If on those VMs I tried to connect to TFS in a browser with an HTTPS URL, I got a certificate chain error.

Stranger still, on my PC, where I was running the Release Management client, I could access TFS over HTTPS from a browser and Visual Studio, but the Release Management verification failed.

The solution

It turns out we had an intermediate cert issue with our TFS server. An older DigiCert intermediate certificate had expired over the weekend, and though the new cert was in place, and had been for a good few months since we renewed our wildcard cert, the active wildcard cert insisted on using the old version of the intermediate cert on some machines.

As an immediate fix we ended up having to delete the old intermediate cert manually on machines showing the error. Once this was done the HTTPS connection worked again.
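
The stale cert is easy to find with PowerShell; a sketch listing expired certs in a machine's Intermediate Certification Authorities store:

Get-ChildItem Cert:\LocalMachine\CA | Where-Object { $_.NotAfter -lt (Get-Date) } |
    Format-Table Subject, Thumbprint, NotAfter -AutoSize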

It turns out the real culprit was a group policy used to push out intermediate certs that are required to be trusted for some document automation we use. This old group policy was pushing the wrong version of the cert to some server VMs. Once this policy was updated with the correct cert and pushed out, it overwrote the problem cert and the problem went away.

One potentially confusing thing here is that the ‘verify the TFS link’ in Release Management verifies that the Release Management server can see the TFS server, not the PC running the Release Management client. It was on the Release Management server that I had to delete the dead cert (and run a gpupdate /force to get the new policy). Hence why I was confused by my own PC working for Visual Studio and not for Release Management.

So I suspect an empty drop down is always really going to mean the Release Management server cannot see the TFS server for some reason; so check certs, permissions or basic network failure.

Automating TFS Build Server deployment with SCVMM and PowerShell

Rik recently posted about the work we have done to automatically provision TFS build agent VMs. This has come out of us having about 10 build agents on our TFS server, all doing different jobs, with different SDKs etc. When we needed to increase capacity for a given build type we had a problem: could another agent run the build? What exactly was on the agent anyway? An audit of the boxes made for horrible reading; they were very inconsistent.

So Rik automated the provisioning of new VMs and I looked at providing a PowerShell script to install the base tools we needed on our build agents, knowing this list is going to change a good deal over time. After some thought, for our first attempt we picked:

  • TFS itself (to provide the 2013.2 agent)
  • Visual Studio 2013.2 – you know you always end up installing it in the end to get SSDT, SDK and MSBuild targets etc.
  • WIX 3.8
  • Azure SDK 2.3 for Visual Studio 2013.2 – virtually all our current projects need this. This is actually why we have had capacity issues on the old build agents, as it was only installed on one.

Given this basic set of tools we can build probably 70-80% of our solutions. If we use this as the base for all build boxes we can then add extra tools if required manually, but we expect we will just end up adding to the list of items installed on all our build boxes, assuming the cost of installing the extra tools/SDKs is not too high. Also we will try to auto deploy tools as part of our build templates where possible, again reducing what needs to be placed on any given build agent.

Now the script I ended up with is a bit rough and ready, but it does the job. I think in the future a move to DSC might help in this process, but I did not have time to write the custom resources now. I am assuming this script is going to be a constant work in progress as it is modified for new releases and tools. I did make the effort to make all the steps check to see if they needed to be done, thus allowing the re-running of the script to ‘repair’ the build agent. All the writing to the event log is to make life easier for Rik when working out what is going on with the script, especially useful as the installs from ISOs are a bit slow to run.

# make sure we have a working event logger with a suitable source
Create-EventLogSource -logname "Setup" -source "Create-TfsBuild"
write-eventlog -logname Setup -source Create-TfsBuild -eventID 6 -entrytype Information -message "Create-Tfsbuild started"

# add build service as local admin, not essential but makes life easier for some projects
Add-LocalAdmin -domain "ourdomain" -user "Tfsbuilder"

# Install TFS by mounting the ISO over the network and running the installer.
# The command & ($isodrive + ":\tfs_server.exe") /quiet is run.
# The function uses a while loop to spot when the tfsconfig.exe file appears and assumes the installer
# is then done - dirty but it works, and allows me to use write-progress to give some indication of progress.
Write-Output "Installing TFS server"
Add-Tfs "\\store\ISO Images\Visual Studio\2013\2013.2\en_visual_studio_team_foundation_server_2013_with_update_2_x86_x64_dvd_4092433.iso"

Write-Output "Configuring TFS Build"
# clear out any old config - I found this helped avoid errors when re-running the script.
# A System.Diagnostics.ProcessStartInfo object is used to run the tfsconfig command with the argument
# "setup /uninstall:All", so we can capture the error output and log it to the event log if required
Unconfigure-Tfs


# and reconfigure, again using tfsconfig, this time with the argument "unattend /configure /unattendfile:config.ini",
# where the config.ini has been created with the tfsconfig unattend /create flag (check MSDN for the details)
Configure-Tfs "\\store\ApplicationInstallers\TFSBuild\configsbuild.ini"

# install vs2013, again by mounting the ISO and running the installer, with a loop to check for a file appearing
Write-Output "Installing Visual Studio"
Add-VisualStudio "\\store\ISO Images\Visual Studio\2013\2013.2\en_visual_studio_premium_2013_with_update_2_x86_dvd_4238022.iso"

# install wix by running the exe with the -q option via ProcessStartInfo again.
Write-Output "Installing Wix"
Add-Wix "\\store\ApplicationInstallers\wix\wix38.exe"

# install the Azure SDK using the Web Platform Installer, checking if the Web PI is present first and installing it if needed.
# The Web PI installer lets you ask to reinstall a package; if it is already installed the request is just ignored,
# so you don't need to check if the Azure SDK is already present
Write-Output "Installing Azure SDK"
Add-WebPIPackage "VWDOrVs2013AzurePack"

write-eventlog -logname Setup -source Create-TfsBuild -eventID 7 -entrytype Information -message "Create-Tfsbuild ended"
Write-Output "End of script"
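
The helper functions (Add-Tfs, Add-VisualStudio, Add-Wix etc.) are not shown here; as an illustration, this is a sketch of the ISO mount/run/wait pattern the comments describe, assuming the Windows 8/Server 2012 Mount-DiskImage cmdlet and the TFS 2013 (12.0) install path:

function Add-Tfs([string]$isoPath)
{
    # mount the ISO and find the drive letter it was given
    $drive = (Mount-DiskImage -ImagePath $isoPath -PassThru | Get-Volume).DriveLetter
    # kick off the silent install; the bootstrapper returns immediately, hence the wait loop below
    & "$($drive):\tfs_server.exe" /quiet
    # dirty but works: wait for tfsconfig.exe to appear so we know the installer has finished
    $tfsConfig = "$env:ProgramFiles\Microsoft Team Foundation Server 12.0\Tools\tfsconfig.exe"
    while (-not (Test-Path $tfsConfig))
    {
        Write-Progress -Activity "Installing TFS" -Status "Waiting for the installer to complete"
        Start-Sleep -Seconds 30
    }
    Dismount-DiskImage -ImagePath $isoPath
}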

 

So for a first pass this seems to work. I now need to make sure all our builds can use this cut down build agent; if they can’t, do I need to modify the build template? Or do I need to add more tools to our standard install? Or decide if it is going to need a special agent definition?

Once this is all done the hope is that when all the TFS build agents need patching for TFS 2013.x we will just redeploy new VMs or run a modified script to silently do the update. We shall see if this delivers on that promise.

Could not load file or assembly 'Microsoft.TeamFoundation.WorkItemTracking.Common, Version=12.0.0.0' when running a build on a new build agent on TFS 2013.2

I am currently rebuilding our TFS build infrastructure; we have too many build agents that are just too different, and they don’t need to be. So I am looking at a standard set of features on a build agent and the ability to auto provision new instances to make scaling easier. More on this in a future post…

Anyway, whilst testing a new agent I had a problem. A build that had worked on a previous test agent failed with the error

Could not load file or assembly 'Microsoft.TeamFoundation.WorkItemTracking.Common, Version=12.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a' or one of its dependencies. The located assembly's manifest definition does not match the assembly reference. (Exception from HRESULT: 0x80131040)

The log showed it was failing to even do a get latest of the files to build, or do anything else on the build agent.


It turns out the issue was that the PowerShell script that installed all the TFS build components and SDKs had failed when trying to install the Azure SDK for VS2013; the Web Platform Installer was not present, so when the script tried to use the command line installer to add this package it failed.

I fixed the issue with the Web PI tools and re-ran the command line to install the Azure SDK, and all was OK.

Not sure why this happened; maybe a missing pre-req put on by Web PI itself was the issue. I know older versions did have a .NET 3.5 dependency. One to keep an eye on.

Building Azure Cloud Applications on TFS

If you are doing any work with Azure Cloud Applications there is a very good chance you will want your automated build process to produce the .CSPKG deployment file; you might even want it to do the deployment too.

On our TFS build system, it turns out this is not as straightforward as you might hope. The problem is that the MSBuild publish target that creates the files puts them in the $(build agent working folder)\source\myproject\bin\debug folder, unlike the output of the build target, which puts them in the $(build agent working folder)\binaries\ folder that gets copied to the build drops location. Hence though the files are created they are not accessible to the team with the rest of the built items.

I have battled to sort this for a while, trying to avoid the need to edit our customised TFS build process template. This is something we try to avoid where possible, favouring environment variables and MSBuild arguments where we can get away with it. There is no point denying that editing build process templates is a pain point on TFS.

The solution – editing the process template

It turns out a colleague had fixed the same problem a few projects ago and the functionality was already hidden in our standard TFS build process template. The problem was it was not documented; a lesson for all of us that it is a very good idea to put customisation information in a searchable location, so others can find customisations that are not immediately obvious. Frankly this is one of the main purposes of this blog: somewhere I can find what I did years ago, as I won’t remember the details.

Anyway, the key is to make sure the publish target for the MSBuild uses the correct location to create the files. This is done using a pair of MSBuild arguments in the advanced section of the build configuration:

  • /t:MyCloudApp:Publish - this tells MSBuild to perform the publish action for just the project MyCloudApp. You might be able to just go /t:Publish if only one project in your solution has a Publish target.
  • /p:PublishDir=$(OutDir) - this is the magic. We pass in the temporary variable $(OutDir). At this point we don’t know the target binary location, as it is build agent/instance specific; customisation in the TFS build process template converts this temporary value to the correct path (the combined arguments are shown below).
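
Put together, the MSBuild arguments box in the build definition ends up containing:

/t:MyCloudApp:Publish /p:PublishDir=$(OutDir)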

In the build process template, in the Initialize Variable sequence within Run on Agent, add an If activity.


  • Set the condition to MSBuildArguments.Contains("$(OutDir)")
  • Within the true branch add an Assign activity that sets the MSBuildArguments variable to MSBuildArguments.Replace("$(OutDir)", String.Format("{0}\{1}\\", BinariesDirectory, "Packages"))

This will swap the $(OutDir) for the correct TFS binaries location within that build.

After that it all just works as expected. The CSPKG file etc. ends up in the drops location.

Other things that did not work (prior to TFS 2013)

I had also looked at running a PowerShell script at the end of the build process, or adding an AfterPublish target within the MSBuild process (by adding it to the project file manually) that did a file copy. Both these methods suffered the problem that when the MSBuild command ran it did not know the location to drop the files into. Hence the need for the customisation above.

Now I should point out that though we are running TFS 2013, this project was targeting the TFS 2012 build tools, so I had to use the solution outlined above, a process template edit. However, if we had been using the TFS 2013 process template as our base for customisation then we would have had another way to get around the problem.

TFS 2013 exposes the current build settings as environment variables. This would allow us to use an AfterPublish MSBuild target, something like:

<Target Name="CustomPostPublishActions" AfterTargets="AfterPublish" Condition="'$(TF_BUILD_DROPLOCATION)' != ''">
  <Exec Command="echo Post-PUBLISH event: Copying published files to: $(TF_BUILD_DROPLOCATION)" />
  <Exec Command="xcopy &quot;$(ProjectDir)bin\$(ConfigurationName)\app.publish&quot; &quot;$(TF_BUILD_DROPLOCATION)\app.publish&quot; /y " />
</Target>

So maybe a simpler option for the future?

The moral of the story: document your customisations and let your whole team know they exist.