But it works on my PC!

The random thoughts of Richard Fennell on technology and software development

“Communication with the deployer was lost during deployment” error with Release Management

Whilst developing a new Release Management pipeline I hit a problem: a component that published MSDeploy packages to Azure started to fail. It had been working, then suddenly I started seeing ‘communication with the deployer was lost during deployment’ messages.


No errors were shown in any logs I could find, and no files appeared on the deployment target (you would expect the deployed files/scripts to appear on the machine running the RM deployment client, under the C:\users\[account]\local\temp\RM folder structure).

Rebooting the Release Management server and the deployment client had no effect.

The client in this case was a VM we use to do remote deployments, e.g. to Azure, SQL clusters and other places where we cannot install the RM deployment client. It is used by a number of other pipelines, all of which were working, so I doubted it was a communications issue.

In the end, out of frustration, I tried re-adding the component to the workflow and re-entering the parameters. Once this was done, and the old component instances deleted, it all leapt into life.

I am not sure why I had the problem; I was trying to remember exactly what I did between the working and failing releases. All I can think is that I may have changed the password parameter to encrypted (I had forgotten to do this at first). I should have tried re-adding the component sooner, as I have posted on this error message before. All I can assume is that changing this parameter setting corrupted my component.

I should have read my own blog sooner

Publishing more than one Azure Cloud Service as part of a TFS build

Using the process in my previous post you can get a TFS build to create the .CSCFG and .CSPKG files needed to publish a cloud service. However, you hit a problem if your solution contains more than one cloud service project (as opposed to a single cloud service project with multiple roles, which is not a problem).

The method outlined in the previous post drops the two files into a Packages folder under the drops location. The .CSPKG files are fine, as they have unique names. However, there is only ever one ServiceConfiguration.cscfg: whichever one was created last overwrites the rest.

Looking in the cloud service projects I could find no way to rename the ServiceConfiguration file. It seems to be like an app.config or web.config file, i.e. its name is hard coded.

The only solution I could find was to add a custom target that is set to run after the Publish target. This was added to the end of each .CCPROJ file using a text editor, just before the closing </Project> tag

<Target Name="CustomPostPublishActions" AfterTargets="Publish">
  <!-- Skip the rename inside Visual Studio; only run on the TFS build box -->
  <Exec Command="IF '$(BuildingInsideVisualStudio)'=='true' exit 0
    echo Post-PUBLISH event: Active configuration is: $(ConfigurationName) renaming the .cscfg file to avoid name clashes
    echo Renaming the .CSCFG file to match the project name $(ProjectName).cscfg
    ren $(OutDir)Packages\ServiceConfiguration.*.cscfg $(ProjectName).cscfg
    " />
</Target>
<PropertyGroup>
  <!-- Reminder that this project has a post-publish target that is invisible in Visual Studio -->
  <PostBuildEvent>echo NOTE: This project has a post publish event</PostBuildEvent>
</PropertyGroup>

 

Using this I now get unique names for the .CSCFG files as well as the .CSPKG files in my drops location, all ready for Release Management to pick up.

Notes:

  • I echo out a message in the post build event as a reminder that I have added a custom target; it cannot be seen in Visual Studio, so it is hard to discover
  • I use an IF test to make sure the commands are only run on the TFS build box, not on a local build. The main reason is that the path names differ between local and TFS builds. If you do want a rename on a local build you need to change the $(OutDir)Packages path to $(OutDir)app.publish. However, it seemed more sensible to leave the default behaviour when running locally. A quick check of the resulting drop is sketched below
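As a quick sanity check after a build, something like this PowerShell lists the package pairs in the drop (a minimal sketch; the drop path shown is an example, not from a real build):

# List the now uniquely named package files in the build drop's Packages folder
# (the UNC path is an example; substitute your build's actual drop location)
Get-ChildItem -Path "\\server\drops\MyBuild\MyBuild_1.0.0.0\Packages" |
    Where-Object { $_.Extension -in '.cspkg', '.cscfg' } |
    Select-Object Name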

Getting the correct path and name for a project to pass as an MSBuild argument in TFS Build

I have been sorting out some builds for use with Release Management that include Azure cloud solutions. To get the correct packages built by TFS I followed the process in my past blog post. The problem was I kept getting the build error

The target "Azure Packages\BlackMarble.Win8AppBuilder.AzureApi" does not exist in the project.

The issue was I could not get the solution folder/project name right for the MSBUILD target parameter. Was it the spaces in the folder? I just did not know.

The solution was to check the project file that was actually being run by MSBuild. As you may know, a .SLN file is not in MSBuild format, so you can’t just open it in Notepad and look (unlike .CSPROJ or .VBPROJ files); MSBuild generates a project from it on the fly. To see this generated code, at a developer’s command prompt, run the following commands

cd c:\mysolutionroot
Set MSBuildEmitSolution=1
msbuild

When the MSBUILD command is run, whether the build works or not, a mysolution.sln.metaproj file should be created. If you look in this file you will see the actual targets MSBUILD thinks it is dealing with.

In my case I could see

<Target Name="Azure Packages\BlackMarble_Win8AppBuilder_AzureApi:Publish">

So the first issue was that my '.' characters had been replaced by '_'.

I changed my MSBUILD target argument to that shown in the file, but still had a problem. However, once I changed the space in the solution folder name to %20 all was OK. So my final MSBUILD argument was

/t:Azure%20Packages\BlackMarble_Win8AppBuilder_AzureApi:Publish
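As an aside, rather than hunting through the .metaproj by eye, you can list every target name MSBuild generated with a couple of lines of PowerShell (a sketch; adjust the file name to match your solution):

# Dump the names of all targets in the generated solution metaproject
$metaproj = [xml](Get-Content .\mysolution.sln.metaproj)
$metaproj.Project.Target | ForEach-Object { $_.GetAttribute('Name') }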


Deploying a Windows service with Release Management

I recently needed to deploy a Windows service as part of a Release Management pipeline. In the past, for our internal systems, I have only needed to deploy databases (via SSDT DACPACs) and websites (via MSDeploy), so this was a new experience.

WIX Contents

The first step was to create an MSI installer for the service. This was done using WiX, with all the fun that usually entails. The key part was a component to do the actual registration and starting of the service

<Component Id="ModuleHostInstall" Guid="{3DF13451-6A04-4B62-AFCB-731A572C12C9}" Win64="yes">
   <CreateFolder />
   <Util:User Id="ModuleHostServiceUser" CreateUser="no" Name="[SERVICEUSER]" Password="[PASSWORD]" LogonAsService="yes" />
   <File Id="CandyModuleHostService" Name="DataFeed.ModuleHost.exe" Source="$(var.ModuleHost.TargetDir)\ModuleHost.exe" KeyPath="yes" Vital="yes"/>
   <ServiceInstall Id="CandyModuleHostService" Name="ModuleHost" DisplayName="Candy Module Host" Start="auto" ErrorControl="normal" Type="ownProcess" Account="[SERVICEUSER]" Password="[PASSWORD]" Description="Manages the deployment of Candy modules" />
   <ServiceControl Id="CandyModuleHostServiceControl" Name="ModuleHost" Start="install" Stop="both" Wait="yes" Remove="uninstall"/>
</Component>

Nothing that special here, but it is worth remembering that if you miss out the ServiceControl block the service will not automatically start, nor will it be removed when the MSI is uninstalled.

You can see that we pass in the service account to be used to run the service as a property. This is an important technique when using WiX with Release Management: anything you may want to change at installation time should be passed in as a parameter. This means we ended up with a good few properties, such as

  <Property Id="DBSERVER" Value=".\sqlexpress" />
  <Property Id="DBNAME" Value="CandyDB" />
  <Property Id="SERVICEUSER" Value="Domain\serviceuser" />
  <Property Id="PASSWORD" Value="Password1" />

These tended to equate to app.config settings. In all cases I tried to set sensible default values so in most cases I could avoid passing in an override value.

These property values were then used to rewrite the app.config file after the MSI has copied the files onto the target server. This was done using the WiX XmlFile utility and some XPath, e.g.

<Util:XmlFile Id="CacheDatabaseName"
   Action="setValue"
   Permanent="yes"
   File="[#ModuleHost.exe.config]"
   ElementPath="/configuration/applicationSettings/DataFeed.Properties.Settings/setting[\[]@name='CacheDatabaseName'[\]]/value"
   Value="[CACHEDATABASENAME]"
   Sequence="1" />
 

Command Line Testing

Once the MSI was built it could be tested from the command line using the form

msiexec /i Installer.msi /Lv msi.log SERVICEUSER="domain\svc_acc" PASSWORD="Password1" DBSERVER="dbserver" DBNAME="myDB" …..

I soon spotted a problem. As I was equating properties with app.config settings I was passing in connection strings and URLs, so the command line got very long very quickly. It was really unwieldy to handle.

A check of the log file I was creating, msi.log, showed the command line seemed to be truncated at around 1000 characters. I am not sure if this was an artefact of the logging or of the command line itself, but either way it was a good reason to try to shorten the property list.

I therefore decided that I would not pass in whole connection strings, just the parts that might change; this is especially effective for connection strings such as those used by Entity Framework. This meant doing some string building in WiX during the transformation of the app.config file, e.g.

<Util:XmlFile Id='CandyManagementEntities1'
   Action='setValue'
   ElementPath='/configuration/connectionStrings/add[\[]@name="MyManagementEntities"[\]]/@connectionString'
   File='[#ModuleHost.exe.config]' Value='metadata=res://*/MyEntities.csdl|res://*/MyEntities.ssdl|res://*/MyEntities.msl;provider=System.Data.SqlClient;provider connection string=&quot;data source=[DBSERVER];initial catalog=[DBNAME];integrated security=True;MultipleActiveResultSets=True;App=EntityFramework&quot;' />

This technique had another couple of advantages

  • It meant I did not need to worry about spaces in strings, so I could lose the quotes in the command line (it turns out this is really important later).
  • As I was passing in just a ‘secret value’, as opposed to a whole URL, I could use the encryption features of Release Management to hide certain values.

It was at this point that I was delayed for a long time. You have to be really careful when installing Windows services via an MSI that your service can actually start. If it cannot, you will get errors saying "… could not be installed. Verify that you have sufficient privileges to install system services". This is probably not really a rights issue; more likely some configuration setting is wrong and the service has failed to start. In my case it was down to an incorrect connection string (stray commas and quotes) and a missing DLL that should have been in the installer. You often end up working fairly blind at this point, as Windows services don’t give much information when they fail to load. Persistence, the SysInternals tools and comparing against the settings/files on a working development PC are the best options.
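One extra source of clues is the Application event log on the target machine; Windows Installer logs failures there under the MsiInstaller source. A quick way to pull the recent entries (a sketch using the standard Get-EventLog cmdlet):

# Show recent Windows Installer entries from the Application event log
Get-EventLog -LogName Application -Newest 50 |
    Where-Object { $_.Source -eq 'MsiInstaller' } |
    Select-Object TimeGenerated, EntryType, Message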

Release Management Component

Once I had a working command line I could create a component in Release Management. On the Configure Apps > Components page I already had a generic MSI deployer, but this did not expose any properties. I therefore copied this component to create an MSI deployer specific to my new service installer and started to edit it.

All the edits were on the deployment tab, adding the extra properties that could be configured.


Note: it might be possible to do something with the pre/post deployment configuration variables, as we do with MSDeploy, letting the MSI run and then editing the app.config afterwards. However, given that MSI service installers tend to fail if they cannot start the new service, I think passing the correct properties into MSIEXEC is the better option. It also means the behaviour is consistent for anyone using the MSI via the command line.

On the Deployment tab I changed the Arguments to

-File ./msiexec.ps1 -MsiFileName "__Installer__" -MsiCustomArgs 'SERVICEUSER="__SERVICEUSER__" PASSWORD="__PASSWORD__" DBSERVER="__DBSERVER__" DBNAME="__DBNAME__" ….'

I had initially assumed I needed the quotes around the property values. It turns out I didn’t, and due to the way Release Management runs the component they made matters much, much worse: MSIEXEC kept failing instantly. If I ran the command line by hand on the target machine it actually showed the MSIEXEC help dialog, so I knew the command line was invalid.

It turns out that Release Management calls PowerShell.EXE, passing in the arguments shown above; this runs a PowerShell script which does some argument processing before spawning a process to run MSIEXEC.EXE with some parameters. You can see there are plenty of places where the escaping and quoting of parameters could get confused.
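I have not shown the actual script’s contents here, but a minimal sketch of that style of wrapper illustrates the layers any quoting has to survive (the parameter names match the arguments above; the body is my assumption, not the shipped script):

# msiexec.ps1 (sketch): a wrapper so Release Management can drive MSIEXEC
param
(
    [string]$MsiFileName,   # path to the MSI to install
    [string]$MsiCustomArgs  # extra PROPERTY=value pairs to pass through
)

# Build the MSIEXEC argument string: silent install, verbose log
$arguments = "/i `"$MsiFileName`" /qn /Lv msi.log $MsiCustomArgs"

# Run MSIEXEC, wait for it, and surface its exit code to the caller
$process = Start-Process -FilePath "msiexec.exe" -ArgumentList $arguments -Wait -PassThru
exit $process.ExitCode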

After much fiddling, swapping ‘ for “, I realised I could just forget most of the quotes. I had already edited my WiX package to build the complex strings, so the actual values were simple, with no spaces. Hence my command line became

-File ./msiexec.ps1 -MsiFileName "__Installer__" -MsiCustomArgs "SERVICEUSER=__SERVICEUSER__ PASSWORD=__PASSWORD__ DBSERVER=__DBSERVER__ DBNAME=__DBNAME__ …."

Once this was set my release pipeline worked, resulting in a system with databases, web services and the Windows service all up and running.

As is often the case it took a while to get this first MSI running, but I am sure the next one will be much easier.

Getting ‘… is not a valid URL’ when using Git TF Clone

I have been attempting to use the Git TF technique to migrate some content between TFS servers. I needed to move a folder structure that contains spaces in its folder names, from a team project collection (TPC) that also contains spaces in its name. So I thought my command line would be

git tf clone "http://tfsserver1:8080/tfs/My Tpc" "$/My Folder" oldrepo --deep

But this gave the error

git-tf: "http://tfsserver1:8080/tfs/My Tpc" is not a valid URL

At first I suspected it was the quotes I was using, as I had had problems here before, but swapping from ‘ to “ made no difference.

The answer was to use the URL-encoded form %20 for the space, so this version of the command worked

git tf clone http://tfsserver1:8080/tfs/My%20Tpc "$/My Folder" oldrepo --deep

Interestingly, you don’t need to use %20 for the folder name.
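If you are scripting the migration, PowerShell can do the encoding for you (a sketch; [uri]::EscapeDataString turns the space into %20):

# Build the collection URL with the space percent-encoded, then clone
$tpc = "http://tfsserver1:8080/tfs/" + [uri]::EscapeDataString("My Tpc")
git tf clone $tpc '$/My Folder' oldrepo --deep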

Build failing post TFS 2013.3 upgrade with ‘Stack empty. (type InvalidOperationException)’

I have just started seeing a build error on a build that was working until we upgraded the build agent to TFS 2013.3

Exception Message: Stack empty. (type InvalidOperationException)
Exception Stack Trace:    at Microsoft.VisualStudio.TestImpact.Analysis.LanguageSignatureParser.NotifyEndType()
   at Microsoft.VisualStudio.TestImpact.Analysis.SigParser.ParseType()
   at Microsoft.VisualStudio.TestImpact.Analysis.SigParser.ParseRetType()
   at Microsoft.VisualStudio.TestImpact.Analysis.SigParser.ParseMethod(Byte num1)
   at Microsoft.VisualStudio.TestImpact.Analysis.SigParser.Parse(Byte* blob, UInt32 len)
   at Microsoft.VisualStudio.TestImpact.Analysis.LanguageSignatureParser.ParseMethodName(MethodProps methodProps, String& typeName, String& fullName)
   at Microsoft.VisualStudio.TestImpact.Analysis.AssemblyMethodComparer.AddChangeToList(DateTime now, List`1 changes, CodeChangeReason reason, MethodInfo methodInfo, MetadataReader metadataReader, Guid assemblyIdentifier, SymbolReader symbolsReader, UInt32 sourceToken, LanguageSignatureParser& languageParser)
   at Microsoft.VisualStudio.TestImpact.Analysis.AssemblyMethodComparer.CompareAssemblies(String firstPath, String secondPath, Boolean lookupSourceFiles)
   at Microsoft.TeamFoundation.TestImpact.BuildIntegration.BuildActivities.GetImpactedTests.CompareBinary(CodeActivityContext context, String sharePath, String assembly, IList`1 codeChanges)
   at Microsoft.TeamFoundation.TestImpact.BuildIntegration.BuildActivities.GetImpactedTests.CompareBuildBinaries(CodeActivityContext context, IBuildDefinition definition, IList`1 codeChanges)
   at Microsoft.TeamFoundation.TestImpact.BuildIntegration.BuildActivities.GetImpactedTests.Execute(CodeActivityContext context)
   at System.Activities.CodeActivity.InternalExecute(ActivityInstance instance, ActivityExecutor executor, BookmarkManager bookmarkManager)
   at System.Activities.Runtime.ActivityExecutor.ExecuteActivityWorkItem.ExecuteBody(ActivityExecutor executor, BookmarkManager bookmarkManager, Location resultLocation)

I assume the issue is a DLL mismatch between what is installed as part of the new build agent and something in the 2012-generation build process template in use.

The immediate fix, until I get a chance to swap the template for a newer one, was to disable Test Impact Analysis, which I was not using for this project anyway.


Once I did this my build completed OK and the tests ran OK.

Listing all the PBIs that have no acceptance criteria

Update 24 Aug 2014:  Changed the PowerShell to use a pipe based filter as opposed to nested foreach loops

The TFS Scrum process template’s Product Backlog Item work item type has an acceptance criteria field. It is good practice to make sure every PBI has this field completed; however, it is not always possible to enter this content when the work item is initially created, i.e. before it is approved. We often find we add a PBI that is basically just a title, then add the summary and acceptance criteria as the product is planned.

It would be really nice to have a TFS work item query that listed all the PBIs that do not have the acceptance criteria field completed. Unfortunately there is no way to check that a rich text or HTML field is empty in TFS queries. It has been requested via UserVoice, but there is no sign of it appearing in the near future.

So we are left with the TFS API to save the day. The following PowerShell function does the job, returning a list of non-completed PBI work items that have empty acceptance criteria.

 

# Load the required API assemblies; this might be more than we truly need for this single
# function, but I usually keep all these functions in one module so they share the references
$ReferenceDllLocation = "C:\Program Files (x86)\Microsoft Visual Studio 12.0\Common7\IDE\ReferenceAssemblies\v2.0\"
Add-Type -Path $ReferenceDllLocation"Microsoft.TeamFoundation.Client.dll" -ErrorAction Stop -Verbose
Add-Type -Path $ReferenceDllLocation"Microsoft.TeamFoundation.Common.dll" -ErrorAction Stop -Verbose
Add-Type -Path $ReferenceDllLocation"Microsoft.TeamFoundation.WorkItemTracking.Client.dll" -ErrorAction Stop -Verbose


 

function Get-TfsPBIWIthNoAcceptanceCriteria
{
    <#
    .SYNOPSIS
    This function gets the list of PBI work items that have no acceptance criteria

    .DESCRIPTION
    This function allows a check to be made that all PBIs have a set of acceptance criteria

    .PARAMETER CollectionUri
    TFS Collection URI

    .PARAMETER TeamProject
    Team Project Name

    .EXAMPLE
    Get-TfsPBIWIthNoAcceptanceCriteria -CollectionUri "http://server1:8080/tfs/defaultcollection" -TeamProject "My Project"
    #>
    Param
    (
        [Parameter(Mandatory=$true)]
        [uri] $CollectionUri,

        [Parameter(Mandatory=$true)]
        [string] $TeamProject
    )

    # get the source TPC
    $teamProjectCollection = New-Object Microsoft.TeamFoundation.Client.TfsTeamProjectCollection($CollectionUri)
    try
    {
        $teamProjectCollection.EnsureAuthenticated()
    }
    catch
    {
        Write-Error "Error occurred trying to connect to project collection: $_ "
        exit 1
    }

    # get the work item store
    $wiService = $teamProjectCollection.GetService([Microsoft.TeamFoundation.WorkItemTracking.Client.WorkItemStore])

    # find the candidate work items; we can't check the acceptance criteria field in the query itself
    $pbi = $wiService.Query("SELECT [System.Id] FROM WorkItems WHERE [System.TeamProject] = '{0}' AND [System.WorkItemType] = 'Product Backlog Item' AND [System.State] <> 'Done' ORDER BY [System.Id]" -f $TeamProject)

    # filter the work items using a single piped line
    $pbi | Where-Object { $_.Fields | Where-Object { $_.ReferenceName -eq 'Microsoft.VSTS.Common.AcceptanceCriteria' -and $_.Value -eq "" } }

    # the pipe above is equivalent to the following nested loops,
    # for those who prefer a more long-winded structure
    # $results = @()
    # foreach ($wi in $pbi)
    # {
    #     foreach ($field in $wi.Fields)
    #     {
    #         if ($field.ReferenceName -eq 'Microsoft.VSTS.Common.AcceptanceCriteria' -and $field.Value -eq "")
    #         {
    #             $results += $wi
    #         }
    #     }
    # }
    # $results
}
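Calling it is then a one-liner; the returned items are standard WorkItem objects, so the output can be shaped as required (the collection URL and project name are examples):

# List the IDs and titles of the PBIs that have no acceptance criteria
Get-TfsPBIWIthNoAcceptanceCriteria -CollectionUri "http://server1:8080/tfs/defaultcollection" -TeamProject "My Project" |
    Select-Object Id, Title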

Why is my TFS report not failing when I really think it should?

Whilst creating some custom reports for a client we hit a problem: though the reports worked on my development system and on their old TFS server, they failed on their new one. The error was that Microsoft_VSTS_Scheduling_CompletedWork was an invalid column name.


Initially I suspected the problem was a warehouse reprocessing issue, but other reports worked so it could not have been that.

So the column really must be missing, and that sort of makes sense. On the new server the team was using the Scrum process template, and the Microsoft_VSTS_Scheduling_CompletedWork and Microsoft_VSTS_Scheduling_OriginalEstimate fields are not included in that template; the plan had been to add them to allow some analysis of estimate accuracy. This had been done on my development system, but not on the client’s new server. Once these fields were added to the Task work item the report leapt into life.
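As an aside, you can check directly whether the relational warehouse has picked up a reportable field (a sketch; it assumes the default Tfs_Warehouse database name and the Invoke-Sqlcmd cmdlet from the SQL Server PowerShell tools):

# Look for the field's column on the warehouse work item dimension table
Invoke-Sqlcmd -ServerInstance "mywarehousesqlserver" -Database "Tfs_Warehouse" -Query @"
SELECT name FROM sys.columns
WHERE object_id = OBJECT_ID('dbo.DimWorkItem')
AND name LIKE 'Microsoft_VSTS_Scheduling%'
"@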

The question then is, why did this work on the old TFS server? The team project on the old server being used to test the reports did not have the customisation either. However, remember that the OLAP cube for the TFS warehouse is shared between ALL team projects on a server; as one of the other team projects was using the MSF Agile template, which does include these fields, they were present and hence the report worked.

Remember that shared OLAP cube; it can trip you up over and over again.

Getting the Typemock TFS build activities to work on a TFS build agent running in interactive mode

Windows 8 Store applications need to be built on a TFS build agent running in interactive mode if you wish to run any tests. So, whilst rebuilding all our build systems, I decided to try to have all the agents run interactively. As we tend to run one agent per VM, this was not going to be a major issue, I thought.

However, whilst testing we found that any of our builds that used the Typemock build activities failed when the build agent was running interactively, but worked perfectly when it was running as a service. The error was

 

Exception Message: Access to the registry key 'HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\TypeMock' is denied. (type UnauthorizedAccessException)
Exception Stack Trace:    at Microsoft.Win32.RegistryKey.Win32Error(Int32 errorCode, String str)
   at Microsoft.Win32.RegistryKey.CreateSubKeyInternal(String subkey, RegistryKeyPermissionCheck permissionCheck, Object registrySecurityObj, RegistryOptions registryOptions)
   at Microsoft.Win32.RegistryKey.CreateSubKey(String subkey, RegistryKeyPermissionCheck permissionCheck)
   at Configuration.RegistryAccess.CreateSubKey(RegistryKey reg, String subkey)
   at TypeMock.Configuration.IsolatorRegistryManager.CreateTypemockKey()
   at TypeMock.Deploy.AutoDeployTypeMock.Deploy(String rootDirectory)
   at TypeMock.CLI.Common.TypeMockRegisterInfo.Execute()
   at TypeMock.CLI.Common.TypeMockRegisterInfo..ctor()   at System.Activities.Statements.Throw.Execute(CodeActivityContext context)
   at System.Activities.CodeActivity.InternalExecute(ActivityInstance instance, ActivityExecutor executor, BookmarkManager bookmarkManager)
   at System.Activities.Runtime.ActivityExecutor.ExecuteActivityWorkItem.ExecuteBody(ActivityExecutor executor, BookmarkManager bookmarkManager, Location resultLocation)

 

So the issue was registry access. Irrespective of whether it was running interactively or as a service, I used the same domain service account, which was a local admin on the build agent. The only thing that changed was the mode of running.

After some thought I focused on UAC being the problem, but disabling it did not seem to fix the issue. I was stuck, or so I thought.

However, Robert Hancock, unknown to me, was suffering a similar problem with a TFS build that included a post build event failing to XCOPY a BizTalk custom functoid DLL to ‘Program Files’. He kept getting an ‘exit code 4 access denied’ error when the build agent was running interactively. It turns out the solution he found on Daniel Petri’s blog also fixed my issue, as both were UAC/desktop interaction related.

The solution was to create a group policy for the build agent VMs that set the following (a registry-based equivalent for testing on a single VM is sketched after the list):

  • User Account Control: Behavior of the elevation prompt for administrators in Admin Approval Mode - Set its value to Elevate without prompting.
  • User Account Control: Detect application installations and prompt for elevation - Set its value to Disabled.
  • User Account Control: Only elevate UIAccess applications that are installed in secure locations - Set its value to Disabled.
  • User Account Control: Run all administrators in Admin Approval Mode - Set its value to Disabled.
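If you want to try the effect on a single VM before rolling out a GPO, these four policies map onto well-known registry values (a sketch of my own, not from Robert or Daniel’s posts; run elevated and reboot afterwards):

# The UAC policy settings live under this key
$key = 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System'
Set-ItemProperty -Path $key -Name ConsentPromptBehaviorAdmin -Value 0  # Elevate without prompting
Set-ItemProperty -Path $key -Name EnableInstallerDetection -Value 0    # installer detection disabled
Set-ItemProperty -Path $key -Name EnableSecureUIAPaths -Value 0        # UIAccess secure locations disabled
Set-ItemProperty -Path $key -Name EnableLUA -Value 0                   # Admin Approval Mode disabled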

Once this GPO was pushed out to the build agent VMs and they were rebooted, my Typemock-based builds and Robert’s BizTalk builds all worked as expected.