# But it works on my PC!

### The random thoughts of Richard Fennell on technology and software development

I like web deploy as a means to package up websites for deployment. I like the way I only need to add

```
/p:DeployOnBuild=True;PublishProfile=Release
```

as an MSBuild argument to get the package produced as part of an automated build. This opens up loads of deployment options.

I recently hit an issue packaging up a solution that contained an Azure WebSite and an Azure Web Job (to be hosted in the web site). It is easy to add the web job so that it is included in the Web Deploy package. Once this was done we could deploy from Visual Studio, or package to the local file system and see the web job EXE in the app_data\jobs folder as expected.

The problems occurred when we tried to get TFS build to create the deployment package using the arguments shown above. I got the error

```
The value for PublishProfile is set to 'Release', expected to find the file at 'C:\vNextBuild\_work\4253ff91\BM\Src\MyWebJob\Properties\PublishProfiles\Release.pubxml' but it could not be found.
```

The issue is that there is a Publish target for the web jobs project type, but when run from Visual Studio it actually creates a ClickOnce package. The wizard provides no means to create an MSDeploy style package.

MSBuild is getting confused as it expects there to be this MSDeploy style package definition for the web job projects, even though it won’t actually use it, as the Web Job EXE will be copied into the web site deployment package.

The solution was to add a dummy PublishProfiles\Release.pubxml file into the properties folder of the web jobs project.

```xml
<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <WebPublishMethod>Package</WebPublishMethod>
    <LastUsedBuildConfiguration>Release</LastUsedBuildConfiguration>
    <LastUsedPlatform>Any CPU</LastUsedPlatform>
    <SiteUrlToLaunchAfterPublish />
    <LaunchSiteAfterPublish>True</LaunchSiteAfterPublish>
    <ExcludeApp_Data>False</ExcludeApp_Data>
    <DesktopBuildPackageLocation />
    <PackageAsSingleFile>true</PackageAsSingleFile>
    <DeployIisAppPath />
    <PublishDatabaseSettings />
  </PropertyGroup>
</Project>
```

Note: I had to add this file to source control via the TFS Source Code Explorer, as Visual Studio does not allow you to add folders/files manually under the Properties folder.

Once this file was added my automated build worked OK, and I got my web site package including the web job.

A couple of months ago I wrote a post on using PowerShell scripts to deploy web sites in Release Management vNext templates as opposed to DSC. In that post I provided a script to help with the translation of Release Management configuration variables to entries in the [MSDeploy].SetParameters.xml file for web sites.

The code I provided in that post required you to hard code the variables to translate. This quickly became a maintenance problem. However, there is a simple solution.

If we use a naming convention for our RM configuration variables that maps to web.config entries (I chose __NAME__ to be consistent with the old RM agent based deployment standards) we can let PowerShell do the work.

So the revised script is

```powershell
$VerbosePreference = 'Continue' # equiv to -verbose

function Update-ParametersFile
{
    param
    (
        $paramFilePath,
        $paramsToReplace
    )

    write-verbose "Updating parameters file '$paramFilePath'" -verbose
    $content = get-content $paramFilePath
    $paramsToReplace.GetEnumerator() | % {
        Write-Verbose "Replacing value for key '$($_.Name)'" -Verbose
        $content = $content.Replace($_.Name, $_.Value)
    }
    set-content -Path $paramFilePath -Value $content
}

# the script folder
$folder = Split-Path -parent $MyInvocation.MyCommand.Definition

write-verbose "Deploying Website '$package' using script in '$folder'"

# work out the variables to replace using a naming convention
$parameters = Get-Variable -include "__*__"
write-verbose "Discovered replacement parameters that match the convention '__*__': $($parameters | Out-String)"

Update-ParametersFile -paramFilePath "$ApplicationPath\$packagePath\$package.SetParameters.xml" -paramsToReplace $parameters

write-verbose "Calling '$ApplicationPath\$packagePath\$package.deploy.cmd'"
& "$ApplicationPath\$packagePath\$package.deploy.cmd" /Y /m:"$PublishUrl" -allowUntrusted /u:"$PublishUser" /p:"$PublishPassword" /a:Basic | Write-Verbose
```

Note: This script allows deployment to a remote IIS server, so it is useful for Azure Web Sites. If you are running it locally on an IIS server just trim everything after the /Y on the last line.

So now I provide

• $PackagePath – path to our deployment on the deployment VM (relative to the $ApplicationPath local working folder)
• $Package – name of the MSDeploy package
• The publish settings, which you can get from the Azure Portal
• $__PARAM1__ – a value to swap into the web.config
• $__PARAM2__ – another value to swap into the web.config
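To make the convention concrete, here is the same discover-and-replace idea sketched in Python (the variable names and file content are illustrative only; the real script uses PowerShell's Get-Variable and string Replace):

```python
# Sketch: find convention-named variables and swap them into a
# SetParameters.xml-style string. Names and values here are hypothetical.
variables = {
    "__PARAM1__": "Server=Db1;Database=Prod",
    "__PARAM2__": "https://api.example.com",
    "NotAMatch": "ignored",  # does not match the __*__ convention
}

content = '<setParameter name="ConnStr" value="__PARAM1__" />'

# Only names matching the __*__ convention take part in the replacement.
for name, value in variables.items():
    if name.startswith("__") and name.endswith("__"):
        content = content.replace(name, value)

print(content)  # the __PARAM1__ token has been replaced with the configured value
```

The point of the convention is that new settings need only a correctly named RM variable; no script change is required.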

In RM it will look like this.

So now you can use a single script for all your web deployments.

DDDNorth is on again this year, back at its more northern base of Sunderland University, on the 24th of October.

You can submit your session proposal here.

### Background

We upgraded our production TFS 2013.4 server to TFS 2015 RTM this week. As opposed to an in-place upgrade we chose to make a few changes along the way; so whilst leaving our DBs on our SQL 2012 cluster:

• We moved to a new VM for our AT (to upgrade from Windows 2008R2 to 2012R2)
• Split the SSRS instance off the AT to a separate VM with a new SSAS server (again to move to 2012R2 and to ease management, getting all the reporting bits in one place)

But we did not touch:

• Our XAML build systems, leaving them at 2013, as we intend to migrate to vNext build ASAP
• Our Test Controller/Release Management/Lab Environment, leaving it at 2013 for now, as we have other projects on the go to update the hardware/cloud solutions underpinning these.

All went well, with no surprises; the running of the upgrade tool took about 1 hour.

### The Problem

The only problem we have had was to do with my TFS Alerts DSL Processor, which listens for TFS alerts and runs custom scripts. I host this on the TFS AT, and I would expect it to set build retention and send emails when a TFS XAML build quality changes. This did not occur; in the Windows event log I was seeing

```
2015-08-12 21:04:02.4195 ERROR TFSEventsProcessor.DslScriptService: TF30063: You are not authorized to access https://tfs.blackmarble.co.uk/tfs/DefaultCollection.
```

After much fiddling, including writing a small command line test client, I confirmed that the issue was specific to the production server. The tool ran fine on other PCs, but on the live server a Windows authentication dialog was shown which would not accept any valid credentials.

It was not, as I had feared, a change in the TFS API; in fact there is no reason my 2012 or 2013 API targeted version of the TFS Alert DSL should not be able to talk to a TFS 2015 server, as long as the correct version of the TFS API is installed on the machine hosting the DSL.

### The Solution

The issue was due to Windows loopback protection. This had been disabled on our old TFS AT, but not on the new one. As we wanted to avoid changing the global loopback protection setting, we set the following via Regedit to allow it for a single CNAME:

```
Key       - HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa\MSV1_0
ValueName - BackConnectionHostNames
Type      - multistring
Data      - tfs.blackmarble.co.uk
```
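If you prefer to script the change rather than use Regedit, the equivalent `reg.exe` one-liner would be something along these lines (a sketch of the same key and values as above; run from an elevated prompt and apply with care):

```
reg add "HKLM\SYSTEM\CurrentControlSet\Control\Lsa\MSV1_0" /v BackConnectionHostNames /t REG_MULTI_SZ /d "tfs.blackmarble.co.uk"
```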

Once this was done (and without a reboot) my alerts processing worked without any problems.

In my last post I discussed how you could wire TCM tests into a Release Management vNext pipeline. The problem with the script I provided, as I noted, was that the deployment was triggered synchronously by the build, i.e. the build/release process was:

1. TFS Build
   1. Gets the source
   2. Compiles the code
   3. Runs the unit tests
   4. Triggers the RM pipeline
   5. Waits while the RM pipeline completes
2. RM then
   1. Deploys the code
   2. Runs the integration tests
3. When RM completes, the TFS build completes

This process raised a couple of problems:

• You cannot associate the integration tests with the build, as TCM only allows association with completed successful builds. When TCM finishes in this model the build is still in progress.
• You have to target only the first automated stage of the pipeline, else the build will be held as ‘in progress’ until all the release stages have completed, which may be days if there are manual approvals involved.

## The script InitiateReleaseFromBuild

These problems can all be fixed by altering the PowerShell script that triggers the RM pipeline so that it does not wait for the deployment to complete, meaning the TFS build finishes as soon as possible.

This is done by passing in an extra parameter, $waitForCompletion, which is set in the TFS build.

```powershell
param
(
    [string]$rmserver = $Args[0],
    [string]$port = $Args[1],
    [string]$teamProject = $Args[2],
    [string]$targetStageName = $Args[3],
    [string]$waitForCompletion = $Args[4]
)

cls

$teamFoundationServerUrl = $env:TF_BUILD_COLLECTIONURI
$buildDefinition = $env:TF_BUILD_BUILDDEFINITIONNAME
$buildNumber = $env:TF_BUILD_BUILDNUMBER

"Executing with the following parameters:`n"
"  RMserver Name: $rmserver"
"  Port number: $port"
"  Team Foundation Server URL: $teamFoundationServerUrl"
"  Team Project: $teamProject"
"  Build Definition: $buildDefinition"
"  Build Number: $buildNumber"
"  Target Stage Name: $targetStageName`n"
"  Wait for RM completion: $waitForCompletion`n"

$wait = [System.Convert]::ToBoolean($waitForCompletion)
$exitCode = 0

trap
{
    $e = $error[0].Exception
    $e.Message
    $e.StackTrace
    if ($exitCode -eq 0) { $exitCode = 1 }
}

$scriptName = $MyInvocation.MyCommand.Name
$scriptPath = Split-Path -Parent (Get-Variable MyInvocation -Scope Script).Value.MyCommand.Path

Push-Location $scriptPath

$server = [System.Uri]::EscapeDataString($teamFoundationServerUrl)
$project = [System.Uri]::EscapeDataString($teamProject)
$definition = [System.Uri]::EscapeDataString($buildDefinition)
$build = [System.Uri]::EscapeDataString($buildNumber)
$targetStage = [System.Uri]::EscapeDataString($targetStageName)

$serverName = $rmserver + ":" + $port
$orchestratorService = "http://$serverName/account/releaseManagementService/_apis/releaseManagement/OrchestratorService"

$status = @{ "2" = "InProgress"; "3" = "Released"; "4" = "Stopped"; "5" = "Rejected"; "6" = "Abandoned"; }

$uri = "$orchestratorService/InitiateReleaseFromBuild?teamFoundationServerUrl=$server&teamProject=$project&buildDefinition=$definition&buildNumber=$build&targetStageName=$targetStage"
"Executing the following API call:`n`n$uri"

$wc = New-Object System.Net.WebClient
$wc.UseDefaultCredentials = $true
# rmuser should be part of the RM users list and should have permission to trigger the release.
#$wc.Credentials = new-object System.Net.NetworkCredential("rmuser", "rmuserpassword", "rmuserdomain")

try
{
    $releaseId = $wc.DownloadString($uri)

    $url = "$orchestratorService/ReleaseStatus?releaseId=$releaseId"
    $releaseStatus = $wc.DownloadString($url)

    if ($wait -eq $true)
    {
        Write-Host -NoNewline "`nReleasing ..."

        while ($status[$releaseStatus] -eq "InProgress")
        {
            Start-Sleep -s 5
            $releaseStatus = $wc.DownloadString($url)
            Write-Host -NoNewline "."
        }

        " done.`n`nRelease completed with {0} status." -f $status[$releaseStatus]
    }
    else
    {
        Write-Host -NoNewline "`nTriggering Release and exiting"
    }
}
catch [System.Exception]
{
    if ($exitCode -eq 0) { $exitCode = 1 }
    Write-Host "`n$_`n" -ForegroundColor Red
}

if ($exitCode -eq 0)
{
    if ($wait -eq $true)
    {
        if ($releaseStatus -eq 3)
        {
            "`nThe script completed successfully. Product deployed without error`n"
        }
        else
        {
            Write-Host "`nThe script completed successfully. Product failed to deploy`n" -ForegroundColor Red
            $exitCode = -1 # reset the code to show the error
        }
    }
    else
    {
        "`nThe script completed successfully. Product deploying`n"
    }
}
else
{
    $err = "Exiting with error: " + $exitCode + "`n"
    Write-Host $err -ForegroundColor Red
}

Pop-Location

exit $exitCode
```

## The Script TcmExecWrapper

A change is also required in the wrapper script I use to trigger the TCM test run. We need to check the exit code from the inner TCM PowerShell script and update the TFS build quality appropriately. To do this I use the new REST API in TFS 2015, as this is far easier than using the older .NET client API: there are no DLLs to distribute.

It is worth noting that:

• I pass into the script the credentials from RM that are used to talk to the TFS server. This is because I am running my tests in a network isolated TFS Lab Environment, which means I am in the wrong domain to see the TFS server without providing login details. If you are not working cross domain you could just use default credentials.
• RM only passes the BuildNumber into the script, e.g. MyBuild_1.2.3.4, but the REST API needs the build id to set the quality. Hence the need for the function Get-BuildDetailsByNumber to get the id from the name.

```powershell
# Output execution parameters.
$VerbosePreference = 'Continue' # equiv to -verbose

function Get-BuildDetailsByNumber
{
    param
    (
        $tfsUri,
        $buildNumber,
        $username,
        $password
    )

    $uri = "$($tfsUri)/_apis/build/builds?api-version=2.0&buildnumber=$buildNumber"

    $wc = New-Object System.Net.WebClient
    #$wc.UseDefaultCredentials = $true
    $wc.Credentials = new-object System.Net.NetworkCredential($username, $password)
    write-verbose "Getting ID of $buildNumber from $tfsUri"
    $jsondata = $wc.DownloadString($uri) | ConvertFrom-Json
    $jsondata.value[0]
}

function Set-BuildQuality
{
    param
    (
        $tfsUri,
        $buildID,
        $quality,
        $username,
        $password
    )

    $uri = "$($tfsUri)/_apis/build/builds/$($buildID)?api-version=1.0"
    $data = @{quality = $quality} | ConvertTo-Json

    $wc = New-Object System.Net.WebClient
    $wc.Headers["Content-Type"] = "application/json"
    #$wc.UseDefaultCredentials = $true
    $wc.Credentials = new-object System.Net.NetworkCredential($username, $password)
    write-verbose "Setting BuildID $buildID to quality $quality via $tfsUri"
    $wc.UploadString($uri, "PATCH", $data)
}

$folder = Split-Path -Parent $MyInvocation.MyCommand.Definition

write-verbose "Running $folder\TcmExecWithLogin.ps1"
& "$folder\TcmExecWithLogin.ps1" -Collection $Collection -Teamproject $Teamproject -PlanId $PlanId -SuiteId $SuiteId -ConfigId $ConfigId -BuildDirectory $PackageLocation -TestEnvironment $TestEnvironment -LoginCreds "$TestUserUid,$TestUserPwd" -SettingsName $SettingsName -BuildNumber $BuildNumber -BuildDefinition $BuildDefinition

write-verbose "Got the exit code from the TCM run of $LASTEXITCODE"

$url = "$Collection/$Teamproject"
$jsondata = Get-BuildDetailsByNumber -tfsUri $url -buildNumber $BuildNumber -username $TestUserUid -password $TestUserPwd
$buildId = $jsondata.id
write-verbose "The build ID is $buildId"

$newquality = "Test Passed"
if ($LASTEXITCODE -gt 0)
{
    $newquality = "Test Failed"
}
write-verbose "The build quality is $newquality"

Set-BuildQuality -tfsUri $url -buildID $buildId -quality $newquality -username $TestUserUid -password $TestUserPwd
```
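The two REST endpoints the wrapper relies on are easier to see in isolation. Here is a small Python sketch of just the URL shapes involved (the server and project names are hypothetical; the real script drives these URLs with WebClient and an explicit NetworkCredential):

```python
# Sketch of the two TFS REST URLs used by the wrapper script (illustrative only).

def build_lookup_uri(tfs_uri, build_number):
    """URL that returns build details (including the numeric id) for a build number."""
    return f"{tfs_uri}/_apis/build/builds?api-version=2.0&buildnumber={build_number}"

def build_quality_uri(tfs_uri, build_id):
    """URL that is PATCHed with a JSON body such as {"quality": "Test Passed"}."""
    return f"{tfs_uri}/_apis/build/builds/{build_id}?api-version=1.0"

url = "https://tfs.example.com/tfs/DefaultCollection/MyProject"  # hypothetical server
print(build_lookup_uri(url, "MyBuild_1.2.3.4"))
print(build_quality_uri(url, 42))
```

Note the first call takes the human-readable build number, while the second needs the numeric id returned by the first; that is exactly why the wrapper has the Get-BuildDetailsByNumber step.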

Note: TcmExecWithLogin.ps1 is the same as in my last post.

## Summary

So with these changes the process is now:

1. TFS Build
   1. Gets the source
   2. Compiles the code
   3. Runs the unit tests
   4. Triggers the RM pipeline
   5. Build ends
2. RM then
   1. Deploys the code
   2. Runs the integration tests
3. When the tests complete, we set the TFS build quality

This means we can associate both unit and integration tests with a build, and target our release at any stage in the pipeline, pausing at the points where manual approval is required without blocking the initiating build.

Also see Part 2 on how to address gotchas in this process.

When using Release Management there is a good chance you will want to run test suites as part of your automated deployment pipeline. If you are using a vNext PowerShell based pipeline you need a way to trigger the tests via PowerShell, as there is no out of the box agent to do the job.

## Step 1 - Install a Test Agent

The first step is to make sure that the Visual Studio Test Agent is installed on the box you wish to run the tests on. If you don’t already have an MTM environment in place with a test agent, this can be done by creating a standard environment in Microsoft Test Manager. Remember, you only need this environment to include the VM you want to run the tests on, unless you also want to gather logs and events from other machines in the system. The complexity is up to you.

In my case I was using a network isolated environment so all this was already set up.

## Step 2 - Setup the Test Suite

Once you have an environment you can set up your test suite and test plan in MTM to include the tests you wish to run. These can be unit test style integration tests or Coded UI tests; it is up to you.

If you have a lot of unit tests to associate for automation, remember the TCM.EXE command can make your life a lot easier.

This post does not aim to be a tutorial on setting up test plans; have a look at the ALM Rangers guides for more details.

## Step 3 - The Release Management environment

This is where it gets a bit confusing: you have already set up a Lab Management environment, but you still need to set up the Release Management vNext environment. As I was using a network isolated Lab Management environment this gets even more complex, but RM provides some tools to help.

Again this is not a detailed tutorial. The key steps if you are using network isolation are:

1. Make sure that PowerShell on the VM is set up for remote access by running winrm quickconfig
2. In RM create a vNext environment
3. Add each new server, using its corporate LAN name from Lab Management with the PowerShell remote access port, e.g. VSLM-1002-e7858e28-77cf-4163-b6ba-1df2e91bfcab.lab.blackmarble.co.uk:5985
4. Make sure the server is set to use a shared UNC path for deployment
5. Remember you will log in to this VM with the credentials for the test domain

By this point you might be a bit confused as to what you have, so here is a diagram.

## Step 4 - Wiring the test into the pipeline

The final step is to get the release pipeline to trigger the tests. This is done by calling the TCM.EXE command line to instruct the Test Controller to trigger the tests. Now, the copy of TCM does not have to be in the Lab Management environment, but it does need to be on a VM known to the RM vNext environment. This will usually mean a VM with Microsoft Test Manager or Visual Studio Premium (or Enterprise for 2015) installed. In my case this was a dedicated test VM within the environment.

The key to the process is to run a script similar to the one used by the older RM agent based system to trigger the tests. You can extract this PowerShell script from an old release pipeline, but for ease I show my modified version here. The key changes are that I pass in the login credentials required for the call from TCM.EXE to the TFS server to be made from inside the network isolated environment, and I do a little extra checking of the test results so I can fail the build if the tests fail. These edits might not be required if you trigger TCM from a VM that is in the same domain as your TFS server, or have different success criteria.

```powershell
param
(
    [string]$BuildDirectory = $null,
    [string]$BuildDefinition = $null,
    [string]$BuildNumber = $null,
    [string]$TestEnvironment = $null,
    [string]$LoginCreds = $null,
    [string]$Collection = $(throw "The collection URL must be provided."),
    [string]$TeamProject = $(throw "The team project must be provided."),
    [Int]$PlanId = $(throw "The test plan ID must be provided."),
    [Int]$SuiteId = $(throw "The test suite ID must be provided."),
    [Int]$ConfigId = $(throw "The test configuration ID must be provided."),
    [string]$Title = 'Automated UI Tests',
    [string]$SettingsName = $null,
    [Switch]$InconclusiveFailsTests = $false,
    [Switch]$RemoveIncludeParameter = $false,
    [Int]$TestRunWaitDelay = 10
)

##################################################################################
# Output the logo.
write-verbose "Based on the Microsoft Release Management TcmExec PowerShell Script v12.0"
write-verbose "Copyright (c) 2013 Microsoft. All rights reserved.`n"

##################################################################################
# Initialize the default script exit code.
$exitCode = 1

##################################################################################
# Output execution parameters.
write-verbose "Executing with the following parameters:"
write-verbose "  Build Directory: $BuildDirectory"
write-verbose "  Build Definition: $BuildDefinition"
write-verbose "  Build Number: $BuildNumber"
write-verbose "  Test Environment: $TestEnvironment"
write-verbose "  Collection: $Collection"
write-verbose "  Team project: $TeamProject"
write-verbose "  Plan ID: $PlanId"
write-verbose "  Suite ID: $SuiteId"
write-verbose "  Configuration ID: $ConfigId"
write-verbose "  Title: $Title"
write-verbose "  Settings Name: $SettingsName"
write-verbose "  Inconclusive result fails tests: $InconclusiveFailsTests"
write-verbose "  Remove /include parameter from /create command: $RemoveIncludeParameter"
write-verbose "  Test run wait delay: $TestRunWaitDelay"

##################################################################################
# Define globally used variables and constants.
# Visual Studio 2013
$vscommtools = [System.Environment]::GetEnvironmentVariable("VS120COMNTOOLS")
if ($vscommtools -eq $null)
{
    # Visual Studio 2012
    $vscommtools = [System.Environment]::GetEnvironmentVariable("VS110COMNTOOLS")
}
if ($vscommtools -eq $null)
{
    # Visual Studio 2010
    $vscommtools = [System.Environment]::GetEnvironmentVariable("VS100COMNTOOLS")
    if ($vscommtools -ne $null)
    {
        if ([string]::IsNullOrEmpty($BuildDirectory))
        {
            $(throw "The build directory must be provided.")
        }
        if (![string]::IsNullOrEmpty($BuildDefinition) -or ![string]::IsNullOrEmpty($BuildNumber))
        {
            $(throw "The build definition and build number parameters may be used only under Visual Studio 2012/2013.")
        }
    }
}
else
{
    if ([string]::IsNullOrEmpty($BuildDefinition) -and [string]::IsNullOrEmpty($BuildNumber) -and [string]::IsNullOrEmpty($BuildDirectory))
    {
        $(throw "You must specify the build directory or the build definition and build number.")
    }
}
$tcmExe = [System.IO.Path]::GetFullPath($vscommtools + "..\IDE\TCM.exe")

##################################################################################
# Ensure TCM.EXE is available in the assumed path.
if ([System.IO.File]::Exists($tcmExe))
{
    ##################################################################################
    # Prepare optional parameters.
    $testEnvironmentParameter = "/testenvironment:$TestEnvironment"
    if ([string]::IsNullOrEmpty($TestEnvironment))
    {
        $testEnvironmentParameter = [string]::Empty
    }
    if ([string]::IsNullOrEmpty($BuildDirectory))
    {
        $buildDirectoryParameter = [string]::Empty
    }
    else
    {
        # make sure we remove any trailing slashes as they cause permission issues
        $BuildDirectory = $BuildDirectory.Trim()
        while ($BuildDirectory.EndsWith("\"))
        {
            $BuildDirectory = $BuildDirectory.Substring(0, $BuildDirectory.Length - 1)
        }
        $buildDirectoryParameter = "/builddir:""$BuildDirectory"""
    }
    $buildDefinitionParameter = "/builddefinition:""$BuildDefinition"""
    if ([string]::IsNullOrEmpty($BuildDefinition))
    {
        $buildDefinitionParameter = [string]::Empty
    }
    $buildNumberParameter = "/build:""$BuildNumber"""
    if ([string]::IsNullOrEmpty($BuildNumber))
    {
        $buildNumberParameter = [string]::Empty
    }
    $includeParameter = '/include'
    if ($RemoveIncludeParameter)
    {
        $includeParameter = [string]::Empty
    }
    $settingsNameParameter = "/settingsname:""$SettingsName"""
    if ([string]::IsNullOrEmpty($SettingsName))
    {
        $settingsNameParameter = [string]::Empty
    }

    ##################################################################################
    # Create the test run.
    write-verbose "`nCreating test run ..."
    $testRunId = & "$tcmExe" run /create /title:"$Title" /login:$LoginCreds /planid:$PlanId /suiteid:$SuiteId /configid:$ConfigId /collection:"$Collection" /teamproject:"$TeamProject" $testEnvironmentParameter $buildDirectoryParameter $buildDefinitionParameter $buildNumberParameter $settingsNameParameter $includeParameter
    if ($testRunId -match '.+\:\s(?<TestRunId>\d+)\.')
    {
        # The test run ID is identified as a property in the match collection
        # so we can access it directly by using the group name from the regular
        # expression (i.e. TestRunId).
        $testRunId = $matches.TestRunId

        write-verbose "Waiting for test run $testRunId to complete ..."
        $waitingForTestRunCompletion = $true
        while ($waitingForTestRunCompletion)
        {
            Start-Sleep -s $TestRunWaitDelay
            $testRunStatus = & "$tcmExe" run /list /collection:"$collection" /login:$LoginCreds /teamproject:"$TeamProject" /querytext:"SELECT * FROM TestRun WHERE TestRunId=$testRunId"
            if ($testRunStatus.Count -lt 3 -or ($testRunStatus.Count -gt 2 -and $testRunStatus.GetValue(2) -match '.+(?<DateCompleted>\d+[/]\d+[/]\d+)'))
            {
                $waitingForTestRunCompletion = $false
            }
        }

        write-verbose "Evaluating test run $testRunId results..."
        # We do a small pause since the results might not be published yet.
        Start-Sleep -s $TestRunWaitDelay
        $testRunResultsTrxFileName = "TestRunResults$testRunId.trx"
        & "$tcmExe" run /export /id:$testRunId /collection:"$collection" /login:$LoginCreds /teamproject:"$TeamProject" /resultsfile:"$testRunResultsTrxFileName" | Out-Null
        if (Test-Path($testRunResultsTrxFileName))
        {
            # Load the XML document contents.
            [xml]$testResultsXml = Get-Content "$testRunResultsTrxFileName"

            # Extract the results of the test run.
            $total = $testResultsXml.TestRun.ResultSummary.Counters.total
            $passed = $testResultsXml.TestRun.ResultSummary.Counters.passed
            $failed = $testResultsXml.TestRun.ResultSummary.Counters.failed
            $inconclusive = $testResultsXml.TestRun.ResultSummary.Counters.inconclusive

            # Output the results of the test run.
            write-verbose "`n========== Test: $total tests ran, $passed succeeded, $failed failed, $inconclusive inconclusive =========="

            # Determine if there were any failed tests during the test run execution.
            if ($failed -eq 0 -and (-not $InconclusiveFailsTests -or $inconclusive -eq 0))
            {
                # Update this script's exit code.
                $exitCode = 0
            }

            # Remove the test run results file.
            remove-item($testRunResultsTrxFileName) | Out-Null
        }
        else
        {
            write-error "`nERROR: Unable to export test run results file for analysis."
        }
    }
}
else
{
    write-error "`nERROR: Unable to locate $tcmExe"
}

##################################################################################
# Indicate the resulting exit code to the calling process.
if ($exitCode -gt 0)
{
    write-error "`nERROR: Operation failed with error code $exitCode."
}
write-verbose "`nDone."
exit $exitCode
```

Once this script is placed into source control in such a way that it ends up in the drops location for the build, you can call it as a standard script item in your pipeline, targeting the VM that has TCM installed. Remember, you get the test environment name and the various IDs required from MTM; check the TCM command line documentation for more details.

However we hit a problem: RM sets PowerShell variables, not script parameters. So I find it easiest to use a wrapper script, also stored in source control, that converts the variables to the needed parameters. This also gives the opportunity to use RM-set runtime variables to build more complex objects such as the credentials.

```powershell
# Output execution parameters.
$VerbosePreference = 'Continue' # equiv to -verbose

$folder = Split-Path -Parent $MyInvocation.MyCommand.Definition

write-verbose "Running $folder\TcmExecWithLogin.ps1"
& "$folder\TcmExecWithLogin.ps1" -Collection $Collection -Teamproject $Teamproject -PlanId $PlanId -SuiteId $SuiteId -ConfigId $ConfigId -BuildDirectory $PackageLocation -TestEnvironment $TestEnvironment -LoginCreds "$TestUserUid,$TestUserPwd" -SettingsName $SettingsName
```
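The regular expression the TcmExec script uses to pull the run ID out of TCM's console output can be sanity-checked in isolation. Here is the same pattern exercised in Python (the sample message is illustrative, not verbatim TCM output; note .NET's `(?<name>...)` named-group syntax becomes `(?P<name>...)` in Python):

```python
import re

# Same pattern as the script's '.+\:\s(?<TestRunId>\d+)\.'
pattern = re.compile(r'.+\:\s(?P<TestRunId>\d+)\.')

# Hypothetical TCM-style message ending with "<id>."
sample = "Test run created with ID: 123."
match = pattern.match(sample)
print(match.group("TestRunId"))  # -> 123
```

The trailing `\.` matters: it anchors the capture to the run ID at the end of the sentence rather than any earlier digits in the message.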

## Step 5 – Run it all

If you have everything in place you should now be able to trigger your deployment and have the tests run.

## Finishing Up and One final gotcha

I had hoped that my integration test run would be associated with my build. Normally when triggering tests via TCM you do this by adding the following parameters to the TCM command line:

```
TCM [all the other params] -BuildNumber 'My.Build.CI_1.7.25.29773' -BuildDefinition 'My.Build.CI'
```

However, this will not work in the scenario above. This is because you can only use these flags to associate with successful builds; at the time TCM is run in the pipeline the build has not finished, so it is not marked as successful. This does somewhat limit the end to end reporting. However, I think for now I can accept this limitation, as the deployment completing is a suitable marker that the tests passed.

The only workaround I can think of is not to trigger the release directly from the build, but to use the TFS events system to allow the build to finish first and then trigger the release. You could use my TFS Alerts DSL processor for that.

A few days in and I have solved the few problems I have had.

## Cannot apply Security Update for Windows 10 for x64-based Systems (KB3074683)

My system tried to apply the KB3074683 patch a couple of times, rolling it back each time. A search of the forums found the answer to this one. As in the forum post, I have an Nvidia video card (in fact it caused the problems during the update), so the fix was to delete the UpdatusUser registry entry under HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList.

Once this was deleted the update applied without any issues.

## Windows Defender won’t start

Every time my PC started I got the error that Windows Defender would not start.

After much searching and fiddling with settings, it turned out this was a red herring. Defender was not starting because I had another AV product in place, System Center Endpoint Protection, just as the dialog said. Endpoint Protection is installed by our IT team as part of our standard setup. So the actual issue was that the Defender tool tray app was trying to autostart, giving the error as it failed to connect to the background services, which were not running. Strange, as this appeared not to be an issue on Windows 8.1.

## Cannot access a Data DeDup’d disk

On Windows 8.1 I use the Data DeDup hack on one of the disks that I use for Hyper-V VMs; I got a 71% disk space saving as there is so much common data between the various VMs. At the time of writing I could not find a matching set of DISM packages for Windows 10; they need to come from the equivalent release of Server 2016, which is still in CTP/preview.

After some fiddling with feature packs from preview builds, I decided to just stop using the Data DeDup feature for now. So I attached my disk to an 8.1 machine with DeDup enabled, copied the contents off, re-formatted the disk, replaced the data, then put the disk back in my laptop.

I do hope Microsoft chooses to add Data DeDup to Windows 10 in the future; it is of great use to me and anyone else who uses plenty of local VMs.

So I think I am there now, let us see how reliable it is day to day.

I have just done an in place upgrade on my Lenovo W520 from Windows 8.1 to Windows 10. Something I had not tried during the beta programme, sticking to running Windows 10 in VMs (mostly on Azure).

I have to say the process was pretty smooth. I only hit one issue, and this was the usual NVidia Optimus problems I saw installing Windows 8 and 8.1.

This is what happened

1. With Windows 8.1 running I mounted the Windows 10 Enterprise ISO
2. Ran the setup
3. It did a few checks and eventually asked if I wanted to keep everything – I said yes
4. It showed a percentage complete gauge
   1. It copied files OK (about 30%)
   2. It said it had found 5% of drivers (32% overall) and stopped – I left it a couple of hours, no disk or network activity

At this point I was a bit worried, but I guessed it was the same problem I had seen on Windows 8.x: the installer needs to access the Intel GPU as well as the NVidia GPU, else it gets confused and hangs. A disabled GPU is not a removed GPU.

So I:

1. Rebooted (via the power switch)
2. Booted into the BIOS (pressing the ThinkVantage button)
   1. Selected Enable NVidia Optimus in the graphics options
   2. Saved and rebooted
3. The PC rolled back the Windows 10 update (very quickly, less than 5 minutes)
   Note: I had expected to be challenged for a Bitlocker code due to the BIOS setting change during the reboot, but I wasn’t
4. With Windows 8.1 running again, I re-mounted the Windows 10 Enterprise ISO
5. Ran the setup again
6. It did the same few checks and eventually asked if I wanted to keep everything – I said yes again
7. This time it completed without error; it took around an hour

So now I had an upgraded PC, and everything seemed OK, including my biometric login – which surprised me, as this had been a problem to set up in the past.

The only issue was with my external screen, so I went back into the BIOS to disable NVidia Optimus again. This time it did prompt me to re-enter the Bitlocker key. Once this was done I could use external screens with no issues, as before.

So, a smooth upgrade from our standard Windows 8.1 dev machine image – a good stop gap until our IT team builds a Windows 10 image in System Center.

If you are using basic PowerShell scripts, as opposed to DSC, with Release Management there are a few gotchas I have found.

## You cannot pass parameters

Let’s look at a sample script that we would like to run via Release Manager:

```powershell
param(
    $param1
)

write-verbose -verbose "Start"
write-verbose -verbose "Got var1 [$var1]"
write-verbose -verbose "Got param1 [$param1]"
write-verbose -verbose "End"
```

In Release Manager we have the following vNext workflow. You can see we are setting two custom values which we intend to use within our script: one is a script parameter (Param1), the other is just a global variable (Var1). If we do a deployment we get the log:

```
Copying recursively from \\store\drops\rm\4583e318-abb2-4f21-9289-9cb0264a3542\152 to C:\Windows\DtlDownloads\ISS vNext Drops succeeded.
Start
Got var1 [XXXvar1]
Got param1 []
End
```

You can see the problem: $var1 is set, $param1 is not. It took me a while to get my head around this. The problem is that the RM activity’s PSScriptPath is just that, a script path, not a command line that will be executed. Unlike the PowerShell activities in the vNext build tools, you don’t have a pair of settings, one for the path to the script and another for the arguments; here we have no way to set the command line arguments. Note: the PSConfigurationPath is just for DSC configurations, as discussed elsewhere.

So in effect Param1 is not set, as we did not call

```powershell
test.ps1 -param1 "some value"
```

This means there is no point using parameters in the script you wish to use with RM vNext. But wait, I bet you are thinking ‘I want to run my script externally to Release Manager to test it, and using parameters with validation rules is best practice, I don’t want to lose that advantage’.

The best workaround I have found is to use a wrapper script that takes the variables and makes them parameters, something like this:

```powershell
$folder = Split-Path -Parent $MyInvocation.MyCommand.Definition
& $folder\test.ps1 -param1 $param1
```

**Another gotcha:** note that I need to find the path the wrapper script is running in and use it to build the path to my actual script. If I don’t do this I get an error that the test.ps1 script can’t be found.
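Because the wrapper calls test.ps1 with a real command line, the inner script can keep mandatory parameters and validation rules for stand-alone testing. A minimal sketch (the `[Parameter]` and `[Validate...]` attributes are my illustrative addition, not anything RM mandates):

```powershell
# test.ps1 - the real deployment script, still runnable by hand.
# The parameter attributes below are illustrative additions; they work
# because the wrapper passes a genuine -param1 argument on the command line.
param(
    [Parameter(Mandatory = $true)]
    [ValidateNotNullOrEmpty()]
    [string]$param1
)

write-verbose -verbose "Start"
write-verbose -verbose "Got param1 [$param1]"
write-verbose -verbose "End"
```

Outside RM you can run `.\test.ps1 -param1 "some value"` and get the normal mandatory-parameter and validation behaviour; inside RM only the wrapper is registered as the PSScriptPath.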
After altering my pipeline to use the wrapper and rerunning the deployment, I get the log file I wanted:

```
Copying recursively from \\store\drops\rm\4583e318-abb2-4f21-9289-9cb0264a3542\160 to C:\Windows\DtlDownloads\ISS vNext Drops succeeded.
Start
Got var1 [XXXvar1]
Got param1 [XXXparam1]
End
```

This is all a bit ugly, but it works. Looking forward, this appears not to be too much of an issue. The next version of Release Management, as shown at Build, is based around the vNext TFS build tooling, which seems to always allow you to pass true PowerShell command line arguments, so this problem should go away in the not too distant future.

## Don’t write to the console

The other big problem is any script that writes to or reads from the console. Usually this means a write-host call in a script that causes an error along the lines of:

```
A command that prompts the user failed because the host program or the command type does not support user interaction. Try a host program that supports user interaction, such as the Windows PowerShell Console or Windows PowerShell ISE, and remove prompt-related commands from command types that do not support user interaction, such as Windows PowerShell workflows.
+At C:\Windows\DtlDownloads\ISS vNext Drops\scripts\test.ps1:7 char:1
+ Write-Host "hello 1" -ForegroundColor red
```

But also watch out for any CLS calls; that has caught me out. I have found it can be hard to track down the offending lines, especially if there are PowerShell modules loading modules. The best recommendation is to just use write-verbose and write-error:

- write-error if your script has errored. This will let RM know the script has failed, thus failing the deployment – just what we want
- write-verbose for any logging

Any other form of PowerShell output will not be passed to RM, be warned!
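Putting those two rules together, an RM-friendly script body ends up shaped something like this sketch (`Invoke-DeploymentStep` is a hypothetical placeholder for the real work, not an RM cmdlet):

```powershell
# Log only via write-verbose (with -verbose so it reaches the RM logs)
# and report failure only via write-error, which marks the step as failed.
try {
    write-verbose -verbose "Starting deployment step"
    # Invoke-DeploymentStep   # hypothetical placeholder for the real work
    write-verbose -verbose "Deployment step completed"
}
catch {
    write-error "Deployment step failed: $_"
}
```

No write-host, no CLS, no prompts – everything the script has to say goes out on the verbose or error streams.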
You might also notice in my sample script that I am passing the -verbose argument to each write-verbose call; you have to have this maximal level of logging on for the messages to make it out to the RM logs. Probably a better solution, if you think you might vary the level of logging, is to change the script to set the $VerbosePreference variable:

```powershell
param(
    $param1
)

$VerbosePreference = 'Continue' # equivalent to -verbose

write-verbose "Start"
write-verbose "Got var1 [$var1]"
write-verbose "Got param1 [$param1]"
write-verbose "End"
```

So, hopefully, a few pointers to make your deployments a bit smoother.