# But it works on my PC!

### Background

We upgraded our production TFS 2013.4 server to TFS 2015 RTM this week. As opposed to an in-place upgrade we chose to make a few changes on the way; so whilst leaving our DBs on our SQL 2012 cluster:

• We moved to a new VM for our AT (to upgrade from Windows 2008R2 to 2012R2)
• Split the SSRS instance off the AT to a separate VM with a new SSAS server (again to move to 2012R2 and to ease management, getting all the reporting bits in one place)

But we did not touch:

• Our XAML Build systems, leaving them at 2013, as we intend to migrate to vNext build ASAP
• Our Test Controller/Release Management/Lab Environment, leaving them at 2013 for now, as we have other projects on the go to update the hardware/cloud solutions underpinning these.

All went well with no surprises; running the upgrade tool took about an hour.

### The Problem

The only problem we have had was with my TFS Alerts DSL Processor, which listens for TFS Alerts and runs custom scripts. I host this on the TFS AT, and I would expect it to set build retention and send emails when a TFS XAML build quality changes. This did not occur; in the Windows error log I was seeing

2015-08-12 21:04:02.4195 ERROR TFSEventsProcessor.DslScriptService: TF30063: You are not authorized to access https://tfs.blackmarble.co.uk/tfs/DefaultCollection.

After much fiddling, including writing a small command line test client, I confirmed that the issue was specific to the production server. The tool ran fine on other PCs, but on the live server a Windows authentication dialog was shown which would not accept any valid credentials.

It was not, as I had feared, a change in the TFS API; in fact there is no reason my 2012 or 2013 API-targeted version of the TFS Alerts DSL should not be able to talk to a TFS 2015 server, as long as the correct version of the TFS API is installed on the machine hosting the DSL.

### The Solution

The issue was due to Windows loopback protection. This had been disabled on our old TFS AT, but not on the new one. As we wanted to avoid changing the global loopback protection setting, we set the following via Regedit to allow it for a single CName:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa\MSV1_0
ValueName - BackConnectionHostNames
Type - multistring
Data  - tfs.blackmarble.co.uk
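The same value can be set from an elevated PowerShell prompt; a minimal sketch, assuming the single CName above:

```powershell
# Add the CName to the BackConnectionHostNames multi-string value so it is
# exempted from loopback protection; run from an elevated prompt.
$key = 'HKLM:\SYSTEM\CurrentControlSet\Control\Lsa\MSV1_0'
New-ItemProperty -Path $key -Name 'BackConnectionHostNames' -PropertyType MultiString -Value @('tfs.blackmarble.co.uk') -Force
```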

Once this was done (and without a reboot) my alerts processing worked without any problems.

In my last post I discussed how you could wire TCM tests into a Release Management vNext pipeline. The problem with the script I provided, as I noted, was that the deployment was triggered synchronously by the build, i.e. the build/release process was:

1. TFS Build
   1. Gets the source
   2. Compiles the code
   3. Runs the unit tests
   4. Triggers the RM pipeline
   5. Waits while the RM pipeline completes
2. RM then
   1. Deploys the code
   2. Runs the integration tests
3. When RM completes, the TFS build completes

This process raised a couple of problems:

• You cannot associate the integration tests with the build, as TCM only allows association with completed successful builds; when TCM finishes in this model the build is still in progress.
• You have to target only the first automated stage of the pipeline, else the build will be held as ‘in progress’ until all the release stages have completed, which may be days if there are manual approvals involved.

## The script InitiateReleaseFromBuild

These problems can all be fixed by altering the PowerShell script that triggers the RM pipeline so that it does not wait for the deployment to complete, meaning the TFS build finishes as soon as possible.

This is done by passing in an extra parameter which is set in the TFS build definition.
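For illustration, here is the sort of value the build definition's script arguments field might contain; these values are hypothetical and map positionally onto the $Args[0..4] entries read by the param block of the script below:

```
rmserver.blackmarble.co.uk 1000 "MyTeamProject" "Dev" false
```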

```powershell
param(
    [string]$rmserver = $Args[0],
    [string]$port = $Args[1],
    [string]$teamProject = $Args[2],
    [string]$targetStageName = $Args[3],
    [string]$waitForCompletion = $Args[4]
)

cls

$teamFoundationServerUrl = $env:TF_BUILD_COLLECTIONURI
$buildDefinition = $env:TF_BUILD_BUILDDEFINITIONNAME
$buildNumber = $env:TF_BUILD_BUILDNUMBER

"Executing with the following parameters:`n"
"  RM server name: $rmserver"
"  Port number: $port"
"  Team Foundation Server URL: $teamFoundationServerUrl"
"  Team Project: $teamProject"
"  Build Definition: $buildDefinition"
"  Build Number: $buildNumber"
"  Target Stage Name: $targetStageName"
"  Wait for RM completion: $waitForCompletion`n"

$wait = [System.Convert]::ToBoolean($waitForCompletion)
$exitCode = 0

trap
{
    $e = $error[0].Exception
    $e.Message
    $e.StackTrace
    if ($exitCode -eq 0) { $exitCode = 1 }
}

$scriptName = $MyInvocation.MyCommand.Name
$scriptPath = Split-Path -Parent (Get-Variable MyInvocation -Scope Script).Value.MyCommand.Path

Push-Location $scriptPath

$server = [System.Uri]::EscapeDataString($teamFoundationServerUrl)
$project = [System.Uri]::EscapeDataString($teamProject)
$definition = [System.Uri]::EscapeDataString($buildDefinition)
$build = [System.Uri]::EscapeDataString($buildNumber)
$targetStage = [System.Uri]::EscapeDataString($targetStageName)

$serverName = $rmserver + ":" + $port
$orchestratorService = "http://$serverName/account/releaseManagementService/_apis/releaseManagement/OrchestratorService"

$status = @{ "2" = "InProgress"; "3" = "Released"; "4" = "Stopped"; "5" = "Rejected"; "6" = "Abandoned" }

$uri = "$orchestratorService/InitiateReleaseFromBuild?teamFoundationServerUrl=$server&teamProject=$project&buildDefinition=$definition&buildNumber=$build&targetStageName=$targetStage"
"Executing the following API call:`n`n$uri"

$wc = New-Object System.Net.WebClient
$wc.UseDefaultCredentials = $true
# rmuser should be part of the RM users list and should have permission to trigger the release.
#$wc.Credentials = New-Object System.Net.NetworkCredential("rmuser", "rmuserpassword", "rmuserdomain")

try
{
    $releaseId = $wc.DownloadString($uri)

    $url = "$orchestratorService/ReleaseStatus?releaseId=$releaseId"
    $releaseStatus = $wc.DownloadString($url)

    if ($wait -eq $true)
    {
        Write-Host -NoNewline "`nReleasing ..."

        while ($status[$releaseStatus] -eq "InProgress")
        {
            Start-Sleep -s 5
            $releaseStatus = $wc.DownloadString($url)
            Write-Host -NoNewline "."
        }

        " done.`n`nRelease completed with {0} status." -f $status[$releaseStatus]
    }
    else
    {
        Write-Host -NoNewline "`nTriggering Release and exiting"
    }
}
catch [System.Exception]
{
    if ($exitCode -eq 0) { $exitCode = 1 }
    Write-Host "`n$_`n" -ForegroundColor Red
}

if ($exitCode -eq 0)
{
    if ($wait -eq $true)
    {
        if ($releaseStatus -eq 3)
        {
            "`nThe script completed successfully. Product deployed without error`n"
        }
        else
        {
            Write-Host "`nThe script completed successfully. Product failed to deploy`n" -ForegroundColor Red
            $exitCode = -1 # reset the code to show the error
        }
    }
    else
    {
        "`nThe script completed successfully. Product deploying`n"
    }
}
else
{
    $err = "Exiting with error: " + $exitCode + "`n"
    Write-Host $err -ForegroundColor Red
}

Pop-Location

exit $exitCode
```

## The Script TcmExecWrapper

A change is also required in the wrapper script I use to trigger the TCM test run. We need to check the exit code from the inner TCM PowerShell script and update the TFS build quality appropriately. To do this I use the new REST API in TFS 2015, as this is far easier than using the older .NET client API; there are no DLLs to distribute. It is worth noting that

• I pass the credentials used to talk to the TFS server into the script from RM. This is because I am running my tests in a network isolated TFS Lab Environment, which means I am in the wrong domain to see the TFS server without providing login details. If you are not working cross domain you could just use default credentials.
• RM only passes the BuildNumber into the script e.g. MyBuild_1.2.3.4, but the REST API needs the build ID to set the quality; hence the function Get-BuildDetailsByNumber to get the ID from the name.

```powershell
# Output execution parameters.
$VerbosePreference = 'Continue' # equiv to -verbose

function Get-BuildDetailsByNumber
{
    param
    (
        $tfsUri,
        $buildNumber,
        $username,
        $password
    )

    $uri = "$($tfsUri)/_apis/build/builds?api-version=2.0&buildnumber=$buildNumber"

    $wc = New-Object System.Net.WebClient
    #$wc.UseDefaultCredentials = $true
    $wc.Credentials = New-Object System.Net.NetworkCredential($username, $password)

    write-verbose "Getting ID of $buildNumber from $tfsUri"

    $jsondata = $wc.DownloadString($uri) | ConvertFrom-Json

    $jsondata.value[0]
}

function Set-BuildQuality
{
    param
    (
        $tfsUri,
        $buildID,
        $quality,
        $username,
        $password
    )

    $uri = "$($tfsUri)/_apis/build/builds/$($buildID)?api-version=1.0"
    $data = @{quality = $quality} | ConvertTo-Json

    $wc = New-Object System.Net.WebClient
    $wc.Headers["Content-Type"] = "application/json"
    #$wc.UseDefaultCredentials = $true
    $wc.Credentials = New-Object System.Net.NetworkCredential($username, $password)

    write-verbose "Setting BuildID $buildID to quality $quality via $tfsUri"

    $wc.UploadString($uri, "PATCH", $data)
}

$folder = Split-Path -Parent $MyInvocation.MyCommand.Definition

write-verbose "Running $folder\TcmExecWithLogin.ps1"

& "$folder\TcmExecWithLogin.ps1" -Collection $Collection -Teamproject $Teamproject -PlanId $PlanId -SuiteId $SuiteId -ConfigId $ConfigId -BuildDirectory $PackageLocation -TestEnvironment $TestEnvironment -LoginCreds "$TestUserUid,$TestUserPwd" -SettingsName $SettingsName -BuildNumber $BuildNumber -BuildDefinition $BuildDefinition

write-verbose "Got the exit code from the TCM run of $LASTEXITCODE"

$url = "$Collection/$Teamproject"
$jsondata = Get-BuildDetailsByNumber -tfsUri $url -buildNumber $BuildNumber -username $TestUserUid -password $TestUserPwd
$buildId = $jsondata.id
write-verbose "The build ID is $buildId"

$newquality = "Test Passed"
if ($LASTEXITCODE -gt 0)
{
    $newquality = "Test Failed"
}
write-verbose "The build quality is $newquality"

Set-BuildQuality -tfsUri $url -buildID $buildId -quality $newquality -username $TestUserUid -password $TestUserPwd
```

Note: TcmExecWithLogin.ps1 is the same as in my last post.

## Summary

So with these changes the process is now:

1. TFS Build
   1. Gets the source
   2. Compiles the code
   3. Runs the unit tests
   4. Triggers the RM pipeline
   5. Build ends
2. RM then
   1. Deploys the code
   2. Runs the integration tests
3. When the tests complete, we set the TFS build quality

This means we can associate both unit and integration tests with a build, and target our release at any stage in the pipeline, pausing at the points where manual approval is required without blocking the initiating build.

Also see Part 2 on how to address gotchas in this process.

When using Release Management there is a good chance you will want to run test suites as part of your automated deployment pipeline. If you are using a vNext PowerShell-based pipeline you need a way to trigger the tests via PowerShell, as there is no out-of-the-box agent to do the job.

## Step 1 - Install a Test Agent

The first step is to make sure that the Visual Studio Test Agent is installed on the box you wish to run the tests on. If you don’t already have an MTM environment in place with a test agent, this can be done by creating a standard environment in Microsoft Test Manager. Remember you only need this environment to include the VM you want to run the tests on, unless you also want to gather logs and events from other machines in the system. The complexity is up to you.

In my case I was using a network isolated environment so all this was already set up.

## Step 2 - Setup the Test Suite

Once you have an environment you can set up your test suite and test plan in MTM to include the tests you wish to run. These can be unit test style integration tests or Coded UI tests; it is up to you.

If you have a lot of unit tests to associate for automation, remember the TCM.EXE command can make your life a lot easier.
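As a hedged illustration of that, the tcm testcase /import form can bulk-create test cases from the automated tests in an assembly; the collection URL, project and assembly names here are hypothetical:

```powershell
# Create test cases from the automated tests found in the assembly, ready for
# adding to a suite; run from a command prompt where TCM.EXE is on the path.
tcm testcase /import /collection:https://tfs.example.com/tfs/DefaultCollection /teamproject:MyProject /storage:MyIntegrationTests.dll
```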

This post does not aim to be a tutorial on setting up test plans; have a look at the ALM Rangers guides for more details.

## Step 3 - The Release Management environment

This is where it gets a bit confusing: you have already set up a Lab Management environment, but you still need to set up the Release Management vNext environment. As I was using a network isolated Lab Management environment this gets even more complex, but RM provides some tools to help.

Again this is not a detailed tutorial. The key steps if you are using network isolation are:

1. Make sure that PowerShell on the VM is set up for remote access by running winrm quickconfig (see the connectivity check after this list)
2. In RM create a vNext environment
3. Add each server as a new server, using its corporate LAN name from Lab Management with the PowerShell remote access port e.g. VSLM-1002-e7858e28-77cf-4163-b6ba-1df2e91bfcab.lab.blackmarble.co.uk:5985
4. Make sure the server is set to use a shared UNC path for deployment.
5. Remember you will log in to this VM with the credentials for the test domain.
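Before wiring the machine into RM it is worth confirming the remoting endpoint responds; a minimal check (machine name and port as registered in step 3) might be:

```powershell
# Query the WinRM service on the isolated VM; a response with the protocol
# version indicates PowerShell remoting is reachable on that port.
Test-WSMan -ComputerName 'VSLM-1002-e7858e28-77cf-4163-b6ba-1df2e91bfcab.lab.blackmarble.co.uk' -Port 5985
```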

By this point you might be a bit confused as to what you have; well, here is a diagram.

## Step 4  - Wiring the test into the pipeline

The final step is to get the release pipeline to trigger the tests. This is done by calling the TCM.EXE command line to instruct the Test Controller to trigger the tests. Now the copy of TCM does not have to be in the Lab Management environment, but it does need to be on a VM known to the RM vNext environment. This will usually mean a VM with Visual Studio Test Manager or Premium (or Enterprise for 2015) installed. In my case this was a dedicated test VM within the environment.

The key to the process is to run a script similar to the one used by the older RM agent-based system to trigger the tests. You can extract this PowerShell script from an old release pipeline, but for ease I show my modified version here. The key changes are that I pass in the login credentials required for the call from TCM.EXE to the TFS server to be made from inside the network isolated environment, and I do a little extra checking of the test results so I can fail the build if the tests fail. These edits might not be required if you trigger TCM from a VM that is in the same domain as your TFS server, or have different success criteria.

```powershell
param(
    [string]$BuildDirectory = $null,
    [string]$BuildDefinition = $null,
    [string]$BuildNumber = $null,
    [string]$TestEnvironment = $null,
    [string]$LoginCreds = $null,
    [string]$Collection = $(throw "The collection URL must be provided."),
    [string]$TeamProject = $(throw "The team project must be provided."),
    [Int]$PlanId = $(throw "The test plan ID must be provided."),
    [Int]$SuiteId = $(throw "The test suite ID must be provided."),
    [Int]$ConfigId = $(throw "The test configuration ID must be provided."),
    [string]$Title = 'Automated UI Tests',
    [string]$SettingsName = $null,
    [Switch]$InconclusiveFailsTests = $false,
    [Switch]$RemoveIncludeParameter = $false,
    [Int]$TestRunWaitDelay = 10
)

##################################################################################
# Output the logo.
write-verbose "Based on the Microsoft Release Management TcmExec PowerShell Script v12.0"
write-verbose "Copyright (c) 2013 Microsoft. All rights reserved.`n"

##################################################################################
# Initialize the default script exit code.
$exitCode = 1

##################################################################################
# Output execution parameters.
write-verbose "Executing with the following parameters:"
write-verbose "  Build Directory: $BuildDirectory"
write-verbose "  Build Definition: $BuildDefinition"
write-verbose "  Build Number: $BuildNumber"
write-verbose "  Test Environment: $TestEnvironment"
write-verbose "  Collection: $Collection"
write-verbose "  Team project: $TeamProject"
write-verbose "  Plan ID: $PlanId"
write-verbose "  Suite ID: $SuiteId"
write-verbose "  Configuration ID: $ConfigId"
write-verbose "  Title: $Title"
write-verbose "  Settings Name: $SettingsName"
write-verbose "  Inconclusive result fails tests: $InconclusiveFailsTests"
write-verbose "  Remove /include parameter from /create command: $RemoveIncludeParameter"
write-verbose "  Test run wait delay: $TestRunWaitDelay"

##################################################################################
# Define globally used variables and constants.
# Visual Studio 2013
$vscommtools = [System.Environment]::GetEnvironmentVariable("VS120COMNTOOLS")
if ($vscommtools -eq $null)
{
    # Visual Studio 2012
    $vscommtools = [System.Environment]::GetEnvironmentVariable("VS110COMNTOOLS")
}
if ($vscommtools -eq $null)
{
    # Visual Studio 2010
    $vscommtools = [System.Environment]::GetEnvironmentVariable("VS100COMNTOOLS")
    if ($vscommtools -ne $null)
    {
        if ([string]::IsNullOrEmpty($BuildDirectory))
        {
            $(throw "The build directory must be provided.")
        }
        if (![string]::IsNullOrEmpty($BuildDefinition) -or ![string]::IsNullOrEmpty($BuildNumber))
        {
            $(throw "The build definition and build number parameters may be used only under Visual Studio 2012/2013.")
        }
    }
}
else
{
    if ([string]::IsNullOrEmpty($BuildDefinition) -and [string]::IsNullOrEmpty($BuildNumber) -and [string]::IsNullOrEmpty($BuildDirectory))
    {
        $(throw "You must specify the build directory or the build definition and build number.")
    }
}
$tcmExe = [System.IO.Path]::GetFullPath($vscommtools + "..\IDE\TCM.exe")

##################################################################################
# Ensure TCM.EXE is available in the assumed path.
if ([System.IO.File]::Exists($tcmExe))
{
    ##################################################################################
    # Prepare optional parameters.
    $testEnvironmentParameter = "/testenvironment:$TestEnvironment"
    if ([string]::IsNullOrEmpty($TestEnvironment))
    {
        $testEnvironmentParameter = [string]::Empty
    }
    if ([string]::IsNullOrEmpty($BuildDirectory))
    {
        $buildDirectoryParameter = [string]::Empty
    }
    else
    {
        # make sure we remove any trailing slashes as they cause permission issues
        $BuildDirectory = $BuildDirectory.Trim()
        while ($BuildDirectory.EndsWith("\"))
        {
            $BuildDirectory = $BuildDirectory.Substring(0, $BuildDirectory.Length - 1)
        }
        $buildDirectoryParameter = "/builddir:""$BuildDirectory"""
    }
    $buildDefinitionParameter = "/builddefinition:""$BuildDefinition"""
    if ([string]::IsNullOrEmpty($BuildDefinition))
    {
        $buildDefinitionParameter = [string]::Empty
    }
    $buildNumberParameter = "/build:""$BuildNumber"""
    if ([string]::IsNullOrEmpty($BuildNumber))
    {
        $buildNumberParameter = [string]::Empty
    }
    $includeParameter = '/include'
    if ($RemoveIncludeParameter)
    {
        $includeParameter = [string]::Empty
    }
    $settingsNameParameter = "/settingsname:""$SettingsName"""
    if ([string]::IsNullOrEmpty($SettingsName))
    {
        $settingsNameParameter = [string]::Empty
    }

    ##################################################################################
    # Create the test run.
    write-verbose "`nCreating test run ..."
    $testRunId = & "$tcmExe" run /create /title:"$Title" /login:$LoginCreds /planid:$PlanId /suiteid:$SuiteId /configid:$ConfigId /collection:"$Collection" /teamproject:"$TeamProject" $testEnvironmentParameter $buildDirectoryParameter $buildDefinitionParameter $buildNumberParameter $settingsNameParameter $includeParameter

    if ($testRunId -match '.+\:\s(?<TestRunId>\d+)\.')
    {
        # The test run ID is identified as a property in the match collection
        # so we can access it directly by using the group name from the regular
        # expression (i.e. TestRunId).
        $testRunId = $matches.TestRunId

        write-verbose "Waiting for test run $testRunId to complete ..."
        $waitingForTestRunCompletion = $true
        while ($waitingForTestRunCompletion)
        {
            Start-Sleep -s $TestRunWaitDelay
            $testRunStatus = & "$tcmExe" run /list /collection:"$collection" /login:$LoginCreds /teamproject:"$TeamProject" /querytext:"SELECT * FROM TestRun WHERE TestRunId=$testRunId"
            if ($testRunStatus.Count -lt 3 -or ($testRunStatus.Count -gt 2 -and $testRunStatus.GetValue(2) -match '.+(?<DateCompleted>\d+[/]\d+[/]\d+)'))
            {
                $waitingForTestRunCompletion = $false
            }
        }

        write-verbose "Evaluating test run $testRunId results..."
        # We do a small pause since the results might not be published yet.
        Start-Sleep -s $TestRunWaitDelay
        $testRunResultsTrxFileName = "TestRunResults$testRunId.trx"
        & "$tcmExe" run /export /id:$testRunId /collection:"$collection" /login:$LoginCreds /teamproject:"$TeamProject" /resultsfile:"$testRunResultsTrxFileName" | Out-Null

        if (Test-Path($testRunResultsTrxFileName))
        {
            # Load the XML document contents.
            [xml]$testResultsXml = Get-Content "$testRunResultsTrxFileName"

            # Extract the results of the test run.
            $total = $testResultsXml.TestRun.ResultSummary.Counters.total
            $passed = $testResultsXml.TestRun.ResultSummary.Counters.passed
            $failed = $testResultsXml.TestRun.ResultSummary.Counters.failed
            $inconclusive = $testResultsXml.TestRun.ResultSummary.Counters.inconclusive

            # Output the results of the test run.
            write-verbose "`n========== Test: $total tests ran, $passed succeeded, $failed failed, $inconclusive inconclusive =========="

            # Determine if there were any failed tests during the test run execution.
            if ($failed -eq 0 -and (-not $InconclusiveFailsTests -or $inconclusive -eq 0))
            {
                # Update this script's exit code.
                $exitCode = 0
            }

            # Remove the test run results file.
            remove-item($testRunResultsTrxFileName) | Out-Null
        }
        else
        {
            write-error "`nERROR: Unable to export test run results file for analysis."
        }
    }
}
else
{
    write-error "`nERROR: Unable to locate $tcmExe"
}

##################################################################################
# Indicate the resulting exit code to the calling process.
if ($exitCode -gt 0)
{
    write-error "`nERROR: Operation failed with error code $exitCode."
}
write-verbose "`nDone."
exit $exitCode
```

Once this script is placed into source control, in such a way that it ends up in the drops location for the build, you can call it as a standard script item in your pipeline, targeting the VM that has TCM installed. Remember, you get the test environment name and the various IDs required from MTM; check the TCM command line documentation for more details.

However we hit a problem: RM sets PowerShell variables, not parameters for the script. So I find it easiest to use a wrapper script, also stored in source control, that converts the variables to the needed parameters. This also gives the opportunity to use RM-set runtime variables and build more complex objects such as the credentials.

```powershell
# Output execution parameters.
$VerbosePreference = 'Continue' # equiv to -verbose

$folder = Split-Path -Parent $MyInvocation.MyCommand.Definition

write-verbose "Running $folder\TcmExecWithLogin.ps1"

& "$folder\TcmExecWithLogin.ps1" -Collection $Collection -Teamproject $Teamproject -PlanId $PlanId -SuiteId $SuiteId -ConfigId $ConfigId -BuildDirectory $PackageLocation -TestEnvironment $TestEnvironment -LoginCreds "$TestUserUid,$TestUserPwd" -SettingsName $SettingsName
```

## Step 5 – Run it all

If you have everything in place you should now be able to trigger your deployment and have the tests run.

## Finishing Up and One final gotcha

I had hoped that my integration test run would be associated with my build. Normally when triggering tests via TCM you do this by adding the following parameters to the TCM command line:

```
TCM [all the other params] -BuildNumber 'My.Build.CI_1.7.25.29773' -BuildDefinition 'My.Build.CI'
```

However this will not work in the scenario above, because you can only use these flags to associate with successful builds; at the time TCM is run in the pipeline the build has not finished, so it is not marked as successful. This does somewhat limit the end-to-end reporting. However, I think for now I can accept this limitation, as the deployment completing is a suitable marker that the tests passed.

The only workaround I can think of is not to trigger the release directly from the build, but to use the TFS events system to allow the build to finish first and then trigger the release. You could use my TFS Alerts DSL processor for that.

If you are using basic PowerShell scripts, as opposed to DSC, with Release Management there are a few gotchas I have found.

## You cannot pass parameters

Let’s look at a sample script that we would like to run via Release Manager

```powershell
param
(
    $param1
)

write-verbose -verbose "Start"
write-verbose -verbose "Got var1 [$var1]"
write-verbose -verbose "Got param1 [$param1]"
write-verbose -verbose "End"
```

In Release Manager we have the following vNext workflow. You can see we are setting two custom values which we intend to use within our script: one is a script parameter (Param1), the other is just a global variable (Var1).

If we do a deployment we get the log

```
Copying recursively from \\store\drops\rm\4583e318-abb2-4f21-9289-9cb0264a3542\152 to C:\Windows\DtlDownloads\ISS vNext Drops succeeded.
Start
Got var1 [XXXvar1]
Got param1 []
End
```

You can see the problem: $var1 is set, $param1 is not. It took me a while to get my head around this. The problem is that the RM activity’s PSScriptPath is just that, a script path, not a command line that will be executed. Unlike the PowerShell activities in the vNext build tools, you don’t have a pair of settings, one for the path to the script and another for the arguments; here we have no way to set the command line arguments. Note: the PSConfigurationPath is just for DSC configurations, as discussed elsewhere.

So in effect Param1 is not set, as we did not call test -param1 “some value”.

This means there is no point using parameters in the script you wish to use with RM vNext. But wait, I bet you are thinking ‘I want to run my script externally to Release Manager to test it, and using parameters with validation rules is best practice, I don’t want to lose that advantage’.

The best workaround I have found is to use a wrapper script that takes the variables and makes them parameters, something like this

```powershell
$folder = Split-Path -Parent $MyInvocation.MyCommand.Definition
& $folder\test.ps1 -param1 $param1
```

## Another Gotcha

Note that I need to find the path the wrapper script is running in and use it to build the path to my actual script. If I don’t do this I get an error that the test.ps1 script can’t be found.

After altering my pipeline to use the wrapper and rerunning the deployment I get the log file I wanted

```
Copying recursively from \\store\drops\rm\4583e318-abb2-4f21-9289-9cb0264a3542\160 to C:\Windows\DtlDownloads\ISS vNext Drops succeeded.
Start
Got var1 [XXXvar1]
Got param1 [XXXparam1]
End
```

This is all a bit ugly, but it works. Looking forward, this appears to not be too much of an issue. The next version of Release Management, as shown at Build, is based around the vNext TFS build tooling, which seems to always allow you to pass true PowerShell command line arguments, so this problem should go away in the not too distant future.

## Don’t write to the console

The other big problem is any script that writes or reads from the console. Usually this means a write-host call in a script that causes an error along the lines of

```
A command that prompts the user failed because the host program or the command type does not support user interaction. Try a host program that supports user interaction, such as the Windows PowerShell Console or Windows PowerShell ISE, and remove prompt-related commands from command types that do not support user interaction, such as Windows PowerShell workflows.
+At C:\Windows\DtlDownloads\ISS vNext Drops\scripts\test.ps1:7 char:1
+ Write-Host "hello 1" -ForegroundColor red
```

But also watch out for any CLS calls; that one has caught me out. I have found it can be hard to track down the offending lines, especially if there are PowerShell modules loading modules. The best recommendation is to just use write-verbose and write-error:

• write-error if your script has errored. This will let RM know the script has failed, thus failing the deployment – just what we want
• write-verbose for any logging

Any other form of PowerShell output will not be passed to RM, be warned!

You might also notice in my sample script that I am passing the –verbose argument to the write-verbose command; again, you have to have this maximal level of logging on for the messages to make it out to the RM logs. Probably a better solution, if you think you might vary the level of logging, is to change the script to set the $VerbosePreference

```powershell
param
(
    $param1
)

$VerbosePreference = 'Continue' # equiv to -verbose

write-verbose "Start"
write-verbose "Got var1 [$var1]"
write-verbose "Got param1 [$param1]"
write-verbose "End"
```

So hopefully these are a few pointers to make your deployments a bit smoother.

With the release of Visual Studio 2015 there are some significant changes to Visual Studio and TFS licensing; you can find the details on Brian Harry’s blog. These changes can make a serious difference to what you need to purchase for different roles, so it could well be worth a look.

If you are providing a path to a custom test adaptor such as nUnit or Chutzpah for a TFS/VSO vNext build e.g. $(Build.SourcesDirectory)\packages, make sure you have no leading whitespace in the data entry form. If you do have a space, you will see an error log like this, as the adaptor cannot be found because the generated command line is malformed

```
2015-07-13T16:11:32.8986514Z Executing the powershell script: C:\LR\MMS\Services\Mms\TaskAgentProvisioner\Tools\tasks\VSTest\1.0.16\VSTest.ps1
2015-07-13T16:11:33.0727047Z ##[debug]Calling Invoke-VSTest for all test assemblies
2015-07-13T16:11:33.0756512Z Working folder: C:\a\0549426d
2015-07-13T16:11:33.0777083Z Executing C:\Program Files (x86)\Microsoft Visual Studio 12.0\Common7\IDE\CommonExtensions\Microsoft\TestWindow\vstest.console.exe "C:\a\0549426d\UnitTestDemo\WebApp.Tests\Scripts\mycode.tests.js" /TestAdapterPath: C:\a\0549426d\UnitTestDemo\Chutzpah /logger:trx
2015-07-13T16:11:34.3495987Z Microsoft (R) Test Execution Command Line Tool Version 12.0.30723.0
2015-07-13T16:11:34.3505995Z Copyright (c) Microsoft Corporation. All rights reserved.
2015-07-13T16:11:34.3896000Z ##[error]Error: The /TestAdapterPath parameter requires a value, which is path of a location containing custom test adapters. Example: /TestAdapterPath:c:\MyCustomAdapters
2015-07-13T16:11:36.5808275Z ##[error]Error: The test source file "C:\a\0549426d\UnitTestDemo\Chutzpah" provided was not found.
2015-07-13T16:11:37.0004574Z ##[error]VSTest Test Run failed with exit code: 1
2015-07-13T16:11:37.0094570Z ##[warning]No results found to publish.
```

I have been doing some work on vNext Release Management and managed to waste a good hour today with a stupid error. In vNext process templates you provide a username and password to be used as the PowerShell remoting credentials (in the red box below). My PowerShell script also took a parameter username, so this was provided as a custom configuration too (the green box). This was the issue: not unsurprisingly, having two parameters with the same name is a problem. You might get away with it if they are the same value (I did on one stage, which caused more confusion), but if they differ (as mine did in my production stage) the last one set wins, which meant my remote PowerShell returned the error

```
System.Reflection.TargetInvocationException: Exception has been thrown by the target of an invocation. ---> System.AggregateException: One or more errors occurred. ---> Microsoft.TeamFoundation.Release.Common.Helpers.OperationFailedException: Permission denied while trying to connect to the target machine Gadila.blackmarble.co.uk on the port:5985 via power shell remoting.
```

Easy to fix once you realise the problem (a logon failure is logged on the target machine in the event log): just make sure you have unique parameter names.

Update 21 Aug 2015 - This post contains all the basic information, but there is an improved PowerShell script discussed in Using Release Management vNext templates when you don’t want to use DSC scripts – A better script

Many web sites are basically forms over data, so you need to deploy some DB schema and something like an MVC website. Even for this ’bread and butter’ work it is important to have an automated process to avoid human error, hence the rise in the use of release tools to run your DACPAC and MSDeploy packages. In the Microsoft space this might lead to the question of how Desired State Configuration (DSC) can help.

I, and others, have posted in the past about how DSC can be used to achieve this type of deployment, but this can be complex, and you have to ask: is DSC the best way to manage DACPAC and MSDeploy packages, or is DSC better suited to only the configuration of your infrastructure/OS features? You might ask why you would not want to use DSC; the most common reason I see is that you need to provide deployment scripts to end clients who don’t use DSC, or you have just decided you want basic PowerShell. Only you will be able to judge which is best for your systems, but I thought it worth outlining an alternative way to deploy these packages using Release Management vNext pipelines that does not make use of DSC.

## Background

Let us assume we have a system with a SQL server and an IIS web server that have been added to the Release Management vNext environment. These already have SQL and IIS enabled; maybe you used DSC for that? The vNext release template allows you to run either DSC or PowerShell on the machines. We will ignore DSC, so what can you do if you want to use simple PowerShell scripts?

## Where do I put my Scripts?

We will place the PowerShell scripts (and maybe any tools they call) under source control such that they end up in the build drops location, thus making it easy for Release Management to find them and allowing the scripts (and tools) to be versioned.

## Deploying a DACPAC

The script I have been using to deploy DACPACs is as follows

```powershell
# find the script folder
$folder = Split-Path -Parent $MyInvocation.MyCommand.Definition

Write-Verbose "Deploying DACPAC $SOURCEFILE using script in '$folder'"

& $folder\sqlpackage.exe /Action:Publish /SourceFile:$folder\..\$SOURCEFILE /TargetServerName:$TARGETSERVERNAME /TargetDatabaseName:$TARGETDATABASENAME | Write-Verbose -Verbose
```

Note that:

1. First it finds the folder it is running in; this is the easiest way to find the other resources I need
2. The only way any logging will end up in the Release Management logs is if it is logged at the verbose level i.e. write-verbose “your message” –verbose
3. I have used a simple & my.exe to execute my command, but pass the output via the write-verbose cmdlet to make sure we see the results. The alternative would be to use invoke-process
4. SQLPACKAGE.EXE (and its associated DLLs) are located in the same SCRIPTS folder as the PowerShell script and are under source control. Of course you could make sure any tools you need are already installed on the target machine.

I pass the three parameters needed for the script as custom configuration.
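As an illustration (the variable names come from the script above; the values are hypothetical), the custom configuration entries look something like:

```
SOURCEFILE         = MyDatabase.dacpac
TARGETSERVERNAME   = mysqlserver.blackmarble.co.uk
TARGETDATABASENAME = MyDatabase
```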

Remember that you don’t have to be on the SQL server to run SQLPACKAGE.EXE; it can be run remotely (that is why in the screen shot above the ServerName is ISS IIS8, not SQL as you might expect).

## Deploying a MSDeploy Package

The script I use to deploy the WebDeploy package is as follows

```powershell
function Update-ParametersFile
{
    param
    (
        $paramFilePath,
        $paramsToReplace
    )

    write-verbose "Updating parameters file '$paramFilePath'" -verbose

    $content = get-content $paramFilePath
    $paramsToReplace.GetEnumerator() | % {
        Write-Verbose "Replacing value for key '$($_.Key)'" -Verbose
        $content = $content.Replace($_.Key, $_.Value)
    }
    set-content -Path $paramFilePath -Value $content
}

# the script folder
$folder = Split-Path -Parent $MyInvocation.MyCommand.Definition

write-verbose "Deploying Website '$package' using script in '$folder'" -verbose

Update-ParametersFile -paramFilePath "$folder\..\_PublishedWebsites\$($package)_Package\$package.SetParameters.xml" -paramsToReplace @{
    "__DataContext__" = $datacontext
    "__SiteName__" = $siteName
    "__Domain__" = $Domain
    "__AdminGroups__" = $AdminGroups
}
```
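The call that actually runs the deployment is not shown above; a sketch of the usual next step, assuming the standard WebDeploy-generated .deploy.cmd alongside the package, would be:

```powershell
# Run the WebDeploy-generated command file after the parameters file has been
# updated; /Y performs the deployment rather than a what-if run.
& "$folder\..\_PublishedWebsites\$($package)_Package\$package.deploy.cmd" /Y | Write-Verbose -Verbose
```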

## Fixing the 500 error

The 500 error was stranger. It turned out the issue was the registration of our TFS server in Release Management.

Using the dialogs in the RM client we had registered our TFS server, and this had generated the URL https://tfs.domain.com:443/tfs. If we ran the InitiateReleaseFromBuild.ps1 script with this URL set as a parameter we got the 500 error, and the RM logs showed the workflow could not start. Eventually we realised it was because RM thought it could not access the TFS server. The problem was that at some point between the script being run and the RM server processing the URL, the :443 had been removed; presumably because this is the default port for HTTPS and some layer was being ‘helpful’. This meant that the RM server was trying to string match the URL https://tfs.domain.com/tfs against https://tfs.domain.com:443/tfs, which failed, hence the workflow failed.

The fix was to edit the TFS registration in RM to remove the port number, leaving the field empty (not that obvious, as the dialog completes this field for you when you select HTTPS).

Once this was done the URL matching worked and the release pipeline triggered as expected.

I had a strange issue today while editing our standard TFS 2013 XAML build process template to add an optional post-drop script block, to allow a Release Management pipeline to be triggered via REST. Our standard template includes a block for enabling and disabling Typemock; after editing our template to add the new script block (nowhere near the Typemock section) our builds failed with the error

```
TF215097: An error occurred while initializing a build for build definition \BM\ISS.Expenses.Main.CI:
Exception Message: Cannot set unknown member 'TypeMock.TFS2013.TypeMockStart.DisableAutoLink'. (type XamlObjectWriterException)
Exception Stack Trace: at System.Xaml.XamlObjectWriter.WriteStartMember(XamlMember property)
```

It took ages to find the issue; we hunted for badly formed XAML, but the issue turned out to be that whenever we opened the template in Visual Studio 2013 it added the DisableAutoLink property shown below

```xml
<If Condition="[UseTypemock = True]" DisplayName="If using Typemock" sap2010:WorkflowViewState.IdRef="If_8">
  <If.Then>
    <Sequence DisplayName="Enabling Typemock" sap2010:WorkflowViewState.IdRef="Sequence_16">
      <tt:TypeMockRegister AutoDeployDir="[TypemockAutoDeployDir]" Company="[TypemockCompany]" sap2010:WorkflowViewState.IdRef="TypeMockRegister_1" License="[TypemockLicense]" />
      <tt:TypeMockStart DisableAutoLink="{x:Null}" EvaluationFolder="{x:Null}" Link="{x:Null}" LogLevel="{x:Null}" LogPath="{x:Null}" ProfilerLaunchedFirst="{x:Null}" Target="{x:Null}" Verbosity="{x:Null}" Version="{x:Null}" AutoDeployDir="[TypemockAutoDeployDir]" sap2010:WorkflowViewState.IdRef="TypeMockStart_1" />
    </Sequence>
  </If.Then>
</If>
```

It should have been

```xml
<If Condition="[UseTypemock = True]" DisplayName="If using Typemock" sap2010:WorkflowViewState.IdRef="If_8">
  <If.Then>
    <Sequence DisplayName="Enabling Typemock" sap2010:WorkflowViewState.IdRef="Sequence_16">
      <tt:TypeMockRegister AutoDeployDir="[TypemockAutoDeployDir]" Company="[TypemockCompany]" sap2010:WorkflowViewState.IdRef="TypeMockRegister_1" License="[TypemockLicense]" />
      <tt:TypeMockStart EvaluationFolder="{x:Null}" Link="{x:Null}" LogLevel="{x:Null}" LogPath="{x:Null}" ProfilerLaunchedFirst="{x:Null}" Target="{x:Null}" Verbosity="{x:Null}" Version="{x:Null}" AutoDeployDir="[TypemockAutoDeployDir]" sap2010:WorkflowViewState.IdRef="TypeMockStart_1" />
    </Sequence>
  </If.Then>
</If>
```

All I can assume is that this is due to some assembly mismatch between the Typemock DLLs linked to the XAML build process template and those on my development PC.

The fix for now is to do the editing in a text editor, or at least to check the file to make sure the property has not been added before it is checked in.