# But it works on my PC!

### The random thoughts of Richard Fennell on technology and software development

If you are using basic PowerShell scripts, as opposed to DSC, with Release Management there are a few gotchas I have found.

## You cannot pass parameters

Let's look at a sample script that we would like to run via Release Management

```powershell
param
(
    $param1
)
write-verbose -verbose "Start"
write-verbose -verbose "Got var1 [$var1]"
write-verbose -verbose "Got param1 [$param1]"
write-verbose -verbose "End"
```

In Release Manager we have the following vNext workflow. You can see we are setting two custom values which we intend to use within our script: one is a script parameter (Param1), the other is just a global variable (Var1).

If we do a deployment we get the log

```
Copying recursively from \\store\drops\rm\4583e318-abb2-4f21-9289-9cb0264a3542\152 to C:\Windows\DtlDownloads\ISS vNext Drops succeeded.
Start
Got var1 [XXXvar1]
Got param1 []
End
```

You can see the problem: $var1 is set, but $param1 is not. It took me a while to get my head around this. The problem is that the RM activity's PSScriptPath is just that, a script path, not a command line that will be executed. Unlike the PowerShell activities in the vNext build tools, you don't have a pair of settings, one for the path to the script and another for the arguments; here we have no way to set the command line arguments. Note: the PSConfigurationPath is just for DSC configurations, as discussed elsewhere. So in effect Param1 is not set, as we did not call

```
test -param1 "some value"
```

This means there is no point using parameters in the script you wish to use with RM vNext. But wait, I bet you are thinking 'I want to run my script externally to Release Manager to test it, and using parameters with validation rules is best practice; I don't want to lose that advantage'.

The best workaround I have found is to use a wrapper script that takes the variables and makes them parameters, something like this

```powershell
$folder = Split-Path -Parent $MyInvocation.MyCommand.Definition
& $folder\test.ps1 -param1 $param1
```

Another gotcha: note that I need to find the path the wrapper script is running in and use it to build the path to my actual script. If I don't do this I get an error saying the test.ps1 script can't be found.

After altering my pipeline to use the wrapper and rerunning the deployment I get the log file I wanted

```
Copying recursively from \\store\drops\rm\4583e318-abb2-4f21-9289-9cb0264a3542\160 to C:\Windows\DtlDownloads\ISS vNext Drops succeeded.
Start
Got var1 [XXXvar1]
Got param1 [XXXparam1]
End
```

This is all a bit ugly, but it works. Looking forward, this appears not to be too much of an issue. The next version of Release Management, as shown at Build, is based around the vNext TFS build tooling, which seems to always allow you to pass true PowerShell command line arguments. So this problem should go away in the not too distant future.

## Don't write to the console

The other big problem is any script that writes or reads from the console. Usually this means a write-host call in a script that causes an error along the lines of

```
A command that prompts the user failed because the host program or the command type does not support user interaction. Try a host program that supports user interaction, such as the Windows PowerShell Console or Windows PowerShell ISE, and remove prompt-related commands from command types that do not support user interaction, such as Windows PowerShell workflows.
+At C:\Windows\DtlDownloads\ISS vNext Drops\scripts\test.ps1:7 char:1
+ Write-Host "hello 1" -ForegroundColor red
```

But also watch out for any CLS calls; that has caught me out. I have found it can be hard to track down the offending lines, especially if there are PowerShell modules loading modules. The best recommendation is to just use write-verbose and write-error:

• write-error if your script has errored. This will let RM know the script has failed, thus failing the deployment – just what we want
• write-verbose for any logging

Any other form of PowerShell output will not be passed to RM, be warned!

You might also notice in my sample script that I am passing the -verbose argument to the write-verbose command; again, you have to have this maximal level of logging on for the messages to make it out to the RM logs. Probably a better solution, if you think you might vary the level of logging, is to change the script to set the $VerbosePreference

```powershell
param
(
    $param1
)

$VerbosePreference = 'Continue' # equiv to -verbose

write-verbose "Start"
write-verbose "Got var1 [$var1]"
write-verbose "Got param1 [$param1]"
write-verbose "End"
```
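Putting the pieces together, here is a minimal sketch of the wrapper pattern that keeps the parameter validation we wanted (the file names test.ps1 and wrapper.ps1 are illustrative, not from the original pipeline):

```powershell
# test.ps1 - the real deployment script, still runnable and testable
# from an ordinary PowerShell prompt, with full parameter validation
param
(
    [Parameter(Mandatory = $true)]
    [ValidateNotNullOrEmpty()]
    [string]$param1
)
$VerbosePreference = 'Continue' # so write-verbose output reaches the RM logs
write-verbose "Got param1 [$param1]"
```

```powershell
# wrapper.ps1 - the script the RM PSScriptPath points at; RM surfaces custom
# configuration values as global variables, so the wrapper simply forwards
# them to the real script as proper command line parameters
$folder = Split-Path -Parent $MyInvocation.MyCommand.Definition
& $folder\test.ps1 -param1 $param1
```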

So hopefully a few pointers to make your deployments a bit smoother.

If you are providing a path to a custom test adaptor, such as nUnit or Chutzpah, for a TFS/VSO vNext build, e.g. $(Build.SourcesDirectory)\packages, make sure you have no leading whitespace in the data entry form. If you do have a space you will see an error log like this, as the adaptor cannot be found because the generated command line is malformed

```
2015-07-13T16:11:32.8986514Z Executing the powershell script: C:\LR\MMS\Services\Mms\TaskAgentProvisioner\Tools\tasks\VSTest\1.0.16\VSTest.ps1
2015-07-13T16:11:33.0727047Z ##[debug]Calling Invoke-VSTest for all test assemblies
2015-07-13T16:11:33.0756512Z Working folder: C:\a\0549426d
2015-07-13T16:11:33.0777083Z Executing C:\Program Files (x86)\Microsoft Visual Studio 12.0\Common7\IDE\CommonExtensions\Microsoft\TestWindow\vstest.console.exe "C:\a\0549426d\UnitTestDemo\WebApp.Tests\Scripts\mycode.tests.js" /TestAdapterPath: C:\a\0549426d\UnitTestDemo\Chutzpah /logger:trx
2015-07-13T16:11:34.3495987Z Microsoft (R) Test Execution Command Line Tool Version 12.0.30723.0
2015-07-13T16:11:34.3505995Z Copyright (c) Microsoft Corporation. All rights reserved.
2015-07-13T16:11:34.3896000Z ##[error]Error: The /TestAdapterPath parameter requires a value, which is path of a location containing custom test adapters. Example: /TestAdapterPath:c:\MyCustomAdapters
2015-07-13T16:11:36.5808275Z ##[error]Error: The test source file "C:\a\0549426d\UnitTestDemo\Chutzpah" provided was not found.
2015-07-13T16:11:37.0004574Z ##[error]VSTest Test Run failed with exit code: 1
2015-07-13T16:11:37.0094570Z ##[warning]No results found to publish.
```

I have been using Pester for some PowerShell tests. From the command prompt all is good, but I kept getting the error 'module cannot be loaded because scripts is disabled on this system' when I tried to run them via the Visual Studio Test Explorer.

I found the solution on StackOverflow: I had forgotten that Visual Studio is 32-bit, so you need to set the 32-bit execution policy. Opening the default PowerShell command prompt and setting the policy there only affects the 64-bit instance.

1. Open C:\Windows\SysWOW64\WindowsPowerShell\v1.0\powershell.exe
2. Run the command Set-ExecutionPolicy RemoteSigned
3. My tests passed (without restarting Visual Studio)

I have been doing some work on vNext Release Management; I managed to waste a good hour today with a stupid error.

In vNext process templates you provide a username and password to be used as the PowerShell remoting credentials (in the red box below). My PowerShell script also took a parameter username, so this was provided as a custom configuration too (the green box). This was the issue. Unsurprisingly, having two parameters with the same name is a problem. You might get away with it if they are the same value (I did on one stage, which caused more confusion), but if they differ (as mine did in my production stage) the last one set wins, which meant my remote PowerShell returned the error

```
System.Reflection.TargetInvocationException: Exception has been thrown by the target of an invocation. ---> System.AggregateException: One or more errors occurred. ---> Microsoft.TeamFoundation.Release.Common.Helpers.OperationFailedException: Permission denied while trying to connect to the target machine Gadila.blackmarble.co.uk on the port:5985 via power shell remoting.
```

Easy to fix once you realise the problem; a logon failure is logged on the target machine in the event log.
Just make sure you have unique parameter names.

Many web sites are basically forms over data, so you need to deploy some DB schema and something like an MVC website. Even for this 'bread and butter' work it is important to have an automated process to avoid human error. Hence the rise in the use of release tools to run your DACPAC and MSDeploy packages. In the Microsoft space this might lead to the question of how Desired State Configuration (DSC) can help. I, and others, have posted in the past about how DSC can be used to achieve this type of deployment, but this can be complex, and you have to ask: is DSC the best way to manage DACPAC and MSDeploy packages, or is DSC better suited to only the configuration of your infrastructure/OS features?

You might ask why you would not want to use DSC. The most common reason I see is that you need to provide deployment scripts to end clients who don't use DSC, or you have just decided you want basic PowerShell. Only you will be able to judge which is best for your systems, but I thought it worth outlining an alternative way to deploy these packages using Release Management vNext pipelines that does not make use of DSC.

## Background

Let us assume we have a system with a SQL server and an IIS web server that have been added to the Release Management vNext environment. These already have SQL and IIS enabled; maybe you used DSC for that?

The vNext release template allows you to run either DSC or PowerShell on the machines. We will ignore DSC, so what can you do if you want to use simple PowerShell scripts?

## Where do I put my Scripts?

We will place the PowerShell scripts (and maybe any tools they call) under source control such that they end up in the build drops location, thus making it easy for Release Management to find them, and allowing the scripts (and tools) to be versioned.

## Deploying a DACPAC

The script I have been using to deploy DACPACs is as follows

```powershell
# find the script folder
$folder = Split-Path -parent $MyInvocation.MyCommand.Definition
Write-Verbose "Deploying DACPAC $SOURCEFILE using script in '$folder'" -Verbose
& $folder\sqlpackage.exe /Action:Publish /SourceFile:$folder\..\$SOURCEFILE /TargetServerName:$TARGETSERVERNAME /TargetDatabaseName:$TARGETDATABASENAME | Write-Verbose -Verbose
```

Note that:

1. First it finds the folder it is running in; this is the easiest way to find the other resources I need
2. The only way any logging will end up in the Release Management logs is if it is logged at the verbose level, i.e. write-verbose "your message" -verbose
3. I have used a simple & my.exe call to execute my command, but pass the output via the write-verbose cmdlet to make sure we see the results. The alternative would be to launch the process explicitly, e.g. with Start-Process (see the sketch after this list)
4. SQLPACKAGE.EXE (and its associated DLLs) is located in the same SCRIPTS folder as the PowerShell script and is under source control. Of course, you could instead make sure any tools you need are already installed on the target machine.
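If you do want the explicit process launch mentioned in point 3, a minimal sketch using the standard Start-Process cmdlet might look like this (the temp log file is my own addition, used so the tool's output can still be surfaced at the verbose level):

```powershell
# a sketch of the alternative to '& my.exe': launch sqlpackage.exe explicitly,
# capture its output in a temp file, then echo that at the verbose level so it
# reaches the Release Management logs
$log = Join-Path $env:TEMP "sqlpackage.log"
Start-Process -FilePath "$folder\sqlpackage.exe" `
    -ArgumentList "/Action:Publish",
                  "/SourceFile:$folder\..\$SOURCEFILE",
                  "/TargetServerName:$TARGETSERVERNAME",
                  "/TargetDatabaseName:$TARGETDATABASENAME" `
    -NoNewWindow -Wait -RedirectStandardOutput $log
Get-Content $log | Write-Verbose -Verbose
```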

I pass the three parameters needed for the script as custom configuration

Remember that you don't have to be on the SQL server to run SQLPACKAGE.EXE; it can be run remotely (that is why, in the screenshot above, the ServerName is IIS8 and not SQL as you might expect)

## Deploying a MSDeploy Package

The script I use to deploy the WebDeploy package is as follows

```powershell
function Update-ParametersFile
{
    param
    (
        $paramFilePath,
        $paramsToReplace
    )

    write-verbose "Updating parameters file '$paramFilePath'" -verbose
    $content = get-content $paramFilePath
    $paramsToReplace.GetEnumerator() | % {
        Write-Verbose "Replacing value for key '$($_.Key)'" -Verbose
        $content = $content.Replace($_.Key, $_.Value)
    }
    set-content -Path $paramFilePath -Value $content
}

# find the script folder
$folder = Split-Path -parent $MyInvocation.MyCommand.Definition
write-verbose "Deploying Website '$package' using script in '$folder'" -verbose

Update-ParametersFile -paramFilePath "$folder\..\_PublishedWebsites\$($package)_Package\$package.SetParameters.xml" -paramsToReplace @{
    "__DataContext__" = $datacontext
    "__SiteName__" = $siteName
    "__Domain__" = $Domain
    "__AdminGroups__" = $AdminGroups
}
```
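The excerpt above updates the SetParameters.xml file; the deployment itself is then normally done by running the .deploy.cmd file that MSBuild generates alongside the package. A minimal sketch of that final step, assuming the standard WebDeploy package layout and an illustrative $serverName value for the target web server, would be:

```powershell
# a sketch: run the generated .deploy.cmd against the target web server,
# piping its output through write-verbose so it reaches the RM logs
# ($serverName is illustrative, supplied as another custom configuration value)
& "$folder\..\_PublishedWebsites\$($package)_Package\$package.deploy.cmd" /Y /M:$serverName | Write-Verbose -Verbose
```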
```powershell
$scanner.Scan()
write-host ("Succeeded [{0}]" -f $scanner.Succeeded)
write-host ("Violation count [{0}]" -f $scanner.ViolationCount)
```

See the GitHub site's WIKI for the usage details.

### Step 3 – Create a vNext build PowerShell script

So now we have the basic tools we need to run StyleCop from a TFS vNext build, but we do need a more complex script. The script you use is up to you; mine looks for .csproj files and runs StyleCop recursively from the directories containing the .csproj files. This means I can have a different settings.stylecop file for each project. In general I have stricter rules on production code than on unit tests, e.g. for unit tests I am not bothered about the XML method documentation, but for production code I make sure it is present and matches the method parameters.

Note: As the script just uses parameters and environment variables it is easy to test outside TFS build, a great improvement over the old build system.

```powershell
# Script to allow StyleCop to be run as part of the TFS vNext build
[CmdletBinding()]
param
(
    # We have to pass this boolean flag as a string, we cast it before we use it
    # have to use 0 or 1, true or false
    [string]$TreatStyleCopViolationsErrorsAsWarnings = 'False'
)

# local test values, should be commented out in production
#$Env:BUILD_STAGINGDIRECTORY = "C:\drops"
#$Env:BUILD_SOURCESDIRECTORY = "C:\code\MySolution"

if (-not ($Env:BUILD_SOURCESDIRECTORY -and $Env:BUILD_STAGINGDIRECTORY))
{
    Write-Error "You must set the following environment variables"
    Write-Error "to test this script interactively."
    Write-Host '$Env:BUILD_SOURCESDIRECTORY - For example, enter something like:'
    Write-Host '$Env:BUILD_SOURCESDIRECTORY = "C:\code\MySolution"'
    Write-Host '$Env:BUILD_STAGINGDIRECTORY - For example, enter something like:'
    Write-Host '$Env:BUILD_STAGINGDIRECTORY = "C:\drops"'
    exit 1
}

# pick up the build locations from the environment
$stagingfolder = $Env:BUILD_STAGINGDIRECTORY
$sourcefolder = $Env:BUILD_SOURCESDIRECTORY

# have to convert the string flag to a boolean
$treatViolationsErrorsAsWarnings = [System.Convert]::ToBoolean($TreatStyleCopViolationsErrorsAsWarnings)

Write-Host ("Source folder (Env) [{0}]" -f $sourcefolder) -ForegroundColor Green
Write-Host ("Staging folder (Env) [{0}]" -f $stagingfolder) -ForegroundColor Green
Write-Host ("Treat violations as warnings (Param) [{0}]" -f $treatViolationsErrorsAsWarnings) -ForegroundColor Green

# the overall results across all sub scans
$overallSuccess = $true
$projectsScanned = 0
$totalViolations = 0

# load the StyleCop classes, this assumes that StyleCop.dll, StyleCop.Csharp.dll and
# StyleCop.Csharp.Rules.dll are in the same folder as StyleCopWrapper.dll
Add-Type -Path "StyleCop\StyleCopWrapper.dll"
$scanner = new-object StyleCopWrapper.Wrapper

# set the common scan options
$scanner.MaximumViolationCount = 1000
$scanner.ShowOutput = $true
$scanner.CacheResults = $false
$scanner.ForceFullAnalysis = $true
$scanner.AdditionalAddInPaths = @($pwd) # use the local path as we place stylecop.csharp.rules.dll here
$scanner.TreatViolationsErrorsAsWarnings = $treatViolationsErrorsAsWarnings

# look for .csproj files
foreach ($projfile in Get-ChildItem $sourcefolder -Filter *.csproj -Recurse)
{
    write-host ("Processing the folder [{0}]" -f $projfile.Directory)

    # find a set of rules closest to the .csproj file
    $settings = Join-Path -path $projfile.Directory -childpath "settings.stylecop"
    if (Test-Path $settings)
    {
        write-host "Using settings.stylecop file found in same folder as .csproj file"
        $scanner.SettingsFile = $settings
    }
    else
    {
        $settings = Join-Path -path $sourcefolder -childpath "settings.stylecop"
        if (Test-Path $settings)
        {
            write-host "Using settings.stylecop file in solution folder"
            $scanner.SettingsFile = $settings
        }
        else
        {
            write-host "Cannot find a local settings.stylecop file, using default rules"
            $scanner.SettingsFile = "." # we have to pass something as this is a required param
        }
    }

    $scanner.SourceFiles = @($projfile.Directory)
    $scanner.XmlOutputFile = (join-path $stagingfolder $projfile.BaseName) + ".stylecop.xml"
    $scanner.LogFile = (join-path $stagingfolder $projfile.BaseName) + ".stylecop.log"

    # do the scan
    $scanner.Scan()

    # display the results
    Write-Host ("`n")
    write-host ("Base folder`t[{0}]" -f $projfile.Directory) -ForegroundColor Green
    write-host ("Settings `t[{0}]" -f $scanner.SettingsFile) -ForegroundColor Green
    write-host ("Succeeded `t[{0}]" -f $scanner.Succeeded) -ForegroundColor Green
    write-host ("Violations `t[{0}]" -f $scanner.ViolationCount) -ForegroundColor Green
    Write-Host ("Log file `t[{0}]" -f $scanner.LogFile) -ForegroundColor Green
    Write-Host ("XML results`t[{0}]" -f $scanner.XmlOutputFile) -ForegroundColor Green

    $totalViolations += $scanner.ViolationCount
    $projectsScanned++

    if ($scanner.Succeeded -eq $false)
    {
        # any failure fails the whole run
        $overallSuccess = $false
    }
}

# the output summary
Write-Host ("`n")
if ($overallSuccess -eq $false)
{
    Write-Error ("StyleCop found [{0}] violations across [{1}] projects" -f $totalViolations, $projectsScanned)
}
elseif ($totalViolations -gt 0 -and $treatViolationsErrorsAsWarnings -eq $true)
{
    Write-Warning ("StyleCop found [{0}] violation warnings across [{1}] projects" -f $totalViolations, $projectsScanned)
}
else
{
    Write-Host ("StyleCop found [{0}] violations across [{1}] projects" -f $totalViolations, $projectsScanned) -ForegroundColor Green
}
```
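To sanity check the script outside TFS build, you can (as the comments at its top suggest) set the two environment variables by hand and call it from a normal PowerShell prompt. An illustrative run, using the example paths from the script:

```powershell
# illustrative interactive test run, from the folder containing the script
$Env:BUILD_SOURCESDIRECTORY = "C:\code\MySolution"
$Env:BUILD_STAGINGDIRECTORY = "C:\drops"
.\PowerShell.ps1 -TreatStyleCopViolationsErrorsAsWarnings 'True'
```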

### Step 4 – Adding the script to the repo

To use the script, it (and any associated files) needs to be placed in your source control. In my case this meant I created a folder called StyleCop off the root of my TFS 2015 CTP's Git repo and placed in it the following files

• PowerShell.ps1 – my script file
• StyleCop.dll – the main StyleCop assembly, taken from c:\program files (x86)\StyleCop 4.7. Placing it here means we don't need to actually install StyleCop on the build machine
• StyleCop.csharp.dll – also from c:\program files (x86)\StyleCop 4.7
• StyleCop.csharp.rules.dll – also from c:\program files (x86)\StyleCop 4.7
• StyleCopWrapper.dll – the wrapper assembly from my GitHub site

### Step 5 – Adding the script to a build process

Once the script is in the repo adding a new step to a vNext build is easy.

• In a browser select the Build.vNext menu options
• The build explorer will be shown; right click on the build you wish to add a step to and select Edit
• Press the 'Add build step' button. The list of steps will be shown; pick PowerShell

• As the script is in the repo we can reference it in the new step. In my case I set the script file name to

StyleCop/PowerShell.ps1
• My script takes one parameter, whether we should treat StyleCop violations as warnings; this is set as the script argument. Note I am using a build variable $(ViolationsAsWarnings) set to a string value 'True' or 'False', so I have one setting for the whole build script. Though a boolean parameter would be nice, it seems I can only pass in strings as build variables, so I do the conversion to a boolean inside the script.

-TreatStyleCopViolationsErrorsAsWarnings $(ViolationsAsWarnings)

### Step 6 – Running the build

My test solution has two projects, with different settings.stylecop files. Once the new step was added to my build I could queue a build; by altering the $(ViolationsAsWarnings) variable I could make the build pass or fail.

The detailed StyleCop results are available in the build log and are also placed in the drops folder in an XML format.

Note: One strange behaviour is that when you test the script outside TFS build you get a .XML and a .LOG file for each project scanned. In TFS build you only see the .XML file in the drops folder; this, I think, is because the .LOG output has been redirected into the main TFS vNext build logs.

### Summary

So now I have a way to run StyleCop within a TFS vNext build.

Using these techniques there is no end of tools that can be wired into the build process, and I must say it is far easier than the TFS 2010, 2012 and 2013 style workflow customisation.

I have been preparing for my Techorama session on TFS vNext build. One of the demos I am planning is to use the Node based cross platform build agent to build something on a Linux VM. It turns out this takes a few undocumented steps to get going with the CTP of TFS 2015.

The process I followed was:

• I installed a Mint 17 VM
• On the VM, I installed the Node VSOAgent as detailed in the npm documentation (or I could have built it from source from GitHub to get the bleeding edge version) – see the consolidated sketch after this list
• I created a new agent instance
vsoagent-installer
• I then tried to run the configuration, but hit a couple of issues
node vsoagent
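For reference, the steps above boil down to something like the following shell session (the npm package name and the myagent folder are assumptions on my part; check the npm documentation for the current details):

```
# install the agent installer globally (package name assumed to match the
# vsoagent-installer command used above)
sudo npm install vsoagent-installer -g

# create a new agent instance in a folder of its own
mkdir myagent && cd myagent
vsoagent-installer

# run the agent, which prompts for the server URL and credentials
node vsoagent
```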

### URL error

The first problem was that I was told the URL I provided was invalid. I had tried the URL of my local TFS 2015 CTP VM

http://typhoontfs:8080/tfs

The issue is that the vsoagent was initially developed for VSO and is expecting a fully qualified URL. To get around this, as I was on a local test network, I just added an entry to my Linux OS’s local /etc/hosts file, so I could call

http://typhoontfs.local:8080/tfs

This URL was accepted.
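The /etc/hosts entry itself is just a one line mapping of the made-up fully qualified name onto the TFS VM's address, something like this (the IP address is illustrative):

```
192.168.0.10   typhoontfs.local   typhoontfs
```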

### 401 Permissions Error

Once the URL was accepted, the next problem was that I got a 401 permission error.

Now, the release notes make it clear that you have to enable alternate credentials on your VSO account, but this is not an option for on-premises TFS.

The solution is easy though (at least for a trial system): in IIS Manager on your TFS server, enable basic authentication for the TFS application. You are warned that this is not secure, as passwords are sent in clear text, so it is probably not something to do on a production system.
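If you prefer to script the change rather than click through the IIS Manager UI, appcmd can toggle the same setting; a sketch, assuming the default 'Team Foundation Server' site and 'tfs' application names:

```
%windir%\system32\inetsrv\appcmd.exe set config "Team Foundation Server/tfs" -section:system.webServer/security/authentication/basicAuthentication /enabled:true /commit:apphost
```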

Once this was set, the configuration of the client worked and I had a vsoagent running on my Linux client.

I could then go into the web based TFS Build.vNext interface and create a new empty build, adding the build tool I required, in my case Ant, using an Ant script stored with my project code in my TFS based Git repo.

When I ran the build it errored, as expected: my Linux VM was missing all the build tools. This was fixed by running apt-get on my Linux VM to install ant, ant-optional and the Java JDK. Obviously you need to install whichever tools your own build needs.
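On a Mint/Ubuntu based VM that install is a one-liner (package names as on the Ubuntu 14.04 era systems Mint 17 is based on; default-jdk is my assumption for "the Java JDK"):

```
sudo apt-get install ant ant-optional default-jdk
```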

So I have a working demo: my Java application builds and the resultant files are dropped back into TFS. OK, the configuration is not perfect at present, but from the GitHub site you can see the client is being rapidly iterated.