But it works on my PC!

The random thoughts of Richard Fennell on technology and software development

Lessons learnt using simple PowerShell scripts with vNext Release Management

If you are using basic PowerShell scripts, as opposed to DSC, with Release Management, there are a few gotchas I have found.

You cannot pass parameters

Let's look at a sample script that we would like to run via Release Manager

param
(
    $param1
)

write-verbose -verbose "Start"
write-verbose -verbose "Got var1 [$var1]"
write-verbose -verbose "Got param1 [$param1]"
write-verbose -verbose "End"

In Release Manager we have the following vNext workflow

[Image: the vNext release workflow, showing the Param1 and Var1 custom configuration values]

You can see we are setting two custom values which we intend to use within our script: one is a script parameter (Param1), the other just a global variable (Var1).

If we do a deployment we get the following log

Copying recursively from \\store\drops\rm\4583e318-abb2-4f21-9289-9cb0264a3542\152 to C:\Windows\DtlDownloads\ISS vNext Drops succeeded.

Start

Got var1 [XXXvar1]

Got param1 []

End

You can see the problem: $var1 is set, $param1 is not. It took me a while to get my head around this. The problem is that the RM activity's PSScriptPath is just that, a script path, not a command line that will be executed. Unlike the PowerShell activities in the vNext build tools, you don't have a pair of settings, one for the path to the script and another for the arguments; here there is no way to set command-line arguments.

Note: The PSConfigurationPath is just for DSC configurations as discussed elsewhere.

So in effect Param1 is not set, as we did not call

test.ps1 -param1 "some value"

This means there is no point using parameters in any script you wish to use with RM vNext. But wait, I bet you are thinking 'I want to run my script externally to Release Manager to test it, and using parameters with validation rules is best practice; I don't want to lose that advantage'.
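
For example, the kind of parameter validation you would not want to give up might look like this (a minimal sketch; the validation rules are illustrative, not from the original script):

[CmdletBinding()]
param
(
    # fail fast if the caller forgets the value or passes an empty string
    [Parameter(Mandatory=$true)]
    [ValidateNotNullOrEmpty()]
    [string]$param1
)

write-verbose -verbose "Got param1 [$param1]"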

The best workaround I have found is to use a wrapper script that takes the global variables and passes them on as parameters, something like this

# find the folder this wrapper script is running from
$folder = Split-Path -Parent $MyInvocation.MyCommand.Definition
# call the real script, passing the RM global variable on as a true parameter
& $folder\test.ps1 -param1 $param1

Another gotcha: note that I need to find the path the wrapper script is running in and use it to build the path to my actual script. If I don't do this I get an error saying the test.ps1 script can't be found.

After altering my pipeline to use the wrapper and rerunning the deployment, I get the log file I wanted

Copying recursively from \\store\drops\rm\4583e318-abb2-4f21-9289-9cb0264a3542\160 to C:\Windows\DtlDownloads\ISS vNext Drops succeeded.

Start

Got var1 [XXXvar1]

Got param1 [XXXparam1]

End

 

This is all a bit ugly, but works.

Looking forward, this appears not to be too much of an issue. The next version of Release Management, as shown at Build, is based around the vNext TFS build tooling, which does seem to allow you to pass true PowerShell command-line arguments. So this problem should go away in the not too distant future.

Don’t write to the console

The other big problem is any script that writes to or reads from the console. Usually this means a write-host call in a script, which causes an error along the lines of

A command that prompts the user failed because the host program or the command type does not support user interaction. Try a host program that supports user interaction, such as the Windows PowerShell Console or Windows PowerShell ISE, and remove prompt-related commands from command types that do not support user interaction, such as Windows PowerShell workflows.
+At C:\Windows\DtlDownloads\ISS vNext Drops\scripts\test.ps1:7 char:1
+ Write-Host "hello 1" -ForegroundColor red

But also watch out for any cls calls; that one has caught me out. I have found it can be hard to track down the offending lines, especially if there are PowerShell modules loading other modules.

The best recommendation is to just use write-verbose and write-error.

  • write-error if your script has errored. This will let RM know the script has failed, thus failing the deployment – just what we want
  • write-verbose for any logging

Any other form of PowerShell output will not be passed to RM, be warned!
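
A minimal pattern that sticks to these rules might look like this (a sketch only; the deployment command is a placeholder):

try
{
    write-verbose -verbose "Starting deployment step"
    # do the real work here, piping any tool output into the verbose stream
    & mydeploymenttool.exe | write-verbose -verbose
    write-verbose -verbose "Deployment step complete"
}
catch
{
    # write-error is what tells RM the script, and hence the deployment, has failed
    write-error "Deployment step failed: $_"
}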

You might also notice in my sample script that I am passing the -verbose argument to the write-verbose command; again, you have to have this maximal level of logging on for the messages to make it out to the RM logs. Probably a better solution, if you think you might vary the level of logging, is to change the script to set $VerbosePreference

param
(
    $param1
)

$VerbosePreference = 'Continue' # equivalent to -verbose

write-verbose "Start"
write-verbose "Got var1 [$var1]"
write-verbose "Got param1 [$param1]"
write-verbose "End"

So hopefully these are a few pointers to make your deployments a bit smoother.

Stray white space in a ‘path to custom test adaptors’ will cause tests to fail on VSO vNext build

If you are providing a path to a custom test adapter, such as NUnit or Chutzpah, for a TFS/VSO vNext build, e.g. $(Build.SourcesDirectory)\packages, make sure you have no leading whitespace in the data entry form.

[Image: the build definition editor, showing the 'Path to Custom Test Adapters' field]

 

If you do have a space you will see an error log like this; the adapter cannot be found because the generated command line is malformed

2015-07-13T16:11:32.8986514Z Executing the powershell script: C:\LR\MMS\Services\Mms\TaskAgentProvisioner\Tools\tasks\VSTest\1.0.16\VSTest.ps1
2015-07-13T16:11:33.0727047Z ##[debug]Calling Invoke-VSTest for all test assemblies
2015-07-13T16:11:33.0756512Z Working folder: C:\a\0549426d
2015-07-13T16:11:33.0777083Z Executing C:\Program Files (x86)\Microsoft Visual Studio 12.0\Common7\IDE\CommonExtensions\Microsoft\TestWindow\vstest.console.exe "C:\a\0549426d\UnitTestDemo\WebApp.Tests\Scripts\mycode.tests.js"  /TestAdapterPath: C:\a\0549426d\UnitTestDemo\Chutzpah /logger:trx
2015-07-13T16:11:34.3495987Z Microsoft (R) Test Execution Command Line Tool Version 12.0.30723.0
2015-07-13T16:11:34.3505995Z Copyright (c) Microsoft Corporation.  All rights reserved.
2015-07-13T16:11:34.3896000Z ##[error]Error: The /TestAdapterPath parameter requires a value, which is path of a location containing custom test adapters. Example:  /TestAdapterPath:c:\MyCustomAdapters
2015-07-13T16:11:36.5808275Z ##[error]Error: The test source file "C:\a\0549426d\UnitTestDemo\Chutzpah" provided was not found.
2015-07-13T16:11:37.0004574Z ##[error]VSTest Test Run failed with exit code: 1
2015-07-13T16:11:37.0094570Z ##[warning]No results found to publish.
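
You can see the cause in the vstest.console.exe line above: the stray leading space splits the path away from the switch, so the path is treated as a second test source. The difference is just:

/TestAdapterPath: C:\a\0549426d\UnitTestDemo\Chutzpah   (malformed, note the space)
/TestAdapterPath:C:\a\0549426d\UnitTestDemo\Chutzpah    (correct)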

    Cannot run Pester unit tests in Visual Studio but they work Ok from the command prompt

    I have been using Pester for some PowerShell tests. From the command prompt all is good, but I kept getting the error 'module cannot be loaded because scripts is disabled on this system' when I tried to run them via the Visual Studio Test Explorer.

     

    [Image: the Test Explorer error message]

    I found the solution on StackOverflow; I had forgotten that Visual Studio is 32-bit, so you need to set the 32-bit execution policy. Opening the default PowerShell command prompt and setting the policy only affects the 64-bit instance.
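
    You can check you are changing the right instance by comparing the two policies (a quick sketch; the 32-bit path is the one used in the steps below):

    # 64-bit PowerShell, the one you normally open
    C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -Command Get-ExecutionPolicy
    # 32-bit PowerShell, the one Visual Studio actually hosts
    C:\Windows\SysWOW64\WindowsPowerShell\v1.0\powershell.exe -Command Get-ExecutionPolicy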

    1. Open C:\Windows\SysWOW64\WindowsPowerShell\v1.0\powershell.exe
    2. Run the command Set-ExecutionPolicy RemoteSigned
    3. My tests passed (without restarting Visual Studio)

    [Image: the Pester tests passing in Test Explorer]

    Overwriting your own parameters in Release Management can cause PowerShell remoting problems

    I have been doing some work on vNext Release Management; I managed to waste a good hour today on a stupid error.

    In vNext process templates you provide a username and password to be used as the PowerShell remoting credentials (in the red box below)

    [Image: the vNext deployment template, with the remoting credentials in the red box and the custom configuration in the green box]

    My PowerShell script also took a username parameter, so this was provided as a custom configuration too (the green box). This was the issue. Unsurprisingly, having two parameters with the same name is a problem. You might get away with it if they have the same value (I did on one stage, which caused more confusion), but if they differ (as mine did in my production stage) the last one set wins, which meant my remote PowerShell session returned the error

    System.Reflection.TargetInvocationException: Exception has been thrown by the target of an invocation. ---> System.AggregateException: One or more errors occurred. ---> Microsoft.TeamFoundation.Release.Common.Helpers.OperationFailedException: Permission denied while trying to connect to the target machine Gadila.blackmarble.co.uk on the port:5985 via power shell remoting.

    Easy to fix once you realise the problem (a logon failure is logged in the event log on the target machine). Just make sure you have unique parameter names.
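
    The simplest avoidance is to pick script parameter names that cannot clash with the credentials RM itself passes, for example (a sketch; the name is illustrative):

    param
    (
        # renamed from $username so it cannot collide with the remoting credential
        $appUsername
    )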

    Using Release Management vNext templates when you don’t want to use DSC scripts

    Many web sites are basically forms over data, so you need to deploy some DB schema and something like an MVC website. Even for this 'bread and butter' work it is important to have an automated process to avoid human error. Hence the rise in the use of release tools to run your DACPAC and MSDeploy packages.

    In the Microsoft space this might lead to the question of how Desired State Configuration (DSC) can help. I, and others, have posted in the past about how DSC can be used to achieve this type of deployment, but this can be complex, and you have to ask: is DSC the best way to manage DACPAC and MSDeploy packages, or is DSC better suited to only the configuration of your infrastructure/OS features?

    You might ask why you would not want to use DSC. Well, the most common reason I see is that you need to provide deployment scripts to end clients who don't use DSC, or you have just decided you want basic PowerShell. Only you will be able to judge which is best for your systems, but I thought it worth outlining an alternative way to deploy these packages using Release Management vNext pipelines that does not make use of DSC.

    Background

    Let us assume we have a system with a SQL server and an IIS web server that have been added to the Release Management vNext environment. These already have SQL and IIS enabled; maybe you used DSC for that?

    The vNext release template allows you to run either DSC or PowerShell on the machines. We will ignore DSC, so what can you do if you want to use simple PowerShell scripts?

    Where do I put my Scripts?

    We will place the PowerShell scripts (and maybe any tools they call) under source control such that they end up in the build drops location, thus making it easy for Release Management to find them, and allowing the scripts (and tools) to be versioned.

    Deploying a DACPAC

    The script I have been using to deploy DACPACs is as follows

    # find the script folder
    $folder = Split-Path -parent $MyInvocation.MyCommand.Definition
    Write-Verbose "Deploying DACPAC $SOURCEFILE using script in '$folder'"
    & $folder\sqlpackage.exe /Action:Publish /SourceFile:$folder\..\$SOURCEFILE /TargetServerName:$TARGETSERVERNAME /TargetDatabaseName:$TARGETDATABASENAME | Write-Verbose -Verbose

    Note that:

    1. First it finds the folder it is running in; this is the easiest way to find the other resources I need
    2. The only way any logging will end up in the Release Management logs is if it is logged at the verbose level, i.e. write-verbose "your message" -verbose
    3. I have used a simple & my.exe to execute my command, but pass the output via the write-verbose cmdlet to make sure we see the results. The alternative would be to use Start-Process
    4. SQLPACKAGE.EXE (and its associated DLLs) are located in the same SCRIPTS folder as the PowerShell script and are under source control. Of course you could make sure any tools you need are already installed on the target machine.

    I pass the three parameters needed for the script as custom configuration

    [Image: the custom configuration values passed to the DACPAC deployment script]

    Remember that you don't have to be on the SQL server to run SQLPACKAGE.EXE; it can be run remotely (that is why, in the screenshot above, the ServerName is ISS IIS8 and not SQL as you might expect).
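
    For illustration, the three custom configuration values the script expects might be set to something like this (the values are made up):

    SOURCEFILE = "Database.dacpac"
    TARGETSERVERNAME = "mysqlserver"
    TARGETDATABASENAME = "MyDatabase"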

    Deploying an MSDeploy Package

    The script I use to deploy the WebDeploy package is as follows

    function Update-ParametersFile
    {
        param
        (
            $paramFilePath,
            $paramsToReplace
        )

        write-verbose "Updating parameters file '$paramFilePath'" -verbose
        $content = get-content $paramFilePath
        $paramsToReplace.GetEnumerator() | % {
            Write-Verbose "Replacing value for key '$($_.Key)'" -Verbose
            $content = $content.Replace($_.Key, $_.Value)
        }
        set-content -Path $paramFilePath -Value $content

    }


    # the script folder
    $folder = Split-Path -parent $MyInvocation.MyCommand.Definition
    write-verbose "Deploying Website '$package' using script in '$folder'" -verbose

    Update-ParametersFile -paramFilePath "$folder\..\_PublishedWebsites\$($package)_Package\$package.SetParameters.xml" -paramsToReplace @{
          "__DataContext__" = $datacontext
          "__SiteName__" = $siteName
          "__Domain__" = $Domain
          "__AdminGroups__" = $AdminGroups

    }

    write-verbose "Calling '$package.deploy.cmd'" -verbose
    & "$folder\..\_PublishedWebsites\$($package)_Package\$package.deploy.cmd" /Y | Write-Verbose -verbose

    Note that:

    1. First I declare a function that I use to replace the contents of the package.SetParameters.xml file, a key step in using binary promotion with WebDeploy (see the example after this list)
    2. Again I find the folder the script is running in so I can locate other resources
    3. I then declare the parameters I need to replace and call the replacement function
    4. Finally I call the package.deploy.cmd command, passing the output via write-verbose to get it into the Release Management logs
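
    To make the token replacement concrete, a fragment of a SetParameters.xml file would go from this (the setParameter element is the standard WebDeploy layout; the value is made up):

    <setParameter name="DataContext" value="__DataContext__" />

    to this after Update-ParametersFile has run:

    <setParameter name="DataContext" value="server=mysqlserver;database=Expenses" />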

    This is called as follows

    [Image: the custom configuration values passed to the MSDeploy deployment script]

    Summary

    So I think these reusable scripts give a fairly easy way to make use of vNext Release Management pipelines. They can also easily be given to clients who just want to manually run something.

    Fix for 500 internal errors when trying to trigger a Release Management pipeline from a build via the REST API

    With the help of the Release Management team at Microsoft I now have a working REST-based automated TFS Build to Release Management pipeline. Previously we were using a TFS automated build and then manually triggering our agent-based Release Management pipeline. When we moved to a vNext PS/DSC-based RM pipeline I took the chance to automate the link, using REST via a PowerShell script to trigger the initial deployment. However, I hit problems: first a stupid 401 permission error and later a much stranger 500 internal server error.

    Fixing the 401 error

    The first problem was that the InitiateReleaseFromBuild.ps1 script defaults to a hardcoded username and password. You should really be using the current credentials. To do this, make sure the lines around line 60 in the script are as shown below (or enter valid credentials if you don't want to use default credentials)

    $wc = New-Object System.Net.WebClient
    $wc.UseDefaultCredentials = $true
    # rmuser should be part rm users list and he should have permission to trigger the release.
    #$wc.Credentials = new-object System.Net.NetworkCredential("rmuser", "rmuserpassword", "rmuserdomain")

    Fixing the 500 error

    The 500 error was stranger. It turned out the issue was the registration of our TFS server in Release Management.

    Using the dialogs in the RM client we had registered our TFS server; this had generated the URL https://tfs.domain.com:443/tfs. If we ran the InitiateReleaseFromBuild.ps1 script with this URL set as a parameter we got the 500 error, and the RM logs showed the workflow could not start. Eventually we realised it was because RM thought it could not access the TFS server. The problem was that at some point between the script being run and the RM server processing the URL, the :443 had been removed, presumably because this is the default for HTTPS and some layer was being 'helpful'. This meant the RM server was trying to string-match the URL https://tfs.domain.com/tfs against https://tfs.domain.com:443/tfs, which failed, hence the workflow failed.

    The fix was to edit the TFS registration in RM to remove the port number, leaving the field empty (not that obvious, as the dialog completes this field for you when you select HTTPS).

    [Image: the TFS registration dialog in RM, with the port field left empty]

    Once this was done the URL matching worked and the release pipeline triggered as expected.

    Strange TFS build process template editing issue with Typemock

    I had a strange issue today while editing our standard TFS 2013 XAML build process template, adding an optional post-drop script block to allow a Release Management pipeline to be triggered via REST. Our standard template includes a block for enabling and disabling Typemock; after editing the template to add the new script block (nowhere near the Typemock section), our builds failed with the error

    TF215097: An error occurred while initializing a build for build definition \BM\ISS.Expenses.Main.CI: Exception Message: Cannot set unknown member 'TypeMock.TFS2013.TypeMockStart.DisableAutoLink'. (type XamlObjectWriterException) Exception Stack Trace: at System.Xaml.XamlObjectWriter.WriteStartMember(XamlMember property) 

    It took ages to find the issue. We hunted for badly formed XAML, but it turned out that whenever we opened the template in Visual Studio 2013 it added the DisableAutoLink property you can see in the first XAML block below

     

    <If Condition="[UseTypemock = True]" DisplayName="If using Typemock" sap2010:WorkflowViewState.IdRef="If_8">
      <If.Then>
       <Sequence DisplayName="Enabling Typemock" sap2010:WorkflowViewState.IdRef="Sequence_16">
          <tt:TypeMockRegister AutoDeployDir="[TypemockAutoDeployDir]" Company="[TypemockCompany]" sap2010:WorkflowViewState.IdRef="TypeMockRegister_1" License="[TypemockLicense]" />
          <tt:TypeMockStart DisableAutoLink="{x:Null}" EvaluationFolder="{x:Null}" Link="{x:Null}" LogLevel="{x:Null}" LogPath="{x:Null}" ProfilerLaunchedFirst="{x:Null}" Target="{x:Null}" Verbosity="{x:Null}" Version="{x:Null}" AutoDeployDir="[TypemockAutoDeployDir]" sap2010:WorkflowViewState.IdRef="TypeMockStart_1" />
         </Sequence>
      </If.Then>
    </If>

    It should have been

    <If Condition="[UseTypemock = True]" DisplayName="If using Typemock" sap2010:WorkflowViewState.IdRef="If_8">
      <If.Then>
        <Sequence DisplayName="Enabling Typemock" sap2010:WorkflowViewState.IdRef="Sequence_16">
           <tt:TypeMockRegister AutoDeployDir="[TypemockAutoDeployDir]" Company="[TypemockCompany]" sap2010:WorkflowViewState.IdRef="TypeMockRegister_1" License="[TypemockLicense]" />
           <tt:TypeMockStart EvaluationFolder="{x:Null}" Link="{x:Null}" LogLevel="{x:Null}" LogPath="{x:Null}" ProfilerLaunchedFirst="{x:Null}" Target="{x:Null}" Verbosity="{x:Null}" Version="{x:Null}" AutoDeployDir="[TypemockAutoDeployDir]" sap2010:WorkflowViewState.IdRef="TypeMockStart_1" />
        </Sequence>
      </If.Then>
    </If>

    All I can assume is that this is due to some assembly mismatch between the Typemock DLLs referenced by the XAML build process template and those on my development PC.

    The fix for now is to do the editing in a text editor, or at least to check the file before it is checked in to make sure the property has not been added.

    MSDeploy Parameters.xml can only replace web.config values if a value is already set

    If you are using a parameters.xml file to set values with MSDeploy, I have just found a gotcha: you need some value in the web.config file, not just an empty XML tag, or the replacement fails. So to explain…

    I had the following parameters.xml file, and used Release Management to replace the __TAG__ values at deployment time.

    <parameters>
      <parameter name="Domain" description="Please enter the name of the domain" defaultValue="__Domain__" tags="">
        <parameterEntry kind="XmlFile" scope="\\web.config$" match="/configuration/applicationSettings/Web.Properties.Settings/setting[@name='Domain']/value/text()" />
      </parameter>

      <parameter name="AdminGroups" description="Please enter the name of the admin group" defaultValue="__AdminGroups__" tags="">
        <parameterEntry kind="XmlFile" scope="\\web.config$" match="/configuration/applicationSettings/Web.Properties.Settings/setting[@name='AdminGroups']/value/text()" />
      </parameter>
    </parameters>

    If my web.config file (in the MSDeploy package to be transformed) was set to

    <applicationSettings>
      <Web.Properties.Settings>
        <setting name="Domain" serializeAs="String">
          <value>Blackmarble</value>
        </setting>
        <setting name="AdminGroups" serializeAs="String">
          <value />
        </setting>
      </Web.Properties.Settings>
    </applicationSettings>

    or

    <applicationSettings>
      <Web.Properties.Settings>
        <setting name="Domain" serializeAs="String">
          <value>Blackmarble</value>
        </setting>
        <setting name="AdminGroups" serializeAs="String">
          <value></value>
        </setting>
      </Web.Properties.Settings>
    </applicationSettings>

    only the Domain setting was set.

    To get both set I had to have a value for each property, even though they were being reset at deployment.

    <applicationSettings>
      <Web.Properties.Settings>
        <setting name="Domain" serializeAs="String">
          <value>DummyDomain</value>
        </setting>
        <setting name="AdminGroups" serializeAs="String">
          <value>DummyAdmins</value>
        </setting>
      </Web.Properties.Settings>
    </applicationSettings>

    Never seen that one before.

    Cross platform build with TFS 2015 vNext Build

    I have been preparing for my Techorama session on TFS vNext build. One of the demos I am planning is to use the Node-based cross-platform build agent to build something on a Linux VM. It turns out this takes a few undocumented steps to get going with the CTP of TFS 2015.

    The process I followed was:

    • I installed a Mint 17 VM
    • On the VM, I installed the Node VSOAgent as detailed in the npm documentation (or I could have built it from source from GitHub to get the bleeding-edge version)
    • I created a new agent instance
               vsoagent-installer
    • I then tried to run the configuration, but hit a couple of issues
               node vsoagent

    URL error

    The first problem was that I was told the URL I provided was invalid. I had tried the URL of my local TFS 2015 CTP VM

    http://typhoontfs:8080/tfs

    The issue is that the vsoagent was initially developed for VSO and is expecting a fully qualified URL. To get around this, as I was on a local test network, I just added an entry to my Linux OS’s local /etc/hosts file, so I could call

    http://typhoontfs.local:8080/tfs

    This URL was accepted.
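
    The hosts file entry itself is just the usual format (the IP address here is made up for illustration):

    192.168.0.10    typhoontfs.local typhoontfs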

    401 Permissions Error

    Once the URL was accepted, the next problem was a 401 permission error.

    Now, the release notes make it clear that you have to enable alternate credentials on your VSO account, but this is not an option for on-premises TFS.

    The solution is easy though (at least for a trial system): in IIS Manager on your TFS server, enable basic authentication for the TFS application. You are warned this is not secure, as passwords are sent in clear text, so it is probably not something to do on a production system.

    [Image: enabling Basic Authentication for the TFS application in IIS Manager]

    Once this was set, the configuration of the client worked and I had a vsoagent running on my Linux client.

    I could then go into the web-based TFS Build.vNext interface and create a new empty build, adding the build tool I required, in my case Ant, using an Ant script stored with my project code in my TFS-based Git repo.

    When I ran the build it errored, as expected: my Linux VM was missing all the build tools. This was fixed by running apt-get on the VM to install ant, ant-optional and the Java JDK. Obviously you need to install whatever tools your build needs.
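
    On a Debian-based distribution such as Mint that is something along these lines (the exact JDK package name is an assumption, as it varies by distribution):

    sudo apt-get install ant ant-optional default-jdk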

    So I have a working demo: my Java application builds and the resultant files are dropped back into TFS. OK, the configuration is not perfect at present, but from the GitHub site you can see the client is being rapidly iterated.

    Build arguments are not returned for a build definition via the TFS API if they are left as default values

    We use my TFS Alerts DSL to perform tasks when our TFS builds complete. One of these is a job to increment the minor version number and reset the version start date (the value that generates the third field, days since a point in time) if a build is set to the quality 'release'; e.g. 1.2.99.[unique build id], where 99 is the day count since some past date, could change to 1.3.0.[unique build id] (see this old post on how we do this in the build process).

    I have just found a bug (feature?) in the way the DSL does this. It turns out that if you did not set the major and minor version argument values in the build editor (you just left them at their default values of 1 and 0) then the DSL fails, as defaulted arguments are not returned in the property set of the build definition we process in the DSL. You would expect to get a 0 back, but you in fact get a null.
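
    So any code reading the process parameters has to guard against missing keys. In PowerShell terms the defensive pattern is something like this (a sketch only; it assumes the TFS 2013 client API's WorkflowHelpers class is loaded, and the parameter names are whatever your build template uses):

    # defaulted arguments are simply absent from the deserialized set, so fall back explicitly
    $params = [Microsoft.TeamFoundation.Build.Workflow.WorkflowHelpers]::DeserializeProcessParameters($buildDefinition.ProcessParameters)
    $majorVersion = if ($params.ContainsKey("MajorVersion")) { $params["MajorVersion"] } else { 1 }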

    So if you have a build where you expect the version to increment and it does not, check the build definition and make sure the MajorVersion, MinorVersion (or whatever you called them) and version start date arguments are all shown in bold, i.e. explicitly set

     

    [Image: the build definition editor, with the version arguments shown in bold]

    I have updated the code on CodePlex so that it gives a better error message in the event log if a problem occurs with a build.