But it works on my PC!

The random thoughts of Richard Fennell on technology and software development

Running Test Suites within a network Isolated Lab Management environment when using TFS vNext build and release tooling

Updated 27 Sep 2016: Added solutions to known issues

Background

As I have posted many times, we make use of TFS Lab Management to provide network isolated dev/test environments. Going forward I see us moving to Azure DevTest Labs and/or Azure Stack with ARM templates, but that isn’t going to help me today, especially when I have already made the investment in setting up Lab Management environments and they are ready to use.

One change we are making now is a move from the old TFS Release Management (2013 generation) to the new VSTS and TFS 2015.2 vNext release tools. This means I need to be able to trigger automated tests on VMs within Lab Management network isolated environments from a step inside my new build/release process. I have posted on how to do this with the older generation Release Management tools; it turns out to be in some ways a little simpler with the newer tooling – no need to fiddle with shadow accounts and the like.

My Setup

[image: diagram of my Lab Management setup]

Constraints

The constraints are these:

  • I need to be able to trigger tests on the client VM in the network isolated lab environment. These tests are all defined in automated test suites within Microsoft Test Manager.
  • The network isolated lab already has a TFS Test Agent deployed on each of the VMs in the environment, linked back to the TFS Test Controller on my corporate domain. These agents are automatically installed and managed, and are handling the ‘magic’ for the network isolation – we can’t fiddle with these without breaking the labs.
  • The new build/release tools assume that you will auto deploy a 2015 generation Test Agent via a build task as part of the build/release process. This is a fresh Test Agent install, so it removes any already installed Test Agent – we don’t want this as it breaks the existing agent/network isolation.
  • So my only option is to trigger the tests using TCM (as we did in the past) from some machine in the system. In the past (with the old tools) this had to be within the isolated network environment due to the limitations put in place by the use of shadow accounts.
  • However, TCM (as shipped with VS 2015) does not ‘understand’ vNext builds, so it cannot find them by definition name/number – we have to find builds by their drop location, and I think this needs to be a UNC share, not a drop back onto the TFS server. So using TCM.EXE (and any wrapper scripts) probably is not going to deliver what I want, i.e. a test run associated with a vNext build and/or release (a sketch of such an invocation follows below).
My Solution

    The solution I adopted was to write a PowerShell script that performs the same function as the TCMEXEC.PS1 script that used to be run within the network isolated lab environment by the older Release Management products.

    The difference is that the old script shelled out to run TCM.EXE; my new version makes calls to the new TFS REST API (and unfortunately also to the older C# API, as some features, notably those for Lab Management services, are not exposed via REST). This script can be run from anywhere. I chose to run it on the TFS vNext build agent, as this is easiest and this machine already had Visual Studio installed, so the TFS C# API was available.

    You can find this script on my VSTSPowerShell GitHub Repo.

    The usage of the script is:

    TCMReplacement.ps1
      -Collectionuri http://tfsserver.domain.com:8080/tfs/defaultcollection/
      -Teamproject "My Project"
      -testplanname "My test plan"
      -testsuitename "Automated tests"
      -configurationname "Windows 8"
      -buildid 12345
      -environmentName "Lab V.2.0"
      -testsettingsname "Test Setting"
      -testrunname "Smoke Tests"
      -testcontroller "mytestcontroller.domain.com"
      -releaseUri "vstfs:///ReleaseManagement/Release/167"
      -releaseenvironmenturi "vstfs:///ReleaseManagement/Environment/247"

    Note

  • The last two parameters are optional; all the others are required. If the last two are not used the test results will not be associated with a release.
  • There is also a pollinginterval parameter which defaults to 10 seconds. The script starts a test run then polls on this interval to see if it has completed (in outline, the loop shown below).
  • If there are any failed tests then the script calls Write-Error, so the TFS build process sees this as a failed step.
  • In some ways I think this script is an improvement over the TCMEXEC script. The old one needed you to know the IDs for many of the settings (loads of poking around in Microsoft Test Manager to find them); I allow the common names of settings to be passed in, which I then use to look up the required values via the APIs (this is where I needed to use the older C# API, as I could not find a way to get the Configuration ID, Environment ID or Test Settings ID via REST).

    There is nothing stopping you running this script from the command line, but I think you are more likely to make it part of a release pipeline using the ‘PowerShell on local machine’ task in the build system. When used this way you can get many of the parameters from environment variables, so the command arguments become something like the following (and of course you can make all the string values build variables too if you want):

     

      -Collectionuri $(SYSTEM.TEAMFOUNDATIONCOLLECTIONURI)
      -Teamproject $(SYSTEM.TEAMPROJECT)
      -testplanname "My test plan"
      -testsuitename "Automated tests"
      -configurationname "Windows 8"
      -buildid $(BUILD.BUILDID)
      -environmentName "Lab V.2.0"
      -testsettingsname "Test Settings"
      -testrunname "Smoke Tests"
      -testcontroller "mytestcontroller.domain.com"
      -releaseUri $(RELEASE.RELEASEURI)
      -releaseenvironmenturi $(RELEASE.ENVIRONMENTURI)

     

    Obviously this script is potentially a good candidate for a TFS build/release task, but as per my usual practice I will make sure I am happy with its operation before wrapping it up into an extension.

    Known Issues

  • If you run the script from the command line, targeting a completed build and release, the tests run and are shown in the release report as well as on the test tab, as we would expect.

    [image: test results showing in the release report]

    However, if you trigger the test run from within a release pipeline, the tests run OK and you can see the results in the test tab (and MTM), but they are not associated with the release. My guess is that this is because the release has not completed when the data update is made. I am investigating how to address this issue.
  • Previously I reported a known issue where the test results were associated with the build, but not the release. It turns out this was because the AD account the build/release agent was running as was missing rights on the TFS server. To fix the problem I made sure the account was configured as follows:

    Once this was done all the test results appeared where they should.

    So hopefully you will find this a useful tool if you are using network isolated environments and TFS vNext builds.

    If I add a custom field to a VSTS work item type what is its name?

    The process customisation options in VSTS are now fairly extensive. You can add fields, states and custom items, making VSTS a ‘very possible’ option for many more people.

    As well as the obvious uses of this customisation such as storing more data or matching your required process, customisation can also aid in migrating work items into VSTS from other VSTS instances, or on-premises TFS.

    Whether using TFS Integration (now with no support – beware) or Martin Hinshelwood’s vsts-data-bulk-editor (an active open source solution, so probably a much better choice for most people), as mentioned in my past post you need to add a custom field on the target VSTS server to contain the original work item ID. This is commonly called ReflectedWorkItemId.

    This can be added in VSTS as detailed on MSDN.

     

    [image: adding a custom field in the VSTS process customisation UI]

    Note: in the case of Martin’s tool the field needs to be a string, as it is going to contain a URL, not the simple integer you might expect.

    The small issue you have when you add a custom field is that this UI does not make it clear what the full name of the field is. You need to remember that it is in the form <name of custom process>.<field name>, e.g. MigrateScrum.ReflectedWorkItemId.

    If you forget this you can always download the work item definition using the TFS Power Tools to have a look (yes this even works on VSTS).

    [image: the exported work item definition showing the full field name]

    Fix for my Docker image create dates being 8 hours in the past

    I have been having a look at Docker for Windows recently, and have been experiencing a problem where, when I create a new image, the created date/time (as shown with docker images) is 8 hours in the past.

    [image: docker images output with create dates 8 hours in the past]

    Turns out the problem seems to be due to putting my Windows 10 laptop into sleep mode. So the process to see the problem is:

    1. Create a new Docker image – the create date is correct, the current time
    2. Sleep the PC
    3. Wake up the PC
    4. Check the create date – it is now 8 hours in the past

    Now the create date is not an issue in itself, but the fact that the time within the Docker images is also off by 8 hours can be, especially when trying to connect to cloud based services. I needed to sort it out.

    Turns out the fix is simple: you need to stop and restart the Docker process (restarting the PC has the same effect, as this restarts the Docker process). Why the Docker process ends up 8 hours off, irrespective of the time the PC is slept, I don’t know. I am just happy to have a quick fix.

    Experiences versioning related sets of NuGet packages within a VSTS build

    Background

    We are currently packaging up a set of UX libraries as NuGet packages to go on our internal NuGet server. The assemblies that make up the core of this framework are all in a single Visual Studio solution, however it makes sense to distribute them as a set of NuGet packages as you might not need all the parts in a given project. Hence we have a package structure as follows…

    • BM.UX.Common
    • BM.UX.Controls
    • BM.UX.Behaviours
    • etc…

    There has been much thought on the versioning strategy for these packages. We did consider versioning each of these fundamental packages independently, but decided it was not worth the effort; keeping their versions in sync was reasonable, i.e. the packages have the same version number and are released as a set.

    Now this might not be the case for future ‘extension’ packages, but it is an OK assumption for now, especially as it makes the development cycle quicker/easier. This framework is young and rapidly changing; there are often changes in a control that need associated changes in the common assembly. It is hence good that a developer does not have to check in a change on the common package before they can make an associated change to the control package whilst debugging a control prior to it being released.

    However, this all meant it was important to make sure the package dependencies and versions are set correctly.

    Builds

    We are using Git for this project (though this process is just as relevant for TFVC) with a development branch and a master branch. Each branch has its own CI triggered build.

    • Development branch build …
      • Builds the solution
      • Runs Unit tests
      • Does SonarQube analysis
      • DOES NOT store any built artifacts
      • [Is used to validate Pull requests]
    • Master branch build …
      • Versions the code
      • Builds the solution
      • Runs Unit tests
      • Creates the NuGet Packages
      • Stores the created packages (to be picked up by a Release pipeline for publishing to our internal NuGet server)

    Versioning

    So within the master branch build we need to do some versioning. This needs to be applied to different files to make sure both the assemblies and the NuGet packages are ‘stamped’ with the build version.

    We get this version from the build number variable, $(Build.BuildNumber); we use the format $(Major).$(Minor).$(Year:yy)$(DayOfYear).$(rev:r), e.g. 1.2.16123.3

    Where

    • $(Major) and $(Minor) are build variables we manage (actually our release pipeline updates the $(Minor) on every successful release to production using a VSTS task)
    • $(Year:yy)$(DayOfYear) gives a date in the form 16123
    • and $(rev:r) is a count of builds on a given day

    We have chosen to use this number format to version both the assemblies and the NuGet packages; if you have different plans, such as semantic versioning, you will need to modify this process a bit.

    Assemblies

    The assemblies themselves are easy to version; we just need to set the correct value in their AssemblyInfo.cs or AssemblyInfo.vb files. I used my assembly versioning VSTS task to do this.

    NuGet Packages

    The packages turn out to be a bit more complex. Using the standard NuGet Packager task there is a checkbox to say use the build number as the version. This works just fine for versioning the actual package, adding the -Version flag to the pack command to override the value in the project’s .nuspec file. However it does not help with managing the versions of any dependent packages in the solution, and here is why. In our build …

    1. AssemblyInfo files updated
    2. The solution is built, so we have version stamped DLLs
    3. We package the first ‘common’ NuGet package (which has no dependencies on other projects in the solution) and it is versioned using the -Version setting, not the value in its .nuspec file.
    4. We package the ‘next’ NuGet package. The package picks up the version from the -Version flag (as needed), but it also needs to add a dependency on a specific version of the ‘common’ package. We pass the -IncludeReferencedProjects argument to make sure this occurs. However, NuGet.exe gets this version number from the ‘common’ package’s .nuspec file, NOT from the package actually built in the previous step. So we end up with a mismatch.

    The bottom line is we need to manage the version number in the .nuspec file of each package. So more custom VSTS extensions are needed.

    Initially I reused my Update XML file task, passing in some XPath to select the node to update, and this is a very valid approach if using semantic versioning, as it is a very flexible way to build the version number. However, in the end I added an extra task to my versioning VSTS extension for NuGet, to make my build neater and consistent with my other versioning steps.

    Once all the versioning was done I could create the packages. I ended up with a build process as shown below

    [image: the completed build process steps]

    A few notes about the NuGet packaging

    • Each project I wish to create a NuGet package for has a .nuspec file of the same ‘root’ name in the same folder as the .csproj, e.g. mypackage.csproj and mypackage.nuspec. This file contains all the descriptions, copyright details etc.
    • I am building each package explicitly. I could use wildcards in the ‘Path/Pattern to nuspec files’ property, but I chose not to at this time. This is down to the fact I don’t want to build all the solution’s packages at this point in time.
    • IMPORTANT: I am passing in the .csproj file names, not the .nuspec file names, to the ‘Path/Pattern to nuspec files’ property. I found I had to do this else the -IncludeReferencedProjects argument was ignored. The NuGet documentation seems to suggest that as long as the .csproj and .nuspec files have the same ‘root’ name you could reference the .nuspec file, but this was not my experience.
    • I still set the flag to use the build version to version the package – this is not actually needed as the .nuspec file has already been updated.
    • I pass in the -IncludeReferencedProjects argument via the advanced parameters, to pick up the project dependencies (an indicative pack command is shown after this list).

    Summary

    So now I have a reliable way to make sure my NuGet packages have consistent version numbers.

    Windows 10 Anniversary (Build 1607) messed up my virtual NAT Switch – a fix

    I use a virtual NAT switch to allow my VMs to talk to the outside world. The way I do this is documented in this post, based on the work of Thomas Maurer. The upgrade to Windows 10 Anniversary messed this up; it seemed to lose the virtual network completely, and VMs were flagged as having invalid configurations and would not start.

    I had to recreate my NATSwitch using Thomas’s revised instructions, but I did have a problem. The final ‘New-NetNat’ command failed with a ‘The parameter is incorrect.’ error. I think the issue was that there was debris left over from the old setup (it seems Microsoft removed the NATSwitch interface type). I could find no way to remove the old NATSwitch as it did not appear in the list in PowerShell and there is no UI remove option. So I just ended up disabling it via the UI and this seemed to do the trick.

    [image: disabling the old NATSwitch adaptor in the UI]

    My VMs seem happy again talking to the outside world.

    Life gets better in Visual Studio Code for PowerShell

    I have been using Visual Studio Code for PowerShell development, but got a bit behind on reading the release notes. Today I realised I can make the integrated terminal in Code a PowerShell instance.

    In File > Preferences > User Settings (settings.json) enter the following:

     

    // Place your settings in this file to overwrite the default settings
    {
         // The path of the shell that the terminal uses on Windows.
        "terminal.integrated.shell.windows": "C:\\windows\\system32\\WindowsPowerShell\\v1.0\\powershell.exe"
    }

    Now my terminal is a PowerShell instance, and you can see it has loaded my profile, so Posh-Git is working as well.

     

    [image: PowerShell running in the VS Code integrated terminal]

     

    So I think we have reached the ‘goodbye PowerShell ISE’ point.

    Gotchas when developing VSTS build extensions

    I recently posted on my development process for VSTS extensions; it has specifically been PowerShell based build ones I have been working on. During this development I have come across a few more gotchas that I think are worth mentioning.

    32/64 bit

    The VSTS build agent launches PowerShell 64 bit (as does the PowerShell command line on a dev PC), but VS Code launches it 32 bit. Whilst working on my StyleCop extension this caused me a problem, as StyleCop it seems can only load dictionaries for spell checking based rules when in a 32 bit shell. So my Pester tests for the extension worked in VS Code but failed at the command line and within a VSTS build.

    After many hours my eventual solution was to put some guard code in my scripts to force a relaunch in 32 bit mode:

    param
    (
        [string]$treatStyleCopViolationsErrorsAsWarnings,
        [string]$maximumViolationCount,
        … other params
    )

    if ($env:Processor_Architecture -ne "x86")  
    {
        # Get the command parameters
        $args = $myinvocation.BoundParameters.GetEnumerator() | ForEach-Object {$($_.Value)}
        write-warning 'Launching x86 PowerShell'
        &"$env:windir\syswow64\windowspowershell\v1.0\powershell.exe" -noprofile -executionpolicy bypass -file $myinvocation.Mycommand.path $args
        exit
    }
    write-verbose "Running in $($env:Processor_Architecture) PowerShell"

    ... rest of my code

     

    The downside of this trick is that you can’t pass return values back, as you have swapped execution process. For the type of things I am doing with VSTS tasks this is not an issue, as the important data has usually been dropped to a file which is accessible by everything, such as test results.

    For a worked sample of production code and Pester tests see my GitHub repo.

    Using Modules

    In the last post I mentioned the problem when trying to run Pester tests against scripts: the script content is executed when the script is loaded. I stupidly did not mention the obvious solution of moving all the code into functions in a PowerShell module. This makes it easier to write tests for all bar the outer wrapper .PS1 script that is called by the VSTS agent.

    Again see my GitHub repo for a good sample. Note how I have split out the files (as sketched after this list) so that I have:

    • A module that contains the functions I can test via Pester
    • A .PS1 script called by VSTS (this will run 64 bit) where I deal with interaction with VSTS/TFS
    • An inner .PS1 script that we force into 32 bit mode as needed (see above)

    Hacking around on your code

    You always get to the point, I find, when developing things like VSTS build tasks that you want to make some quick change to try something without the full development/build/release cycle. This is in effect the local development stage; it is just that build task development makes this awkward. It is hard to fully test a task locally; it needs to be deployed within a build.

    A way I have found to help here is to use a local build agent; you can then get at the deployed task and edit the .PS1 code. The important bit to note is that the task will not be redeployed, so your local ‘hack’ can be tested within a real TFS build without having to increment the task’s version and redeploy.

    Hacky but handy to know.

    You do of course need to make sure your hacked code is eventually put through your formal release process.

    And maybe something or nothings…

    I may have seen these issues, but I have not got to the bottom of them, so they may not be real issues:

    • The order parameters are declared in a task.json file seems to need to match the order they are declared in the .PS1 file’s param block. I had thought they were associated by name, not order, but in one task they all got transposed until I fixed the order.
    • The F5 dev/debug cycle is still a little awkward with VS Code; sometimes it seems to leave stuff running and you get high CPU utilisation – just restart VS Code, the old fix!
    • If using the 32 bit relaunch discussed above, Write-Verbose messages don’t always seem to show up in the VSTS log. I assume a -Verbose parameter is being lost somewhere, or it is the spawning of another PowerShell instance that causes the problem.

    So again I hope these tips help with your VSTS extension development.

    Running TSLint within SonarQube on a TFS build

    I wanted to add some level of static analysis to our TypeScript projects, TSLint being the obvious choice. To make sure it got run as part of our build/release process I wanted to wire it into our SonarQube system; this meant using the community TSLint plugin, which is still pre-release (0.6 preview at the time of writing).

    I followed the installation process for the plugin without any problems, setting the TSLint path to match our build boxes:

    C:\Users\Tfsbuild\AppData\Roaming\npm\node_modules\tslint\bin\tslint

    Within my TFS/VSTS build I added three extra tasks

    [image: the three extra build tasks]

    • An npm install step to make sure that TSLint was installed in the right folder, running the command ‘install -g tslint typescript’
    • A pre-build SonarQube MSBuild task to link to our SonarQube instance
    • A post-build SonarQube MSBuild task to complete the analysis

    Once this build was run with a simple hello world TypeScript project, I could see SonarQube attempting to do TSLint analysis but failing with the error:

    2016-07-05T11:36:02.6425918Z INFO: Sensor com.pablissimo.sonar.TsLintSensor

    2016-07-05T11:36:07.1425492Z ##[error]ERROR: TsLint Err: Invalid option for configuration: tslint.json

    2016-07-05T11:36:07.3612994Z INFO: Sensor com.pablissimo.sonar.TsLintSensor (done) | time=4765ms

    The problem was that the build task generated sonar-project.properties file did not contain the path to the tslint.json file. In the current version of the TSLint plugin this file needs to be managed manually; it is not generated from the SonarQube ruleset. Hence it is a file in the source code folder on the build box, a path the SonarQube server cannot know.

    The Begin Analysis SonarQube for MSBuild task generates the sonar-project.properties file, but only adds the entries for MSBuild (as its name suggests). It does nothing related to the TSLint plugin or any other plugins.

    The solution was to add the required setting via the advanced properties of the Begin Analysis task, i.e. point at the tslint.json file under source control, using a build variable to set the base folder:

    /d:sonar.ts.tslintconfigpath=$(build.sourcesdirectory)\tslint.json

    [image: the Begin Analysis task advanced settings]

    Once this setting was added I could see the TSLint rules being evaluated and them showing up in the SonarQube analysis.

    Another step towards improving our overall code quality through consistent analysis of technical debt.

    Scroll bars in MTM Lab Center had me foxed – User too stupid error

    I thought I had a problem with our TFS Lab Management setup: 80% of our environments had disappeared. I wondered if it was rights; was it just showing environments I owned? No, it was not that.

    Turns out the issue was a UX/scroll bar issue.

    I had MTM full screen in ‘Test Center’ mode, with a long list of test suites, so long that a scroll bar was needed, and I had scrolled to the bottom of the list.

    I then switched to ‘Lab Center’ mode. This list was shorter, not needing a scroll bar, but the pane listing the environments (which had been showing the test suites) was still scrolled to the bottom. The need for the scroll bar was unexpected and I just missed it visually (in my defence it is light grey on white). Exiting and reloading MTM had no effect; the scroll did not reset on a reload or a change of test plan/team project.

    In fact I only realised the solution to the problem when it was pointed out by another member of our team after I asked if they were experiencing issues with Labs; the same had happened to them. Between us we wasted a fair bit of time on this issue!

    Just goes to show how you can miss standard UX signals when you are not expecting them.

    Using Visual Studio Code to develop VSTS Build Tasks with PowerShell and Pester tests

    Background

    I am finding myself writing a lot of PowerShell at present, mostly for VSTS build extensions. Here I hit a problem (or is it an opportunity for choice?) as to which development environment to use:

    • PowerShell ISE is the ‘best’ experience for debugging a script, but has no source control integration – and it is on all PCs
    • Visual Studio Code has good Git support, but you need to jump through some hoops to get debugging working
    • Visual Studio PowerShell Tools are just too heavyweight; they are not even in the frame for me for this job

    So I have found myself getting the basic scripts working in the PowerShell ISE, then moving to VS Code to package up the tasks/extensions, as this means writing .JSON too – which is awkward.

    This gets worse when I want to add Pester based unit tests. I needed a better way of working, and I chose to focus on VS Code.

    The PowerShell Extension for VS Code

    Visual Studio Code now supports PowerShell. Once you have installed VS Code you can install the extension as follows:

    1. Open the command palette (Ctrl+Shift+P)
    2. Type “Extension”
    3. Select “Install Extensions”. 
    4. Once the extensions list loads, type PowerShell and press Enter.

    Once this extension is installed you get IntelliSense etc. as you would expect. So you have a good editor experience, but we still need an F5 debugging experience.

    Setting up the F5 Debugging experience

    Visual Studio Code can launch any tool to provide a debugging experience. The PowerShell extension provides the tools to get this running for PowerShell.

    I found Keith Hill provides a nice walkthrough with screenshots of the setup, but here is my quick summary:

    1. Open VS Code and load a folder structure; for me this will usually be a Git repo
    2. Assuming the PowerShell extension is installed, go to the debug page in VS Code
    3. Press the cog at the top of the page and a .vscode\launch.json file will be added to the root of the folder structure currently loaded, i.e. the root of your Git repo
    4. As Keith points out, the important line – the program, the file/task to run when you press F5 – is empty, a strange default

    [image: launch.json with the empty program entry]

    We need to edit this file to tell it what to run when we press F5. I have decided I have two options, and it depends on what I am putting in my Git repo as to which I use:

    • If we want to run the PowerShell file we have in focus in VS Code (at the moment we press F5) then we need the line

                  "program": "${file}"

    • However, I soon realised this was not that useful, as I wanted to run Pester based tests. I was usually editing a script file but wanted to run a test script, so this meant changing the file in focus prior to pressing F5. In this case I decided it was easier to hard code the program setting to a script that ran all the Pester tests in my folder structure

                   "program": "${workspaceRoot}/Extensions/Tests/runtests.ps1"

        Where my script contained the single line to run the tests in the script’s folder and below:

               Invoke-Pester $PSScriptRoot -Verbose

    Note: I have seen some comments that if you edit the launch.json file you need to reload VS Code for it to pick up the new value, but this has not been my experience.

    So now when I press F5 my Pester tests run and I can debug into them as I want, but this raises some new issues due to the requirements of VSTS build tasks.

    Changes to my build task to enable testing

    A VSTS build task is basically a PowerShell script that takes some parameters. The problem is I needed to load the .PS1 script to allow the Pester tests to execute the functions in the script file. This is done by dot-sourcing it, using the form

     

    # Load the script under test
    . "$PSScriptRoot\..\..\..\versioning\versiondacpactask\Update-DacPacVersionNumber.ps1"

    Problem 1: If any of the parameters for the script are mandatory, this dot-sourcing fails with errors over missing values. The fix is to make sure that any mandatory parameters are passed, or that they are not mandatory – I chose the latter, as I can make any task parameter ‘required’ in the task.json file instead (see the sketch below).

    Problem 2: When you dot-source the script it is executed – not what I wanted at all. I had to put a guard ‘if’ test at the top of the script to exit if the required parameters were not at least reasonable – I can’t think of a neater solution.

    # Check if we are in test mode, i.e. the script has been dot-sourced with no parameters passed
    If ($VersionNumber -eq "" -and $path -eq "") {Exit}
    # the rest of my code …..

    Once these changes were made I was able to run the Pester tests with an F5 as I wanted, using mocks to help test the program flow logic:

     

    # Load the script under test
    . "$PSScriptRoot\..\..\..\versioning\versiondacpactask\Update-DacPacVersionNumber.ps1"

    Describe "Use SQL2012 ToolPath settings" {
        Mock Test-Path  {return $false} -ParameterFilter {
                $Path -eq "C:\Program Files (x86)\Microsoft Visual Studio 14.0\Common7\IDE\Extensions\Microsoft\SQLDB\DAC\120\Microsoft.SqlServer.Dac.Extensions.dll"
            }
        Mock Test-Path  {return $true} -ParameterFilter {
                $Path -eq "C:\Program Files (x86)\Microsoft Visual Studio 12.0\Common7\IDE\Extensions\Microsoft\SQLDB\DAC\120\Microsoft.SqlServer.Dac.Extensions.dll"
            }    
     
        It "Find DLLs" {
            $path = Get-Toolpath -ToolPath ""
            $path | Should be "C:\Program Files (x86)\Microsoft Visual Studio 12.0\Common7\IDE\Extensions\Microsoft\SQLDB\DAC\120"
        }
    }

    Summary

    So I think I now have a workable solution: a good IDE with a reasonable F5 debug experience. OK, the PowerShell console in VS Code is not as rich as that in the PowerShell ISE, but I think I can live with that given the quality of the rest of the debug tools.