BM-Bloggers

The blogs of Black Marble staff

Running Test Suites within a network Isolated Lab Management environment when using TFS vNext build and release tooling

Updated 27 Sep 2016: Added solutions to known issues

Background

As I have posted many times, we make use of TFS Lab Management to provide network isolated dev/test environments. Going forward I see us moving to Azure DevTest Labs and/or Azure Stack with ARM templates, but that isn't going to help me today, especially as I have already made the investment in setting up Lab Management environments and they are ready to use.

One change we are making now is a move from the old TFS Release Management (2013 generation) to the new VSTS and TFS 2015.2 vNext Release tools. This means I need to be able to trigger automated tests on VMs within Lab Management network isolated environments from a step inside my new build/release process. I have posted on how to do this with the older generation Release Management tools; it turns out it is in some ways a little simpler with the newer tooling, with no need to fiddle with shadow accounts et al.

My Setup

[Diagram: my network isolated Lab Management environment and its link back to the corporate domain]

Constraints

The constraints are these:

• I need to be able to trigger tests on the Client VM in the network isolated lab environment. These tests are all defined in automated test suites within Microsoft Test Manager.
• The network isolated lab already has a TFS Test Agent deployed on all the VMs in the environment, linked back to the TFS Test Controller on my corporate domain. These agents are automatically installed and managed, and handle the ‘magic’ for the network isolation – we can’t fiddle with these without breaking the labs.
• The new build/release tools assume that you will auto deploy a 2015 generation Test Agent via a build task as part of the build/release process. This is a new test agent install, so it removes any already installed Test Agent – we don’t want this as it breaks the existing agent/network isolation.
• So my only option is to trigger the tests using TCM (as we did in the past) from some machine in the system. In the past (with the old tools) this had to be within the isolated network environment, due to the limitations put in place by the use of shadow accounts.
• However, TCM (as shipped with VS 2015) does not ‘understand’ vNext builds, so it cannot find them by definition name/number – we have to find builds by their drop location, and I think this needs to be a UNC share, not a drop back onto the TFS server. So using TCM.EXE (and any wrapper scripts) is probably not going to deliver what I want, i.e. a test run associated with a vNext build and/or release (the sketch below shows the style of invocation involved).
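For illustration, an old-style TCM invocation takes a form something like the following sketch (all the IDs and the drop path here are hypothetical placeholders; note the /builddir switch identifying the build by its UNC drop location rather than by a definition name):

tcm run /create /title:"Smoke Tests" /planid:123 /suiteid:456 /configid:7 /collection:http://tfsserver.domain.com:8080/tfs/defaultcollection /teamproject:"My Project" /settingsname:"Test Settings" /testenvironment:"Lab V.2.0" /builddir:\\server\drops\MyBuild_20160927.1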
My Solution

The solution I adopted was to write a PowerShell script that performs the same function as the TCMEXEC.PS1 script that used to be run within the network isolated Lab Environment by the older Release Management products.

The difference is that the old script shelled out to run TCM.EXE, whereas my new version makes calls to the new TFS REST API (and unfortunately also to the older C# API, as some features, notably those for Lab Management services, are not exposed via REST). This script can be run from anywhere. I chose to run it on the TFS vNext build agent, as this is easiest and this machine already had Visual Studio installed, so the TFS C# API was available.

    You can find this script on my VSTSPowerShell GitHub Repo.

The usage of the script is:

TCMReplacement.ps1
    -Collectionuri http://tfsserver.domain.com:8080/tfs/defaultcollection/
    -Teamproject "My Project"
    -testplanname "My test plan"
    -testsuitename "Automated tests"
    -configurationname "Windows 8"
    -buildid 12345
    -environmentName "Lab V.2.0"
    -testsettingsname "Test Setting"
    -testrunname "Smoke Tests"
    -testcontroller "mytestcontroller.domain.com"
    -releaseUri "vstfs:///ReleaseManagement/Release/167"
    -releaseenvironmenturi "vstfs:///ReleaseManagement/Environment/247"

    Note

• The last two parameters are optional; all the others are required. If the last two are not used the test results will not be associated with a release.
• There is also a pollinginterval parameter which defaults to 10 seconds. The script starts a test run then polls on this interval to see if it has completed (see the sketch after these notes).
• If there are any failed tests then the script calls Write-Error, so the TFS build process sees this as a failed step.
• In some ways I think this script is an improvement over the TCMEXEC script. The old one needed you to know the IDs for many of the settings (loads of poking around in Microsoft Test Manager to find them), whereas I allow the common names of settings to be passed in, which I then use to look up the required values via the APIs (this is where I needed to use the older C# API, as I could not find a way to get the Configuration ID, Environment ID or Test Settings ID via REST).
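To give a flavour of the polling and failure handling described above, here is a minimal sketch of the pattern (this assumes $collectionUri, $teamProject and a $runId captured when the run was created, plus the URI shape of the v1.0 test runs REST API; the real script does rather more error handling):

$pollingInterval = 10   # seconds, the script's default
do {
    Start-Sleep -Seconds $pollingInterval
    $run = Invoke-RestMethod -Uri "$collectionUri/$teamProject/_apis/test/runs/${runId}?api-version=1.0" -UseDefaultCredentials
} while ($run.state -ne "Completed" -and $run.state -ne "Aborted")

# Fail the build step if any tests did not pass
if ($run.totalTests -ne $run.passedTests) {
    Write-Error "Test run '$($run.name)' contains failed tests"
}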

There is nothing stopping you running this script from the command line, but I think it is more likely to be made part of a release pipeline using the ‘PowerShell on local machine’ task in the build system. When used this way you can get many of the parameters from environment variables, so the command arguments become something like the following (and of course you can make all the string values build variables too if you want):

    -Collectionuri $(SYSTEM.TEAMFOUNDATIONCOLLECTIONURI)
    -Teamproject $(SYSTEM.TEAMPROJECT)
    -testplanname "My test plan"
    -testsuitename "Automated tests"
    -configurationname "Windows 8"
    -buildid $(BUILD.BUILDID)
    -environmentName "Lab V.2.0"
    -testsettingsname "Test Settings"
    -testrunname "Smoke Tests"
    -testcontroller "mytestcontroller.domain.com"
    -releaseUri $(RELEASE.RELEASEURI)
    -releaseenvironmenturi $(RELEASE.ENVIRONMENTURI)

Obviously this script is potentially a good candidate for a TFS build/release task, but as per my usual practice I will make sure I am happy with its operation before wrapping it up into an extension.

    Known Issues

• If you run the script from the command line targeting a completed build and release, the tests run and are shown in the release report as well as on the test tab, as we would expect.


However, if you trigger the test run from within a release pipeline, the test runs OK and you can see the results in the test tab (and MTM), but they are not associated with the release. My guess is that this is because the release has not completed when the data update is made. I am investigating how to address this issue.
• Previously I reported a known issue that the test results were associated with the build, but not the release. It turns out this was because the AD account the build/release agent was running as was missing rights on the TFS server. To fix the problem I made sure the account was granted the appropriate rights on the TFS server.

Once this was done all the test results appeared where they should.

So hopefully you will find this a useful tool if you are using network isolated environments and TFS build.

If I add a custom field to a VSTS work item type what is its name?

The process customisation options in VSTS are now fairly extensive. You can add fields, states and custom items, making VSTS a ‘very possible’ option for many more people.

    As well as the obvious uses of this customisation such as storing more data or matching your required process, customisation can also aid in migrating work items into VSTS from other VSTS instances, or on-premises TFS.

Whether using TFS Integration (now with no support – beware) or Martin Hinshelwood’s vsts-data-bulk-editor (an active open source solution, so probably a much better choice for most people), as mentioned in my past post you need to add a custom field on the target VSTS server to contain the original work item ID, commonly called ReflectedWorkItemId.

This can be added in VSTS as detailed in MSDN.

Note: In the case of Martin’s tool the field needs to be a string, as it is going to contain a URL, not the simple integer you might expect.

The small issue you have when you add a custom field is that the UI does not make it clear what the full name of the field is. You need to remember that it is in the form <name of custom process>.<field name> e.g. MigrateScrum.ReflectedWorkItemId.
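It is this full name you need whenever you address the field programmatically. As a rough illustration, setting the field on work item 42 via the REST API might look something like this sketch (the MigrateScrum process name and the value are placeholders, and authentication is shown using default credentials – on VSTS you would more likely use a personal access token):

$patch = '[ { "op": "add", "path": "/fields/MigrateScrum.ReflectedWorkItemId", "value": "http://oldtfs/tfs/DefaultCollection/_workitems/edit/1234" } ]'
Invoke-RestMethod -Uri "$collectionUri/_apis/wit/workitems/42?api-version=1.0" -Method Patch -Body $patch -ContentType "application/json-patch+json" -UseDefaultCredentials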

If you forget this you can always download the work item definition using the TFS Power Tools to have a look (yes, this even works on VSTS).


    Offline Domain Join with Direct Access

    I was recently in the position that I needed to rebuild a workstation at a remote location, but wanted to end up with it joined to the domain, and able to install software via the SCCM Software Center. Enter Offline Domain Join (djoin.exe)!

    Offline Domain Join allows the creation of a machine account and the establishment of a trust relationship between a computer running Windows and a Domain. As part of the process, group policy information can also be transferred to the machine that will be joined to the domain.

Assuming Direct Access is available, the appropriate group policy information for Direct Access can be transferred as part of the process. This should then allow the remote machine to establish a connection to the domain, after which all remaining group policy information can be transferred, the Configuration Manager client installed, and so on.

    Information on ‘djoin.exe’ including examples for use can be found at https://technet.microsoft.com/en-us/library/offline-domain-join-djoin-step-by-step

    My scenario was:

    • The machine account already existed in the correct OU and was a member of the appropriate groups for Direct Access (the machine name had already been used; this was a rebuild) and therefore I needed to use the ‘/reuse’ parameter.
    • The only group policy information I wanted to transfer to the remote machine was for Direct Access. I anticipated that all other group policy information would be transferred automatically once a Direct Access connection had been established.

In my case, the command I used on the provisioning server was:

djoin /provision /domain domain.com /machine MyWorkstation /savefile MyWorkstation-blob.txt /reuse /policynames "Direct Access Client"

    The resultant blob should be transferred securely – take note of what the TechNet page says on the matter:

    The base64-encoded metadata blob that is created by the provisioning command contains very sensitive data. It should be treated just as securely as a plaintext password. The blob contains the machine account password and other information about the domain, including the domain name, the name of a domain controller, the security ID (SID) of the domain, and so on. If the blob is being transported physically or over the network, care must be taken to transport it securely.

    On the remote workstation, the command I used was:

    djoin /requestODJ /loadfile MyWorkstation-blob.txt /windowspath %SystemRoot% /localos

At this point you’re prompted to reboot the workstation. Once the reboot was complete, I left the machine for a few minutes to allow it to establish a connection, then signed in. Everything worked as anticipated: I was signed in as a domain user and a Direct Access connection was established. Following a group policy update, the Configuration Manager client was transferred and installed, and a short time later the Software Center became available and I could add software made available from SCCM.

    DPM Protection for Windows 10 Anniversary Edition

    Attempting to add protection to a Windows 10 Anniversary Edition workstation recently failed with the DPM server showing the workstation as ‘unavailable’ when looking at the ‘Production Servers’ list in the console.

    It appears that the upgrade to Anniversary Edition removes a file that the DPM agent relies on, ‘sisbkup.dll’, and that as a consequence the services cannot start on the protected workstation.

The resolution is to copy the ‘sisbkup.dll’ file from C:\Windows\System32 on an older version of Windows 10 into C:\Windows\System32 on the Anniversary Update machine, and then retry the connection from DPM.
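If you want to script the fix, a minimal sketch run from an elevated PowerShell prompt on the Anniversary Update machine (OlderWin10PC is a placeholder for any machine still running an older Windows 10 build, and this assumes access to its admin share):

Copy-Item -Path \\OlderWin10PC\c$\Windows\System32\sisbkup.dll -Destination C:\Windows\System32\sisbkup.dll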


    Typemock have released official VSTS build extension

Typemock have just released an official VSTS build extension to run Typemock Isolator based tests. Given there is now an official extension I have decided to deprecate mine; it is still available in the Marketplace, but I would recommend using the official one.

    The new Typemock extension includes two tasks

    SmartRunner Task

The SmartRunner is a unit test runner that can run NUnit and MSTest based tests. It handles the deployment of Typemock Isolator. SmartRunner can run on both shared and on-premises agents.

    Typemock with VSTests

This task acts as a wrapper to enable Typemock Isolator and then run your tests via VSTest. This task can only be used with on-premises agents, as the build agent needs to be running with admin privileges.

    Fix for my Docker image create dates being 8 hours in the past

I have been having a look at Docker for Windows recently, and have been experiencing a problem where, when I create a new image, the created date/time (as shown by docker images) is 8 hours in the past.


Turns out the problem seems to be due to putting my Windows 10 laptop into sleep mode. So the process to see the problem is:

1. Create a new Docker image – the create date is correct, the current time
2. Sleep the PC
3. Wake up the PC
4. Check the create date – it is now 8 hours in the past

Now the create date is not an issue in itself, but the fact that the time within the Docker images is also off by 8 hours can be, especially when trying to connect to cloud based services. I needed to sort it out.

Turns out the fix is simple: you need to stop and restart the Docker process (restarting the PC has the same effect, as this restarts the Docker process). Why the Docker process ends up 8 hours off, irrespective of how long the PC is slept, I don’t know. I am just happy to have a quick fix.
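If you prefer to script the restart rather than use the Docker tray icon, a minimal sketch from an elevated PowerShell prompt (the service name com.docker.service is an assumption – check what Get-Service *docker* reports on your machine):

Restart-Service -Name com.docker.service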

    I am speaking at Microsoft UK TechDays Online event on Azure DevTest Labs

The registration link for Microsoft UK TechDays Online is now live. This is a 5 day event broadcast live from the Microsoft Campus in Reading. You will be able to view the sessions live at https://channel9.msdn.com/

    The themes for each day are:

    • Monday, 12 September: Explore the world of Data Platform and BOTs
    • Tuesday, 13 September: DevOps in practice
    • Wednesday, 14 September: A day at the Office!
    • Thursday, 15 September: The inside track on Azure and UK Datacenter
    • Friday, 16 September: Find out more about Artificial Intelligence

    I am doing a session on the Thursday on Azure DevTest Labs.

Hope you find time to watch some or all of the events. For more details see the registration link.

Why have I got a ‘.NETCore50’ and a ‘netcore50’ folder in my NuGet package?

I recently posted on how we were versioning our NuGet packages as part of a release pipeline. In testing we noticed that the packages being produced by this process had an extra folder inside them.


We expected there to be a netcore50 folder, but not a .NETCore50 folder. Strangely, if we built the package locally we only saw the expected netcore50 folder. The addition of this folder did not appear to be causing any problems, but I wanted to find out why it had appeared and remove it, as it was not needed.

Turns out the issue was the version of NuGet.exe: the automatically installed version on the on-prem TFS build agent was 3.2, while my local copy was 3.4. As soon as I upgraded the build box’s nuget.exe to version 3.4 the problem went away.
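For reference, one way to update a nuget.exe in place is NuGet’s self-update switch (a sketch – the location of the agent’s copy of nuget.exe will vary by installation):

# Run from the folder holding the build agent's copy of nuget.exe
.\nuget.exe update -self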