Where do I put my testing effort?

In the past I have blogged on the subject of using advanced unit test mocking tools to ‘mock the unmockable’. It is an interesting question to revisit: how important today are unit tests where this form of complex mocking is required?

Of late I have certainly seen a bit of a move towards more functional-style tests; still using unit test frameworks, but relying on APIs as access points, with real backend systems such as databases and web services deployed as test environments.

This practice is made far easier than in the past by cloud services such as Azure, and by tools that treat the creation of complex environments as code, such as Azure Resource Manager and Azure DevTest Labs. Both my colleague Rik Hepworth and I have posted widely on the provisioning of such systems.

However, this type of functional testing is still fairly slow: the environments have to be provisioned from scratch or spun up from saved images, and it all takes time. Hence there is still space for fast unit tests and, sometimes, usually due to limitations of legacy codebases that were not designed for testing, a need to still ‘mock the unmockable’.

This is where tools like Typemock Isolator and Microsoft Fakes are still needed. 

It has to be said, both are premium products: you need the top Enterprise SKU of Visual Studio to get Fakes, or a license for Typemock Isolator, but when you need their functionality they are the only option. Whether that be to mock out a product like SharePoint for faster development cycles, or to provide a solid base on which to write unit tests for a legacy codebase prior to refactoring.

As I have said before, for me Typemock Isolator easily has the edge over Microsoft Fakes; the syntax is so much easier to use. Hence, it is great to see Typemock Isolator being further extended, with updated versions for C++ and now Linux.

So, in answer to my own question, testing is a layered process. Where you put your investment is going to be down to your system’s needs. It is true, I think, that we are all going to invest a bit more in functional testing on ‘cheap to build and run’ cloud test labs. But you can’t beat the speed of tools like Typemock for those particularly nasty legacy codebases where it is hard to create a copy of the environment in a modern test lab.

Making sure that when you use VSTS build numbers to version Android packages they can be uploaded to the Google Play Store

Background

I have a VSTS build extension that can apply a VSTS generated build number to Android APK packages. This takes a VSTS build number and generates, and applies, the Version Name (a string) and Version Code (an integer) to the APK file manifest.

The default parameters mean that this task assumes (using a regular expression) that the VSTS build number has at least three fields, major.minor.patch e.g. 1.2.3, and uses the 1.2 as the Version Name and the 3 as the Version Code.

Now, it is important to note that the Version Code must be an integer between 1 and 2100000000, and for the Google Play Store it must increase with each published version.

So maybe these default parameter values for this task are not the best options?

The problem with the way we use the task

When we use the Android Manifest Versioning task for our tuServ Android packages we use different parameter values, but we recently found these values still cause a problem.

Our VSTS build generates build numbers with four parts: $(Major).$(Minor).$(Year:yy)$(DayOfYear).$(rev:r)

  • $(Major) – set as a VSTS variable e.g. 1
  • $(Minor) – set as a VSTS variable e.g. 2
  • $(Year:yy)$(DayOfYear) – the two-digit year and the day of the year e.g. 18101
  • $(rev:r) – the build count for the build definition for the day e.g. 1

So we end up with build numbers in the form 1.2.18101.1

The Android Manifest Versioning task is set in the build to generate

  • the Version Name {1}.{2}.{3}.{4} – 1.2.18101.1
  • the Version Code {1}{2}{3}{4} – 12181011

The problem is that if we do more than 9 builds in a day, which is likely given our continuous integration process, and release one of the later builds to the Google Play Store, then the next day any build with a revision below 10 cannot be released to the store, as its Version Code is lower than the previously published one e.g.

  • day 1 the published build is 1.2.18101.11, so the Version Code is 121810111
  • day 2 the published build is 1.2.18102.1, so the Version Code is 12181021

So the second Version Code is an order of magnitude smaller; hence the package cannot be published.

The Solution

The answer in the end was straightforward, and was found by one of our engineers, Peter (@sarkimedes): change the final block of the VSTS build number to $(rev:rrr), as detailed in the VSTS documentation, thus zero-padding the revision from .1 to .001. This allows up to 1,000 builds per day before the Version Code ordering problem occurs. Obviously, if you think you might do more than 1,000 internal builds in a day you can zero-pad with as many digits as you want.

So using the new build version number

  • day 1 the published build is 1.2.18101.011, so the Version Code is 1218101011
  • day 2 the published build is 1.2.18102.001, so the Version Code is 1218102001
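The effect of the padding is easy to check with a quick sketch (Python here purely to illustrate the arithmetic; the real logic lives in the build task):

```python
import re

def version_code(build_number: str) -> int:
    """Concatenate every numeric field of the build number,
    as the task's {1}{2}{3}{4} Version Code format does."""
    return int("".join(re.findall(r"\d+", build_number)))

# With $(rev:r), day 1's revision 11 gives a 9-digit code but day 2's
# revision 1 gives only 8 digits, so day 2 sorts LOWER than day 1.
assert version_code("1.2.18101.11") > version_code("1.2.18102.1")

# With $(rev:rrr), every code has the same width, so day 2 sorts higher
# and stays well under the Google Play limit of 2100000000.
assert version_code("1.2.18102.001") > version_code("1.2.18101.011")
assert version_code("1.2.18102.001") < 2100000000
```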

So a nice fix without any need to alter the Android Manifest Versioning task’s code. However, changing the default Version Code parameter to {1}{2}{3} is probably advisable.

    Major new release of my VSTS Cross Platform Extension to build Release Notes

    Today I have released a major new release, V2, of my VSTS Cross Platform Extension to build release notes. This new version is all down to the efforts of Greg Pakes who has completely re-written the task to use newer VSTS APIs.

    A minor issue is that this re-write has introduced a couple of breaking changes, as detailed below and on the project wiki.

    • OAuth script access has to be enabled on the agent running the task


    • There are minor changes in the template format, but for the better, as it means both TFVC and Git based releases now use a common template format. Samples can be found in the project repo.

    Because of the breaking changes, we made the decision to release both V1 and V2 of the task in the same extension package, so as not to force anyone to update unless they wish to. A technique I have not tried before, but it seems to work well in testing.

    Hope people still find the task of use, and thanks again to Greg for all the work on the extension.

    Backing up your TFVC and Git Source from VSTS

    The Issue

    Azure is a highly resilient service, and VSTS has excellent SLAs. However, a question that is often asked is ‘How do I backup my VSTS instance?’.

    The simple answer is you don’t. Microsoft handle keeping the instance up, patched and serviceable. Hence, there is no built-in means for you to get a local copy of all your source code, work items or CI/CD definitions, though there have been requests for such a service.

    This can be an issue for some organisations, particularly for source control, where there can be a need to have a way to keep a private copy of source code for escrow, DR or similar purposes.

    A Solution

    To address this issue I decided to write a PowerShell script to download all the Git and TFVC source in a VSTS instance. The following tactics were used:

    • Using the REST API to download each project’s TFVC code as a ZIP file. The use of a ZIP file avoids any long file path issues, a common problem with larger Visual Studio solutions with complex names
    • Clone each Git repo. I could download single Git branches as ZIP files via the API as per TFVC, but this seemed a poorer solution given how important branches are in Git.
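    These two tactics can be sketched as follows. The real script is PowerShell; this Python outline, including the exact REST routes, is illustrative only, so check the current VSTS REST API reference before relying on it:

```python
import base64
import json
import subprocess
import urllib.request

def auth_header(pat: str) -> str:
    """VSTS accepts a PAT as the password half of a Basic auth
    header with an empty user name."""
    token = base64.b64encode(f":{pat}".encode()).decode()
    return f"Basic {token}"

def backup(instance: str, pat: str, target: str) -> None:
    base = f"https://{instance}.visualstudio.com/_apis"
    headers = {"Authorization": auth_header(pat)}

    # Tactic 1: download the TFVC source as a single ZIP file, which
    # avoids long-file-path issues. (Route and parameters are assumed.)
    req = urllib.request.Request(
        f"{base}/tfvc/items?scopePath=$/&download=true", headers=headers)
    with urllib.request.urlopen(req) as resp:
        with open(f"{target}/tfvc.zip", "wb") as out:
            out.write(resp.read())

    # Tactic 2: list every Git repo and mirror-clone it, so all
    # branches are preserved rather than a single zipped branch.
    req = urllib.request.Request(f"{base}/git/repositories",
                                 headers=headers)
    with urllib.request.urlopen(req) as resp:
        repos = json.load(resp)["value"]
    for repo in repos:
        subprocess.run(
            ["git", "clone", "--mirror", repo["remoteUrl"],
             f"{target}/{repo['name']}.git"],
            check=True)
```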

    So that the process could be run on a regular basis I designed it to be run within a VSTS build. Again, here I had choices:

    • To pass in a Personal Access Token (PAT) to provide the access rights to read the source to be backed up. This has the advantage that the script can be run inside or outside of a VSTS build. It also means that a single VSTS build can back up other VSTS instances, as long as it has a suitable PAT for access
    • To use the System Token already available to the build agent. This makes the script very neat, with no PATs to expire, but means it only works within a VSTS build, and can only back up the VSTS instance the build is running on.

    I chose the former, so a single scheduled build could back up all my VSTS instances by running the script a number of times with different parameters.

    To use this script you just pass in:

    • The name of the instance to backup
    • A valid PAT for the named instance
    • The path to back up to, which can be a UNC share, assuming the build agent has rights to the location

    What’s Next

    The obvious next step is to convert the PowerShell script into a VSTS extension; at that point it would make sense to make it optional whether to use a provided PAT or the System Access Token.

    Also, I could add code to allow a number of cycling backups to be kept e.g. keep the last 3 backups.

    These are maybe something for the future, but it does not really seem a good return on investment at this time to package up a working script as an extension just for a single VSTS instance.

    Oops, I made that test VSTS extension public by mistake, what do I do now?

    I recently, whilst changing a CI/CD release pipeline, updated what was previously a private version of a VSTS extension in the VSTS Marketplace with a version of the VSIX package set to be public.

    Note, in my CI/CD process I have a private and public version of each extension (set of tasks), the former is used for functional testing within the CD process, the latter is the one everyone can see.

    So, this meant I had two public versions of the same extension – confusing.

    Turns out you can’t change a public extension back to be private, either via the UI or by uploading a corrected VSIX. Also you can’t delete any public extension that has ever been downloaded, and my previously private one had been downloaded once, by me for testing.

    So my only option was to un-publish the previously private extension so only the correct version was visible in the public marketplace.

    This meant I had to also alter my CI/CD process to change the extensionID of my private extension so I could publish a new private version of the extension.

    Luckily, as all the GUIDs for the tasks within the extension did not change, my pipeline still worked once I had installed the new version of the mispublished extension in my test VSTS instance.

    The only downside is that I am left with an un-published ‘dead’ version listed in my private view of the marketplace. This is not a problem, it just does not look ‘neat and tidy’.

    Using VSTS Gates to help improve my deployment pipeline of VSTS Extensions to the Visual Studio Marketplace

    My existing VSTS CI/CD process has a problem: the deployment of a VSTS extension, from the moment it is uploaded to when its tasks are available to a build agent, is not instantaneous. The process can potentially take a few minutes to roll out. The delay this causes is a perfect candidate for using VSTS Release Gates; using a gate to make sure the expected version of a task is available to an agent before running the next stage of the CD pipeline e.g. waiting after deploying a private build of an extension before trying to run functional tests.

    The problem is how to achieve this with the current VSTS gate options?

    What did not work

    My first thought was to use the Invoke HTTP REST API gate, calling the VSTS API https://<your vsts instance name>.visualstudio.com/_apis/distributedtask/tasks/<GUID of Task>. This API call returns a block of JSON containing details about the deployed task visible to the specified VSTS instance. In theory you can parse this data with a JSONPATH query in the gate’s success criteria parameter to make sure the correct version of the task is deployed e.g. eq($.value[?(@.name == "BuildRetensionTask")].contributionVersion, "1.2.3")

    However, there is a problem. At this time the Invoke HTTP REST API gate does not support the == equality operator in its success criteria field. I understand this will be addressed in the future, but the fact it is currently missing is a blocker for my current needs.

    Next I thought I could write a custom VSTS gate. These are basically ‘run on server’ tasks with a suitably crafted JSON manifest. The problem here is that this type of task does not allow any code (Node.js or PowerShell) to be run. They only have a limited capability to invoke HTTP APIs or write messages to a Service Bus, so I could not implement the code I needed to process the API response. Another dead end.

    What did work

    The answer, after a suggestion from the VSTS Release Management team at Microsoft, was to try the Azure Function gate.

    To do this I created a new Azure Function. I did this using the Azure Portal, picking the consumption billing model, C# and securing the function with a function key, basically the default options.

    I then added the C# function code (stored in GitHub) to my newly created Azure Function. This function code takes:

    • The name of the VSTS instance
    • A personal access token (PAT) to access the VSTS instance
    • The GUID of the task to check for
    • And the version to check for

    It then returns a JSON block containing true or false, based on whether the required task version can be found. If any of the parameters are invalid, an API error is returned.

    By passing in this set of arguments my idea was that a single Azure Function could be used to check for the deployment of all my tasks.
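    The core check the function performs can be sketched like this. The real implementation is the C# code in GitHub; this Python sketch, including the shape of the task list returned by the API and the placeholder GUID, is an assumption for illustration:

```python
def task_version_deployed(tasks, task_guid, version):
    """Return the gate response body: Deployed is true only when the
    given task GUID is visible at the requested major.minor.patch."""
    for task in tasks:
        v = task.get("version", {})
        deployed = f"{v.get('major')}.{v.get('minor')}.{v.get('patch')}"
        if task.get("id") == task_guid and deployed == version:
            return {"Deployed": True}
    return {"Deployed": False}

# Example: one task visible to the instance, at version 1.2.3
tasks = [{"id": "00000000-0000-0000-0000-000000000001",
          "version": {"major": 1, "minor": 2, "patch": 3}}]
```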

    Note: Now I do realise I could also create a release pipeline for the Azure Function, but I chose to just create it via the Azure Portal. I know this is not best practice, but this was just a proof of concept. As usual the danger here is that this proof of concept might be one of those that is too useful and lives forever!

    To use the Azure Function

    Using the Azure function is simple

      • Add an Azure Function gate to a VSTS release
      • Set the URL parameter for the Azure Function. This value can be found from the Azure Portal. Note that you don’t need the Function Code query parameter in the URL as this is provided with the next gate parameter. I chose to use a variable group variable for this parameter so it was easy to reuse between many CD pipelines
      • Set the Function Key parameter for the Azure Function, again you get this from the Azure Portal. This time I used a secure variable group variable
      • Set the Method parameter to POST
      • Set the Header content type as JSON
    {
          "Content-Type": "application/json"
    }
      • Set the Body to contain the details of the VSTS instance and Task to check. This time I used a mixture of variable group variables, release specific variables (the GUID) and environment build/release variables. The key here is I got the version from the primary release artifact $(BUILD.BUILDNUMBER) so the correct version of the tasks is tested for automatically
    {
         "instance": "$(instance)",
         "pat": "$(pat)",
         "taskguid": "$(taskGuid)",
         "version": "$(BUILD.BUILDNUMBER)"
    }
      • Finally, set the Advanced/Completion Event to ApiResponse with the success criteria of
      eq(root['Deployed'], 'true')

    Once this was done I was able to use the Azure Function as a VSTS gate as required.


    Summary

    So I now have a gate that makes sure that for a given VSTS instance a task of a given version has been deployed.

    If you need this functionality all you need to do is create your own Azure Function instance, drop in my code and configure the VSTS gate appropriately.

    When the equality == operator becomes available for JSONPATH in the REST API gate I might consider swapping back to a basic REST call, as it is less complex to set up, but we shall see. The Azure Function model does appear to work well.

    Fixing a ‘git-lfs filter-process: git-lfs: command not found’ error in Visual Studio 2017

    I am currently looking at the best way to migrate a large legacy codebase from TFVC to Git. There are a number of ways I could do this, as I have posted about before. Obviously, I have ruled out anything that tries to migrate history as ‘that way hell lies’; if people need to see history they will be able to look at the archived TFVC instance. TFVC and Git are just too different in the way they work to make history migrations worth the effort in my opinion.

    So as part of this migration and re-structuring I am looking at using Git Submodules and Git Large File System (LFS) to help divide the monolithic code base into front-end, back-end and shared service modules; using LFS to manage large media files used in integration test cases.

    From the PowerShell command prompt, using Git 2.16.2, all my trials were successful, I could achieve what I wanted. However when I tried accessing my trial repos using Visual Studio 2017 I saw issues

    Submodules

    Firstly there are known limitations with Git submodules in Visual Studio Team Explorer. At this time you can clone a repo that has submodules, but you cannot manage the relationships between repos or commit to a submodule from inside Visual Studio.

    This is unlike the Git command line, which allows actions to span a parent and child repo with a single command; Git just works it out if you pass the right parameters.

    There is a request on UserVoice to add these functions to Visual Studio, vote for it if you think it is important, I have.

    Large File System

    The big problem I had was with LFS, which has been supported in Visual Studio since 2015.2.

    Again, from the command line operations were seamless; I just installed Git 2.16.2 via Chocolatey and got LFS support without installing anything else. So I was able to enable LFS support on a repo

    git lfs install          # set up the LFS hooks in the local repo
    git lfs track '*.bin'    # tell LFS to manage .bin files
    git add .gitattributes   # stage the tracking rule for commit

    and manage standard and large (.bin) files without any problems

    However, when I tried to make use of this cloned LFS-enabled repo from inside Visual Studio, by staging a new large .bin file, I got an error ‘git-lfs filter-process: git-lfs: command not found’


    On reading around this error it suggested that the separate git-lfs package needed to be installed. I did this, making sure that the path to the git-lfs.exe (C:\Program Files\Git LFS) was in my path, but I still had the problem.

    This is where I got stuck and hence needed to get some help from the Microsoft Visual Studio support team.

    After a good deal of tracing they spotted the problem. The path to git-lfs.exe was at the end of my rather long PATH list. It seems Visual Studio was truncating this list of paths, so, as the error suggested, Visual Studio could not find git-lfs.exe.

    It is unclear to me whether the command prompt just did not suffer this PATH length issue, or was using a different means to resolve the LFS feature. It should be noted that from the command line the LFS commands were available as soon as I installed Git 2.16.2; I did not have to add the Git LFS package.

    So the fix was simple, move the entry for ‘C:\Program Files\Git LFS’ to the start of my PATH list and everything worked in Visual Studio.

    It should be noted that I really need to look at whether I need everything in my somewhat long PATH list. It’s been too long since I re-paved my laptop; there are a lot of strange bits installed.

    Thanks again to the Visual Studio Support team for getting me unblocked on this.

    Building private VSTS build agents using the Microsoft Packer based agent image creation model

    Background

    Having automated builds is essential to any good development process. Irrespective of the build engine in use – VSTS, Jenkins, etc. – you need to have a means to create the VMs that run the builds.

    You can of course do this by hand, but in many ways you are just extending the old ‘it works on my PC’ problem – the developer can build it only on their own PC – i.e. it is hard to be sure what versions of the tools are in use. This is made worse by the fact that it is too tempting for someone to remote onto the build VM to update some SDK or tool without anyone else’s knowledge.

    In an endeavour to address this problem we need a means to create our build VMs in a consistent, standardised manner i.e. a configuration-as-code model.

    At Black Marble we have been using Lability to build our lab environments, and there is no reason we could not use the same system to create our VSTS build agent VMs:

    • Creating base VHD disk images with patched copies of Windows installed (which we update on a regular basis)
    • Using Lability to provision all the required tools – this would need to include all the associated reboots these installers require. Note that rebooting and restarting at the correct place, for non-DSC based resources, is not Lability’s strongest feature i.e. you have to do all the work in custom code

    However, there is an alternative. Microsoft have made their Packer based method of creating VSTS Azure hosted agents available on GitHub. Hence, it made sense to me to base our build agent creation system on this standardised image; thus allowing easier migration of builds between private and hosted build agent pools whether in the cloud or on premises, due to the fact they had the same tools installed.

    The Basic Process

    To enable this way of working I forked the Microsoft repo and modified the Packer JSON configuration file to build Hyper-V based images as opposed to Azure ones. I aimed to make as few changes as possible, to ease the process of keeping my forked repo in sync with future changes to the Microsoft standard build agent; in effect replacing the builders section of the Packer configuration and leaving the provisioners unaltered.
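    As a sketch, the swapped-in builder section ended up along these lines. The values here are placeholders rather than the actual fork’s settings, and some field names vary between Packer versions, so treat this as illustrative only:

```json
{
  "builders": [
    {
      "type": "hyperv-iso",
      "iso_url": "./iso/WindowsServer2016.iso",
      "iso_checksum_type": "sha1",
      "iso_checksum": "<checksum of your ISO>",
      "generation": 2,
      "secondary_iso_images": ["./iso/answer.iso"],
      "communicator": "winrm",
      "winrm_username": "Administrator",
      "winrm_password": "<local admin password>",
      "disk_size": 200000,
      "switch_name": "Default Switch"
    }
  ]
}
```

    The provisioners section from the Microsoft repo is left untouched, which is what keeps the image in step with the hosted agents.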

    So, in doing this I learnt a few things.

    Which ISO to use?

    Make sure you use a current operating system ISO. First, it saves time as it is already patched; but more importantly the provisioner scripts in the Microsoft configuration assume certain Windows features are available for installation (specifically Containers with Docker support) that were not present on the 2016 RTM ISO.

    Building an Answer.ISO

    In the sample I found for the Packer hyperv-iso builder, the AutoUnattended.XML answers file is provided on an ISO (as opposed to a virtual floppy, as floppies are not supported on Gen2 Hyper-V VMs). This means that when you edit the answers file you need to rebuild the ISO prior to running Packer.

    The sample script to do this has lines to ‘Enable UEFI and disable Non-UEFI’; I found that if these lines of PowerShell were run, the answers file on the ISO was ignored, so I had to comment them out. It seems an AutoUnattended.XML answers file edited in VSCode is in the correct encoding by default.

    I also found that if I ran the PowerShell script to create the ISO from within VSCode’s integrated terminal, the ISO builder mkisofs.exe failed with an internal error. However, it worked fine from a default PowerShell window.

    Installing the .NET 3.5 Feature

    When a provisioner tried to install the .NET 3.5 feature using the command

    Install-WindowsFeature -Name NET-Framework-Features -IncludeAllSubFeature

    it failed.

    It seems this is a bug in Windows Server 2016, and the workaround is to specify the -Source location on the install media:

    Install-WindowsFeature -Name NET-Framework-Features -IncludeAllSubFeature -Source "D:\sources\sxs"

    Once the script was modified in this manner it ran without error.

    Well how long does it take?

    The Packer process is slow; Microsoft say that for an Azure VM it can take over 8 hours. A Hyper-V VM is no faster.

    I also found the process a bit brittle. I had to restart the process a good few times as….

    • I ran out of disk space (not surprisingly, this broke the process)
    • The new VM did not get a DHCP-assigned IP address when connected to the network via the Hyper-V Default Switch. A reboot of my Hyper-V host PC fixed this.
    • Packer decided the VM had rebooted when it had not – usually due to a slow install of some feature or network issues
    • My Laptop went to sleep and caused one of the above problems

    So I have a SysPrep’d VHD – now what do I do with it?

    At this point I have options of what to do with this new exported HyperV image. I could manually create build agent VM instances.

    However, it appeals to me to use this new VHD as a base image for Lability, replacing our default ‘empty patched operating system’ image creation system, so I have a nice consistent way to provision VMs onto our Hyper-V servers.

    Yorkshire Venue for the Global DevOps BootCamp 2018

    I am really pleased that we at Black Marble are again the first UK location to announce that we are hosting an event, on June 16th, as part of the 2018 Global DevOps BootCamp. As the event’s site says…

    “The Global DevOps Bootcamp takes place once a year on venues all over the world. The more people joining in, the better it gets! The Global DevOps Bootcamp is a free one-day event hosted by local passionate DevOps communities around the globe and centrally organized by Xpirit & Solidify and sponsored by Microsoft. This event is all about DevOps on the Microsoft Stack. It shows the latest DevOps trends and insights in modern technologies. It is an amazing combination between getting your hands dirty and sharing experience and knowledge in VSTS, Azure, DevOps with other community members.” 

    Global DevOps Bootcamp 2018 @ Black Marble

    For more details of the planned event content see the central Global DevOps Bootcamp site, and to register for the Black Marble hosted venue click here.