Getting Remote Desktop Connection Manager 2.7 working sanely with mixed high DPI screens

Updated 3 July 2018 – A colleague, Andy Davidson, suggested mRemoteNG as an alternative tool to address this issue. mRemoteNG also has the advantage that it supports most major remoting technologies, not just RDP, so I am giving it a try for a while.

This is one of those posts I write mostly for myself so I don’t forget how I did something. It is all based on answers on SuperUser.com, so I can claim no credit.

I have a SurfaceBook (first generation) and when I am in the office it is linked, via a dock, to an external monitor with a different, lower DPI. If I use Remote Desktop (MSTSC) as built into Windows 10, I can drag sessions between the two monitors and the DPI shift is handled OK. However, if I use my preferred tool, Remote Desktop Connection Manager 2.7 (as it allows me to store all my commonly used RDP settings), I am in DPI hell. I either get huge fonts or microscopic ones. This is bad whether I am working on the single high DPI laptop screen or with an external screen.

As the SuperUser.com post states, the answer is to change the compatibility settings for the manager: right-click the file “C:\Program Files (x86)\Microsoft\Remote Desktop Connection Manager\RDCMan.exe”, select Properties > Compatibility > Change high DPI settings, and uncheck the high DPI scaling override.

image

Once this was done, I had readable resolutions on all screens.
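
If you would rather script the change than click through the dialog, the same compatibility setting lives under the AppCompatFlags registry key. A minimal PowerShell sketch, assuming the ‘DPIUNAWARE’ layer flag matches the dialog option above (check what the dialog actually writes on your machine before relying on it):

    # Apply a high DPI compatibility layer for RDCMan for the current user.
    # Assumption: 'DPIUNAWARE' corresponds to the dialog setting described above.
    $exe = 'C:\Program Files (x86)\Microsoft\Remote Desktop Connection Manager\RDCMan.exe'
    $key = 'HKCU:\Software\Microsoft\Windows NT\CurrentVersion\AppCompatFlags\Layers'
    if (-not (Test-Path $key)) { New-Item -Path $key -Force | Out-Null }
    Set-ItemProperty -Path $key -Name $exe -Value '~ DPIUNAWARE'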

Why did I not do a better search months ago?

A workaround for the error ‘TF14061: The workspace ws_1_18;Project Collection Build Service does not exist’ when mapping a TFVC workspace

Whilst writing some training material for VSTS I hit a problem creating a TFVC workspace. I was using VS2017, linking a TFVC Repo to a local folder. I was connecting to the VSTS instance using an MSA.

In Team Explorer, when I came to do a ‘Map & Get’ to map the source locations, I got a ‘TF14061: The workspace ws_1_18;Project Collection Build Service does not exist’ error.

image

A strange error, for which I could see no obvious reason. It turns out the workaround was just to press the ‘Advanced’ link/button and accept the defaults.

Still a few spaces left at the Yorkshire Global DevOps BootCamp Venue hosted at Black Marble

There are still a few spaces left at the Yorkshire Global DevOps BootCamp venue, hosted at Black Marble.

Come and learn about all things cool in DevOps, including:

  • Video keynote by Microsoft
  • Local keynote: Breaking down the Monolith
  • Hackathon/Hands-On DevOps challenges. The hands-on part will be based on a common application where we try to solve as many challenges as possible, including ideas like
    • How to containerize an existing application
    • How to add telemetry (app insights) to the application and gather hypothesis information
    • How to use telemetry to monitor availability
    • How to use feature toggles to move an application into production without disrupting end users
    • How to use release gates
    • How to make DB schema changes
    • How to use Blue Green Deployments

And there is free lunch too!

To register, click here.

Where do I put my testing effort?

In the past I have blogged on the subject of using advanced unit test mocking tools to ‘mock the unmockable’. It is an interesting question to revisit: how important today are unit tests where this form of complex mocking is required?

Of late I have certainly seen a bit of a move towards using more functional-style tests: still using unit test frameworks, but relying on APIs as access points, with real backend systems such as databases and web services deployed as test environments.

This practice is made far easier than in the past by cloud services such as Azure and by tools that treat the creation of complex environments as code, such as Azure Resource Manager and Azure DevTest Labs. My colleague Rik Hepworth and I have both posted widely on the provisioning of such systems.

However, this type of functional testing is still fairly slow: the environments have to be provisioned from scratch, or spun up from saved images, and it all takes time. Hence, there is still space for fast unit tests, and sometimes, usually due to the limitations of legacy codebases that were not designed for testing, there is still a need to ‘mock the un-mockable’.

This is where tools like Typemock Isolator and Microsoft Fakes are still needed. 

It has to be said, both are premium products: you need the top Enterprise SKU of Visual Studio to get Fakes, or a license for Typemock Isolator, but when you need their functionality they are the only option, whether that be to mock out a product like SharePoint for faster development cycles, or to provide a solid base on which to write unit tests for a legacy codebase prior to refactoring.

As I have said before, for me Typemock Isolator easily has the edge over Microsoft Fakes; the syntax is so much easier to use. Hence, it is great to see Typemock Isolator being further extended, with updated versions for C++ and now Linux.

So, in answer to my own question, testing is a layered process. Where you put your investment is going to be down to your system’s needs. It is true, I think we are all going to invest a bit more in functional testing on ‘cheap to build and run’ cloud test labs. But you can’t beat the speed of tools like Typemock for those particularly nasty legacy codebases where it is hard to create a copy of the environment in a modern test lab.

Making sure that when you use VSTS build numbers to version Android packages they can still be uploaded to the Google Play Store

Background

I have a VSTS build extension that can apply a VSTS-generated build number to Android APK packages. It takes a VSTS build number and generates and applies the Version Name (a string) and Version Code (an integer) to the APK file manifest.

The default parameters mean the task assumes (using a regular expression) that the VSTS build number contains at least three fields, major.minor.patch, e.g. 1.2.3, and uses 1.2 as the Version Name and 3 as the Version Code.

Now, it is important to note that the Version Code must be an integer between 1 and 2100000000, and, for the Google Play Store, it must increase with each published version.

So maybe these default parameter values for this task are not the best options?
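
To make that default behaviour concrete, here is a minimal PowerShell sketch of the equivalent logic (the task’s actual regular expression may differ; this is just an illustration):

    # Illustrative only: split a build number using the task's default assumption
    # of at least major.minor.patch fields.
    $buildNumber = '1.2.3'
    if ($buildNumber -match '^(\d+)\.(\d+)\.(\d+)') {
        $versionName = "$($Matches[1]).$($Matches[2])"   # '1.2'
        $versionCode = [int]$Matches[3]                  # 3
    }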

The problem with the way we use the task

When we use the Android Manifest Versioning task for our tuServ Android packages we use different parameter values, but we recently found that these values still cause a problem.

Our VSTS build generates build numbers with four parts: $(Major).$(Minor).$(Year:yy)$(DayOfYear).$(rev:r)

  • $(Major) – set as a VSTS variable e.g. 1
  • $(Minor) – set as a VSTS variable e.g. 2
  • $(Year:yy)$(DayOfYear) – the two-digit year followed by the day of the year, e.g. 18101
  • $(rev:r) – the build count for the build definition for the day e.g. 1

So we end up with build numbers in the form 1.2.18101.1

The Android versioning task is set in the build to make:

  • the Version Name {1}.{2}.{3}.{4} – 1.2.18101.1
  • the Version Code {1}{2}{3}{4} – 12181011

The problem is that if we do more than nine builds in a day, which is likely given our continuous integration process, and release one of the later builds to the Google Play store, then the next day any build with a single-digit revision cannot be released to the store, as its Version Code is lower than the previously published one, e.g.

  • day 1: the published build is 1.2.18101.11, so the Version Code is 121810111
  • day 2: the published build is 1.2.18102.1, so the Version Code is 12181021

So the second Version Code is 10x smaller, hence the package cannot be published.

The Solution

The answer in the end was straightforward, and was found by one of our engineers, Peter (@sarkimedes): change the final block of the VSTS build number to $(rev:rrr), as detailed in the VSTS documentation, thus zero-padding the revision from .1 to .001. This allows up to 999 builds per day before the Version Code ordering problem recurs. Obviously, if you think you might do more than 999 internal builds in a day, you can zero-pad with as many digits as you want.
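
A quick PowerShell illustration of why the zero padding fixes the ordering; the helper function here is mine, purely for demonstration:

    # Illustrative only: build a Version Code by concatenating the build number
    # parts, i.e. the {1}{2}{3}{4} pattern used in the task configuration above.
    function Get-VersionCode([string]$buildNumber) {
        [long]($buildNumber.Split('.') -join '')
    }
    Get-VersionCode '1.2.18101.011'   # 1218101011
    Get-VersionCode '1.2.18102.001'   # 1218102001 - larger than the day before, so publishable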

So, using the new build number format:

  • day 1: the published build is 1.2.18101.011, so the Version Code is 1218101011
  • day 2: the published build is 1.2.18102.001, so the Version Code is 1218102001

So a nice fix without any need to alter the Android Manifest Versioning task’s code. However, changing the default Version Code parameter to {1}{2}{3} is probably advisable.

Major new release of my VSTS Cross Platform Extension to build Release Notes

Today I have released a major new version, V2, of my VSTS Cross Platform Extension to build release notes. This new version is all down to the efforts of Greg Pakes, who has completely re-written the task to use newer VSTS APIs.

A minor issue is that this re-write has introduced a couple of breaking changes, as detailed below and on the project wiki:

  • OAuth script access has to be enabled on the agent running the task

image

  • There are minor changes in the template format, but for the good, as it means both TFVC and Git based releases now use a common template format. Samples can be found in the project repo.

Because of the breaking changes, we made the decision to release both V1 and V2 of the task in the same extension package, so no one is forced to update unless they wish to. This is a technique I have not tried before, but it seems to work well in testing.

I hope people still find the task of use, and thanks again to Greg for all the work on the extension.

Backing up your TFVC and Git Source from VSTS

The Issue

Azure is a highly resilient service, and VSTS has excellent SLAs. However, a question that is often asked is ‘How do I back up my VSTS instance?’.

The simple answer is you don’t. Microsoft handles keeping the instance up, patched and serviceable. Hence, there is no built-in means for you to get a local copy of all your source code, work items or CI/CD definitions, though there have been requests for such a service.

This can be an issue for some organisations, particularly for source control, where there can be a need to keep a private copy of source code for escrow, DR or similar purposes.

A Solution

To address this issue I decided to write a PowerShell script to download all the Git and TFVC source in a VSTS instance. The following tactics were used (a minimal sketch follows this list):

  • Use the REST API to download each project’s TFVC code as a ZIP file. The use of a ZIP file avoids any long file path issues, a common problem with larger Visual Studio solutions with complex names.
  • Clone each Git repo. I could have downloaded single Git branches as ZIP files via the API, as per TFVC, but this seemed a poorer solution given how important branches are in Git.
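
A minimal sketch of those two tactics, assuming a PAT with read rights; the exact REST API versions, and the ZIP download support on the TFVC items endpoint, are assumptions to verify against your own instance:

    # Illustrative sketch only - verify the API calls against your VSTS instance
    $instance = 'yourinstance'               # hypothetical instance name
    $pat      = '<a PAT with read rights>'
    $backup   = 'C:\Backups'
    $headers  = @{ Authorization = 'Basic ' + [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(":$pat")) }

    # TFVC: download each team project's source as a ZIP file
    # (a Git-only project will simply return an error here)
    $projects = (Invoke-RestMethod "https://$instance.visualstudio.com/_apis/projects?api-version=1.0" -Headers $headers).value
    foreach ($p in $projects) {
        $uri = "https://$instance.visualstudio.com/_apis/tfvc/items?scopePath=`$/$($p.name)&`$format=zip&api-version=1.0"
        Invoke-WebRequest $uri -Headers $headers -OutFile (Join-Path $backup "$($p.name).zip")
    }

    # Git: clone every repo, embedding the PAT in the clone URL
    $repos = (Invoke-RestMethod "https://$instance.visualstudio.com/_apis/git/repositories?api-version=1.0" -Headers $headers).value
    foreach ($r in $repos) {
        git clone ($r.remoteUrl -replace '^https://', "https://pat:$pat@") (Join-Path $backup $r.name)
    }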

So that the process ran on a regular basis, I designed it to be run within a VSTS build. Again, here I had choices (both sketched below):

  • Pass in a Personal Access Token (PAT) to provide the rights to read the source being backed up. This has the advantage that the script can be run inside or outside of a VSTS build. It also means that a single VSTS build can back up other VSTS instances, as long as it has a suitable PAT for access.
  • Use the System Token already available to the build agent. This makes the script very neat, with no PATs to manage or expire, but it only works within a VSTS build, and can only back up the VSTS instance the build is running on.
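
For reference, this is roughly how the two authentication options differ in a build script; the variable names are mine, and the System Token route also needs the ‘Allow scripts to access OAuth token’ option enabled on the build phase:

    # Option 1: a PAT passed in as a script parameter, sent as a Basic auth header
    $headers = @{ Authorization = 'Basic ' + [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(":$pat")) }

    # Option 2: the agent's System Token, sent as a Bearer header
    # (only works inside a build, with 'Allow scripts to access OAuth token' enabled)
    $headers = @{ Authorization = "Bearer $env:SYSTEM_ACCESSTOKEN" }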

I chose the former, so a single scheduled build could back up all my VSTS instances by running the script a number of times with different parameters.

To use this script you just pass in:

  • The name of the instance to back up
  • A valid PAT for the named instance
  • The path to back up to, which can be a UNC share, assuming the build agent has rights to that location
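
So a single scheduled build just runs the script once per instance, along these lines (the script and parameter names are illustrative, not necessarily those of the actual script):

    # Hypothetical invocations - one per VSTS instance to back up
    .\Backup-VstsSource.ps1 -Instance 'instance1' -Pat $env:Instance1Pat -BackupPath '\\server\vstsbackups\instance1'
    .\Backup-VstsSource.ps1 -Instance 'instance2' -Pat $env:Instance2Pat -BackupPath '\\server\vstsbackups\instance2'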

What’s Next

The obvious next step is to convert the PowerShell script into a VSTS extension; at that point it would make sense to make it optional whether to use a provided PAT or the System Access Token.

I could also add code to allow a number of cycling backups to be kept, e.g. keep the last three backups.

These are maybe something for the future, but at this time they don’t really seem a good return on investment; it is hard to justify packaging up a working script as an extension just for a single VSTS instance.

Oops, I made that test VSTS extension public by mistake, what do I do now?

I recently, whilst changing a CI/CD release pipeline, updated what was previously a private version of a VSTS extension in the VSTS Marketplace with a version of the VSIX package set to be public.

Note: in my CI/CD process I have a private and a public version of each extension (set of tasks); the former is used for functional testing within the CD process, the latter is the one everyone can see.

So this meant I had two public versions of the same extension. Confusing.

It turns out you can’t change a public extension back to private, either via the UI or by uploading a corrected VSIX. Also, you can’t delete any public extension that has ever been downloaded, and my previously private one had been downloaded once, by me, for testing.

So my only option was to un-publish the previously private extension, so that only the correct version was visible in the public marketplace.

This meant I also had to alter my CI/CD process to change the extensionID of my private extension, so that I could publish a new private version of the extension.

Luckily, as none of the GUIDs for the tasks within the extension had changed, my pipeline still worked once I had installed the new version of the previously mis-published extension in my test VSTS instance.

The only downside is that I am left with an un-published ‘dead’ version listed in my private view of the marketplace. This is not a problem, it just does not look ‘neat and tidy’.

Using VSTS Gates to help improve my deployment pipeline of VSTS Extensions to the Visual Studio Marketplace

My existing VSTS CI/CD process has a problem: the deployment of a VSTS extension, from the moment it is uploaded to when its tasks are available to a build agent, is not instantaneous. The rollout can potentially take a few minutes. The delay this causes is a perfect candidate for using VSTS Release Gates: using a gate to make sure the expected version of a task is available to an agent before running the next stage of the CD pipeline, e.g. waiting after deploying a private build of an extension before trying to run functional tests.

The problem is how to achieve this with the current VSTS gate options.

What did not work

My first thought was to use the Invoke HTTP REST API gate, calling the VSTS API https://<your vsts instance name>.visualstudio.com/_apis/distributedtask/tasks/<GUID of Task>. This API call returns a block of JSON containing details about the deployed task visible to the specified VSTS instance. In theory you can parse this data with a JSONPATH query in the gate’s success criteria parameter to make sure the correct version of the task is deployed, e.g. eq($.value[?(@.name == “BuildRetensionTask”)].contributionVersion, “1.2.3”)

However, there is a problem. At this time the Invoke HTTP REST API gate does not support the == equality operator in its success criteria field. I understand this will be addressed in the future, but the fact that it is currently missing blocked my current needs.

Next I thought I could write a custom VSTS gate. These are basically ‘run on server’ tasks with a suitably crafted JSON manifest. The problem here is that this type of task does not allow any code (Node.js or PowerShell) to be run; they only have a limited capability to invoke HTTP APIs or write messages to a service bus, so I could not implement the logic I needed to process the API response. Another dead end.

What did work

The answer, after a suggestion from the VSTS Release Management team at Microsoft, was to try the Azure Function gate.

To do this I created a new Azure Function. I did this using the Azure Portal, picking the consumption billing model, C# and securing the function with a function key; basically the default options.

I then added the C# function code (stored in GitHub) to my newly created Azure Function. This function code takes:

  • The name of the VSTS instance
  • A personal access token (PAT) to access the VSTS instance
  • The GUID of the task to check for
  • The version to check for

It then returns a JSON block with true or false based on whether the required task version can be found. If any of the parameters are invalid, an API error is returned.
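
The real implementation is the C# code linked above, but the core check is simple enough to sketch in PowerShell; the ‘Deployed’ property name matches the success criteria used later, the rest is illustrative:

    # Illustrative sketch of the check the Azure Function performs.
    # $instance, $pat, $taskguid and $version are the four inputs described above.
    $headers  = @{ Authorization = 'Basic ' + [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(":$pat")) }
    $response = Invoke-RestMethod "https://$instance.visualstudio.com/_apis/distributedtask/tasks/$taskguid" -Headers $headers
    $deployed = @($response.value | Where-Object {
        "$($_.version.major).$($_.version.minor).$($_.version.patch)" -eq $version
    }).Count -gt 0
    @{ Deployed = $deployed } | ConvertTo-Json   # the JSON block the gate evaluates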

By passing in this set of arguments, my idea was that a single Azure Function could be used to check for the deployment of all my tasks.

Note: I do realise I could also have created a release pipeline for the Azure Function itself, but I chose to just create it via the Azure Portal. I know this is not best practice, but this was just a proof of concept. As usual, the danger here is that this proof of concept might be one of those that is too useful and lives forever!

To use the Azure Function

Using the Azure Function is simple:

  • Add an Azure Function gate to a VSTS release
  • Set the URL parameter for the Azure Function. This value can be found in the Azure Portal. Note that you don’t need the Function Code query parameter in the URL, as this is provided by the next gate parameter. I chose to use a variable group variable for this parameter so it was easy to reuse between many CD pipelines
  • Set the Function Key parameter for the Azure Function; again you get this from the Azure Portal. This time I used a secure variable group variable
  • Set the Method parameter to POST
  • Set the Header content type to JSON:

    {
        "Content-Type": "application/json"
    }

  • Set the Body to contain the details of the VSTS instance and task to check. This time I used a mixture of variable group variables, release-specific variables (the GUID) and environment build/release variables. The key here is that I got the version from the primary release artifact, $(BUILD.BUILDNUMBER), so the correct version of the task is tested for automatically:

    {
        "instance": "$(instance)",
        "pat": "$(pat)",
        "taskguid": "$(taskGuid)",
        "version": "$(BUILD.BUILDNUMBER)"
    }

  • Finally, set the Advanced/Completion Event to ApiResponse, with the success criteria of

    eq(root['Deployed'], 'true')
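
As a quick sanity check, the function can also be smoke-tested directly from PowerShell; the function URL below is hypothetical, so substitute the URL and key shown for your function in the Azure Portal:

    # Hypothetical URL and key - take the real values from the Azure Portal
    $body = @{ instance = 'yourinstance'; pat = '<PAT>'; taskguid = '<task GUID>'; version = '1.2.3' } | ConvertTo-Json
    Invoke-RestMethod 'https://yourfuncapp.azurewebsites.net/api/CheckTaskVersion?code=<function key>' `
        -Method Post -ContentType 'application/json' -Body $body   # expect: Deployed = true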

Once this was done I was able to use the Azure Function as a VSTS gate as required.

image

Summary

So I now have a gate that makes sure that, for a given VSTS instance, a task of a given version has been deployed.

If you need this functionality, all you need to do is create your own Azure Function instance, drop in my code and configure the VSTS gate appropriately.

When the == equality operator becomes available for JSONPATH in the REST API gate I might consider swapping back to a basic REST call, as it is less complex to set up, but we shall see; the Azure Function model does appear to work well.