Experiences migrating TFS XML Team Project Templates to Inherited Team Project Templates

You have always been able to customise your Team Projects in TFS by editing a host of XML files, but it was not a pleasant experience. In VSTS a far more pleasant, web-based inherited customisation model was added, much to, I think, most administrators’ relief.

If you used the TFS DB migration service you ended up with a VSTS instance full of XML-style team projects, and you were stuck there, with no way to change them to the new inherited model. That is, until now: Microsoft have released a conversion tool to preview.

I have been trying this tool to migrate all our active XML-based team projects to Inherited equivalents, and it has worked very well for me. I have to say, though, we don’t have any hugely complex customisations, so your mileage may vary depending on how much you modified your XML-based team project templates.

However, I wanted to go further than the basic documented process.

Since moving to VSTS all our new team projects have been created using an inherited template based on Scrum. I wanted to move all our active XML-based team projects to this standardised template. This took a little extra work.

The basic process to change a team project to an Inherited Process is easy:

  1. In the instance admin page https://[instance].visualstudio.com/_settings/process pick the target template
  2. Click on the ellipsis (…) and pick ‘Change team project to use ….’
  3. Pick the team project you wish to migrate, and you are done

REMEMBER: A really nice touch (with XML and Inherited templates) is that you can just switch back if you don’t like the result. No data is lost when you swap templates, though some of it might be hidden, as fields are not shown on the new work item types.

The problem I had was that our old XML-based team project templates had different customisations from our current inherited standard. To address this I needed to:

  • Add a few fields to our standard inherited template based on Scrum, for critical legacy customisations.
  • In one case, deal with a work item type in use in the XML template that had no match in our current template. I changed these work items to the equivalent type (we had used a variety of ‘flavours of PBI’, so this was not a major problem), added a tag to the work items so they could be identified later, then deleted the offending work item type (a scripted sketch of this re-typing follows this list).
  • I could conceive of a need for more complex remapping for work item types and fields, but I was able to avoid this.
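If there were too many work items to re-type by hand, the same change could be scripted against the VSTS REST API. The sketch below is only illustrative: the organisation URL, PAT, work item ID, tag and type names are all placeholders, and it assumes the service accepts an update to System.WorkItemType (with bypassRules) in the same way the UI ‘change type’ feature does.

```python
import base64
import requests

# Hypothetical values: the organisation URL, PAT, target type and tag
# below are placeholders for illustration only.
ORG_URL = "https://myinstance.visualstudio.com"
PAT = "my-personal-access-token"
AUTH = {"Authorization": "Basic " + base64.b64encode(f":{PAT}".encode()).decode()}

def retype_work_item(work_item_id: int, new_type: str, tag: str) -> None:
    """Change a work item's type and tag it so it can be identified later.

    Assumes the service accepts a System.WorkItemType update when
    bypassRules is set; the UI 'change type' feature is the supported route."""
    patch = [
        {"op": "add", "path": "/fields/System.WorkItemType", "value": new_type},
        {"op": "add", "path": "/fields/System.Tags", "value": tag},
    ]
    resp = requests.patch(
        f"{ORG_URL}/_apis/wit/workitems/{work_item_id}",
        params={"api-version": "4.1", "bypassRules": "true"},
        json=patch,
        headers={**AUTH, "Content-Type": "application/json-patch+json"},
    )
    resp.raise_for_status()

# e.g. re-type one of our custom 'flavours of PBI' and tag it
retype_work_item(42, "Product Backlog Item", "MigratedFromCustomPBI")
```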

So once that was done, I was able to migrate all the team projects to my target process template.

So, a very nice experience, and one that means we can now make sure all our team projects use the same set of customisations. No longer do we need to worry about customising in both the XML and Inherited models.

Windows Hello sign-in does not work on my first generation Surface Book after a rebuild – fixed

I have just rebuilt my first generation Surface Book from our company standard Windows image. These images are used by all our staff all the time without issue (on Surface and Lenovo devices), so I was not expecting any problems.

I used to rebuild my PC every six months or so, but got out of the habit when I moved to a model whose hard drive I could not swap out as a backup during the process (using a new disk for the new install). I got around this by using Disk2VHD; not quite as good, as I can’t just swap the disk back in, but at least I won’t have lost any stray files. I always aim to keep data in OneDrive, source control or SharePoint anyway, so it should not have been an issue.

Anyway, the rebuild went fine, with no issues until I tried to enable Hello to log in using the camera. The process seemed to start OK and the wizard ran, but after a reboot there was no Hello login option.

After a bit of digging I found that in Device Manager there were no imaging devices at all – strange, as the Camera app and Skype worked OK.

The Internet proved little help, suggesting the usual set of ‘you have a virus, install our tool’ fixes, but after much more digging around I found that the cameras were all listed under ‘System Devices’ in Device Manager. So I then:

  1. Uninstalled them all (front, front IR and back)
  2. Scanned for hardware changes (they reappeared, still in ‘System Devices’)
  3. Ran a Windows Update, which downloaded some new Intel drivers
  4. Ran the Hello setup wizard again; it seems all the old settings had been lost

That was all much more complex than I had hoped for. My guess is that the system rebuild changed some firmware in a strange way that caused the cameras to be misdetected.

Anyway it is working now.

Windows 10: We can’t add this account

A colleague recently started seeing an issue when attempting to add their work account to a Windows 10 device. Following a device re-image (this, as we’ll see, becomes important…), the following error was reported when they attempted to add the account:

Windows 10 we can't add this account

The full text of the error reads:

We can’t add this account. Your organisation’s IT department has a policy that prevents us from adding this work or school account to Windows.

Initially we looked at whether recent policy changes had in fact impacted the ability to add a work account to a Windows 10 device, but we did not see anything that appeared to affect this. Other users who were receiving new PCs, with the same policies applied to them, were unaffected. In addition, nothing was showing up in the Event Viewer folder for Workplace Join on the machine when attempting to add the account.

Realising that the machine had just been reimaged, we checked the list of devices in Azure AD. The following is a screenshot for a different device ID; however, it is similar to what we saw:

Device list in Azure AD

As can be seen, there are multiple instances of the ‘same’ machine. Each time the machine was reimaged and the work account added, Azure AD assigned a new device ID, hence what appears to be multiple copies of the same machine registered.

Once we’d deleted a few of the ‘old’ machines from the list, bringing the user back under the Azure AD limit on the number of devices a user may register, they were able to successfully add their work account to the device.

There are a couple of potential solutions in our scenario:

  1. Periodically check the number of devices registered and trim as appropriate (a scripted sketch of this follows below).
  2. Raise the limit of the number of devices that can be registered, either to a larger number, or to ‘unlimited’ in the device settings area of Azure AD.

Azure AD Device Settings
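For option 1, the periodic check-and-trim can be scripted. The following is a hypothetical sketch against the Microsoft Graph API (token acquisition is elided; it assumes a token with the Device.ReadWrite.All permission, and the user name and retention count are placeholders):

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
TOKEN = "..."  # hypothetical: acquired elsewhere with Device.ReadWrite.All
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

def registered_devices(user_upn: str) -> list:
    """List the devices registered to a user in Azure AD."""
    resp = requests.get(f"{GRAPH}/users/{user_upn}/registeredDevices", headers=HEADERS)
    resp.raise_for_status()
    return resp.json()["value"]

def trim_stale_devices(user_upn: str, keep: int = 5) -> None:
    """Delete the oldest registrations, keeping the 'keep' most recent.

    In our reimaging scenario the stale entries are old copies of the
    same machine, each with its own Azure AD device ID."""
    devices = sorted(
        registered_devices(user_upn),
        key=lambda d: d.get("approximateLastSignInDateTime") or "",
    )
    for device in devices[:-keep]:
        resp = requests.delete(f"{GRAPH}/devices/{device['id']}", headers=HEADERS)
        resp.raise_for_status()

trim_stale_devices("someone@example.com")
```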

Getting Remote Desktop Connection Manager 2.7 working sanely with mixed high DPI screens

Updated 3 July 2018 – A colleague, Andy Davidson, suggested mRemoteNG as an alternative tool to address this issue. mRemoteNG also has the advantage that it supports most major remoting technologies, not just RDP, so I am giving that a try for a while.

This is one of those posts I do mostly for myself so I don’t forget how I did something. It is all based on answers on SuperUser.com; I can claim no credit.

I have a Surface Book (first generation), and when I am in the office it is linked via a dock to an external monitor with a different, lower DPI. If I use Remote Desktop (MSTSC) as built into Windows 10, I can drag sessions between the two monitors and the DPI shift is handled OK. However, if I use my preferred tool, Remote Desktop Connection Manager 2.7 (as it allows me to store all my commonly used RDP settings), I am in DPI hell: I either get huge fonts or microscopic ones. This is bad whether working on the single high DPI laptop screen or with an external screen.

As the SuperUser.com post states, the answer is to change the compatibility settings for the manager: right click on the file “C:\Program Files (x86)\Microsoft\Remote Desktop Connection Manager\RDCMan.exe”, select Compatibility, then Change high DPI settings, and uncheck the high DPI setting override.
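If you prefer to script the same tweak, per-exe compatibility flags live in the registry under the AppCompatFlags\Layers key, with the value name being the exe path. The sketch below is a hedged assumption on my part: deleting the value clears any DPI override, which is broadly what unchecking the checkbox does when no other compatibility settings are in play. The GUI route above is the safer option.

```python
# Hedged sketch: clear any per-exe compatibility flags (including the
# high DPI override) for RDCMan.exe by deleting its Layers value.
import winreg

EXE = r"C:\Program Files (x86)\Microsoft\Remote Desktop Connection Manager\RDCMan.exe"
LAYERS = r"Software\Microsoft\Windows NT\CurrentVersion\AppCompatFlags\Layers"

with winreg.CreateKey(winreg.HKEY_CURRENT_USER, LAYERS) as key:
    try:
        winreg.DeleteValue(key, EXE)  # removes all compatibility flags for the exe
        print("Cleared compatibility flags for RDCMan.exe")
    except FileNotFoundError:
        print("No compatibility flags were set")
```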


Once this was done, I had readable resolutions on all screens.

Why did I not do a better search months ago?

A workaround for the error ‘TF14061: The workspace ws_1_18;Project Collection Build Service does not exist’ when mapping a TFVC workspace

Whilst writing some training material for VSTS I hit a problem creating a TFVC workspace. I was using VS2017, linking a TFVC Repo to a local folder. I was connecting to the VSTS instance using an MSA.

In Team Explorer, when I came to do a ‘Map & Get’ to map the source locations, I got a ‘TF14061: The workspace ws_1_18;Project Collection Build Service does not exist’ error.


A strange error, for which I could see no obvious reason. It turns out the workaround was just to press the ‘Advanced’ link/button and accept the defaults.

Still a few spaces left at the Yorkshire Global DevOps BootCamp Venue hosted at Black Marble

There are still a few spaces left at the Yorkshire Global DevOps BootCamp venue hosted at Black Marble.

Come and learn about all things cool in DevOps, including:

  • Video keynote by Microsoft
  • Local keynote: Breaking down the Monolith
  • Hackathon/hands-on DevOps challenges. The hands-on part will be based on a common application where we try to solve as many challenges as possible, including ideas like:
    • How to containerize an existing application
    • How to add telemetry (app insights) to the application and gather hypothesis information
    • How to use telemetry to monitor availability
    • How to use feature toggles to move application into production without disrupting end users
    • How to use release gates
    • How to make DB schema changes
    • How to use blue/green deployments

And there is free lunch too!

To register, click here.

Where do I put my testing effort?

In the past I have blogged on the subject of using advanced unit test mocking tools to ‘mock the unmockable’. It is an interesting question to revisit: how important today are unit tests where this form of complex mocking is required?

Of late I have certainly seen a bit of a move towards more functional-style tests: still using unit test frameworks, but relying on APIs as access points, with real backend systems such as databases and web services deployed as test environments.

This practice is made far easier than in the past by cloud services such as Azure, and by tools that treat the creation of complex environments as code, such as Azure Resource Manager and Azure DevTest Labs. Both my colleague Rik Hepworth and I have posted widely on the provisioning of such systems.

However, this type of functional testing is still fairly slow: the environments have to be provisioned from scratch, or spun up from saved images, and it all takes time. Hence, there is still space for fast unit tests, and sometimes, usually due to the limitations of legacy codebases that were not designed for testing, a need to still ‘mock the un-mockable’.

This is where tools like Typemock Isolator and Microsoft Fakes are still needed. 

It has to be said, both are premium products: you need the top Enterprise SKU of Visual Studio to get Fakes, or a licence for Typemock Isolator, but when you need their functionality they are the only option, whether that is to mock out a product like SharePoint for faster development cycles, or to provide a solid base for writing unit tests on a legacy code base prior to refactoring.
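As a rough analogy (both products target .NET; this is a deliberately swapped-in Python illustration using unittest.mock, not either tool’s API), the value is being able to isolate code that offers no seam for injecting a fake:

```python
# A swapped-in Python illustration of isolating 'hard to mock' code;
# Typemock Isolator and Microsoft Fakes do the equivalent for .NET.
from unittest import mock
import smtplib

def notify_admin(message: str) -> bool:
    # Legacy-style code: it creates its own SMTP connection internally,
    # so there is no constructor or parameter to inject a fake through.
    server = smtplib.SMTP("mail.example.com")
    try:
        server.sendmail("app@example.com", "admin@example.com", message)
        return True
    finally:
        server.quit()

def test_notify_admin_sends_mail():
    # Patch the class at its lookup point, so no real server is needed.
    with mock.patch("smtplib.SMTP") as fake_smtp:
        assert notify_admin("disk nearly full")
        fake_smtp.return_value.sendmail.assert_called_once()

test_notify_admin_sends_mail()
```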

As I have said before, for me Typemock Isolator easily has the edge over Microsoft Fakes; the syntax is so much easier to use. Hence, it is great to see Typemock Isolator being further extended with updated versions for C++ and now Linux.

So, in answer to my own question, testing is a layered process, and where you put your investment is going to come down to your system’s needs. It is true, I think, that we are all going to invest a bit more in functional testing on ‘cheap to build and run’ cloud test labs. But you can’t beat the speed of tools like Typemock for those particularly nasty legacy code bases where it is hard to create a copy of the environment in a modern test lab.

Making sure when you use VSTS build numbers to version Android Packages they can be uploaded to the Google Play Store

Background

I have a VSTS build extension that can apply a VSTS generated build number to Android APK packages. This takes a VSTS build number and generates, and applies, the Version Name (a string) and Version Code (an integer) to the APK file manifest.

The default parameters mean that the behaviour of this task is to assume (using a regular expression) that the VSTS build number has at least three fields, major.minor.patch, e.g. 1.2.3, and to use 1.2 as the Version Name and 3 as the Version Code.

Now, it is important to note that the Version Code must be an integer between 1 and 2100000000, and for the Google Play Store it must increase between published versions.
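To illustrate that default behaviour (the task’s actual code lives in the extension; this sketch, including the function name, is mine), the split and range check look something like this:

```python
import re

MAX_VERSION_CODE = 2_100_000_000  # Google Play's upper bound

def default_split(build_number: str):
    """Sketch of the task's defaults: for a major.minor.patch build number,
    major.minor becomes the Version Name and patch the Version Code."""
    match = re.match(r"(\d+)\.(\d+)\.(\d+)", build_number)
    if not match:
        raise ValueError(f"'{build_number}' lacks major.minor.patch fields")
    major, minor, patch = match.groups()
    version_code = int(patch)
    if not 1 <= version_code <= MAX_VERSION_CODE:
        raise ValueError(f"Version Code {version_code} is out of range")
    return f"{major}.{minor}", version_code

print(default_split("1.2.3"))  # -> ('1.2', 3)
```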

So maybe these default parameter values for this task are not the best options?

The problem with the way we use the task

When we use the Android Manifest Versioning task for our tuServ Android packages we use different parameter values, but we recently found these values still cause a problem.

Our VSTS build generates build numbers with four parts: $(Major).$(Minor).$(Year:yy)$(DayOfYear).$(rev:r)

  • $(Major) – set as a VSTS variable e.g. 1
  • $(Minor) – set as a VSTS variable e.g. 2
  • $(Year:yy)$(DayOfYear) – the two-digit year plus the day of the year, e.g. 18101
  • $(rev:r) – the build count for the build definition for the day e.g. 1

So we end up with build numbers in the form 1.2.18101.1

The Android version task is set in the build to produce:

  • the Version Name {1}.{2}.{3}.{4} – 1.2.18101.1
  • the Version Code {1}{2}{3}{4} – 12181011

The problem is that if we do more than nine builds in a day, which is likely given our continuous integration process, and release one of those later builds to the Google Play Store, then the next day any build with a single-digit revision cannot be released to the store, as its Version Code is lower than the previously published one, e.g.

  • day 1 the published build is 1.2.18101.11, so the Version Code is 121810111
  • day 2 the published build is 1.2.18102.1, so the Version Code is 12181021

So the second Version Code is an order of magnitude smaller (one digit shorter), hence the package cannot be published.
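To make the failure concrete, here is an illustrative sketch (again mine, not the task’s actual code) of the concatenation, showing how the code’s length, and hence magnitude, depends on the number of digits in the revision:

```python
import re

def version_name_and_code(build_number: str):
    # Version Name = {1}.{2}.{3}.{4}; Version Code = the digits concatenated.
    parts = re.match(r"(\d+)\.(\d+)\.(\d+)\.(\d+)", build_number).groups()
    return ".".join(parts), int("".join(parts))

day1 = version_name_and_code("1.2.18101.11")  # code 121810111 (9 digits)
day2 = version_name_and_code("1.2.18102.1")   # code 12181021  (8 digits)

# The two-digit revision on day 1 makes its code a digit longer, so the
# 'later' build on day 2 gets a smaller Version Code and is rejected.
assert day2[1] < day1[1]
```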

The Solution

The answer in the end was straightforward, and was found by one of our engineers, Peter (@sarkimedes): change the final block of the VSTS build number to $(rev:rrr), as detailed in the VSTS documentation, thus zero-padding the revision from .1 to .001. This allows up to 999 builds per day before the Version Code order-of-magnitude problem recurs. Obviously, if you think you might do more internal builds in a day than that, you can zero-pad with as many digits as you want.

So, using the new build version number (demonstrated again in the sketch after this list):

  • day 1 the published build is 1.2.18101.011, so the Version Code is 1218101011
  • day 2 the published build is 1.2.18102.001, so the Version Code is 1218102001
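Re-running the earlier sketch (re-using version_name_and_code from above) with the zero-padded revision shows the codes now stay a fixed width and increase monotonically:

```python
day1 = version_name_and_code("1.2.18101.011")  # code 1218101011
day2 = version_name_and_code("1.2.18102.001")  # code 1218102001

# Fixed-width revisions keep every code ten digits, so later builds always
# compare higher, and both stay under Google Play's 2100000000 cap.
assert day1[1] < day2[1] <= 2_100_000_000
```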

So a nice fix, without any need to alter the Android Manifest Versioning task’s code. However, changing the task’s default Version Code parameter to {1}{2}{3} is probably advisable.

DDD Submission Issues

Some people have reported issues submitting sessions for DDD. If you do have a problem, please tweet us a message and we will get in touch.

b.