The blogs of Black Marble staff

Upgrading to SonarQube 5.2 in the land of Windows, MSBuild and TFS

SonarQube released version 5.2 a couple of weeks ago. This enables some new features that really help if you are working with MSBuild, or just on a Windows platform in general. These are detailed in the release posts.

The new ability to manage users with LDAP is good, but one of the most important changes for me is the way 5.2 eases the configuration of SQL in integrated security mode. This is mentioned in the upgrade notes; basically it boils down to the fact that you get better JDBC drivers, with better support for SQL clustering and security.

We found the upgrade process mostly straightforward:

  1. Downloaded SonarQube 5.2 and unzipped it
  2. Replaced the files on our SonarQube server
  3. Edited the sonar.properties file with the correct SQL connection details for an integrated security SQL connection. As we wanted to move to integrated security we did not need to set the sonar.jdbc.username setting.
    Important: One thing that was not that clear: if you want to use integrated security you do need the sqljdbc_auth.dll file in a folder on the search path (C:\windows\system32 is an obvious place to keep it). You can find this file in the Microsoft JDBC Driver download on MSDN. There is a sketch of the relevant settings after this list.
  4. Once the server was restarted we browsed to http://localhost:9000/setup and it upgraded our databases
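For reference, the change boils down to the JDBC URL in sonar.properties plus the native DLL on the search path. The snippet below is a minimal sketch rather than our exact steps; the DLL source path and the SonarQube service name are assumptions you will need to adjust for your own install.

```powershell
# A minimal sketch, not our exact commands. In conf\sonar.properties we used an
# integrated security connection string of this general shape and left
# sonar.jdbc.username / sonar.jdbc.password unset:
#
#   sonar.jdbc.url=jdbc:sqlserver://localhost;databaseName=sonar;integratedSecurity=true

# The JDBC driver needs the native sqljdbc_auth.dll on the search path for
# integrated security. The source path below is an assumption - use the x64 DLL
# from wherever you unpacked the Microsoft JDBC Driver download.
$authDll = 'C:\Downloads\sqljdbc\auth\x64\sqljdbc_auth.dll'
Copy-Item -Path $authDll -Destination 'C:\Windows\System32' -Force

# Restart SonarQube so it picks up the new settings (service name is an assumption)
Restart-Service -Name 'SonarQube' -ErrorAction SilentlyContinue
```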

And that was it for the upgrade. We could then use the standard SonarQube update features to upgrade our plug-ins and to add new ones, such as the LDAP one.

Once the LDAP plug-in was in place (and the server restarted) we were automatically logged into SonarQube with our Windows AD accounts, so that was easy.

However, we hit a problem with the new SonarQube 5.2 architecture and LDAP. With 5.2 there is no longer any requirement for the upgraded 1.0.2 SonarQube MSBuild runner to talk directly to the SonarQube DB; all communication is via the SonarQube server. Obviously the user account that makes the call to the SonarQube server needs to be granted suitable rights. That is fairly obvious; the point we tripped up on was ‘who is the runner running as?’ I had assumed it ran as the build agent account, but this was not the case. As the connection to SonarQube is a TFS managed service, it has its own security credentials. Prior to 5.2 these credentials (other than the SonarQube server URL) had not mattered, as the SonarQube runner made its own direct connection to the SonarQube DB. Post 5.2, with no DB connection and LDAP in use, these service credentials become important. Once we had set them correctly, to a user with suitable rights, we were able to do new SonarQube analysis runs.


One other item of note: the change in architecture with 5.2 means that more work is being done on the server as opposed to the client runner. The net effect is that there is a short delay between a run completing and the results appearing on the dashboard. Once you expect it, it is not an issue, but it is a worry the first time.

Finding it hard to make use of Azure for DevTest?

Announced at Connect() today were a couple of new tools that could really help a team with their DevOps issues when working with VSTS and Azure (and potentially other scenarios too).

  • DevTest Labs is a new set of tooling within the Azure portal that makes it easy to create and manage test VMs, as well as providing a means to control how many VMs team members can create, thus controlling cost. Have a look at Chuck’s post on getting started with DevTest Labs
  • To aid general deployment, have a look at the new Release tooling, now in public preview. This is based on the same agents as the vNext build system and can provide a great way to formalise your deployment process. Have a look at Vijay’s post on getting started with the new Release tools

Chrome extension to help with exploratory testing

One of the many interesting announcements at Connect() today was that the new Microsoft Chrome Extension for Exploratory Testing is available in the Chrome Store.

This is a great tool if you use VSO, sorry VSTS, allowing an easy way to ‘kick the tyres’ on your application, logging any bugs directly back to VSTS as Bug work items.


Best of all, it makes it easy to test your application on other platforms with the link to Perfecto Mobile. Just press the device button, login and you can launch a session on a real physical mobile device to continue your exploratory testing.


The only downside I can see is that, if like me you would love this functionality for on-premises TFS, you need to wait a while; this first preview only supports VSTS.

Why you need to use vNext build tasks to share scripts between builds

Whilst doing a vNext build from a TFVC repository I needed to map both my production code branch and a common folder of scripts that I intended to use in a number of builds, so my build workspace was set to:

  • Map – $/BM/mycode/main – my production code
  • Map – $/BM/BuildDefinations/vNextScripts – the shared PowerShell scripts I wish to run in different builds, e.g. assembly versioning.

As I wanted this to be a CI build, I also set the trigger to $/BM/mycode/main

The problem I found was that, with my workspace set as above, the associated changes for the build included anything checked into $/BM and below. Also the source branch was set as $/BM.



To fix this problem I had to remove the mapping to the scripts folder. Once this was done the associated changes shown were only those for my production code area, and the source branch was listed correctly.

But what to do about running my script?

I don’t really want to have to copy a common script into each build; there is huge potential for error there, and versioning issues if I want a common script in all builds. The best solution I found was to take the PowerShell script, in my case the sample assembly versioning script provided for VSO, and package it as a vNext build task. It took no modification, just the addition of a manifest file. You can find the task on my vNextBuild GitHub repo.

This custom task could then be uploaded to my TFS server and used in all my builds. As it picks up its variables from environment variables it requires no configuration, extracting the version number from the build number format. A sketch of the basic approach is shown below.
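To give a feel for why no configuration is needed, here is a simplified sketch of the idea behind the task script (the version in the repo is the definitive one): the agent exposes the build number as an environment variable, so the script pulls the four-part version out of it and stamps it into the AssemblyInfo files.

```powershell
# Simplified sketch of the assembly versioning approach; the task in the repo is
# the real implementation. The agent sets these environment variables for us.
$buildNumber = $Env:BUILD_BUILDNUMBER          # e.g. 'MyBuild_2015.11.19.3'
$sourcesPath = $Env:BUILD_SOURCESDIRECTORY

# Extract a four-part version from the build number format
$versionRegex = '\d+\.\d+\.\d+\.\d+'
$match = [regex]::Match($buildNumber, $versionRegex)
if (-not $match.Success) { throw "Build number '$buildNumber' contains no version" }
$newVersion = $match.Value

# Stamp the version into every AssemblyInfo.cs below the sources folder
Get-ChildItem -Path $sourcesPath -Recurse -Filter 'AssemblyInfo.cs' | ForEach-Object {
    (Get-Content $_.FullName) -replace $versionRegex, $newVersion | Set-Content $_.FullName
}
```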


If you wish to use this task, you need to follow the same instructions to set up your development environment as for the VSO tasks, then (example commands are sketched after this list):

  1. Clone the repo
  2. In the root of the repo run gulp to build the task
  3. Use tfx to upload the task to your TFS instance
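The commands below are a sketch of those steps, assuming node, gulp and tfx-cli are already installed per the VSO tasks instructions; the repo URL, collection URL and task folder name are placeholders, and the exact tfx arguments can vary between tfx-cli versions.

```powershell
# Placeholders throughout - substitute your own fork/URLs and built task folder.
git clone https://github.com/<your-fork>/vNextBuild.git
cd vNextBuild

npm install    # restore the build-time dependencies if you have not already
gulp           # build the task into its output folder

# Authenticate against your TFS collection, then upload the built task
tfx login --service-url http://yourtfsserver:8080/tfs/DefaultCollection
tfx build tasks upload --task-path .\<BuiltTaskFolder>
```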

Installing Windows 10 RSAT Tools on EN-GB Media-Installed Systems

This post is an aide memoire so I don’t have to suffer the same annoyance and frustration at what should be an easy task.

I’ve now switched to my Surface Pro 3 as my only system, thanks to the lovely new Pro 4 Type Cover and Surface Dock. That meant that I needed the Remote Server Administration Tools installing. Doing that turned out to be much more of an odyssey than it should have been, and I’m writing this in the hope that it will allow others to quickly find the information I struggled to.

The RSAT tools download is, as before, a Windows Update package that adds the necessary Windows Features to your installation. The trouble is, that download is EN-US only (really, Microsoft?!). If, like me, you used the EN-GB media to install, you’re in a pickle.

Running the installer appears to work – it proceeds with no errors, albeit rather quickly – but the RSAT features were unavailable. I already had a US keyboard in my config (my Pro keyboard is US), but that was obviously not enough. I added the US language, but still couldn’t get the installer to work.

I got more information on the problem by following the steps described in a TechNet article on using DISM to install Windows Updates (the general approach is sketched below). That led me to a pair of articles on the SysadminTips site about the installation problem, and how to fully add the US language pack to solve it.
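For the record, the DISM approach looks something like this; the .msu file name and location are assumptions (use whatever you downloaded for the Windows 10 RSAT update), and the real value is the much more detailed error reporting you get when it fails.

```powershell
# Run from an elevated prompt. The package path/name is an assumption - point it
# at the RSAT .msu you downloaded from Microsoft.
$package = 'C:\Temp\WindowsTH-KB2693643-x64.msu'

# Install the update package directly; DISM reports far more detail than the
# normal Windows Update UI when something goes wrong.
dism.exe /Online /Add-Package /PackagePath:"$package"

# If it still fails, the DISM log usually contains the real error
notepad.exe C:\Windows\Logs\DISM\dism.log
```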

It turns out that the EN-GB media doesn’t install the full EN-US language pack files, so when you add the US language it doesn’t add enough into the OS to allow the RSAT tools to install. Frankly, that’s a mess and I hope Microsoft deal with the issue by releasing multi-language RSAT tools.

Versioning a VSIX package as part of the TFS vNext build (when the source is on GitHub)

I have recently added a CI build to my GitHub-hosted ParametersXmlAddin VSIX project. I did this using Visual Studio Online’s hosted build service; did you know that this can be used to build source from GitHub?

As part of this build I wanted to version stamp the assemblies and the resultant VSIX package. To do the former I used the script documented on MSDN; for the latter I used the same basic method of extracting the version from the build number as the assembly versioning script. You can find my VSIX script stored in this repo.

I added both of these scripts to my ParametersXmlAddin project repo’s Script folder and just call them at the start of my build with a pair of PowerShell tasks. As they both get the build number from the environment variables there is no need to pass any arguments. A simplified sketch of the VSIX versioning step is shown below.
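For context, this is roughly what the VSIX versioning script does (the script in the repo is the reference version, and the manifest layout assumed here is the VSIX 2.0 schema): it pulls the version out of the build number and writes it into the Version attribute of the Identity element in source.extension.vsixmanifest.

```powershell
# Simplified sketch of the VSIX versioning idea; the script in the repo is the
# real implementation. Assumes the VSIX 2.0 manifest schema.
$version = [regex]::Match($Env:BUILD_BUILDNUMBER, '\d+\.\d+\.\d+\.\d+').Value

Get-ChildItem -Path $Env:BUILD_SOURCESDIRECTORY -Recurse -Filter 'source.extension.vsixmanifest' |
    ForEach-Object {
        [xml]$manifest = Get-Content $_.FullName
        # PackageManifest/Metadata/Identity@Version holds the package version
        $manifest.PackageManifest.Metadata.Identity.Version = $version
        $manifest.Save($_.FullName)
    }
```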


I only wanted to publish the VSIX package. This was done by setting the contents filter on the Publish Build Artifacts task to **\*.vsix


The final step was to enable the badge for the build; this is done on the General tab. Once enabled, I copied the provided URL for the badge graphic that shows the build status and added it as an image to the Readme.MD file on my repo’s wiki.


Why can’t I assign a VSO user as having ‘eligible MSDN’ using an AAD work account?

When accessing VSO you have two authentication options: either a LiveID (or an MSA, to use its newest name) or a Work Account ID (a domain account). The latter is used to provide extra security, so a domain admin can easily control who has access to a whole set of systems. It does assume you have an Azure Active Directory (AAD) that is synced with your on-premises AD, and that this AAD is used to back your VSO instance. See my previous post on this subject.

If you are doing this, the issue you often see is that VSO does not pick up your MSDN subscription because it is linked to an MSA, not a work account. This is all solvable, but there are hoops to jump through, more than there should be sometimes.

Basic Process

First you need to link your MSDN account to a Work Account

  • Login with the MSA that is associated with your MSDN account.
  • Click on the MSDN subscriptions menu option.
  • Click on the Link to work account option and enter your work ID. Note that this will also set your Microsoft Azure linked work account.



Assuming your work account is listed in your AD/AAD, over in VSO you should now be able to …

  • Login as the VSO administrator
  • Invite any user in the AAD to your VSO instance via the link https://[theaccount] . A user can be invited as
    • Basic – you get 5 for free
    • Stakeholder – what we fall back to if there is an issue
    • MSDN Subscription – the one we want (in screenshot below the green box shows a user where MSDN has been validated, the red box is a user who has not logged in yet with an account associated with a valid MSDN subscription)


  • Once invited, a user gets an email so they can log in as shown below. Make sure you pick the work account login link (lower left). Note that this is mocked up in the screenshot below, as the login options shown appear in a context-sensitive way, only being offered the first time a user connects and only if the VSO instance is AAD backed. If you pick the main login fields (the wrong ones) it will try to log in assuming the ID is an MSA, which will not work. This is a particularly confusing issue if you used the same email address for your MSA as for your work account; more on this in the troubleshooting section.


  • On later connections only the work ID login will be shown
  • Once a user has logged in for the first time with the correct ID, the VSO admin should be able to see the MSDN subscription is validated


Troubleshooting

We have seen a problem where, though the user is in the domain and correctly added to VSO, it will not register that the MSDN subscription is active. These steps can help.

  • Make sure in the portal you have actually linked your work ID. You still need to explicitly do this even if your MSA and work ID use the same email address. Using the same email address for both IDs can get confusing, so I would recommend setting up your MSA email addresses so they do not clash with your work ID.
  • When you login to VSO MAKE SURE YOU USE THE WORK ID LOGIN LINK (LHS OF DIALOG UNDER VSO LOGO) TO LOGIN WITH A WORK ID AND NOT THE MAIN LIVEID FIELDS. I can’t stress this enough, especially if you use the same email address for both the MSA and work account.
  • If you still get issues with picking up the MSDN subscription
    • In VSO the admin should set the user to be a basic user
    • In the MSDN subscriptions portal, the user should make sure they did not make any typos when linking the work account ID
    • The user should sign out of VSO and back in using their work ID, MAKING SURE THEY USE THE CORRECT WORK ID LOGIN DIALOG. They should see the features available to a basic user
    • The VSO admin should change the role assignment in VSO to be MSDN eligible and it should flip over without a problem. There seems to be no need to logout and back in again.

Note: if you assign a new MSA to an MSDN subscription it can take a little while to propagate. If activation emails don’t arrive, pause a while and try again later. You can’t do any of this until you can log in to MSDN with your MSA.

SonarQube 5.2 released

At my session at DDDNorth I mentioned that some of the settings you needed to configure in SonarQube 5.1, such as DB connection strings for SonarRunner, would no longer be needed once 5.2 was released. Well, it was released today. The most important changes for us are:

  • Server handles all DB connections
  • LDAP support for user authentication

This should make the install process easier.

Optimising IaaS deployments in Azure Resource Templates

Unlike most of my recent posts this one won’t have code in it. Instead I want to talk about concepts and how you should look long and hard at your templates to optimise deployment.

In my previous articles I’ve talked about how nested deployments can help apply sensible structure to your deployments. I’ve also talked about things I’ve learned around what will successfully deploy and what will give errors. Nested deployments are still key, but the continuous cycle of improvements in Azure means I can change my information somewhat around what works well and what is likely to fail. Importantly, that change allows us to drastically improve our deployment time if we have lots of virtual machines.

I’d previously found that unless I nested the extensions for a VM within the JSON of the virtual machine itself, I got lots of random deployment errors. I am happy to report that this situation has improved. The result of that improvement is that we can now separate out the extensions deployed to a virtual machine from the machine itself. That separates the configuration of the VM, which for complex environments almost certainly has a prescribed sequence, from the deployment of the VM, which almost certainly doesn’t.

To give you a tacit example, in the latest work at Black Marble we are deploying a multi-server environment (DC, ADFS, WAP, SQL, BizTalk, Service Bus and two IIS servers) where we deploy the VMs and configure them. With my original approach, hard-fought to achieve a reliable deploy, each VM was pushed and fully configured in the necessary sequence, domain controller first.

With our new approach we can deploy all eight VMs in that environment simultaneously. We have moved our DSC and Custom Script extensions into separate resource templates and that has allowed some clever sequencing to drastically shorten the time to deploy the environment (currently around fifty minutes!).

We did this by carefully looking at what each step was doing and really focusing on the dependencies:

  • The domain controller VM created a new virtual machine. The DSC extension then installed domain services and certificate services and created the domain. The custom script then created some certificates.
  • The ADFS VM created a new virtual machine. The DSC extension then joined that server to the domain. The custom script then copied the certificate from the DC and configured ADFS.
  • The WAP VM created a new virtual machine. The DSC extension then joined that server to the domain. The custom script then copied the certificate from the DC and configured the proxy for the configured ADFS service.

Hopefully you can see what we saw: Each machine had three phases of configuration and the dependencies were different, giving us three separate sequences:

  1. The VM creations are completely independent. We could do those in parallel to save time.
  2. The DSC configuration for the DC has to be done first, to create the domain. However, the ADFS and WAP servers have DSC configurations that are independent, so we could do those in parallel too.
  3. The custom script configurations have a definite sequence (DC – ADFS – WAP) and the DC script depends on the DC having run its DSC configuration first so we have our certificate services.

Once we’ve identified our work streams it’s a simple matter of declaring the dependencies in our JSON.

Top tip: It’s a good idea to list all the dependencies for each resource. Even though the Azure Resource Manager will infer the dependency chain when it parses the template, it’s much easier for humans to look at a full list in each resource to figure out what’s going on.

The end result of this tinkering? We cut our deployment time in half. The really cool bit is that adding more VMs doesn’t add much time to our deploy as it’s the creation of the virtual machines that tends to take longest.