BM-Bloggers

The blogs of Black Marble staff

Finding it hard to make use of Azure for DevTest?

Announced at Connect() today were a couple of new tools that could really help a team with their DevOps issues when working with VSTS and Azure (and potentially other scenarios too).

  • DevTest Labs is a new set of tooling within the Azure portal that makes it easy to create and manage test VMs, and provides a means to control how many VMs team members can create, thus controlling cost. Have a look at Chuck’s post on getting started with DevTest Labs.
  • To aid general deployment, have a look at the new Release tooling, now in public preview. This is based on the same agents as the vNext build system and can provide a great way to formalise your deployment process. Have a look at Vijay’s post on getting started with the new Release tools.

Chrome extension to help with exploratory testing

One of the many interesting announcements at Connect() today was that the new Microsoft Chrome Extension for Exploratory Testing is available in the Chrome Store.

This is a great tool if you use VSO, sorry VSTS, providing an easy way to ‘kick the tyres’ on your application, logging any bugs directly back to VSTS as Bug work items.


Best of all, it makes it easy to test your application on other platforms with the link to Perfecto Mobile. Just press the device button, login and you can launch a session on a real physical mobile device to continue your exploratory testing.


The only downside I can see is that if, like me, you would love this functionality for on-premises TFS, you need to wait a while; this first preview only supports VSTS.

Why you need to use vNext build tasks to share scripts between builds

Whilst doing a vNext build from a TFVC repository I needed to map both my production code branch and a common folder of scripts that I intended to use in a number of builds, so my build workspace was set to:

  • Map – $/BM/mycode/main – my production code
  • Map – $/BM/BuildDefinations/vNextScripts – my shared PowerShell scripts that I wish to run in different builds, e.g. assembly versioning.

As I wanted this to be a CI build, I also set the trigger to $/BM/mycode/main.

The problem I found was that, with my workspace set as above, the associated changes for the build included anything checked into $/BM and below. Also, the source branch was set as $/BM.


To fix this problem I had to remove the mapping to the scripts folder. Once this was done, the associated changes shown were only those for my production code area, and the source branch was listed correctly.

But what to do about running my script?

I don’t really want to have to copy a common script to each build; there is huge potential for error there, and versioning issues if I want a common script in all builds. The best solution I found was to take the PowerShell script, in my case the sample assembly versioning script provided for VSO, and package it as a vNext build task. It took no modification, just the addition of a manifest file. You can find the task in my vNextBuild GitHub repo.

This custom task could then be uploaded to my TFS server and used in all my builds. As it picks up its variables from environment variables it requires no configuration, extracting the version number from the build number format.
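
To illustrate, the heart of such a task is only a few lines of PowerShell (a minimal sketch, not the actual task code; it assumes a four-part version number is embedded in the build number):

# e.g. a build number format of $(BuildDefinitionName)_1.2.3.4
$buildNumber = $Env:BUILD_BUILDNUMBER
if ($buildNumber -match "\d+\.\d+\.\d+\.\d+")
{
    $versionNumber = $Matches[0]
    Write-Host "Extracted version $versionNumber from build number $buildNumber"
    # ... apply $versionNumber to each AssemblyInfo.cs under the sources folder ...
}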


If you wish to use this task, you need to follow the same instructions to set up your development environment as for the VSO Tasks, then:

  1. Clone the repo https://github.com/rfennell/vNextBuild.git
  2. In the root of the repo run gulp to build the task
  3. Use tfx to upload the task to your TFS instance
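
In practice those steps look something like this (a sketch; the collection URL and the task folder name are placeholders for your own environment):

# clone and build
git clone https://github.com/rfennell/vNextBuild.git
cd vNextBuild
npm install    # restores gulp and the other build dependencies
gulp           # builds the task(s)
# authenticate against your TFS collection, then push the task up
tfx login --service-url http://yourtfs:8080/tfs/DefaultCollection
tfx build tasks upload --task-path .\VersionAssemblies   # illustrative folder name – use the built task’s folder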

Installing Windows 10 RSAT Tools on EN-GB Media-Installed Systems

This post is an aide memoir so I don’t have to suffer the same annoyance and frustration at what should be an easy task.

I’ve now switched to my Surface Pro 3 as my only system, thanks to the lovely new Pro 4 Type Cover and Surface Dock. That meant that I needed the Remote Server Administration Tools installing. Doing that turned out to be much more of an odyssey than it should have been, and I’m writing this in the hope that it will allow others to quickly find the information I struggled to.

The RSAT tools download is, as before, a Windows Update package that adds the necessary Windows Features to your installation. The trouble is, that download is EN-US only (really, Microsoft?!). If, like me, you used the EN-GB media to install, you’re in a pickle.

Running the installer appears to work – it proceeds with no errors, albeit rather quickly – but the RSAT features were unavailable. I already had a US keyboard in my config (my Pro keyboard is US), but that was obviously not enough. I added the US language, but still couldn’t get the installer to work.

I got more information on the problem by following the steps described in a TechNet article on using DISM to install Windows Updates. That led me to a pair of articles on the SysadminTips site about the installation problem, and how to fully add the US language pack to solve it.
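
For reference, the DISM route looks something like this (a sketch, not verbatim from the article; KB2693643 is the Windows 10 RSAT package at the time of writing, and DISM installs the .cab packed inside the downloaded .msu):

# run from an elevated PowerShell prompt
md C:\Temp\RSAT
expand -f:* .\WindowsTH-KB2693643-x64.msu C:\Temp\RSAT
# the exact .cab file name may differ – check the extracted folder
dism /online /add-package /packagepath:C:\Temp\RSAT\WindowsTH-KB2693643-x64.cab

Unlike the update installer, DISM surfaces a meaningful error rather than silently finishing.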

It turns out that the EN-GB media doesn’t install the full EN-US language pack files, so when you add the US language it doesn’t add enough stuff into the OS to allow the RSAT tools to install. Frankly, that’s a mess and I hope Microsoft deal with the issue by releasing multi-language RSAT tools.

Versioning a VSIX package as part of the TFS vNext build (when the source is on GitHub)

I have recently added a CI build to my GitHub-hosted ParametersXmlAddin VSIX project. I did this using Visual Studio Online’s hosted build service; did you know that this can be used to build source from GitHub?

As part of this build I wanted to version stamp the assemblies and the resultant VSIX package. To do the former I used the script documented on MSDN; for the latter I used the same basic method of extracting the version from the build number as the assembly versioning script uses. You can find my VSIX script in this repo.
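
The VSIX stamping itself is simple (a minimal sketch, assuming the standard VSIX v2 manifest schema, not the exact script from the repo):

# extract the version from the build number, as in the assembly versioning script
$buildNumber = $Env:BUILD_BUILDNUMBER
if ($buildNumber -match "\d+\.\d+\.\d+\.\d+")
{
    $version = $Matches[0]
    # stamp it into the VSIX manifest
    $manifest = Get-ChildItem -Recurse -Filter "source.extension.vsixmanifest" | Select-Object -First 1
    $xml = [xml](Get-Content $manifest.FullName)
    $xml.PackageManifest.Metadata.Identity.Version = $version
    $xml.Save($manifest.FullName)
}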

I added both of these scripts in my ParametersXmlAddin project repo’s Script folder and just call them at the start of my build with a pair of PowerShell tasks. As they both get the build number from the environment variables there is no need to pass any arguments.


I only wanted to publish the VSIX package. This was done by setting the contents filter on the Publish Build Artifacts task to **\*.vsix


The final step was to enable the badge for the build; this is done on the General tab. Once enabled, I copied the provided URL for the badge graphic that shows the build status and added it as an image to the Readme.MD file on my repo’s wiki.


Why can’t I assign a VSO user as having ‘eligible MSDN’ using an AAD work account?

When accessing VSO you have two authentication options: either a LiveID (or an MSA, to use its newest name) or a Work Account ID (a domain account). The latter is used to provide extra security; a domain admin can easily control who has access to a whole set of systems. It does assume you have Azure Active Directory (AAD) sync’d with your on-premises AD, and that this AAD is used to back your VSO instance. See my previous post on this subject.

If you are doing this, the issue you often see is that VSO does not pick up your MSDN subscription because it is linked to an MSA, not a work account. This is all solvable, but there are hoops to jump through; more than there should be, sometimes.

Basic Process

First you need to link your MSDN account to a Work Account

  • Login to https://msdn.microsoft.com with the MSA that is associated with your MSDN account.
  • Click on the MSDN subscriptions menu option.
  • Click on Link to work account and enter your work ID. Note that this will also set your Microsoft Azure linked work account.

Assuming your work account is listed in your AD/AAD, over in VSO you should now be able to …

  • Login as the VSO administrator
  • Invite any user in the AAD to your VSO instance via the link https://[theaccount].visualstudio.com/_user. A user can be invited as:
    • Basic – you get 5 for free
    • Stakeholder – what we fall back to if there is an issue
    • MSDN Subscription – the one we want (in the screenshot below, the green box shows a user whose MSDN subscription has been validated; the red box is a user who has not yet logged in with an account associated with a valid MSDN subscription)


  • Once invited, a user gets an email so they can login as shown below. Make sure you pick the work account login link (lower left). Note that this is mocked up in the screenshot below, as which login options are shown appears to be context sensitive, only being shown the first time a user connects and if the VSO is AAD backed. If you pick the main login fields (the wrong ones) it will try to login assuming the ID is an MSA, which will not work. This is a particularly confusing issue if you used the same email address for your MSA as for your Work Account; more on this in the troubleshooting section.


  • On later connections only the work ID login will be shown.
  • Once a user has logged in for the first time with the correct ID, the VSO admin should be able to see that the MSDN subscription is validated.

Troubleshooting

We have seen a problem where, though the user is in the domain and correctly added to VSO, it will not register that the MSDN subscription is active. These steps can help:

  • Make sure in the https://msdn.microsoft.com portal that you have actually linked your work ID. You still need to do this explicitly even if your MSA and Work ID use the same email address, e.g. user@domain.com. Using the same email address for both IDs can get confusing, so I would recommend setting up your MSA email addresses so they don’t clash with your work ID.
  • When you login to VSO MAKE SURE YOU USE THE WORK ID LOGIN LINK (LHS OF DIALOG UNDER VSO LOGO) TO LOGIN WITH A WORK ID AND NOT THE MAIN LIVEID FIELDS. I can’t stress this enough, especially if you use the same email address for both the MSA and work account.
  • If you still get issues with picking up the MSDN subscription:
    • In VSO the admin should set the user to be a basic user.
    • In https://msdn.microsoft.com the user should make sure they did not make any typos when linking the work account ID.
    • The user should sign out of VSO and back in using their work ID, MAKING SURE THEY USE THE CORRECT WORK ID LOGIN DIALOG. They should see the features available to a basic user.
    • The VSO admin should then change the role assignment in VSO to MSDN eligible and it should flip over without a problem. There seems to be no need to log out and back in again.

Note that if you assign a new MSA to an MSDN subscription it can take a little while to propagate; if activation emails don’t arrive, pause a while and try again later. You can’t do any of this until you can login to MSDN with your MSA.

SonarQube 5.2 released

At my session at DDDNorth I mentioned that some of the settings you needed to configure in SonarQube 5.1, such as DB connection strings for SonarRunner, would no longer be needed once 5.2 was released. Well, it was released today. The most important changes for us are:

  • Server handles all DB connections
  • LDAP support for user authentication

This should make the install process easier.
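
To illustrate (a sketch, assuming a SQL Server-backed instance; these are the standard sonar-runner.properties settings), an analysis configuration that used to need all of this:

sonar.host.url=http://sonarserver:9000
sonar.jdbc.url=jdbc:sqlserver://dbserver;databaseName=sonar
sonar.jdbc.username=sonar
sonar.jdbc.password=secret

should now only need the server URL, as the server brokers all DB access:

sonar.host.url=http://sonarserver:9000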

Optimising IaaS deployments in Azure Resource Templates

Unlike most of my recent posts this one won’t have code in it. Instead I want to talk about concepts and how you should look long and hard at your templates to optimise deployment.

In my previous articles I’ve talked about how nested deployments can help apply sensible structure to your deployments. I’ve also talked about things I’ve learned around what will successfully deploy and what will give errors. Nested deployments are still key, but the continuous cycle of improvements in Azure means I can now revise my guidance somewhat on what works well and what is likely to fail. Importantly, that change allows us to drastically improve our deployment time if we have lots of virtual machines.

I’d previously found that unless I nested the extensions for a VM within the JSON of the virtual machine itself, I got lots of random deployment errors. I am happy to report that situation has now improved. The result of that improvement is that we can now separate the extensions deployed to a virtual machine from the machine itself. That separates the configuration of the VM, which for complex environments almost certainly has a prescribed sequence, from the deployment of the VM, which almost certainly doesn’t.

To give you a concrete example, in the latest work at Black Marble we are deploying a multi-server environment (DC, ADFS, WAP, SQL, BizTalk, Service Bus and two IIS servers) where we deploy the VMs and configure them. With my original approach, hard-fought to achieve a reliable deploy, each VM was pushed and fully configured in the necessary sequence, domain controller first.

With our new approach we can deploy all eight VMs in that environment simultaneously. We have moved our DSC and Custom Script extensions into separate resource templates and that has allowed some clever sequencing to drastically shorten the time to deploy the environment (currently around fifty minutes!).

We did this by carefully looking at what each step was doing and really focusing on the dependencies:

  • The domain controller VM created a new virtual machine. The DSC extension then installed domain services and certificate services and created the domain. The custom script then created some certificates.
  • The ADFS VM created a new virtual machine. The DSC extension then joined that server to the domain. The custom script then copied the certificate from the DC and configured ADFS.
  • The WAP VM created a new virtual machine. The DSC extension then joined that server to the domain. The custom script then copied the certificate from the DC and configured the proxy for the configured ADFS service.

Hopefully you can see what we saw: Each machine had three phases of configuration and the dependencies were different, giving us three separate sequences:

  1. The VM creations are completely independent. We could do those in parallel to save time.
  2. The DSC configuration for the DC has to be done first, to create the domain. However, the ADFS and WAP servers have DSC configurations that are independent, so we could do those in parallel too.
  3. The custom script configurations have a definite sequence (DC – ADFS – WAP) and the DC script depends on the DC having run its DSC configuration first so we have our certificate services.

Once we’ve identified our work streams it’s a simple matter of declaring the dependencies in our JSON.

Top tip: It’s a good idea to list all the dependencies for each resource. Even though the Azure Resource Manager will infer the dependency chain when it parses the template, it’s much easier for humans to look at a full list in each resource to figure out what’s going on.
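
As a hypothetical fragment (the resource and file names here are invented for illustration, following the nested deployment pattern shown in the next post), the deployment that runs the WAP custom script would declare all three of its dependencies:

{
    "name": "WAPCustomScript",
    "type": "Microsoft.Resources/deployments",
    "apiVersion": "2015-01-01",
    "dependsOn": [
        "WAPVirtualMachine",
        "WAPDscConfiguration",
        "ADFSCustomScript"
    ],
    "properties": {
        "mode": "Incremental",
        "templateLink": {
            "uri": "[concat(parameters('_artifactsLocation'), '/WAPCustomScript.json', parameters('_artifactsLocationSasToken'))]",
            "contentVersion": "1.0.0.0"
        }
    }
}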

The end result of this tinkering? We cut our deployment time in half. The really cool bit is that adding more VMs doesn’t add much time to our deploy, as it’s the creation of the virtual machines that tends to take longest.

Convert new VM’s dynamic IP address to static with Azure Resource Templates

Over the past few posts on this blog I’ve been documenting the templates I have been working on for Black Marble. In a previous sequence I showed how you can use nested deployments to keep your templates simple and still push out complex environments. The problem with those examples is that they are very fixed in what they do. The templates create a number of virtual machines on a virtual network, with static IP addresses for each machine.

This works well for that deployment, where I have complete control. However, one of my aims is to create a series of templates for virtual machines that my developers can combine themselves to create environments that may be very different in makeup to my original. For example, what if the dev needs more servers? What if they only realise after pushing out four web servers that they need a domain controller? If I can’t guarantee the number or sequence of my servers, I can’t use static addresses on creation.

The answer to this problem is actually really simple and uses the same approach as I described previously when reconfiguring a virtual network to alter the DNS address. I deploy a new virtual machine where the virtual nic for that machine requests a dynamic IP address. I then use a nested deployment to reconfigure that same nic, setting the address type to static and specifying the IP address that it was just given as the intended address. This means that I no longer care what order the Azure fabric creates the virtual machines in. That one key change over my previous template approach has halved the deployment time as I can now create all machines in parallel (the bit that takes the most time) and then configure in sequence as needed.

The markup to do this is very straightforward. First we create our nic:

{
    "name": "[concat(parameters('envPrefix'),parameters('vmName'),'nic')]",
    "type": "Microsoft.Network/networkInterfaces",
    "location": "[parameters('VirtualNetwork').Location]",
    "apiVersion": "2015-06-15",
    "dependsOn": [
    ],
    "tags": {
        "displayName": "DomainControllerNic"
    },
    "properties": {
        "ipConfigurations": [
            {
                "name": "ipconfig1",
                "properties": {
                    "privateIPAllocationMethod": "Dynamic",
                    "subnet": {
                        "id": "[concat(parameters('VirtualNetworkId'),'/subnets/',parameters('VirtualNetwork').Subnet1Name)]"
                    }
                }
            }
        ]
    }
}
You can see that I have set privateIPAllocationMethod to Dynamic.

Then we call a nested deployment from our template, passing the IP address of the nic as a parameter. That template will redefine the settings of the nic, so we must pass in all the information we need: if I miss something, that setting will be removed from the nic, so care is needed here. Notice that I use the reference keyword to access the privateIPAddress property of the nic.

{
    "name": "SetStaticIP",
    "type": "Microsoft.Resources/deployments",
    "apiVersion": "2015-01-01",
    "dependsOn": [
        "[concat(parameters('envPrefix'),parameters('vmName'),'nic')]",
        "[concat(parameters('envPrefix'),parameters('vmName'))]",
        "Microsoft.Insights.VMDiagnosticsSettings"
    ],
    "properties": {
        "mode": "Incremental",
        "templateLink": {
            "uri": "[concat(parameters('_artifactsLocation'), '/SetStaticIP.json', parameters('_artifactsLocationSasToken'))]",
            "contentVersion": "1.0.0.0"
        },
        "parameters": {
            "VirtualNetwork": {
                "value": "[parameters('VirtualNetwork')]"
            },
            "VirtualNetworkId": {
                "value": "[parameters('VirtualNetworkId')]"
            },
            "nicName": {
                "value": "[concat(parameters('envPrefix'),parameters('vmName'),'nic')]"
            },
            "ipAddress": {
                "value": "[reference(concat(parameters('envPrefix'),parameters('vmName'),'nic')).ipConfigurations[0].properties.privateIPAddress]"
            }
        }
    }
}

Within the template called by my nested deployment object I use the incoming parameters to reconfigure the nic. I need to change the privateIPAllocationMethod setting to static and pass in the IP address from my parameters.

{
  "name": "[parameters('nicName')]",
  "type": "Microsoft.Network/networkInterfaces",
  "location": "[parameters('VirtualNetwork').Location]",
  "apiVersion": "2015-05-01-preview",
  "dependsOn": [
  ],
  "tags": {
    "displayName": "DomainControllerNic"
  },
  "properties": {
    "ipConfigurations": [
      {
        "name": "ipconfig1",
        "properties": {
          "privateIPAllocationMethod": "Static",
          "privateIPAddress": "[parameters('ipAddress')]",
          "subnet": {
            "id": "[concat(parameters('VirtualNetworkId'),'/subnets/',parameters('VirtualNetwork').Subnet1Name)]"
          }
        }
      }
    ]
  }
}

Finally, in my virtual machine template I pass the IP address back up the chain as an output, so I can use it in other templates if needed (for example, to reconfigure the vNet DNS property with the IP address of my domain controller):

"outputs": {
    "ipAddress": {
        "type": "string",
        "value": "[reference(concat(parameters('envPrefix'),parameters('vmName'),'nic')).ipConfigurations[0].properties.privateIPAddress]"
    }
}