BM-Bloggers

The blogs of Black Marble staff

Why can’t I assign a VSO user as having ‘eligible MSDN’ using an AAD work account?

When accessing VSO you have two authentication options: either a LiveID (or an MSA, to use its newest name) or a Work Account ID (a domain account). The latter provides extra security, as a domain admin can easily control who has access to a whole set of systems. It does assume you have an Azure Active Directory (AAD) that is sync’d with your on-premises AD, and that this AAD is used to back your VSO instance. See my previous post on this subject.

If you are doing this, the issue you often see is that VSO does not pick up your MSDN subscription because it is linked to an MSA, not a work account. This is all solvable, but there are hoops to jump through, sometimes more than there should be.

Basic Process

First you need to link your MSDN account to a Work Account

  • Login to https://msdn.microsoft.com with the MSA that is associated with your MSDN account.
  • Click on the MSDN subscriptions menu option.
  • Click on Link to work account and enter your work ID. Note that this will also set your Microsoft Azure linked work account.

 

[Screenshot: linking your MSDN subscription to a work account on the MSDN portal]


Assuming your work account is listed in your AD/AAD, over in VSO you should now be able to do the following:

  • Login as the VSO administrator
  • Invite any user in the AAD to your VSO instance via the link https://[theaccount].visualstudio.com/_user. A user can be invited as:
    • Basic – you get 5 for free
    • Stakeholder – what we fall back to if there is an issue
    • MSDN Subscription – the one we want (in the screenshot below, the green box shows a user whose MSDN subscription has been validated; the red box shows a user who has not yet logged in with an account associated with a valid MSDN subscription)

[Screenshot: the VSO user list, with a validated MSDN subscriber highlighted in green and an unvalidated user in red]

  • Once invited, a user gets an email so they can log in, as shown below. Make sure you pick the work account login link (lower left). Note that the screenshot below is mocked up: the login options shown are context sensitive, appearing only the first time a user connects and only if the VSO instance is AAD backed. If you pick the main login fields (the wrong ones) it will try to log in assuming the ID is an MSA, which will not work. This is a particularly confusing issue if you used the same email address for your MSA as for your Work Account; more on this in the troubleshooting section.

[Screenshot (mocked up): the VSO login page, with the work account login link at lower left]

  • On later connections only the work ID login will be shown
  • Once a user has logged in for the first time with the correct ID, the VSO admin should be able to see that the MSDN subscription is validated

Troubleshooting

We have seen a problem where, even though the user is in the domain and correctly added to VSO, their MSDN subscription will not register as active. These steps can help.

  • Make sure that in the https://msdn.microsoft.com portal you have actually linked your work ID. You still need to do this explicitly even if your MSA and Work ID use the same email address, e.g. user@domain.com. Using the same email address for both IDs can get confusing, so I would recommend setting up your MSA email addresses so they don’t clash with your work ID.
  • When you log in to VSO, MAKE SURE YOU USE THE WORK ID LOGIN LINK (LHS OF THE DIALOG, UNDER THE VSO LOGO) TO LOG IN WITH A WORK ID AND NOT THE MAIN LIVEID FIELDS. I can’t stress this enough, especially if you use the same email address for both the MSA and the work account.
  • If you still get issues with picking up the MSDN subscription
    • In VSO the admin should set the user to be a basic user
    • In https://msdn.microsoft.com the user should make sure they did not make any typos when linking the work account ID
    • The user should sign out of VSO and back in using their work ID, MAKING SURE THEY USE THE CORRECT WORK ID LOGIN DIALOG. They should see the features available to a basic user
    • The VSO admin should then change the role assignment in VSO to MSDN eligible, and it should flip over without a problem. There seems to be no need to log out and back in again.

Note that if you assign a new MSA to an MSDN subscription it can take a little while to propagate; if activation emails don’t arrive, pause a while and try again later. You can’t do any of this until you can log in to MSDN with your MSA.

SonarQube 5.2 released

At my session at DDDNorth I mentioned that some of the settings you needed to configure in SonarQube 5.1, such as DB connection strings for SonarRunner, would no longer be needed once 5.2 was released. Well, it was released today. The most important changes for us are:

  • Server handles all DB connections
  • LDAP support for user authentication

This should make the install process easier.

Optimising IaaS deployments in Azure Resource Templates

Unlike most of my recent posts, this one is light on code. Instead I want to talk about concepts, and about how you should look long and hard at your templates to optimise deployment.

In my previous articles I’ve talked about how nested deployments can help apply sensible structure to your deployments. I’ve also talked about things I’ve learned about what will successfully deploy and what will give errors. Nested deployments are still key, but the continuous cycle of improvements in Azure means I can update that guidance somewhat around what works well and what is likely to fail. Importantly, the change allows us to drastically improve our deployment time when we have lots of virtual machines.

I’d previously found that unless I nested the extensions for a VM within the JSON of the virtual machine itself, I got lots of random deployment errors. I am happy to report that this situation has improved. The result of that improvement is that we can now separate the extensions deployed to a virtual machine from the machine itself. That separates the configuration of the VM, which for complex environments almost certainly has a prescribed sequence, from the deployment of the VM, which almost certainly doesn’t.

To give you a concrete example, in the latest work at Black Marble we are deploying a multi-server environment (DC, ADFS, WAP, SQL, BizTalk, Service Bus and two IIS servers) where we deploy the VMs and configure them. With my original approach, hard fought to achieve a reliable deploy, each VM was pushed and fully configured in the necessary sequence, domain controller first.

With our new approach we can deploy all eight VMs in that environment simultaneously. We have moved our DSC and Custom Script extensions into separate resource templates and that has allowed some clever sequencing to drastically shorten the time to deploy the environment (currently around fifty minutes!).

We did this by carefully looking at what each step was doing and really focusing on the dependencies:

  • The domain controller VM created a new virtual machine. The DSC extension then installed domain services and certificate services and created the domain. The custom script then created some certificates.
  • The ADFS VM created a new virtual machine. The DSC extension then joined that server to the domain. The custom script then copied the certificate from the DC and configured ADFS.
  • The WAP VM created a new virtual machine. The DSC extension then joined that server to the domain. The custom script then copied the certificate from the DC and configured the proxy for the configured ADFS service.

Hopefully you can see what we saw: Each machine had three phases of configuration and the dependencies were different, giving us three separate sequences:

  1. The VM creations are completely independent. We could do those in parallel to save time.
  2. The DSC configuration for the DC has to be done first, to create the domain. However, the ADFS and WAP servers have DSC configurations that are independent of each other, so we could do those in parallel too.
  3. The custom script configurations have a definite sequence (DC – ADFS – WAP), and the DC script depends on the DC having run its DSC configuration first so we have our certificate services.

Once we’ve identified our work streams it’s a simple matter of declaring the dependencies in our JSON.
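To make that concrete, here is a minimal sketch of how the ADFS stream might declare its dependencies. The resource names (AdfsVm, AdfsDsc, DomainControllerDsc and so on) are hypothetical and the unrelated properties are trimmed; the pattern follows the nested deployment resources shown in my other posts. The DSC deployment waits for its own VM and for the DC’s DSC run (which creates the domain); the custom script deployment additionally waits for the DC’s script (which creates the certificates):

{
    "name": "AdfsDsc",
    "type": "Microsoft.Resources/deployments",
    "apiVersion": "2015-01-01",
    "dependsOn": [
        "AdfsVm",
        "DomainControllerDsc"
    ],
    "properties": {
        "mode": "Incremental",
        "templateLink": {
            "uri": "[concat(parameters('_artifactsLocation'), '/AdfsDsc.json', parameters('_artifactsLocationSasToken'))]",
            "contentVersion": "1.0.0.0"
        }
    }
},
{
    "name": "AdfsScript",
    "type": "Microsoft.Resources/deployments",
    "apiVersion": "2015-01-01",
    "dependsOn": [
        "AdfsVm",
        "AdfsDsc",
        "DomainControllerScript"
    ],
    "properties": {
        "mode": "Incremental",
        "templateLink": {
            "uri": "[concat(parameters('_artifactsLocation'), '/AdfsScript.json', parameters('_artifactsLocationSasToken'))]",
            "contentVersion": "1.0.0.0"
        }
    }
}

Note that AdfsScript lists AdfsVm and AdfsDsc explicitly even though the chain implies them, in keeping with the tip that follows.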

Top tip: It’s a good idea to list all the dependencies for each resource. Even though the Azure Resource Manager will infer the dependency chain when it parses the template, it’s much easier for humans to look at a full list in each resource to figure out what’s going on.

The end result of this tinkering? We cut our deployment time in half. The really cool bit is that adding more VMs doesn’t add much time to our deployment, as it’s the creation of the virtual machines that tends to take longest.

Convert new VM’s dynamic IP address to static with Azure Resource Templates

Over the past few posts on this blog I’ve been documenting the templates I have been working on for Black Marble. In a previous sequence I showed how you can use nested deployments to keep your templates simple and still push out complex environments. The problem with those examples is that they are very fixed in what they do. The templates create a number of virtual machines on a virtual network, with static IP addresses for each machine.

This works well for that deployment, where I have complete control. However, one of my aims is to create a series of templates for virtual machines that my developers can combine themselves to create environments that may be very different in makeup from my original. For example, what if the dev needs more servers? What if they only realise after pushing out four web servers that they need a domain controller? If I can’t guarantee the number or sequence of my servers, I can’t use static addresses on creation.

The answer to this problem is actually really simple and uses the same approach I described previously when reconfiguring a virtual network to alter the DNS address. I deploy a new virtual machine where the virtual nic for that machine requests a dynamic IP address. I then use a nested deployment to reconfigure that same nic, setting the allocation type to static and specifying the IP address it was just given as the intended address. This means I no longer care what order the Azure fabric creates the virtual machines in. That one key change over my previous template approach has halved the deployment time, as I can now create all machines in parallel (the bit that takes the most time) and then configure them in sequence as needed.

The markup to do this is very straightforward. First we create our nic:

{
    "name": "[concat(parameters('envPrefix'),parameters('vmName'),'nic')]",
    "type": "Microsoft.Network/networkInterfaces",
    "location": "[parameters('VirtualNetwork').Location]",
    "apiVersion": "2015-06-15",
    "dependsOn": [
    ],
    "tags": {
        "displayName": "DomainControllerNic"
    },
    "properties": {
        "ipConfigurations": [
            {
                "name": "ipconfig1",
                "properties": {
                    "privateIPAllocationMethod": "Dynamic",
                    "subnet": {
                        "id": "[concat(parameters('VirtualNetworkId'),'/subnets/',parameters('VirtualNetwork').Subnet1Name)]"
                    }
                }
            }
        ]
    }
}
You can see that I have set privateIPAllocationMethod to Dynamic.

Then we call a nested deployment from our template, passing the IP address of the nic as a parameter. That template will redefine the settings of the nic, so it’s important we pass in all the information we need: if I miss a setting, it will be removed from the nic, so care is needed here. Notice that I use the reference keyword to access the privateIPAddress property of the nic.

{
    "name": "SetStaticIP",
    "type": "Microsoft.Resources/deployments",
    "apiVersion": "2015-01-01",
    "dependsOn": [
        "[concat(parameters('envPrefix'),parameters('vmName'),'nic')]",
        "[concat(parameters('envPrefix'),parameters('vmName'))]",
        "Microsoft.Insights.VMDiagnosticsSettings"
    ],
    "properties": {
        "mode": "Incremental",
        "templateLink": {
            "uri": "[concat(parameters('_artifactsLocation'), '/SetStaticIP.json', parameters('_artifactsLocationSasToken'))]",
            "contentVersion": "1.0.0.0"
        },
        "parameters": {
            "VirtualNetwork": {
                "value": "[parameters('VirtualNetwork')]"
            },
            "VirtualNetworkId": {
                "value": "[parameters('VirtualNetworkId')]"
            },
            "nicName": {
                "value": "[concat(parameters('envPrefix'),parameters('vmName'),'nic')]"
            },
            "ipAddress": {
                "value": "[reference(concat(parameters('envPrefix'),parameters('vmName'),'nic')).ipConfigurations[0].properties.privateIPAddress]"
            }
        }
    }
}

Within the template called by my nested deployment object I use the incoming parameters to reconfigure the nic. I need to change the privateIPAllocationMethod setting to Static and pass in the IP address from my parameters.

{
  "name": "[parameters('nicName')]",
  "type": "Microsoft.Network/networkInterfaces",
  "location": "[parameters('VirtualNetwork').Location]",
  "apiVersion": "2015-05-01-preview",
  "dependsOn": [
  ],
  "tags": {
    "displayName": "DomainControllerNic"
  },
  "properties": {
    "ipConfigurations": [
      {
        "name": "ipconfig1",
        "properties": {
          "privateIPAllocationMethod": "Static",
          "privateIPAddress": "[parameters('ipAddress')]",
          "subnet": {
            "id": "[concat(parameters('VirtualNetworkId'),'/subnets/',parameters('VirtualNetwork').Subnet1Name)]"
          }
        }
      }
    ]
  }
}

Finally, in my virtual machine template I pass the IP address back up the chain as an output so I can use it in other templates if needed (for example, to reconfigure the vNet DNS property with the IP address of my domain controller).

"outputs": {
    "ipAddress": {
        "value": "[reference(concat(parameters('envPrefix'),parameters('vmName'),'nic')).ipConfigurations[0].properties.privateIPAddress]",
        "type": "string"
    }
}

Using References and Outputs in Azure Resource Templates

As you work more with Azure Resource Templates you will find that you need to pass information from one resource you have created into another. This is fine if you had the information to begin with, within your variables and parameters, but what if it’s something you cannot know before deployment, such as the dynamic IP address of your new VM, or the FQDN of the new public IP address for your service?

The answer is to use References to access properties of other resources within your template. However, if you need to get information between templates then you also need to look at outputs.

A crucial tool in this process is the Azure Resource Explorer (also now available within the Azure Portal – click Browse and look for Resource Explorer), because most often you will need to look at the JSON of your provisioned resource in order to find the specific property you seek.

In the JSON below I am passing the value of the current IP address of the NIC attached to a virtual machine into a nested template as a parameter.

"ipAddress": {
    "value": "[reference(concat(parameters('envPrefix'),parameters('vmName'),'nic')).ipConfigurations[0].properties.privateIPAddress]"
}

The markup looks complex but isn’t really. The concat bit is building the name of the resource, which I do based on parameters within the resource template. Basically, you specify reference in the same way as you would a variable or parameter: you provide the name of the resource you want to reference (the concat markup here, but it could just be ‘mynic’) and then the property you want, using dot notation to work your way down the object tree.

I’ve chosen the example above deliberately, because it covers all the bases you might hit:

  1. When you look at the JSON for the deployed resource you will see a properties section (just as you do in your template). You don’t need to include this in your reference (i.e. mynic.<the property I want>, not mynic.properties.<the property I want>).
  2. My nic can have multiple IP assignments – ipConfigurations is an array – so I am using [0] to look in the first item in that array.
  3. Within the ipConfiguration is another properties object. This time I need to include it in the markup.
  4. Within the properties of the ipConfiguration is an attribute called privateIPAddress, so I specify this.

It is important to remember that I can only use reference to access resources defined within my current template.

So what if I want to pass a value back out of my current template to the one that called it? That’s what the Outputs section of my template is for, and by and large everything in there will be a reference to a property of a resource the current template has deployed. In the code below I am passing the same IP address back out of my template:

"outputs": {
    "ipAddress": {
        "value": "[reference(concat(parameters('envPrefix'),parameters('vmName'),'nic')).ipConfigurations[0].properties.privateIPAddress]",
        "type": "string"
    }
}

Within my parent template I access that output by using the reference keyword again, this time referencing an output from the template resource. In the example below I am passing the IP address from my domain controller template into another nested deployment that will reconfigure my virtual network.

"parameters": {
    "VirtualNetwork": {
        "value": "[variables('VirtualNetwork')]"
    },
    "DNSaddress": {
        "value": "[reference('DomainController').outputs.ipAddress.value]"
    }
}
Note that this markup requires me to specify .value on the end of the reference to pass the information correctly.
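The reason .value is needed is the shape of what reference() returns for a deployment resource: each entry in its outputs collection is itself a small object carrying a type and a value. As a sketch (the address shown is purely illustrative), the relevant fragment of the deployment’s state looks something like this:

"outputs": {
    "ipAddress": {
        "type": "string",
        "value": "192.168.1.4"
    }
}

So reference('DomainController').outputs.ipAddress resolves to that object, and .value picks out the string itself.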

References and outputs are important because they allow you to pass information between resources and nested deployments. They allow you to keep your variable count low and understandable, and your templates small and well defined with nested deployments for complex environments.

Using Objects in Azure Resource Templates

Over the past few weeks I’ve been refactoring and improving the templates that I have been creating for Black Marble to deploy environments in Azure. This is the first post of a few talking about some of the more advanced stuff I’m now doing.

You will remember from my previous posts that within an Azure Resource Template you can define parameters and variables, then use those for the configuration values within your resources. I was finding after a while that the sheer number of parameters and variables made the templates hard to read and understand. This was particularly true when my colleagues started to work with these templates.

The solution I decided on was to collect individual parameters and variables into objects. These allow structures of information to be passed into and within a template. Importantly for me, this approach significantly reduces the number of items listed within the variables and parameters sections of my template, making them easier to read and understand.

Creating objects within the JSON is easy: you simply declare variables within a hierarchy in your JSON. This is similar to using arrays, but each property can be individually referenced. Below is a sample from the variables section of my current deployment template:

"VirtualNetwork": {
   "Name": "[concat(parameters('envPrefix'), 'network')]",
   "Location": "[parameters('envLocation')]",
   "Prefix": "192.168.0.0/16",
   "Subnet1Name": "Subnet-1",
   "Subnet1Prefix": "192.168.1.0/24"
},

When passing this into a nested deployment I can simply push the entire object via the parameters block of the nested deployment JSON:
"parameters": {
    "VirtualNetwork": {
        "value": "[variables('VirtualNetwork')]"
    },
    "StorageAccount": {
        "value": "[variables('StorageAccount')]"
    }
}
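The StorageAccount variable passed above is just another such object. As a purely illustrative sketch (the property names here are hypothetical, consumed only by my own templates), it might look like this:

"StorageAccount": {
    "Name": "[concat(parameters('envPrefix'), 'storage')]",
    "Location": "[parameters('envLocation')]",
    "Type": "Standard_LRS"
}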

Within the target template I declare the parameter to be of type Object:

"VirtualNetwork": {
  "type": "object",
  "metadata": {
    "description": "object containing virtual network params"
  }
}

Then to reference an individual property I specify it after the parameter itself using dot notation for the hierarchy of properties:

"subnets": [
  {
    "name": "[parameters('VirtualNetwork').Subnet1Name]",
    "properties": {
      "addressPrefix": "[parameters('VirtualNetwork').Subnet1Prefix]"
    }
  }
]
The end result is a much better structure for my templates, where I am passing blocks of related information around. It’s easier to read, understand and debug.

Useful links from The ART of Modern Azure Deployments

Within a few days of each other I spoke about Azure Resource Templates at both DDDNorth 2015 and Integration Mondays run by the Integration User Group. I’d like to thank all of you who attended both and have been very kind in your feedback afterwards.

As promised, this post contains the useful links from my final slide.

I’ve already written posts on much of the content covered in my talk. However, since I’m currently sat on a transatlantic flight, you can expect a series of posts to follow this one, on topics such as objects in templates, outputs and references.

If you missed my Integration Monday session, the organisers recorded it and you can watch it online.

Azure PowerShell 1.0 Preview
https://azure.microsoft.com/en-us/blog/azps-1-0-pre/

Azure QuickStart Templates
https://github.com/Azure/azure-quickstart-templates
http://azure.microsoft.com/en-us/documentation/templates/

ARM Template Documentation
https://msdn.microsoft.com/en-us/library/azure/dn835138.aspx

Azure Resource Explorer
https://resources.azure.com/

“Azure Resource Manager DevOps Jumpstart”
https://www.microsoftvirtualacademy.com/en-US/training-courses/azure-resource-manager-devops-jump-start-8413

My DDDNorth session on Technical Debt and SonarQube

Thanks to everyone who came to my session at DDDNorth on SonarQube; I hope you found it useful. You can find my slides on my GitHub repo: https://github.com/rfennell/Presentations

Patterns & Practices Architecture Track at Future Decoded

In case you had not noticed, our MD and Connected Systems MVP Robert Hogg has posted about the new Patterns & Practices Architecture Track he is hosting on Day One of the Microsoft Future Decoded event next month in London.

This track is additional to the now-full Future Decoded. If you are interested in attending, get in touch with enquiries@blackmarble.com, for the attention of Linda, and she can help you out with a special code (if there are still any left). This code will not only give you access to the excellent p&p track on Day One, but also to the Keynotes, so please select Day One when you register!

Wraps are off - patterns & practices Architecture Track at Future Decoded!

I am delighted to announce that I am hosting the Azure patterns&practices team in a dedicated invite-only track on Day One of Future Decoded!

I've been pushing to bring the patterns&practices team to the UK for a few years now, and it's excellent that it's finally happening at the Premier UK Microsoft event this November.

We have Christopher Bennage and Masashi Narumoto travelling in from the patterns&practices team in Corp especially for the event. The p&p team demonstrate how to bring together architecture best practices for the diverse Microsoft technologies into a unified and holistic solution. They are passionate about discovering, collecting and encouraging best practices, delighting in software craftsmanship, helping developers be successful on Microsoft platforms and bringing joy to engineering software!

I know what you’re thinking: Future Decoded is full, so how do I get to attend? Well, if you’re interested (and if you’re reading this blog it’s safe to say you are!), then get in touch with enquiries@blackmarble.com, for the attention of Linda, and she can help you out with a special code (if there are any left!). This code will not only give you access to the excellent p&p track on Day One, but also to the Keynotes, so please select Day One when you register!