An Introduction To Azure Resource Templates

I have spent a good deal of time over the last month or two building an Azure Resource Template to deploy a relatively complicated IaaS environment. In doing so I’ve hit a variety of problems along the way, and I thought that a number of blog posts were in order to share what I’ve learned. I will write detailed posts on specific servers within the environment shortly. This post will describe Azure Resource Template basics, the problems I hit and some of the decisions I made to overcome them. Further posts will detail my environment and specific solutions to creating my configuration.

Tooling

  • I started this project using Visual Studio 2013 and the Azure 2.5 .Net SDK. I am now using Visual Studio 2015 and the 2.7 SDK. The SDK is the key – the tooling has improved dramatically, although there are still things it doesn’t do that I would like it to (like proper error checking, for a start). You can find the SDKs on the Azure Downloads site.
  • You will also need the latest Azure PowerShell module. It’s important to keep both the SDK and the PowerShell module current. There is a big change coming in the PowerShell soon, which will remove the current need to switch between service management commands and resource management commands (see the snippet after this list).
  • Debugging templates is extremely hard. It’s impossible without using Azure Resource Explorer (https://resources.azure.com). This is a fantastic tool and you absolutely need to use it.
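To illustrate that mode switch, this is the cmdlet that is due to disappear when the module is unified:

    # Switch the module to the Azure Resource Manager cmdlets...
    Switch-AzureMode -Name AzureResourceManager

    # ...and back to the classic service management cmdlets when you need them
    Switch-AzureMode -Name AzureServiceManagement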

Documentation

  • The Azure Resource Template documentation is growing steadily and should be your first point of reference to see how things are done.
  • The Azure Quickstart Templates are a great source of inspiration and code if you are starting out. You need to be careful though – some of the samples I started with were clunky and a couple plain didn’t work. More importantly, they don’t necessarily reflect changes in the API. Adding resources should always be done through the tooling (more on that in a bit). If you just want to leap straight to the source code, it’s on GitHub.

Getting Started With Your Deployment Project

Creating a new deployment project is pretty straightforward. In the New Project dialog in Visual Studio you will find the Azure Resource Group project type under Cloud within Visual C#.

[Screenshot: new Azure Resource Group project]

When you create a new Azure Resource Group project, the tooling helpfully connects to Azure to offer you a bunch of starting templates. If you want something that’s on the list, simply choose it and your template will be created pre-populated. If you want to start clean, as I normally do, choose Blank Template from the bottom of the list.

[Screenshot: new project template selection]

The new project contains a small number of files. My advice is to ignore the Deploy-AzureResourceGroup.ps1 script. It contains some useful snippets, but only works if you run it in a very specific way. The ones you care about are the DeploymentTemplate.json and DeploymentTemplate.param.dev.json files.

[Screenshot: Solution Explorer]

The DeploymentTemplate.json is (oddly) your template file, where you declare your resources, parameters and variables. The .param.dev.json file is a companion parameter file for the template, used when you want to run the deployment (more on that later).
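To give you an idea of the shape of the parameter file, a minimal example follows – the parameter names here match a couple from the deployment script later in this post, but yours will be whatever your template declares:

    {
      "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
      "contentVersion": "1.0.0.0",
      "parameters": {
        "envPrefix": {
          "value": "myenv"
        },
        "adminUsername": {
          "value": "env-admin"
        }
      }
    }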

If you open the deployment template you will see the new JSON Outline window appear.

[Screenshot: JSON Outline window]

I’ll come onto the contents of the template in a moment. For now let’s focus on the JSON Outline. It’s your friend for moving around your template and for adding and removing resources. To add a new resource, click the little package icon with a plus on it, at the top left of the window.

[Screenshot: new resource dialog]

When you click the icon, the tooling talks to Azure to get the latest versions of resources. The tooling here is intelligent. In the screenshot above you can see I’m adding a Virtual Machine. As a resource, this depends on things like a storage account (to hold the hard drive blobs) and a network. If you already have these defined in your template, they will be listed in the dropdowns. If not, or if you don’t want to use them, you can add new resources and the tooling will step you through answering the questions necessary to specify your resources.

The image below shows the JSON outline after I’ve added a new VM, plus the required storage and network resources. You can see that the tooling has added parameters and variables into the template as well.

[Screenshot: JSON Outline after adding a VM, storage and network]

You can build your template using only the tooling if you like. However, if you want to do something complex or clever you’re going to be hacking this around by hand.

A Few Template Fundamentals

There are a few key points that you need to know about templates and the resources they contain:

  • There is a one-to-one relationship between a template and a deployment. If you look at a Resource Group in the Azure Portal you will see Last Deployment listed in the Essentials panel at the top of the blade.
    [Screenshot: resource group blade Essentials panel]
    Clicking the link will show the deployments themselves. The history of deployments is kept for a resource group and each deployment can be inspected to see what parameters were specified and what was done.
    [Screenshot: deployment history]
    [Screenshot: deployment details]
  • A resource in a template can be specified as being dependent on another resource in the same template. I have tried external dependencies – the templates fail. This is important because you have no control over the execution order of a template other than through dependencies. If you don’t specify any, Azure Resource Manager will try to deploy all the resources in parallel. This is actually a good thing – in the old world of Azure PowerShell it was hard to push out multiple resources in parallel. When you upload a template for deployment, Azure Resource Manager will parse it and work out the deployment order based on the dependencies you prescribe. This means that most deployments will be quicker in the new model. There is a minimal dependsOn example after this list.
  • Resources in a template must have unique names. You can only have one resource of a given type with a given name. This is important and has implications for how you achieve certain things.
  • You can nest deployments. What does that mean? You can call a template from another template, passing in parameters. This is really useful. It’s important to remember that template-deployment relationship. If you do nest these things, you’ll see multiple deployments in your Resource Group blade – one per template.
  • If a resource already exists then you can reconfigure it through your template. I’ve not tried this on anything other than a virtual network, but templates define the desired configuration and Azure Resource Manager will try to set it, even if that means changing what’s there already. That’s actually really useful. It means that we can use a nested deployment in our template to reconfigure something part way through our overall deployment.
  • Everything is case-sensitive. This one just keeps on biting me, because I’m a crap typist and the tooling isn’t great at telling me I’ve mistyped something. There’s no IntelliSense in templates yet.
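As promised, a minimal sketch of a resource declaring its dependencies – the resource and variable names are illustrative and the properties are elided:

    {
      "type": "Microsoft.Compute/virtualMachines",
      "name": "[variables('vmName')]",
      "apiVersion": "2015-05-01-preview",
      "location": "[parameters('resourceLocation')]",
      "dependsOn": [
        "[concat('Microsoft.Storage/storageAccounts/', variables('storageAccountName'))]",
        "[concat('Microsoft.Network/networkInterfaces/', variables('nicName'))]"
      ],
      "properties": {
      }
    }

Azure Resource Manager will not start deploying this VM until the named storage account and NIC have been created.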

Deploying Your Template to Azure

Right now, deploying your template means using PowerShell to execute the deployment. The New-AzureResourceGroup cmdlet will create a new Resource Group in your subscription. You tell it the name and location of the resource group, the deployment template you want to use, and the values for the template parameters. That last bit can be done in three different ways – take your pick:

  • Using the –TemplateParameterFile switch allows you to specify a JSON-format parameters file that provides the required values.
  • PowerShell allows you to specify the parameters as options on the command. For example, if I have a parameter of AdminUsername in my template I can add the –AdminUsername switch to the command and set my value.
  • You can create a hashtable of the parameters and their values and splat it into the command. Go read up on PowerShell splatting to find out more about this – there’s a quick sketch after this list.
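For completeness, a splatted call might look something like this; it uses a subset of the parameters from my script below, and the secure password and artifacts parameters would ride along in the same hashtable:

    # Gather the cmdlet parameters and the template parameters in one hashtable...
    $deployParams = @{
        Name          = "tuservtesting1"
        Location      = "North Europe"
        TemplateFile  = "$pwd\DeploymentTemplate.json"
        envPrefix     = "myenv"
        adminUsername = "env-admin"
    }

    # ...then splat it onto the cmdlet with @ rather than $
    New-AzureResourceGroup @deployParams -Force -Verbose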

Being old, my preference is to use the second option – it means I don’t need to keep updating a parameters file and I can read the command I’m executing more easily. PowerShell ninjas would doubtless prefer choice number three!

The code below shows how I deploy my resource template:

$ResourceGroupName = "tuservtesting1"
$ResourceGroupLocation = "North Europe"
$TemplateFile = "$pwd\DeploymentTemplate.json"
$envPrefix = "myenv"
$adminUsername = "env-admin"
$adminPassword = "MyPassword"
$adminPassword = ConvertTo-SecureString $adminPassword -AsPlainText -Force
$resourceLocation = "North Europe"
$storageAccountType = "Standard_LRS"
$artifactsLocation = "http://mystorage.blob.core.windows.net/templates"
# SaS token granting read access to the artifacts container
$artifactsLocationSasToken = "<SaS token for the templates container>"

# create a new resource group and deploy our template to it, with our params
New-AzureResourceGroup -Name $ResourceGroupName `
                       -Location $ResourceGroupLocation `
                       -TemplateFile $TemplateFile `
                       -storageAccountType $storageAccountType `
                       -resourceLocation $resourceLocation `
                       -adminUsername $adminUsername `
                       -adminPassword $adminPassword `
                       -envPrefix $envPrefix `
                       -_artifactsLocation $artifactsLocation `
                       -_artifactsLocationSasToken $artifactsLocationSasToken `
                       -Force -Verbose

I like this approach because I can create scripts that our TFS Build and Release Management systems can use to deploy my environments automatically.

Stuff I’ve Found Out The Hard Way

The environment I’m deploying is complex. It has multiple virtual machines on a shared network. Some of those machines have public IP addresses; most don’t. I need a domain controller, an ADFS server and a Web Application Proxy (WAP) server, each of which depends on the others, and I need to get files between them. My original template ran to many hundreds of lines, with nearly a hundred variables and half a dozen parameters. It took over an hour to deploy (when it deployed at all) and testing was a nightmare. As a result, I’ve refined my approach to improve readability, testability and deployability:

  • Virtual machine extension resources seem to deploy more reliably if they are within the Virtual Machine markup. No, I don’t know why. You can specify VM extensions at the same level in the template as your Virtual Machines themselves. However, you can choose to declare them in the resources section of the VM itself. My experience is that the latter reliably deploys the VM and extensions. Before I did this I would get random deployment failures of the extensions.
  • Moving the VMs into nested deployments helps readability, testability and reliability. Again, I don’t know why, but my experience is that very large templates suffer random deployment failures. Pulling each VM and its linked resources out into its own nested template has completely eliminated those random failures. I now have a ‘master template’ which creates the core resources (storage account and virtual network in my case) and then nested templates for each VM that contain the VM, the NIC, the VM extensions and, if the machine is exposed to the outside world, a load balancer and public IP. There is a minimal sketch of the pattern after this list.
    There are pros and cons to this approach. Reliability is a huge pro, with readability a close second – there are far fewer resources and variables to parse. I can also work on a single VM at a time, removing the VM from the resource group and re-running the deployment for just that machine – that’s saved me so much time! On the con side, I can’t make resources in one nested deployment depend on those in another. That means I end up deploying my VMs much more in sequence than I otherwise would, because I can only have one nested deployment depend on another. I can’t get clever and deploy the VMs in parallel but have individual extensions depend on each other to ensure my configuration works. The other con is that I have many more files to upload to Azure storage so the deployment can access them – the PowerShell won’t bundle up all the files that are part of a deployment and push them up as a package.
  • Even if you find something useful in a quickstart template, add the resources cleanly through the tooling and then modify. The API moves forwards and a good chunk of the code in the templates is out of date.
  • The JSON tooling doesn’t do much error checking. Copy and paste is your friend to make sure things like variable names match.
  • The only way to test this stuff is to deploy it. When the template is uploaded to Azure, Resource Manager parses it for validity before executing the deployment. That’s the only reliable way to check the validity of the template.
  • The only way to see what’s happening with any detail is to use Azure Resource Explorer. With a VM, for example, you can see an InstanceView that shows the current output from the deployment and extensions. I’ll talk more about this when I start documenting each of the VMs in my environment and how I got them working.
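To illustrate the master/nested pattern mentioned above, the master template calls each VM template with a Microsoft.Resources/deployments resource along these lines – the deployment names and template file name are illustrative, not the real ones from my environment:

    {
      "type": "Microsoft.Resources/deployments",
      "name": "adfsVmDeployment",
      "apiVersion": "2015-01-01",
      "dependsOn": [
        "Microsoft.Resources/deployments/dcVmDeployment"
      ],
      "properties": {
        "mode": "Incremental",
        "templateLink": {
          "uri": "[concat(parameters('_artifactsLocation'), '/AdfsVm.json', parameters('_artifactsLocationSasToken'))]",
          "contentVersion": "1.0.0.0"
        },
        "parameters": {
          "adminUsername": { "value": "[parameters('adminUsername')]" },
          "adminPassword": { "value": "[parameters('adminPassword')]" }
        }
      }
    }

Each nested template declares its own parameters and the master template passes its values down, which is why all the nested template files need to be reachable in Azure storage.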

Using the customScriptExtension in Azure Resource Templates

Documentation for using the customScriptExtension for Virtual Machines in Azure through Resource Templates is pretty much non-existent at time of writing, and the articles on using it through PowerShell are just plain wrong when it comes to templates. This post is accurate at time of writing and will show you how to deploy PowerShell scripts and resources to an Azure Virtual Machine through a Resource Template.

The code snippet below shows a customScriptExtension pulled from one of my templates.

{
  "type": "Microsoft.Compute/virtualMachines/extensions",
  "name": "[concat(variables('vmADFSName'),'/adfsScript')]",
  "apiVersion": "2015-05-01-preview",
  "location": "[parameters('resourceLocation')]",
  "dependsOn": [
    "[concat('Microsoft.Compute/virtualMachines/', variables('vmADFSName'))]",
    "[concat('Microsoft.Compute/virtualMachines/', variables('vmADFSName'),'/extensions/ADFSserver')]"
  ],
  "properties": {
    "publisher": "Microsoft.Compute",
    "type": "CustomScriptExtension",
    "typeHandlerVersion": "1.4",
    "settings": {
      "fileUris": [
        "[concat(parameters('_artifactsLocation'),'/AdfsServer.ps1', parameters('_artifactsLocationSasToken'))]",
        "[concat(parameters('_artifactsLocation'),'/PSPKI.zip', parameters('_artifactsLocationSasToken'))]",
        "[concat(parameters('_artifactsLocation'),'/tuServDeployFunctions.ps1', parameters('_artifactsLocationSasToken'))]"
      ],
      "commandToExecute": "[concat('powershell.exe -file AdfsServer.ps1',' -vmAdminUsername ',parameters('adminUsername'),' -vmAdminPassword ',parameters('adminPassword'))]"
    }
  }
}

The most important part is the commandToExecute. The documentation tells you to simply list the PowerShell script (something.ps1) you want to run. This won’t work at all! All the extension does is shell out and execute whatever you put in commandToExecute. The default file association for .ps1 is Notepad, so all that will do is run up an instance of our favourite text editor as the system account, where you can’t see it.

The solution is to build a command line for powershell.exe, as you can see in my example. I am launching powershell.exe and telling it to load the AdfsServer.ps1 script file. I then specify parameters for the script within the command line. There is no option to pass parameters in through the extension itself.

The fileUris setting lists the resources I want to push into the VM. This must include the script you want to run, along with any other supporting files/modules etc. The markup in my example loads the files from an Azure storage account. I specify the base URL in the _artifactsLocation parameter and pass in a SaS token for the storage in the _artifactsLocationSasToken parameter. You could just put a URL to a world-readable location in there and drop the access token parameter.
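As an aside, one way to generate that SaS token with the Azure storage cmdlets looks something like this – the account and container names match my example deployment script, and the account key is yours to supply:

    # Context for the storage account that holds the template artifacts
    # (grab the account key from the portal or the storage key cmdlets)
    $storageKey = "<storage account key>"
    $context = New-AzureStorageContext -StorageAccountName "mystorage" -StorageAccountKey $storageKey

    # Read-only SaS token for the templates container, valid for a few hours
    $ArtifactsLocationSasToken = New-AzureStorageContainerSASToken -Name "templates" `
                                     -Permission r `
                                     -ExpiryTime (Get-Date).AddHours(4) `
                                     -Context $context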

The dependsOn setting allows us to tell the extension to wait until other items in the resource template have been deployed. In this case I push two other extensions into the VM first.

Be aware that the process executed by the extension runs as the system account. I found very quickly that if I wanted to do anything useful with my PowerShell, I needed to use Invoke-Command within my script. To do that I need the admin credentials, which you can see I pass into the command line as parameters.
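To show what I mean, the top of a script like AdfsServer.ps1 ends up looking something like the sketch below. It assumes PowerShell remoting is available on the VM, and the real configuration work is elided:

    # Parameters supplied on the commandToExecute line by the extension
    param (
        [string]$vmAdminUsername,
        [string]$vmAdminPassword
    )

    # Build a credential from the values passed in
    # (the username may need a domain or machine prefix, depending on your setup)
    $securePassword = ConvertTo-SecureString $vmAdminPassword -AsPlainText -Force
    $credential = New-Object System.Management.Automation.PSCredential ($vmAdminUsername, $securePassword)

    # Run the real work as the admin account rather than as SYSTEM
    Invoke-Command -ComputerName localhost -Credential $credential -ScriptBlock {
        # ...configuration work goes here...
    }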

Want to know what’s going on? Come to the Black Marble Tech Update

It’s January, which can only mean one thing. It’s time for the annual Black Marble Tech Update. If you’re an IT or development manager, come along to hear the stuff you need to know about Microsoft’s releases and updates last year and what we know so far about what is coming this year.

Tech Updates are hard work to prepare for, but they’re quite exhilarating to present. In the morning, Andy, Andrew and I will run through the key moves and changes in the Microsoft ecosystem to help IT managers with their strategic planning: what’s coming out of support, what’s got a new release due; in short, what you need to pay attention to for your organisation’s IT.

In the afternoon Robert, Richard and Steve will be covering what’s important to know in the Microsoft developer landscape and this year they are joined by Martin Beeby from the Microsoft DX team.

You can find the Tech Update on our website as two events (IT Managers in the morning, Developers in the afternoon). Feel free to register for the one that most interests you. Most people stay for the whole day and leave with their heads buzzing.

Why you should attend a Microsoft IT Camp

I’ve posted before about helping out at the IT Camps run by the Microsoft DX team. I’m a fervent supporter of them – they are hands-on days of technical content run by great people who know their stuff and, importantly, they are run around the country.

Next week will see me in Manchester with Ed Baker for the latest instalment. The current series of camps runs over two days, with the first built around Mobile Device Management and the second around extending your datacentre into Azure. I’ll be there for day two to be Ed’s wingman as we take attendees through hands-on labs around virtual machines and talk about virtual networks, Azure Active Directory and pretty much anything else Azure-related we get asked about.

Ed and Andrew really enjoy running the camps – they are a great way to talk to you, the customer, about how you use Microsoft’s services. I’m not backwards in coming forward with feedback about what works and what doesn’t in the MS stack and you shouldn’t be either. Unlike regular events, these are technical days for technical people by technical people and that’s what makes them so rewarding to attend and so satisfying to run. Those of us that are involved want the camps to continue and that means people like you need to turn up, enjoy the day and give feedback.

Go read more about them, get registered and come along. And you don’t even have to travel south to do it!

See you there.

Speaking at SQLBits 2015

We’re only half way through January and this year is already busy. I am really excited to be speaking at SQLBits this year!

SQLBits is a conference that various members of the Black Marble team attend regularly and rave about. It’s the event if you want to gain knowledge and insight into all aspects of databases and data management, reporting, BI and more. Microsoft are a platinum sponsor of the event this year and a whole heap of big names are flying in to present.

Why am I, an Azure MVP, speaking at SQLBits? I’ll be talking about virtual networks and virtual machines in Azure. Whilst not directly DB-related, it’s an important topic these days when designing the architecture for your application. Yes, there is the Azure SQL service, but that’s not always the right choice. Sometimes you want to run your DB on a VM. Sometimes you might even want to access data held securely within your organisation, even if your application is in the cloud. If you’re a DB person, knowing what options are available to you when designing your solution is important.

The Azure community and the SQL community shouldn’t be separate. Cloud is all pervading, but is not a universal fix all. Hopefully sessions like mine can make connections between the two groups for the benefit of all.

Did I mention how big SQLBits is? Go take a look at the agenda, packed full of great content. Then run, don’t walk, to the registration page.

See you there!

Get informed with TechDays Online 2015

Block out your diary from February 3rd until February 5th. The great guys at Microsoft DX are running another TechDays Online event and it’s absolutely worth your time. I had the absolute pleasure to be involved last year and will again this year, both in front of and behind the camera.

For those who don’t know about the event, TechDays Online is three days (and one evening, this year) of technical content delivered by MVPs and Microsoft evangelists across a broad range of topics. Whilst you watch the sessions, streaming live through the power of the internet, you can ask questions in the chat channel. Some of those questions may find their way to the speaker during the session, but all will be picked up by a team of experts backstage, fuelled by caffeine and sugar.

Each day covers a different topic area and is led by one of the DX team. Tuesday is led by Ed Baker and is built around devices and managing a mobile-first world. I’ll be talking about Azure Active Directory, Robert is doing a session on the Internet of Things and we are joined by a great line-up of MVPs. Tuesday evening is An evening with Office 365; Wednesday sees Andrew Fryer curating The Journey to the cloud-first world; finally, on Thursday, Martin Beeby is in the chair for Multi-device cross-platform development. Richard is involved in that final day.

Content on Tuesday is more relevant for IT pros, Wednesday is a mix of IT Pro and developer content and Thursday is aimed at developers. Having said that, I learned something useful across all the days of the last event and I’d urge you to tune in for all three. Hook a PC up to a screen in the office and stream it in the background for everyone if you can’t dedicate time – you won’t regret it!

In addition to presenting sessions, I will be on the chat stream across all three days, chipping in where I can.

Oh, and did I mention that Mary Jo Foley will be there too?

There’s still plenty of time to register at http://aka.ms/techdays2015 so what are you waiting for?

Speaking on DevOps at Future Decoded

I am now going to be speaking on the DevOps track at Future Decoded. I’ll be channelling Richard to talk about how our dev-release pipeline is constructed at Black Marble and how the various Microsoft tools that we use could be swapped out for alternatives in a heterogeneous environment.

Whilst this isn’t an area that I usually speak on, it’s something that I am very involved in, as Richard and I constantly look to improve our internal practices around development, test and deployment. Big thanks to Susan Smith for inviting me to participate.

If you haven’t come across Future Decoded yet, take a look. It spans a number of days, with the final day being the multi-track technical conference that I will be speaking at, along with such luminaries as Jonathan Noble and the usual DX suspects like Andrew Fryer and Ed Baker.

Hope to see you there!

A week with the Surface Pro 3

Robert unexpectedly (gotta love him!) gave me a surprise present in the form of a Microsoft Surface Pro 3. I’ve now been using it for a week and I thought it was time to put my thoughts into words.

You’ll pry it out of my cold, dead hands

Overall, this is a fantastic bit of kit and it’s the device I have used most at home, for meetings and even sometimes at my desk. The only reason it hasn’t replaced my stalwart ThinkPad X220T is that it has neither the memory nor the storage to run the virtual machines I still need. It’s light, comfortable to hold, has great battery life and the screen is gorgeous.

Specs – good enough?

The model I have is the Core i5 with 8GB of RAM and a 256GB SSD. It’s quick. It also has ample storage for my needs once I remove VMs from the equation. It’s true – Visual Studio hasn’t been installed yet, but I know from conversations with Robert that I am not space-poor.

It’s quick to boot up – quick enough that I rarely bother with connected standby and usually shut down fully. It has handled all of my Office-centric tasks without pause, from Word through PowerPoint to the ever-present OneNote. The screen is a pin-sharp 2160×1440 which is easy to read when typing (although there are a few apps that appear a little blurry from the display scaling), although as with many other devices, the glossy glass screen can suffer from reflections in very bright sunlight.

I’m also very happy with the keyboard. I’m typing this post in my front room, sat on the sofa with the Pro on my lap. The revised ‘any-position’ kickstand makes it much more comfortable than the Surface and Surface Pro – neither of which I would have endured this process with. The new ‘double fold’ design of the type cover makes it sit at a better angle than its predecessors. Yes, it still flexes on a single hinge when on my lap, but it does feel more stable than before.

The track pad is also much improved. I now have a collection of covers – touch, type and power, along with the new type cover. The power cover is great for battery life but its track pad was an abomination. This one is just fine – it feels good to touch, with enough resistance to the surface texture, and the buttons have a responsive click to them.

Shape and size

The first thing you notice about the Pro 3 is the size of it. It’s no thicker than my original RT and half the thickness of the original Pro. It’s also a different shape, and I think it’s that which makes all the difference. No longer 16:9, the device is very comfortable to use in portrait mode – much better than the Pro, although I tended to use that in portrait too. When you aren’t wanting to type, you naturally stand it on the short edge. Microsoft obviously expects that – the Windows button is on the right hand edge as you look at the tablet when using the type cover.

It’s also really light. Much lighter than the Pro, and it even feels lighter than the RT. I suspect the thickness of the glass helps a great deal, but it’s pretty impressive when you think that they’ve packed the same power as the Pro in to half the weight, half the thickness and managed to increase the battery life at the same time.

Battery Life

I’ve not run exhaustive battery tests, but I can report that I have charged the Surface about three times during the week. It lasts all night when reading web pages, using the Twitter app and other Windows Store applications; it quite happily ran through a four-hour meeting with a copy of Word open (try doing that on a generation 1 Pro), so thus far I’m impressed. I haven’t yet tried to last a full working day on a charge, though.

The Stylus

I was concerned when Microsoft switched from the Wacom technology used by the older Surface Pro to the new N-trig active pen. I have been very pleasantly surprised, however. The inking experience is wonderful. The pen has a very soft feel to it – very unlike the Dell Venue 8 Pro and better even than the Wacom. I do miss being able to erase by flipping the pen, but having used the two-button Dell pen for six months now, the switch wasn’t an issue. The accuracy of writing is great. Supposedly the distance between the LCD display and the surface of the glass has been reduced, and I must say that the feel of writing is good – the lines I draw feel closer to the pen tip than on the Dell, certainly.

My one little niggle

I only have one problem, and to be fair it’s pretty minor. One of the things I use the original Pro for is pulling photos off the SD card from my Canon EOS 450D. The new Pro, with its better screen, would be great for that task. Except I can’t, because the full-size SD card slot present on the Pro has gone, replaced by a MicroSD slot in the same place as on the RT. It makes sense for space, but it’s a bit of a pain. Time to try using a MicroSD card with an adapter in my camera, I guess – I don’t really want to carry a USB adapter.

You’d think that I’d miss a physical ethernet port (I don’t – I can use a USB one if I need to) or bemoan the single USB 3 port (if I’m stuck, my USB 3 ethernet dongle is also a hub, and how often do I need more than one USB device when this thing has a keyboard and track pad?), but the SD card slot is the only thing I’ve wished had been present.

A panoply of devices

I’ll admit to being a device fiend. I now have an original Surface RT, a generation one Surface Pro and a Dell Venue 8 Pro. Of those, the RT has been used rarely since I got the Dell, although the Pro was something I would turn to regularly at home to work on, being larger than the Dell and lighter than the X220T (although with the Power Cover on, we could debate that).

Since I got the Pro 3, I haven’t touched anything else. As I said, I still use the X220T, because I have no alternative. Yes, I could run VMs in Azure or on our Hyper-V server, but neither works without an internet connection, and it’s quick and easy to roll forwards and backwards between checkpoints when the VMs are on your own machine.

The fact that I haven’t touched the Dell is perhaps the saddest part of this. I find myself reaching for the Pro 3 every time. I am still using OneNote rather than typing or using paper, but the Pro 3 is nicer to write on than the Dell. Whether I will still use the Dell for customer meetings, where its size means I can leave my usual rucksack of equipment behind, I have yet to find out, but it’s a telling change.

Dig a little deeper – enterprise ready?

Pretty much the first thing I did with the new device was wipe it clean. We have a Windows 8.1 image at Black Marble that we usually push out with SCCM. I grabbed that image, downloaded the Surface Pro driver pack from Microsoft and used dism to install the drivers into the image. I then deployed that image onto the Pro via USB.
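For anyone wanting to do the same, the dism steps are roughly these – the paths are examples, not the ones from our image store:

    # Mount the install.wim from our Windows 8.1 image
    dism /Mount-Wim /WimFile:D:\Images\install.wim /Index:1 /MountDir:C:\Mount

    # Inject the Surface Pro 3 driver pack into the mounted image
    dism /Image:C:\Mount /Add-Driver /Driver:C:\Drivers\SurfacePro3 /Recurse

    # Commit the changes and unmount
    dism /Unmount-Wim /MountDir:C:\Mount /Commit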

Installation was completely painless, even including the automated installation of some firmware updates that were included in the driver pack. All devices were detected just fine and the end result is a Surface Pro 3 with our Enterprise image, domain-joined and hooked up to our DirectAccess service so I can work anywhere.

I have installed Office, but I will admit to not having used Outlook on this yet. Much of my usage has been in tablet mode and I prefer the Windows 8.1 Mail app over Outlook without the keyboard and trackpad. Office 2013 is not yet fully touch-friendly, whatever they try to tell you.

You know what would make it perfect?

You can see this coming, can’t you? Sure, I could get more storage and horsepower with the top-of-the-line model, but there is no point. The only reason I would need those is if I could have my one wish – 16GB of RAM.

It’s a terrible thing – no Ultrabooks come with 16GB of RAM. I don’t need a workstation replacement (like the W530s our consultants use) as I don’t run the number or size of VMs they do. But I, like Richard, do run VMs for demos and for working on projects. 8GB doesn’t cut it; 16GB would be fine. I firmly believe that there is a market for a 16GB Ultrabook. Or a 16GB Pro 3. In all honesty, I think I’d be happy with this as my one device if I could solve the RAM problem. I think that says it all, really.