When software attacks!

Thoughts and musings on anything that comes to mind

Speaking at CloudBurst in September

I’ve never been to Sweden, so I’m really looking forward to September, when I’ll be speaking at CloudBurst. Organised by the Swedish Azure User Group (SWAG – love it!), the conference is also streamed and recorded, and the sessions will be available on Channel 9. The list of speakers and topics promises some high-quality and interesting sessions, and I urge you to attend if you can, or tune in to the live stream if you can’t.

I’ll be spending an hour telling you about Azure Resource Templates: What they are, why you should use them, and I’ll show the work I’ve been doing as an example of a complex deployment.

Complex Azure Template Odyssey Part One: The Environment

Over the past month or two I’ve been creating an Azure Resource Template to deploy an environment which we’d previously deployed using old-style PowerShell scripts. In theory, the Resource Template approach would make the deployment quicker, easier to trigger from tooling like Release Manager, and make the code easier to read.

The aim is to deploy a number of servers that will host an application we are developing. This will allow us to easily provision test or demo environments into Azure, making as much use of automation as possible. The application itself has a set of system requirements that means I have a good number of tasks to work through:

  1. We need our servers to be domain joined so we can manage security, service accounts etc.
  2. The application uses ADFS for authentication. You don’t just expose ADFS to the internet, so that means we need a Web Application Proxy (WAP) server too.
  3. ADFS, WAP and our application need to use secure connections. We want to be able to deploy lots of these, so things like hostnames and FQDNs for services need to be flexible. That means using our own Certificate Services which we need to deploy.
  4. We need a SQL server for our application’s data. We’ll need some additional drives on this to store data and we need to make sure our service accounts have appropriate access.
  5. Our application is hosted in IIS, so we need a web server as well.
  6. Only servers that host internet-accessible services will get public IP addresses.

We already had scripts to do this the old way. I planned to reuse some of that code, and follow the decisions we made around the environment:

  • All VMs would use standard naming, with an environment-specific prefix. The same prefix would be used for other resources. For example, a prefix of env1 means the storage account is env1storage, the network is env1vnet, the Domain Controller VM is env1dc, etc. The AD domain we created would use the prefix in its name (so env1.local). There’s a sketch of how this looks in template variables after this list.
  • All public IPs would use the Azure-assigned DNS name for our services – no corporate DNS. The prefix would be used in conjunction with the role when specifying the name for the cloud service.
  • DSC would be used wherever possible. After that, custom PowerShell scripts would be used. The aim was to configure each machine individually and not use remote PowerShell between servers unless absolutely necessary.
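
To make that naming convention concrete, here’s a minimal sketch of how the prefix might drive names in template variables (illustrative, not lifted from our real template):

    "parameters": {
      "envPrefix": { "type": "string" }
    },
    "variables": {
      "storageAccountName": "[concat(parameters('envPrefix'), 'storage')]",
      "vnetName": "[concat(parameters('envPrefix'), 'vnet')]",
      "vmDCName": "[concat(parameters('envPrefix'), 'dc')]",
      "domainName": "[concat(parameters('envPrefix'), '.local')]"
    }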

We’d also hit a few problems when creating the old approach, so I hoped to reuse the same solutions:

  • There is very little PowerShell to manage certificate services and certificates. There is an incredibly useful set of modules known as PSPKI which we utilise to create certificate templates and cert requests. This would need to be used in conjunction with our own custom scripts, so it had to be deployed to the VMs somehow.

Azure Resources In The Deployment

Things have actually moved on in terms of the servers I am now deploying (they’ve only got more complex!), but it’s easier to detail the environment as originally planned and successfully deployed.

  • Storage Account. Needed for the hard drives of the multiple virtual machines.
  • Virtual Network. A single subnet for all VMs.
  • Domain Controller
    • NetworkInterface. Will attach to the virtual network.
    • Virtual Machine. Requires an additional virtual hard disk to store domain databases.
      • Diagnostics Extension. Will enable the diagnostics on the Azure VM so we can see things like CPU and disk access through the Azure Portal.
      • DSC Extension. Will do as much configuration as we can, declaratively. It will add the ADDS and ADCS roles and create the domain.
      • CustomScriptExtension. Will deploy PowerShell scripts to perform additional configuration. It will create certificate templates and generate certs for services.
  • ADFS Server
    • NetworkInterface. Will attach to the virtual network.
    • Virtual Machine.
      • Diagnostics Extension. Will enable the diagnostics on the Azure VM so we can see things like CPU and disk access through the Azure Portal.
      • DSC Extension. Add the ADFS role and domain-join the VM.
      • CustomScriptExtension. Will configure ADFS – copying the cert from the DC and creating the federation service.
  • WAP Server
    • NetworkInterface. Will attach to the virtual network.
    • Public IP Address. Will publish the WAP service to the internet.
    • Load Balancer. Will connect the server to the public IP address, allowing us to add other servers to the service later if needed.
    • Virtual Machine.
      • Diagnostics Extension. Will enable the diagnostics on the Azure VM so we can see things like CPU and disk access through the Azure Portal.
      • DSC Extension. Will do as much configuration as we can, declaratively.
      • CustomScriptExtension. Copy the cert from the DC and configure WAP to publish the federation service hosted on the ADFS server.
  • SQL Server
    • NetworkInterface. Will attach to the virtual network.
    • Virtual Machine. Requires two additional virtual hard disks to store DBs and logs.
      • Diagnostics Extension. Will enable the diagnostics on the Azure VM so we can see things like CPU and disk access through the Azure Portal.
      • DSC Extension. It turns out the DSC for SQL is pretty good. We can do lots of configuration with it, to the extent of not needing the custom script extension.
  • Web Server
    • NetworkInterface. Will attach to the virtual network.
    • Virtual Machine.
      • Diagnostics Extension. Will enable the diagnostics on the Azure VM so we can see things like CPU and disk access through the Azure Portal.
      • DSC Extension. Will do as much configuration as we can, declaratively.
      • CustomScriptExtension. Will deploy PowerShell scripts to perform additional configuration.
  • WAP Server 2
    • NetworkInterface. Will attach to the virtual network.
    • Public IP Address. Will publish the web server-hosted services to the internet.
    • Load Balancer. Will connect the server to the public IP address, allowing us to add other servers to the service later if needed.
    • Virtual Machine.
      • Diagnostics Extension. Will enable the diagnostics on the Azure VM so we can see things like CPU and disk access through the Azure Portal.
      • DSC Extension. Will do as much configuration as we can, declaratively.
      • CustomScriptExtension. Will deploy PowerShell scripts to perform additional configuration.

Environment-specific Settings

The nice thing about having a cookie-cutter environment is that there are very few things that will vary between deployments and that means very few parameters in our template. We will need to set the prefix for all our resource names, the location for our resources, and because we want to be flexible we will set the admin username and password.
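
In template terms, that means a parameters section as small as this sketch (the types are what I’d expect; treat the names as illustrative):

    "parameters": {
      "envPrefix": { "type": "string" },
      "resourceLocation": { "type": "string" },
      "adminUsername": { "type": "string" },
      "adminPassword": { "type": "securestring" }
    }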

An Immediate Issue: Network Configuration

Right out of the gate we have a problem to solve. When you create a virtual network in Azure, it provides IP addresses to the VMs attached to it. As part of that, the DNS server address is given to the VMs. By default that is an Azure DNS service that allows VMs to resolve external domain names. Our environment needs the servers to be told the IP address of the domain controller, as it will provide the local DNS services essential to the working of the Active Directory domain. In our old scripts we simply reconfigured the network to specify the DC’s address after we configured the DC.

In the Resource Template world we can reconfigure the vNet by applying new settings to the resource from our template. However, once we have created the vNet in our template we can’t have another resource with the same name in the same template. The solution is to create another template with our new settings and to call that from our main template as a nested deployment. We can pass the IP address of the DC into that template as a parameter and we can make the nested deployment depend on the DC being deployed, which means it will happen after the DC has been promoted to be the domain controller.
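
As a sketch, the call to that nested deployment from the main template looks something like this (the child template name, parameter names and exact dependency are illustrative):

    {
      "type": "Microsoft.Resources/deployments",
      "name": "updateVNetDns",
      "apiVersion": "2015-01-01",
      "dependsOn": [
        "[concat('Microsoft.Compute/virtualMachines/', variables('vmDCName'))]"
      ],
      "properties": {
        "mode": "Incremental",
        "templateLink": {
          "uri": "[concat(parameters('_artifactsLocation'), '/vnet-dns.json', parameters('_artifactsLocationSasToken'))]",
          "contentVersion": "1.0.0.0"
        },
        "parameters": {
          "dnsServerAddress": { "value": "[variables('dcIpAddress')]" }
        }
      }
    }

The child template simply redefines the vNet, this time with a dhcpOptions/dnsServers setting pointing at the address passed in.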

Deployment Order

One of the nicest things about Resource Templates is that when you trigger a deployment, the Azure Resource Manager parses your template and tries to deploy the resources as efficiently as possible. If you need things to deploy in a sequence you need to specify dependencies in your resources, otherwise they will all deploy in parallel.

In this environment, I need to deploy the storage account and virtual network before any of the VMs. They don’t depend on each other, however, so they can be pushed out first, in parallel.

The DC gets deployed next. I need to fully configure this before any other VMs are created because they need to join our domain, and our network has to be reconfigured to hand out the IP address of the DC.

Once the DC is done, the network gets reconfigured with a nested deployment.

In theory, we should be able to deploy all our other VMs in parallel, providing we can apply our configuration in sequence, which should be possible if we set the dependencies correctly for our extension resources (DSC and customScriptExtension).

Configuration of the VMs can happen mostly in parallel, with one exception: the WAP server configuration depends on the ADFS server being fully configured.

Attempt One: Single Template

I spent a long time creating, testing and attempting to debug this as a single template (except for our nested deployment to reconfigure the vNet). Let me spare you the pain by listing the problems:

  • The template is huge: Many hundreds of lines. Apart from being hard to work with, that really slows down the Visual Studio tooling.
  • Right now a single template with lots and lots of resources seems unreliable. I could use an identical template for multiple deployments and get random failures deploying different VMs, or sometimes a successful deploy, with no rhyme or reason to it.
  • Creating VM extension resources with complex dependencies seems to cause deployment failures. At first I used dependencies in the extensions for the VMs outside of the DC to define my deployment order. I realised after some pain that this was much more prone to failure than if I treated the whole VM as a block. I also discovered that placing the markup for the extensions within the resources block of the VM itself improved reliability.
  • A single deployment takes over an hour. That makes debugging individual parts difficult and time-consuming.

Attempt Two: Multiple Nested Deployments

I now have a rock-solid, reliable deployment. I’ve achieved this by moving each VM and its linked resources (NIC, Load Balancer, Public IP) into separate templates. I have a master template that calls the ‘children’, with dependencies limited to one or more of the other nested deployments. The storage account and initial vNet deploy are part of the master template.

The upsides of this have been manifold: each template is shorter and simpler, with far fewer variables, now that each deploys only a single VM. I can also choose to deploy a single ‘child’ on its own, within an already deployed environment. This allows me to test and debug more quickly and easily.
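
The master template then becomes little more than the core resources plus a series of deployment resources, each shaped roughly like this sketch (names and parameters are illustrative):

    {
      "type": "Microsoft.Resources/deployments",
      "name": "adfsDeploy",
      "apiVersion": "2015-01-01",
      "dependsOn": [
        "Microsoft.Resources/deployments/dcDeploy"
      ],
      "properties": {
        "mode": "Incremental",
        "templateLink": {
          "uri": "[concat(parameters('_artifactsLocation'), '/adfs.json', parameters('_artifactsLocationSasToken'))]",
          "contentVersion": "1.0.0.0"
        },
        "parameters": {
          "envPrefix": { "value": "[parameters('envPrefix')]" },
          "adminUsername": { "value": "[parameters('adminUsername')]" },
          "adminPassword": { "value": "[parameters('adminPassword')]" }
        }
      }
    }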

Problem One: CustomScriptExtension

When I started this journey there was little documentation around and Resource Manager was in preview. I really struggled to get the CustomScriptExtension for VMs working. All I had to work with were examples using PowerShell to add VM extensions, and they were just plain wrong for the Resource Template approach. Leaning on the Linux equivalent, plus a lot of testing and poking, got things sorted and I’ve written up how the extension currently works.

Problem Two: IaaSDiagnostics

Right now, this one isn’t fixed. I am correctly deploying the IaaSDiagnostics extension into the VMs, and it appears to be correctly configured and working properly. However, the VM blades in the Azure Portal are adamant that diagnostics are not configured. This looks like a bug in the Portal and I’m hoping it will be resolved by the team soon.

Configuring the Virtual Machines

That’s about it for talking about the environment as a whole. I’m going to write up some of the individual servers separately as there were multiple hurdles to jump in configuring them. Stay tuned.

An Introduction To Azure Resource Templates

I have spent a good deal of time over the last month or two building an Azure Resource Template to deploy a relatively complicated IaaS environment. In doing so I’ve hit a variety of problems along the way and I thought that a number of blog posts were in order to share what I’ve learned. I will write detailed posts on specific servers within the environment shortly. This post will describe Azure Resource Template basics, problems I hit and some decisions I made to overcome issues. Further posts will detail my environment and specific solutions to creating my configuration.

Tooling

  • I started this project using Visual Studio 2013 and the Azure 2.5 .Net SDK. I am now using Visual Studio 2015 and the 2.7 SDK. The SDK is the key – the tooling has improved dramatically, although there are still things it doesn’t do that I would like it to (like proper error checking, for a start). You can find the SDKs on the Azure Downloads site.
  • You will also need the latest Azure PowerShell module. It’s important to keep the SDK and PowerShell current. There is a big change coming to the PowerShell module soon, when the current need to switch between service management commands and resource management commands will be removed.
  • Debugging templates is extremely hard. It’s impossible without using Azure Resource Explorer (https://resources.azure.com). This is a fantastic tool and you absolutely need to use it.

Documentation

  • The Azure Resource Template documentation is growing steadily and should be your first point of reference to see how things are done.
  • The Azure Quickstart Templates are a great source of inspiration and code if you are starting out. You need to be careful though – some of the samples I started with were clunky and a couple plain didn’t work. More importantly, they don’t necessarily reflect changes in the API. Adding resources should always be done through the tooling (more on that in a bit). If you just want to leap straight to the source code, it’s on GitHub.

Getting Started With Your Deployment Project

Creating a new deployment project is pretty straightforward. In the New Project dialog in Visual Studio you will find the Azure Resource Group project type under Cloud within Visual C#.

new resource project

When you create a new Azure Resource Group project, the tooling helpfully connects to Azure to offer you a bunch of starting templates. If you want something that’s on the list, simply choose it and your template will be created pre-populated. If you want to start clean, as I normally do, choose Blank Template from the bottom of the list.

new project template

The new project contains a small number of files. My advice is to ignore the Deploy-AzureResourceGroup.ps1 script. It contains some useful snippets, but only works if you run it in a very specific way. The ones you care about are the DeploymentTemplate.json and DeploymentTemplate.param.dev.json files.

solution explorer

The DeploymentTemplate.json is (oddly) your template file where you detail your resources and stuff. The .param.dev.json file is a companion parameter file for the template, for when you want to run the deployment (more on that later).
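
A parameter file is just more JSON, with a value for each template parameter. A minimal sketch (the exact schema the tooling generates may vary with SDK version):

    {
      "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
      "contentVersion": "1.0.0.0",
      "parameters": {
        "envPrefix": { "value": "env1" },
        "resourceLocation": { "value": "North Europe" },
        "adminUsername": { "value": "env-admin" }
      }
    }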

If you open the deployment template you will see the new JSON Outline window appear.

json outline

I’ll come onto the contents of the template in a moment. For now let’s focus on the JSON Outline. It’s your friend for moving around your template and adding and removing resources. To add a new resource, click the little package icon with a plus on it, top left of the window.

new resource dialog

When you click the icon, the tooling talks to Azure to get the latest versions of resources. The tooling here is intelligent. In the screenshot above you can see I’m adding a Virtual Machine. As a resource, this depends on things like a storage account (to hold the hard drive blobs) and a network. If you already have these defined in your template, they will be listed in the dropdowns. If not, or if you don’t want to use them, you can add new resources and the tooling will step you through answering the questions necessary to specify your resources.

The image below shows the JSON outline after I’ve added a new VM, plus the required storage and network resources. You can see that the tooling has added parameters and variables into the template as well.

json outline with stuff

You can build your template using only the tooling if you like. However, if you want to do something complex or clever you’re going to be hacking this around by hand.

A Few Template Fundamentals

There are a few key points that you need to know about templates and the resources they contain:

  • There is a one-to-one relationship between a template and a deployment. If you look in the Azure Portal at a Resource Group you will see Last Deployment listed in the Essentials panel at the top of the blade.
    resource blade essentials
    Clicking the link will show the deployments themselves. The history of deployments is kept for a resource group and each deployment can be inspected to see what parameters were specified and what was done.
    deployment history
    deployment details
  • A resource in a template can be specified as being dependent on another resource in the same template. I have tried external dependencies – the templates fail. This is important because you have no control over the execution order of a template other than through dependencies. If you don’t specify any, Azure Resource Manager will try to deploy all the resources in parallel. This is actually a good thing – in the old world of Azure PowerShell it was hard to push out multiple resources in parallel. When you upload a template for deployment, Azure Resource Manager will parse it and work out the deployment order based on the dependencies you prescribe. This means that most deployments will be quicker in the new model. There’s a tiny dependsOn sketch after this list.
  • Resources in a template must have unique names. You can only have one resource of a given type with a given name. This is important and has implications for how you achieve certain things.
  • You can nest deployments. What does that mean? You can call a template from another template, passing in parameters. This is really useful. It’s important to remember that template-deployment relationship. If you do nest these things, you’ll see multiple deployments in your Resource Group blade – one per template.
  • If a resource already exists then you can reconfigure it through your template. I’ve not tried this on anything other than a virtual network, but templates define the desired configuration and Azure Resource Manager will try to set it, even if that means changing what’s there already. That’s actually really useful. It means that we can use a nested deployment in our template to reconfigure something part way through our overall deployment.
  • Everything is case-sensitive. This one just keeps on biting me, because I’m a crap typist and the tooling isn’t great at telling me I’ve mistyped something. There’s no IntelliSense in templates yet.
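
To make the dependency point concrete, a resource declares the things it must wait for in its dependsOn array, something like this sketch (names illustrative):

    "dependsOn": [
      "[concat('Microsoft.Storage/storageAccounts/', variables('storageAccountName'))]",
      "[concat('Microsoft.Network/virtualNetworks/', variables('vnetName'))]"
    ]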

Deploying Your Template to Azure

Right now, deploying your template means using PowerShell to execute the deployment. The New-AzureResourceGroup cmdlet will create a new Resource Group in your subscription. You tell it the name and location of the resource group, the deployment template you want to use, and the values for the template parameters. That last bit can be done in three different ways – take your pick:

  • Using the –TemplateParameterFile switch allows you to specify a JSON-format parameters file that provides the required values.
  • PowerShell allows you to specify the parameters as options on the command. For example, if I have a parameter of AdminUsername in my template I can add the –AdminUsername switch to the command and set my value.
  • You can create a hashtable of the parameters and their values and pass it into the command. Go read up on PowerShell splatting to find out more about this.

Being old, my preference is to use the second option – it means I don’t need to keep updating a parameters file and I can read the command I’m executing more easily. PowerShell ninjas would doubtless prefer choice number three!
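
For the ninjas, a minimal splatting sketch (the hashtable keys must match the cmdlet and template parameter names; values are illustrative):

# build a hashtable of cmdlet and template parameters, then splat it with @
$params = @{
    Name          = "tuservtesting1"
    Location      = "North Europe"
    TemplateFile  = "$pwd\DeploymentTemplate.json"
    envPrefix     = "myenv"
    adminUsername = "env-admin"
    adminPassword = (ConvertTo-SecureString "MyPassword" -AsPlainText -Force)
}
New-AzureResourceGroup @params -Force -Verbose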

The code below shows how I deploy my resource template:

$ResourceGroupName = "tuservtesting1"
$ResourceGroupLocation = "North Europe"
$TemplateFile = "$pwd\DeploymentTemplate.json"
$envPrefix = "myenv"
$adminUsername = "env-admin"
$adminPassword = "MyPassword"
$adminPassword = ConvertTo-SecureString $adminPassword -AsPlainText -Force
$resourceLocation = "North Europe"
$storageAccountType = "Standard_LRS"
$artifactsLocation = "http://mystorage.blob.core.windows.net/templates"
$artifactsLocationSasToken = "<SaS-token-for-the-container>" # e.g. generated with New-AzureStorageContainerSASToken; grants the deployment read access


# create a new resource group and deploy our template to it, with our params
New-AzureResourceGroup -Name $ResourceGroupName `
                       -Location $ResourceGroupLocation `
                       -TemplateFile $TemplateFile `
                       -storageAccountType $storageAccountType `
                       -resourceLocation $resourceLocation `
                       -adminUsername $adminUsername `
                       -adminPassword $adminPassword `
                       -envPrefix $envPrefix `
                       -_artifactsLocation $ArtifactsLocation `
                       -_artifactsLocationSasToken $ArtifactsLocationSasToken `
                       -Force -Verbose

I like this approach because I can create scripts that our TFS Build and Release Management systems can use to automatically deploy my environments.

Stuff I’ve Found Out The Hard Way

The environment I’m deploying is complex. It has multiple virtual machines on a shared network. Some of those machines have public IP addresses; most don’t. I need a domain controller, an ADFS server and a Web Application Proxy (WAP) server, each of which depends on the others, and I need to get files between them. My original template was many hundreds of lines long, with nearly a hundred variables and half a dozen parameters. It took over an hour to deploy (when it deployed at all) and testing was a nightmare. As a result, I’ve refined my approach to improve readability, testability and deployability:

  • Virtual machine extension resources seem to deploy more reliably if they are within the Virtual Machine markup. No, I don’t know why. You can specify VM extensions at the same level in the template as your Virtual Machines themselves. However, you can choose to declare them in the resources section of the VM itself. My experience is that the latter reliably deploys the VM and extensions. Before I did this I would get random deployment failures of the extensions. There’s a sketch of this structure after this list.
  • Moving the VMs into nested deployments helps readability, testability and reliability. Again, I don’t know why, but my experience is that very large templates suffer random deployment failures. Pulling each VM and its linked resources into its own template has completely eliminated those random failures. I now have a ‘master template’ which creates the core resources (storage account and virtual network in my case) and then nested templates for each VM that contain the VM, the NIC, the VM extensions and, if exposed to the outside world, load balancer and public IP.
    There are pros and cons to this approach. Reliability is a huge pro, with readability a close second – there are far fewer resources and variables to parse. I can also work on a single VM at once, removing the VM from the resource group and re-running the deployment for just that machine – that’s saved me so much time! On the con side, I can’t make resources in one nested deployment depend on those in another. That means I end up deploying my VMs much more in sequence than I otherwise would, because I can only have one nested deployment depend on another. I can’t get clever and deploy the VMs in parallel but have individual extensions depend on each other to ensure my configuration works. The other con is that I have many more files to upload to Azure storage so the deployment can access them – the PowerShell won’t bundle up all the files that are part of a deployment and push them up as a package.
  • Even if you find something useful in a quickstart template, add the resources cleanly through the tooling and then modify. The API moves forwards and a good chunk of the code in the templates is out of date.
  • The JSON tooling doesn’t do much error checking. Copy and paste is your friend to make sure things like variable names match.
  • The only way to test this stuff is to deploy it. When the template is uploaded to Azure, Resource Manager parses it for validity before executing the deployment. That’s the only reliable way to check the validity of the template.
  • The only way to see what’s happening with any detail is to use Azure Resource Explorer. With a VM, for example, you can see an InstanceView that shows the current output from the deployment and extensions. I’ll talk more about this when I start documenting each of the VMs in my environment and how I got them working.
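
To illustrate that first point, here’s the shape of a VM with its extension declared inside its own resources block – trimmed down, with placeholder names, but structurally what I mean:

    {
      "type": "Microsoft.Compute/virtualMachines",
      "name": "[variables('vmWebName')]",
      "apiVersion": "2015-05-01-preview",
      "location": "[parameters('resourceLocation')]",
      "dependsOn": [
        "[concat('Microsoft.Network/networkInterfaces/', variables('webNicName'))]"
      ],
      "properties": {
        "hardwareProfile": { "vmSize": "Standard_A2" },
        "osProfile": {
          "computerName": "[variables('vmWebName')]",
          "adminUsername": "[parameters('adminUsername')]",
          "adminPassword": "[parameters('adminPassword')]"
        },
        "storageProfile": {
          "imageReference": {
            "publisher": "MicrosoftWindowsServer",
            "offer": "WindowsServer",
            "sku": "2012-R2-Datacenter",
            "version": "latest"
          },
          "osDisk": {
            "name": "osdisk",
            "vhd": { "uri": "[concat('http://', variables('storageAccountName'), '.blob.core.windows.net/vhds/', variables('vmWebName'), '-os.vhd')]" },
            "createOption": "FromImage"
          }
        },
        "networkProfile": {
          "networkInterfaces": [
            { "id": "[resourceId('Microsoft.Network/networkInterfaces', variables('webNicName'))]" }
          ]
        }
      },
      "resources": [
        {
          "type": "extensions",
          "name": "webScript",
          "apiVersion": "2015-05-01-preview",
          "location": "[parameters('resourceLocation')]",
          "dependsOn": [
            "[concat('Microsoft.Compute/virtualMachines/', variables('vmWebName'))]"
          ],
          "properties": {
            "publisher": "Microsoft.Compute",
            "type": "CustomScriptExtension",
            "typeHandlerVersion": "1.4",
            "settings": {
              "fileUris": [
                "[concat(parameters('_artifactsLocation'), '/WebServer.ps1', parameters('_artifactsLocationSasToken'))]"
              ],
              "commandToExecute": "powershell.exe -file WebServer.ps1"
            }
          }
        }
      ]
    }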

Using the customScriptExtension in Azure Resource Templates

Documentation for using the customScriptExtension for Virtual Machines in Azure through Resource Templates is pretty much non-existent at time of writing, and the articles on using it through PowerShell are just plain wrong when it comes to templates. This post is accurate at time of writing and will show you how to deploy PowerShell scripts and resources to an Azure Virtual Machine through a Resource Template.

The code snippet below shows a customScriptExtension pulled from one of my templates.

        {
          "type": "Microsoft.Compute/virtualMachines/extensions",
          "name": "[concat(variables('vmADFSName'),'/adfsScript')]",
          "apiVersion": "2015-05-01-preview",
          "location": "[parameters('resourceLocation')]",
          "dependsOn": [
            "[concat('Microsoft.Compute/virtualMachines/', variables('vmADFSName'))]",
            "[concat('Microsoft.Compute/virtualMachines/', variables('vmADFSName'),'/extensions/ADFSserver')]"
          ],
          "properties": {
            "publisher": "Microsoft.Compute",
            "type": "CustomScriptExtension",
            "typeHandlerVersion": "1.4",
            "settings": {
              "fileUris": [
                "[concat(parameters('_artifactsLocation'),'/AdfsServer.ps1', parameters('_artifactsLocationSasToken'))]",
                "[concat(parameters('_artifactsLocation'),'/PSPKI.zip', parameters('_artifactsLocationSasToken'))]",
                "[concat(parameters('_artifactsLocation'),'/tuServDeployFunctions.ps1', parameters('_artifactsLocationSasToken'))]"
              ],
              "commandToExecute": "[concat('powershell.exe -file AdfsServer.ps1',' -vmAdminUsername ',parameters('adminUsername'),' -vmAdminPassword ',parameters('adminPassword'))]"
            }
          }
        }

The most important part is the commandToExecute. The documentation tells you to simply list the PowerShell script (something.ps1) you want to run. This won’t work at all! All the extension does is shell out whatever you put in commandToExecute. The default association for .ps1 is notepad, so all that will do is run up an instance of our favourite text editor as the system account, where you can’t see it.

The solution is to build a command line for powershell.exe, as you can see in my example. I am launching powershell.exe and telling it to load the AdfsServer.ps1 script file. I then specify parameters for the script within the command line. There is no option to pass parameters in through the extension itself.
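
Once the concat in the snippet above is evaluated, the extension ends up shelling a command line something like this (values illustrative):

powershell.exe -file AdfsServer.ps1 -vmAdminUsername env-admin -vmAdminPassword MyPassword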

The fileUris settings list the resources I want to push into the VM. This must include the script you want to run, along with any other supporting files/modules etc. The markup in my example loads the files from an Azure storage account. I specify the base url in the _artifactsLocation parameter and pass in a SaS token for the storage in the _artifactsLocationSasToken parameter. You could just put a url to a world-readable location in there and drop the access token param.

The dependsOn setting allows us to tell the extension to wait until other items in the resource template have been deployed. In this case I push two other extensions into the VM first.

Be aware that the process executed by the extension runs as the system account. I found very quickly that if I wanted to do anything useful with my PowerShell, I needed to use invoke-command within my script. To do that I need the admin credentials, which you can see passed into the command line as parameters.
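
As a sketch of that pattern inside one of my scripts (the parameter names match the commandToExecute above; the rest is illustrative):

param(
    [string]$vmAdminUsername,
    [string]$vmAdminPassword
)

# the extension runs this script as SYSTEM, so build a credential from the
# admin details passed on the command line (domain-qualify the username if needed)
$securePassword = ConvertTo-SecureString $vmAdminPassword -AsPlainText -Force
$credential = New-Object System.Management.Automation.PSCredential ($vmAdminUsername, $securePassword)

# then use invoke-command to do the real work in that user's context
Invoke-Command -ComputerName localhost -Credential $credential -ScriptBlock {
    # configuration that fails under the system account goes here
}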

Want to know what’s going on? Come to the Black Marble Tech Update

It’s January, which can only mean one thing. It’s time for the annual Black Marble Tech Update. If you’re an IT or development manager, come along to hear the stuff you need to know about Microsoft’s releases and updates last year and what we know so far about what is coming this year.

Tech Updates are hard work to prepare for, but they’re quite exhilarating to present. In the morning, myself, Andy and Andrew will run through the key moves and changes in the Microsoft ecosystem to help IT managers with their strategic planning: What’s coming out of support, what’s got a new release due; in short, what do you need to pay attention to for your organisation’s IT.

In the afternoon Robert, Richard and Steve will be covering what’s important to know in the Microsoft developer landscape and this year they are joined by Martin Beeby from the Microsoft DX team.

You can find the Tech Update on our website as two events (IT Managers in the morning, Developers in the afternoon). Feel free to register for the one that most interests you. Most people stay for the whole day and leave with their heads buzzing.

Why you should attend a Microsoft IT Camp

I’ve posted before about helping out at the IT Camps run by the Microsoft DX team. I’m a fervent supporter of them – they are hands-on days of technical content run by great people who know their stuff and, importantly, they are run around the country.

Next week will see me in Manchester with Ed Baker for the latest instalment. The current series of camps runs over two days, the first around Mobile Device Management and the second around extending your datacentre into Azure. I’ll be there for day two to be Ed’s wingman as we take attendees through hands-on labs around virtual machines and talk about virtual networks, Azure Active Directory and pretty much anything else Azure-related we get asked about.

Ed and Andrew really enjoy running the camps – they are a great way to talk to you, the customer, about how you use Microsoft’s services. I’m not backwards in coming forward with feedback about what works and what doesn’t in the MS stack and you shouldn’t be either. Unlike regular events, these are technical days for technical people by technical people and that’s what makes them so rewarding to attend and so satisfying to run. Those of us that are involved want the camps to continue and that means people like you need to turn up, enjoy the day and give feedback.

Go read more about them, get registered and come along. And you don’t even have to travel south to do it!

See you there.

Speaking at SQLBits 2015

We’re only half way through January and this year is already busy. I am really excited to be speaking at SQLBits this year!

SQLBits is a conference that various members of the Black Marble team attend regularly and rave about. It’s the event if you want to gain knowledge and insight into all aspects of databases and data management, reporting, BI and more. Microsoft are a platinum sponsor of the event this year and a whole heap of big names are flying in to present.

Why am I, an Azure MVP, speaking at SQLBits? I’ll be talking about virtual networks and virtual machines in Azure. Whilst not directly DB-related, it’s an important topic these days when designing the architecture for your application. Yes, there is the Azure SQL service, but that’s not always the right choice. Sometimes you want to run your DB on a VM. Sometimes you might even want to access data held securely within your organisation, even if your application is in the cloud. As a DB person, you need to know what options are available to you when designing your solution.

The Azure community and the SQL community shouldn’t be separate. Cloud is all pervading, but is not a universal fix all. Hopefully sessions like mine can make connections between the two groups for the benefit of all.

Did I mention how big SQLBits is? Go take a look at the agenda, packed full of great content. Then run, don’t walk, to the registration page.

See you there!

Get informed with TechDays Online 2015

Block out your diary from February 3rd until February 5th. The great guys at Microsoft DX are running another TechDays Online event and it’s absolutely worth your time. I had the pleasure of being involved last year and will be again this year, both in front of and behind the camera.

For those who don’t know about the event, TechDays Online is three days (and one evening, this year) of technical content delivered by MVPs and Microsoft evangelists across a broad range of topics. Whilst you watch the sessions, streaming live through the power of the internet, you can ask questions in the chat channel. Some of those questions may find their way to the speaker during the session, but all will be picked up by a team of experts backstage, fuelled by caffeine and sugar.

Each day covers a different topic area and is led by one of the DX team. Tuesday is led by Ed Baker and is around Devices and managing a mobile first world. I’ll be talking about Azure Active Directory, Robert is doing a session on the Internet of Things and we are joined by a great line-up of MVPs. Tuesday evening is An evening with Office 365; Wednesday sees Andrew Fryer curating The Journey to the cloud-first world; finally on Thursday, Martin Beeby is in the chair for Multi-device cross platform development. Richard is involved in that final day.

Content on Tuesday is more relevant for IT pros, Wednesday is a mix of IT Pro and developer content and Thursday is aimed at developers. Having said that, I learned something useful across all the days of the last event and I’d urge you to tune in for all three. Hook a PC up to a screen in the office and stream it in the background for everyone if you can’t dedicate time – you won’t regret it!

In addition to presenting sessions, I will be on the chat stream across all three days, chipping in where I can.

Oh, and did I mention that Mary Jo Foley will be there too?

There’s still plenty of time to register at http://aka.ms/techdays2015 so what are you waiting for?