BM-Bloggers

The blogs of Black Marble staff

Define Once, Deploy Everywhere (Sort of...)

Using Lability, DSC and ARM to define and deploy multi-VM environments

Configuration as code crops up a lot in conversation these days. We are searching for that DevOps Nirvana of a single definition of our environment that we can deploy anywhere.

The solution adopted at Black Marble by myself and my colleagues is not quite that, but it comes close enough to satisfy our needs. This document details the technologies and techniques we adopted to achieve our goal, which sounds simple, right?

I want to be able to deploy a collection of virtual machines to my own computer using Hyper-V, to Dev/Test Labs in Azure, and to Azure Stack, using the same description of those virtual machines and their configuration.

Defining Our Platforms

Right now, we use Lab Manager (part of Team Foundation Server) at Black Marble to manage multi-VM environments for testing, hosted on a number of servers managed by System Center Virtual Machine Manager. Those labs are composed of virtual machines that can also be deployed to a developer’s workstation.

The issue is that those environments are pre-built – the machines are configured and the environment saved as a whole. They must be patched whenever a new lab is created from the stored ‘template’ VMs, and adding a new machine to the lab is a pain.

Lab Manager itself is now end-of-life, so we are looking at alternatives (including Azure Stack – see below).

Microsoft Azure

We already use Azure to host virtual machines. However, even with the lower cost Dev/Test subscription type, running lots of machines in the public cloud can get very expensive.

Azure Dev/Test Labs helps to mitigate this cost issue somewhat by providing a governance wrapper. I can create a Lab and apply rules, such as what types of virtual machine can be created, and automatically shut down running VMs at a set time to limit costs.

Within Azure we use Azure Resource Templates, which are JSON declarations of the services we require, to deploy our virtual machines. Once a VM is running, we have extensions that can be injected into it and used to execute scripts to configure it. With Windows servers, that means using the Desired State Configuration (DSC) extension.

Dev/Test labs allows me to connect to a Git repository of artefacts. Those artefacts could be items I wish to install into a VM, but they can also be ARM templates to deploy complex environments of multiple VMs. Those ARM templates can then apply DSC configuration definitions to the VMs themselves.

Microsoft Azure Stack

Stack is coming soon. Right now, you can download a Technical Preview that runs on a single machine. Stack is aimed at organisations that have stuff they cannot put in the public cloud, for whatever reason, but want a consistent approach to their development that can span private and public cloud. The final form of Stack is expected to be similar to the current Cloud Platform System (CPS), which is way out of my budget. However, the POC runs on a server very close in specification and price point to my existing Lab Manager-controlled servers.

Stack aims to deliver parity with its public cloud older brother. That means that I can use the same ARM templates I use in Azure to deploy my IaaS services on Stack. I have the same DSC extension to inject my configuration, too.

What I don’t have right now on Stack (and it’s unclear what the final product will bring, so I won’t speculate) are the base operating system images that are provided by Microsoft in Azure. I can, however, create my own images and upload them to the internal Stack equivalent of the Azure Marketplace.

Hyper-V

On our desktops, laptops, and servers we use Hyper-V, Microsoft’s virtualisation technology. This offers some parity with Azure – it uses the same VHD disk file format, for example. I don’t get the same complex software-defined-networking but I still get virtual switches to which I can connect machines, and they can be private, internal, or external.

Private switches do what they say on the tin: They are a bubble within which my VMs can communicate with each other but not with the outside world. I can, therefore, have multiple identical bubbles all using the same IP address ranges without issue.

External switches are connected directly to a network adapter on the host. That’s really useful if I need to host servers that deliver services to my organisation, as I need to communicate with them directly. This is great on servers, and is useful on developer workstations with physical NICs. On laptops, however, it gets tricky if you’re using a WiFi network. Those were never designed with VMs in mind, and the way Windows connects an external switch to a wireless adapter is, quite frankly, a horrible kludge and I’ve always found it terribly unreliable.

Internal switches create a new virtual NIC on the host so it can communicate directly with VMs on the network. In Windows 10, we can use an internal switch alongside a NetNat, which allows Windows 10 to provide network address translation for the virtual network. This gives us a setup like a home internet connection – VMs can communicate out but no direct inbound connections are allowed (yes, I know you can create NAT publishing rules too, but that’s not a topic for here).

One cool thing about a NetNat is that if you carefully define your IP address ranges, a single NetNat can pass traffic into the networks generated by multiple virtual switches. This allows me to have multiple environments that can coexist on separate subnets.
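To make that concrete, here’s a minimal sketch of the switch and NAT setup. The switch names and address ranges are illustrative rather than our production values:

# Two internal switches, each with a host vNIC on its own /24 subnet
New-VMSwitch -Name 'Lab1' -SwitchType Internal
New-NetIPAddress -IPAddress 192.168.254.1 -PrefixLength 24 -InterfaceAlias 'vEthernet (Lab1)'

New-VMSwitch -Name 'Lab2' -SwitchType Internal
New-NetIPAddress -IPAddress 192.168.253.1 -PrefixLength 24 -InterfaceAlias 'vEthernet (Lab2)'

# One NetNat whose internal prefix spans both subnets provides NAT for both networks
New-NetNat -Name 'LabNat' -InternalIPInterfaceAddressPrefix '192.168.224.0/19'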

Lability

I’ve saved this until last because it’s sort of the secret sauce in what we’ve been working on. I stumbled on Lability totally by chance, through random internet searching. It’s an open source solution for defining and deploying VMs on Windows using DSC to declare both the configuration of the environment (the VMs and their settings) and the VMs themselves (the guest OS configuration).

Lability was created by a chap called Iain Brighton and he deserves a great deal of credit for what he’s built.

With Lability, I can use the same DSC configurations that I created for my Azure deployments. I can use the same base VHD images that I need for my Azure Stack Deployments. Lability uses a DSC PowerShell file (.ps1), which can include configurations for multiple nodes – each of the VMs in our environment. It then uses a PowerShell Data file (.psd1) to declare the configuration of the VMs themselves (CPU, RAM, virtual switch etc) as well as pass in configuration details to the DSC file.

If you look at the Lability repo on GitHub you will find links to some excellent articles by people who have used Lability, which take you through setting up your Lability Host (your computer) and your first environment.

Identifying Differences

Applying DSC

Lability and the Azure DSC extension work in subtly but importantly different ways. When you create a DSC configuration, you write a PowerShell configuration which imports the DSC Resources that will do the actual configuration work, and you call those resources with specified values that declare the state of the configuration you want. Within that PowerShell file you can put functions that figure out some of those values.

When you execute the PowerShell configuration, it runs through that script and generates a MOF file. That file is submitted to the DSC engine on the machine that you are configuring and used to pass parameters into the DSC Resources that are going to execute commands to apply your configuration.

When you use the DSC extension in Azure, it installs the necessary DSC resources on the VM and executes the PowerShell file on that machine, generating the MOF which is then applied.

When you use Lability, the PowerShell file is executed on the host machine and outputs the MOF files – you do this manually before executing a Lability command to create a new lab. Lability then takes care of injecting the MOF and the required DSC resources into the virtual machine, where the configuration is applied.
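As a minimal sketch of that host-side flow (using the folder layout we describe later, and assuming a configuration named DomainController):

# Dot-source the DSC configuration so it can be invoked by name
. .\VMs\DomainController\DomainController.ps1

# Compile the MOFs on the host, passing in the environment data file
DomainController -ConfigurationData .\Environments\MyEnv1\MyEnv1.psd1 `
    -OutputPath 'C:\Virtualisation\Configuration' -Credential (Get-Credential)

# Lability then injects the MOFs and the required DSC resources into the new VMs
Start-LabConfiguration -ConfigurationData .\Environments\MyEnv1\MyEnv1.psd1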

This is a critical difference! If you look at the examples in the Azure Quickstart Repo, all the DSC is written assuming that it is executed on the host, and uses PowerShell functions to do things like finding the network adapter, or the host IP address etc. If you look at the examples used in Lability labs, the data file provides many of those pieces of information. If you run the PowerShell from an Azure QuickStart template you’ll have some crazy failures, because all those functions execute on the host and therefore get totally incorrect information to pass to the configuration code.

Additionally, none of the Azure examples use a data file to provide configuration data. You might think this is because the data file is not supported. However, this is not true – you can pass a data file in using the DSC extension. Lability makes heavy use of that data file to define our environment.

Networking

In Azure, you cannot set a static IP address from within the VM itself. The networking fabric hands the machine its IP address via DHCP. You can set that IP to be static through the Azure fabric, but not through the VM. That might mean that we don’t know the IP address of a machine before we deploy it.

With Lability, we declare the IP address of the VM in the DSC data file. We could use a DHCP server running on the host, and I do just that myself, but it’s more stuff to install and manage, and for our approach to labs right now we’ve stuck to declaring addresses in the DSC data file.

We also have additional stuff to think about in Azure – public IP addresses, Network Security Groups and possibly User Defined Routing that controls how (and if) we allow inbound traffic from the internet onto our network, what can talk to what and on which ports within our network, and whether we want to push all traffic through appliances for security.

Azure API Versions

When you write an ARM template to define and deploy your services, each of the resources in that template is defined against a versioned API. You specify which API version you are using in the template, and different resource providers have different versions.

Azure Stack does not have all the same versions of the various APIs that are in Azure. Ironically, whilst I have had to make few changes to existing ARM templates in terms of their content in order to successfully use them on Stack, I’ve had to change almost every API version referenced in them. Having said that, I am finding that the API versions I reference for Stack by and large work unchanged if I throw the template at Azure.
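If you want to check which API versions a given resource provider supports before editing a template, you can ask the fabric directly. A quick sketch using the AzureRM cmdlets of the time – run it while logged in to Azure or to your Stack endpoint:

# List the API versions Microsoft.Compute supports for virtual machines
$provider = Get-AzureRmResourceProvider -ProviderNamespace 'Microsoft.Compute'
($provider.ResourceTypes |
    Where-Object { $_.ResourceTypeName -eq 'virtualMachines' }).ApiVersions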

Declaring Specific Goals

We’ve discussed our target platforms and talked about how those differ in terms of our deployment configurations. Let’s talk about what our aims were as we embarked on our project to manage VM labs:

  1. All labs should deploy from greenfield. One of our biggest pain points with our old approach was that our labs were built as a collection of VMs. We couldn’t change the name of the AD domain; changing IP addresses was complex; adding new VMs was painful; patching a ‘new’ environment could take hours.
    We were very clear that we wanted to create all new labs from base media, which we would try to keep current with patches (at least within a few months), allowing us to create any number of machines and environments.
  2. There should be one configuration for each guest VM, which would be used everywhere. We were very clear that we would create one DSC configuration for each role that we needed (for example, a Domain Controller or an ADFS server) and that configuration would be used whether we were creating a lab on a local machine, in Azure or Azure Stack.
  3. Maintain a distinction between a virtual machine configuration and an environment configuration. We are building a collection of virtual Lego with our VM configurations. Our teams can combine those Lego bricks into environments that may be project specific. There should be a configuration for those environments. We should never alter an existing configuration for a new environment – we should create a new configuration using the existing one as a base (for example, if we need additional roles on our DC for some reason).
  4. Take a common approach with Lability and Azure, whilst accepting we have to maintain two sets of resources.
    Our approach to Azure environments is already modular. We have templates for VMs that are combined into environments through Nested Deployments. This would not change. Our VM definitions would encompass a DSC configuration and an ARM template. Our environments would include both a DSC data file and an ARM template.
  5. Manage and automate the creation of base media. We would need a variety of base VHD files, analogous to the existing marketplace images in Azure: Windows Server (numerous versions), SQL Server, SharePoint, etc. Each of these must be created using scripts so they could be periodically rebuilt to achieve our goal of avoiding time-consuming patching of new environments. In short, we would need an Image Factory.
  6. Setup and use should be straightforward. We need our developers to be able to install all the tooling and get a new lab up and running quickly. We need easy integration with Azure Dev/Test Labs, etc. This would need some process automation around the build and release of the VM configurations and anything else we would create as part of the project.

Things You Will Need

If you want to build the same Lab solution as we did, you’re going to need a few things:

  1. Git Repository. All the code and configurations we create are ultimately stored in a central Git Repo. We are using Visual Studio Team Services, as it’s our chosen source control platform.
    Why Git? Two reasons: First of all, it allows us to easily deploy our solution to a developer workstation by simply cloning the repo. Second, Azure DevTest Labs needs a Git Repo to store Artifacts (our ARM templates) for deployment of environments.
  2. Build/Release automation. When we commit to our shared repo, our Build server executes some PowerShell to create deployment artifacts for Azure. It creates Zip archives from our configurations to be used with the DSC extension. It makes no sense to create these by hand and waste space in our repo. Our Release pipeline then automatically pushes our artifacts to an Azure storage account that can be accessed by our developers as a single, central store for VM configurations.
  3. Private PowerShell Repository. We use ProGet to provide a local Nuget/PowerShell/NPM etc repository. We had this in place before we started this project, but it has proved invaluable. The simple reason is that we want to publish DSC Resources for easy consumption and installation by our team. You’d be surprised at how many times we’ve hit a bug in a DSC resource which has been fixed in the source code repo but a new version has not yet been published. Maintaining our own repository allows us to publish our own versions of DSC resources (and in some cases our own bespoke resources).
  4. A server to host your Image Factory. I’m not going to spend time documenting this part of our solution. Far cleverer people than I have written about this and we followed their guidance. You need somewhere to host your images and run the scripts on a schedule to build new ones. Our builds run overnight and we place images on a Windows fileshare.
  5. An Azure subscription. If you want to use the same configuration for on-prem and cloud, saying that you need an Azure sub seems a little obvious. However, we are using nested deployments. These use resources that must be accessible to the Azure fabric at deploy time, and the easiest way to do that is to use Azure Storage. You’ll also need a subscription to host your DevTest lab if that’s your preferred approach. Note that you could have multiple subscriptions – our devs can use their MSDN Azure Benefit to host environments within their own DevTest lab, whilst the artefact store is on a corporate subscription and the artefact repo is in our VSTS.
  6. A code editor that understands PowerShell, DSC and ARM. I prefer Visual Studio and the Azure SDK, but Visual Studio Code is an equally powerful tool for creating and managing the files we are going to use.

Managing our VMs and Environments

After much thought, we came up with a standard folder structure and approach to our VM and environment configurations and the supporting scripts needed to deploy them.

In our code repo we have the following folder structure:

\Environments

This folder contains a series of folders, one per environment.

This folder is specified as the location of environment templates when the shared repo is connected to an Azure DevTest Lab

\Environments\MyEnv1

An environment folder contains three files:

\Environments\MyEnv1\MyEnv1.psd1

The psd1 data file must share the same name as the folder. This contains all the configuration settings for all VMs in our environment and is used by Lability and the VM DSC configs

\Environments\MyEnv1\azuredeploy.json

For DevTest labs, the environment template used in Azure must be named azuredeploy.json. This template calls a series of other templates to deploy the virtual network and VMs to Azure

\Environments\MyEnv1\metadata.json

This file is read by DevTest labs and provides a name and description for our environment

\VMs

This folder contains subfolders for each of our component Virtual Machines.

\VMs\MyVM1

A VM folder contains at least two files:

\VMs\MyVM1\MyVM1.ps1

The ps1 configuration file must share the same name as the folder. It contains the DSC PowerShell to apply the configuration to the guest VM

\VMs\MyVM1\MyVM1.json

The json file shares the folder name for consistency. It is called by the azuredeploy.json environment template to create the VM in Azure and Azure Stack

\Modules

The Modules folder contains shared code of various types

\Modules\Scripts

The scripts folder contains PowerShell scripts to install and configure our standard Lability deploy, wrap the Lability create and remove commands, and perform build and release tasks.

\Modules\Template

The template folder holds common ARM templates that create standard elements shared between environments and are called by the azuredeploy.json

\Modules\DSC

This folder is used during the build process. All the DSC resources needed in an environment are downloaded to this folder. A script parses the VM DSC configurations called by an environment and creates Zip files, containing the correct DSC resources and DSC PowerShell for that environment, to be uploaded into Azure storage
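As an illustration, the core of that build step might look like the sketch below. The storage account name, container and key handling are hypothetical placeholders:

# Zip the staged DSC resources and configuration ps1 for an environment
$envName = 'MyEnv1'
Compress-Archive -Path ".\Modules\DSC\$envName\*" -DestinationPath ".\Modules\DSC\$envName.zip" -Force

# Upload the archive to the central artifact store in Azure storage
$key = '<storage-account-key>'   # supplied securely by the release pipeline
$ctx = New-AzureStorageContext -StorageAccountName 'mycorpartifacts' -StorageAccountKey $key
Set-AzureStorageBlobContent -File ".\Modules\DSC\$envName.zip" -Container 'environments' `
    -Blob "$envName/$envName.zip" -Context $ctx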

Wrapper Scripts for Lability

Lability is great but is built to work in a certain way. We have three scripts that perform key functions for our deployment.

Install Script

Our installation script performs the following functions:

  1. Creates the C:\Virtualisation base folder we use to store VMs and the Lability working files.
  2. Sets the default Hyper-V locations for Virtual Machines and Virtual Hard disks to c:\Virtualisation
  3. Creates a new Internal Virtual Switch (named in accordance with our convention) and sets the IP address of the NIC created on the host to the required value. Our first switch creates a network of 192.168.254.0/24 and the host gets 192.168.254.1 as its IP address.
  4. Creates a new NetNat with an internal address prefix of 192.168.224.0/19. This will pass traffic into and out of up to thirty-one /24 subnets, from 192.168.224.0/24 up to 192.168.254.0/24. We decided to work from the top down when creating new networks.
  5. Makes sure that the Nuget package provider is installed and registers our ProGet server as a new PowerShell repository. We then remove the default PowerShellGallery registration and make sure our repo is trusted.
  6. Checks whether Lability is installed and, if not, installs it using Install-Module.
  7. Sets the following Lability defaults using the Set-LabHostDefault command:
    ConfigurationPath: c:\Virtualisation\Configuration
    IsoPath: c:\Virtualisation\ISOs
    ParentVhdPath: c:\Virtualisation\MasterVirtualHardDisks
    DifferencingVhdPath: c:\Virtualisation\VMVirtualHardDisks
    ModuleCachePath: c:\Virtualisation\Modules
    ResourcePath: c:\Virtualisation\Resources
    HotfixPath: c:\Virtualisation\Hotfix
    RepositoryUri: <the URI of our ProGet Server, e.g. https://proget.mycorp.com/nuget/PowerShell/package>
  8. Sets the default virtual switch for Lability environments to our newly created one using the Set-LabVMDefault command.
  9. Registers our VHD base media by calling another script which loads a standard configuration data file. This is separate so we can perform this action independently.
  10. Sets the Lability default media to our Windows Server 2012 R2 standard VHD using the Set-LabVMDefault command.
  11. Initialises Lability using our configuration with the Start-LabHostConfiguration command (the Lability steps are sketched below).
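A condensed sketch of the Lability portion of that script is below. The Set-LabHostDefault values come straight from the list above; the Set-LabVMDefault parameter names are our best reading of the cmdlets and worth checking against the Lability documentation:

# Set the Lability host defaults (step 7)
$hostDefaults = @{
    ConfigurationPath   = 'C:\Virtualisation\Configuration'
    IsoPath             = 'C:\Virtualisation\ISOs'
    ParentVhdPath       = 'C:\Virtualisation\MasterVirtualHardDisks'
    DifferencingVhdPath = 'C:\Virtualisation\VMVirtualHardDisks'
    ModuleCachePath     = 'C:\Virtualisation\Modules'
    ResourcePath        = 'C:\Virtualisation\Resources'
    HotfixPath          = 'C:\Virtualisation\Hotfix'
    RepositoryUri       = 'https://proget.mycorp.com/nuget/PowerShell/package'
}
Set-LabHostDefault @hostDefaults

# Default new lab VMs onto our switch and base media (steps 8 and 10)
Set-LabVMDefault -SwitchName 'Lab1' -Media 'BM_Server_2012_R2_Standard_x64'

# Initialise the Lability host (step 11)
Start-LabHostConfiguration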

Once the install script has completed we have a fully configured host ready to deploy Lability labs.

Deploy-LocalLab script

Lability has a Start-LabConfiguration command which reads the psd1 configuration data file for an environment and creates the VMs. Before running that, however, you need to execute the PowerShell DSC scripts to generate the MOF files for each VM. Lability injects those, and the DSC resources, into the VMs. A second command, Start-Lab, boots the VMs themselves, respecting the boot order and delays that can be declared in the config file.

This is great unless you have a complex lab and need lots of DSC resources to make it work. Our wrapper script does the following, taking an environment name as a parameter:

  1. Reads the psd1 data file for our environment from the correct folder to identify the DSC resources we need (they are listed for Lability). It installs these resources so we can execute the PowerShell configuration scripts and generate the MOFs.
  2. Reads the psd1 data file to identify the VMs we are deploying. Based on the Role information in that file it will execute each of the configuration ps1 files from the VMs folder hierarchy, passing in the psd1 data file. The resultant MOFs get saved in the Lability configuration folder (c:\Virtualisation\Configuration, as set by our install script).
  3. Executes the Start-LabConfiguration command, passing in the configuration data file.
  4. If we specify a -Start switch, the script starts the lab with the Start-Lab command.
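A simplified sketch of that wrapper is below. It assumes our folder conventions, that each node’s Role matches a folder under \VMs, and that our ProGet feed is registered as a repository named 'ProGet':

param(
    [Parameter(Mandatory)] [string] $EnvironmentName,
    [switch] $Start
)

$dataFile = ".\Environments\$EnvironmentName\$EnvironmentName.psd1"
$config   = Import-PowerShellDataFile -Path $dataFile

# 1. Install the DSC resources the environment lists for Lability
foreach ($resource in $config.NonNodeData.Lability.DSCResource) {
    Install-Module -Name $resource.Name -RequiredVersion $resource.RequiredVersion -Repository 'ProGet'
}

# 2. Compile a MOF for each role named in the data file
foreach ($role in ($config.AllNodes.Role | Select-Object -Unique)) {
    . ".\VMs\$role\$role.ps1"
    & $role -ConfigurationData $dataFile -OutputPath 'C:\Virtualisation\Configuration'
}

# 3. Create the lab VMs from the data file
Start-LabConfiguration -ConfigurationData $dataFile

# 4. Optionally boot the lab, honouring boot order and delays
if ($Start) { Start-Lab -ConfigurationData $dataFile }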

Remove-LocalLab script

Our remove script takes the name of our environment as a parameter. It does the following:

  1. Identifies the VMs in the lab using the Get-LabVM command, passing in the psd1 data file. It checks to see if any are running and, if so, calls the Stop-Lab command.
  2. Executes the Remove-LabConfiguration command, passing in the psd1 data file for the environment.
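A sketch of that remove script; the property used to test the power state is an assumption, so check the actual output of Get-LabVM:

param(
    [Parameter(Mandatory)] [string] $EnvironmentName
)

$dataFile = ".\Environments\$EnvironmentName\$EnvironmentName.psd1"

# Stop the lab first if any of its VMs are running
# (the State property name is assumed here)
$vms = Get-LabVM -ConfigurationData $dataFile
if ($vms | Where-Object { $_.State -eq 'Running' }) {
    Stop-Lab -ConfigurationData $dataFile
}

# Remove the VMs and their differencing disks
Remove-LabConfiguration -ConfigurationData $dataFile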

Virtual Machine Configuration

We’ve challenged ourselves to only use Desired State Configuration for our VMs. This has been a big change from our previous approach to Azure VMs, which mixed DSC with custom PowerShell scripts deployed with a separate Azure VM extension. This has raised four issues we had to solve:

  1. The list of DSC Resources is growing but not all-encompassing. There are many areas where no DSC modules exist. To overcome this, we have used a mix of SetScript code contained within a DSC configuration (which has some limitations) and bespoke DSC modules hosted in our ProGet repository.
  2. Existing Published DSC resources may contain bugs. In many cases code fixing those bugs has been supplied as pull requests but may be undergoing review, and sometimes no new release of the resource has been created. We now have our own separate code repository for DSC resources (including our own) where we keep these and we publish versions to our own repository. When a new official version including the fixes is released it will supersede our own.
  3. There are some good DSC resources out there on GitHub that aren’t published to the PowerShell gallery. We publish these into our own repository for access.
  4. Azure executes the DSC on the target VM to generate the MOF. Lability executes it on the host machine. That and other differences mean that we have wrapper code to switch the config sections, mostly based on an input parameter named IsAzure. When called from the Azure DSC extension we specify that parameter and on a Lability host we don’t. I realise that purists will argue that this means we don’t really have a single configuration. I would counter that I have a single configuration file and therefore one thing to maintain. I don’t see any issue with logic inside that config deciding what happens.

Sample Configuration

Let’s illustrate our approach with an extract from a configuration. The code below is part of our DomainController config.

The config accepts some parameters. EnvPrefix is used to generate names within the environment. In Azure we use it to prefix our Azure resources. Within the environment it’s used to create things like the AD domain name. IsAzure tells the config whether it is being executed on the host or on the target VM inside Azure.

You’ll notice that we specify the DSC module versions. There are a few reasons why we do this – because some of the DSC resources are unofficial we want to make sure they come from our repository, and the way Lability downloads DSC resources from our ProGet Server means we need to specify a version number. Either way, we benefit from increased consistency – there have been some breaking changes between versions with the official DSC resources in the PowerShell Gallery!

If we’re in Azure we do things like find the network adapter through code and we don’t specify network addresses. We use the IsAzure parameter to wrap this stuff in If blocks.

The configuration values come from the psd1 data file, regardless of whether we deploy to Azure or locally. We do this to enforce consistency. Even though we probably could have the Azure config self-contained in the script, we don’t.

 

Configuration DomainController {

    param(
        [ValidateNotNull()]
        [System.Management.Automation.PSCredential]$Credential,

        [string]$EnvPrefix,

        [bool]$IsAzure = $false,

        [Int]$RetryCount = 20,
        [Int]$RetryIntervalSec = 30
    )

    Import-DscResource -ModuleName @{ModuleName="xNetworking";ModuleVersion="3.2.0.0"}
    Import-DscResource -ModuleName @{ModuleName="xPSDesiredStateConfiguration";ModuleVersion="6.0.0.0"}
    Import-DscResource -ModuleName @{ModuleName="xActiveDirectory";ModuleVersion="2.16.0.0"}
    Import-DscResource -ModuleName @{ModuleName="xAdcsDeployment";ModuleVersion="1.1.0.0"}
    Import-DscResource -ModuleName @{ModuleName="xComputerManagement";ModuleVersion="1.9.0.0"}

    $DomainName = $EnvPrefix + ".local"

    Write-Verbose "Processing Configuration DomainController"

    Write-Verbose "Processing configuration: Node DomainController"
    node $AllNodes.where({$_.Role -eq 'DomainController'}).NodeName {
        Write-Verbose "Processing Node: $($node.NodeName)"

        if ($IsAzure -eq $true) {
            #Find the first network adapter
            $Interface = Get-NetAdapter | Where-Object Name -Like "Ethernet*" | Select-Object -First 1
            $InterfaceAlias = $($Interface.Name)
        }
        
        LocalConfigurationManager {
            RebootNodeIfNeeded = $true;
            AllowModuleOverwrite = $true;
            ConfigurationMode = 'ApplyOnly'
            CertificateID = $node.Thumbprint;
            DebugMode = 'All';
        }

        # Skipped when running in Azure – the fabric assigns IP addresses
        if ($IsAzure -eq $false) {
            # Set a fixed IP address if the config specifies one
            if ($node.IPaddress) {
                xIPAddress PrimaryIPAddress {
                    IPAddress = $node.IPAddress;
                    InterfaceAlias = $node.InterfaceAlias;
                    PrefixLength = $node.PrefixLength;
                    AddressFamily = $node.AddressFamily;
                }
            }
        }


        # Skipped when running in Azure – the fabric assigns IP addresses
        if ($IsAzure -eq $false) {
            # Set a default gateway if the config specifies one
            if ($node.DefaultGateway){
                xDefaultGatewayAddress DefaultGateway {
                    InterfaceAlias = $node.InterfaceAlias;
                    Address = $node.DefaultGateway;
                    AddressFamily = $node.AddressFamily;
                }
            }
        }

        # Set the DNS server if the config specifies one
        if ($IsAzure -eq $true) {
            if ($node.DnsAddress){
                xDNSServerAddress DNSaddress {
                    Address = $node.DnsAddress;
                    InterfaceAlias = $InterfaceAlias;
                    AddressFamily = $node.AddressFamily;
                }
            }
        } 
        else {
            if ($node.DnsAddress){
                xDNSServerAddress DNSaddress {
                    Address = $node.DnsAddress;
                    InterfaceAlias = $node.InterfaceAlias;
                    AddressFamily = $node.AddressFamily;
                }
            }
        }
            
    }

#End configuration DomainController
}

Sample Data File

Below is a sample data file for an environment containing a Domain Controller and single domain-joined server. Note that the data file contains a mix of data to be processed by the DSC configuration and Lability-specific information that defines the environment, including VM settings and the required DSC resources. When we deploy the lab locally, Lability processes the file to create the Virtual Machines and their hard disks (and create new virtual switches if we declare them). When we deploy in Azure this information is ignored – we can safely use the same data file in both situations.

# Single Domain Controller Lab

@{
    AllNodes = @(
        @{
            # DomainController
            NodeName = "DC";
            Role = 'DomainController';
            DSdrive = 'C:';
            
            #Prevent credential error messages
            PSDscAllowPlainTextPassword = $true;
            PSDscAllowDomainUser = $true;


            # Networking
            IPAddress = '192.168.254.2';
            DnsAddress = '127.0.0.1';
            DefaultGateway = '192.168.254.1';
            PrefixLength = 24;
            AddressFamily = 'IPv4';
            DnsConnectionSuffix = 'lab.local';
            InterfaceAlias = 'Ethernet';


            # Lability extras
            Lability_Media = 'BM_Server_2012_R2_Standard_x64';
            Lability_ProcessorCount = 2;
            Lability_StartupMemory = 2GB;
            Lability_MinimumMemory = 1GB;
            Lability_MaximumMemory = 3GB;
            Lability_BootOrder = 0;
            Lability_BootDelay = 600;
        };
        @{
            # MemberServer
            NodeName = "SR01";
            Role = 'MemberServer';
            DSdrive = 'C:';
            
            #Prevent credential error messages
            PSDscAllowPlainTextPassword = $true;
            PSDscAllowDomainUser = $true;


            # Networking
            IPAddress = '192.168.254.3';
            DnsAddress = '192.168.254.2';
            DefaultGateway = '192.168.254.1';
            PrefixLength = 24;
            AddressFamily = 'IPv4';
            DnsConnectionSuffix = 'lab.local';
            InterfaceAlias = 'Ethernet';


            # Lability extras
            Lability_Media = 'BM_Server_2012_R2_Standard_x64';
            Lability_ProcessorCount = 2;
            Lability_StartupMemory = 2GB;
            Lability_MinimumMemory = 1GB;
            Lability_MaximumMemory = 3GB;
            Lability_BootOrder = 1;
        };

    );

    NonNodeData = @{
        OrganisationName = 'Lab';

        Lability = @{
            EnvironmentPrefix = 'Lab-';

            DSCResource = @(
                @{ Name = 'xNetworking'; RequiredVersion = '3.2.0.0';}
                @{ Name = 'xPSDesiredStateConfiguration'; RequiredVersion = '6.0.0.0';}
                @{ Name = 'xActiveDirectory'; RequiredVersion = '2.16.0.0';}
                @{ Name = 'xAdcsDeployment'; RequiredVersion = '1.1.0.0';}
                @{ Name = 'xComputerManagement'; RequiredVersion = '1.9.0.0';}
            );
        }

    };
};

Azure DSC Extension

Our Azure deployment uses the configuration and data file to configure the VM. The JSON for the DSC extension is shown below. Notice the following:

1. The modulesUrl setting specifies a Zip file that contains the DSC resources and configuration ps1 file. We create these zip files as part of our build process and upload them to an Azure storage account.

2. The configurationFunction setting specifies the name of the ps1 file to execute and the configuration within that we want to apply (a single file can contain more than one configuration, although ours don’t).

3. We pass in the EnvPrefix variable and set the IsAzure value to 1 so our configuration executes the right code.

4. The dataBlobUri within protectedSettings is our psd1 data file. The extension treats this as containing sensitive information – things held in this section are not displayed in any output from Azure Resource Manager.

In fairness, whilst at the moment we create JSON specific to each VM, I plan to refactor this to be common code that takes parameters rather than having an ARM template for each VM’s DSC.

      {
        "name": "[concat(parameters('envPrefix'),parameters('vmName'),'/',parameters('envPrefix'),parameters('vmName'),'dsc')]",
        "type": "Microsoft.Compute/virtualMachines/extensions",
        "location": "[parameters('VirtualNetwork').Location]",
        "apiVersion": "[parameters('ApiVersion').VirtualMachine]",
        "dependsOn": [
        ],
        "tags": {
          "displayName": "DomainController"
        },
        "properties": {
          "publisher": "Microsoft.Powershell",
          "type": "DSC",
          "typeHandlerVersion": "2.1",
          "autoUpgradeMinorVersion": true,
          "settings": {
            "modulesUrl": "[concat(parameters('artifactsLocation'), '/Environments/', parameters('envConfig'),'/',parameters('envConfig'),'.zip', parameters('artifactsSasToken'))]",
            "configurationFunction": "DomainController.ps1\\DomainController",
            "properties": {
              "EnvPrefix": "[parameters('EnvPrefix')]",
              "Credential": {
                "userName": "[parameters('adminUsername')]",
                "password": "PrivateSettingsRef:adminPassword"
              },
              "IsAzure": 1
            }
          },
          "protectedSettings": {
            "dataBlobUri": "[concat(parameters('artifactsLocation'), '/Environments/', parameters('envConfig'), '/', parameters('envConfig'),'.psd1', parameters('artifactsSasToken'))]",
            "Items": {
              "adminPassword": "[parameters('adminPassword')]"
            }
          }
        }
      }

We don’t include the DSC extension within the ARM template that deploys the VM because keeping it separate allows us to sequence the deployment of configuration to deal with dependencies between servers.

Azure ARM Templates

The approach we take to deploying VMs in Azure has been consistent for some time now. My ResourceTemplates Repo in GitHub uses nested templates to deploy a three-server environment and we use exactly the same approach here. Our ‘master template’ is stored in the environment folder and it calls nested deploys for each VM, VM DSC extension and supporting stuff such as virtual networks. The VM and DSC templates are stored in the VM folder with the DSC config, and the supporting templates are in our Modules\Templates folder since they are shared.
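Kicking off an environment deployment then comes down to a single resource group deployment against the master template. A sketch using the AzureRM cmdlets of the time – the resource group, parameter names and artifact URL follow our conventions and are placeholders here:

# Create a resource group and deploy the environment's master template
New-AzureRmResourceGroup -Name 'MyEnv1-rg' -Location 'West Europe'

$sasToken = '<sas-token>'   # generated for the artifact storage account
New-AzureRmResourceGroupDeployment -Name 'MyEnv1-deploy' `
    -ResourceGroupName 'MyEnv1-rg' `
    -TemplateFile '.\Environments\MyEnv1\azuredeploy.json' `
    -envPrefix 'MyEnv1' `
    -artifactsLocation 'https://mycorpartifacts.blob.core.windows.net/environments' `
    -artifactsSasToken $sasToken

Template parameters such as envPrefix surface as dynamic parameters on New-AzureRmResourceGroupDeployment, which is why they can be passed directly on the command line.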

Conclusion

This has been a very long article without a great deal of code in it. I hope this explains how we approach our environment definition and deployment. I plan to do more posts that document more specific elements of a configuration or an environment.

Ultimately, I’m not sure that the goal of a single definition that covers multiple platforms and both host and guest configurations exists. However, I think we’ve got pretty close with our solution and it has minimal rework involved, particularly once you have built up a good library of VM configs that you can combine into an environment.

I should also point out that we are not installing apps – we are deploying a platform onto which our developers and testers can then install the applications they develop. This means that we keep the environments quite generic. Deployment of apps is still scripted (and probably uses VSTS Release Management) but is not included in the configurations we build. Having said that, there is nothing stopping a team extending the DSC to deploy their applications and thus build a more bespoke definition.

I’ve spoken to quite a few people about what we’ve done over the past few weeks and, certainly within the Microsoft space, many people want to do what we have done, but few were aware that tooling such as Lability and DSC was available to get it done. I hope this goes some way to plugging that gap.

My Resource Templates from demos are now on GitHub

I’ve had a number of people ask me if I can share the templates I use in my Resource Template sessions at conferences. It’s taken me a while to find the time, but I have created a repo on GitHub and there is a new Visual Studio solution and deployment project with my code.

One very nice feature that this has enabled me to provide is the same ‘Deploy to Azure’ button as you’ll find in the Azure Quickstart Templates. This meant a few changes to the templates – it turns out that GitHub is case sensitive for file requests, for example, whilst Azure Storage isn’t. The end result is that you can try out my templates in your own subscription directly from GitHub!

Optimising IaaS deployments in Azure Resource Templates

Unlike most of my recent posts this one won’t have code in it. Instead I want to talk about concepts and how you should look long and hard at your templates to optimise deployment.

In my previous articles I’ve talked about how nested deployments can help apply sensible structure to your deployments. I’ve also talked about things I’ve learned around what will successfully deploy and what will give errors. Nested deployments are still key, but the continuous cycle of improvements in Azure means I can change my information somewhat around what works well and what is likely to fail. Importantly, that change allows us to drastically improve our deployment time if we have lots of virtual machines.

I’d previously found that unless I nested the extensions for a VM within the JSON of the virtual machine itself, I got lots of random deployment errors. I am happy to report that situation has now improved. The result of that improvement is that we can now separate out the extensions deployed to a virtual machine from the machine itself. That separates the configuration of the VM, which for complex environments almost certainly has a prescribed sequence, from the deployment of the VM, which almost certainly doesn’t.

To give you a concrete example, in the latest work at Black Marble we are deploying a multi-server environment (DC, ADFS, WAP, SQL, BizTalk, Service Bus and two IIS servers) where we deploy the VMs and configure them. With my original approach, hard-fought to achieve a reliable deploy, each VM was pushed and fully configured in the necessary sequence, domain controller first.

With our new approach we can deploy all eight VMs in that environment simultaneously. We have moved our DSC and Custom Script extensions into separate resource templates and that has allowed some clever sequencing to drastically shorten the time to deploy the environment (currently around fifty minutes!).

We did this by carefully looking at what each step was doing and really focusing on the dependencies:

  • The domain controller VM created a new virtual machine. The DSC extension then installed domain services and certificate services and created the domain. The custom script then created some certificates.
  • The ADFS VM created a new virtual machine. The DSC extension then joined that server to the domain. The custom script then copied the certificate from the DC and configured ADFS.
  • The WAP VM created a new virtual machine. The DSC extension then joined that server to the domain. The custom script then copied the certificate from the DC and configured the proxy for the configured ADFS service.

Hopefully you can see what we saw: Each machine had three phases of configuration and the dependencies were different, giving us three separate sequences:

  1. The VM creations are completely independent. We could do those in parallel to save time.
  2. The DSC configuration for the DC has to be done first, to create the domain. However, the ADFS and WAP servers have DSC configurations that are independent, so we could do those in parallel too.
  3. The custom script configurations have a definite sequence (DC – ADFS – WAP) and the DC script depends on the DC having run its DSC configuration first so we have our certificate services.

Once we’ve identified our work streams it’s a simple matter of declaring the dependencies in our JSON.

Top tip: It’s a good idea to list all the dependencies for each resource. Even though the Azure Resource Manager will infer the dependency chain when it parses the template, it’s much easier for humans to look at a full list in each resource to figure out what’s going on.

The end result of this tinkering? We cut our deployment time in half. The really cool bit is that adding more VMs doesn’t add much time to our deploy as it’s the creation of the virtual machines that tends to take longest.

Using References and Outputs in Azure Resource Templates

As you work more with Azure Resource Templates you will find that you need to pass information from one resource you have created into another. This is fine if you had the information to begin with in your variables and parameters, but what if it’s something you cannot know before deploy, such as the dynamic IP address of your new VM, or the FQDN of your new public IP address for your service?

The answer is to use References to access properties of other resources within your template. However, if you need to get information between templates then you also need to look at outputs.

A crucial tool in this process is the Azure Resource Explorer (also now available within the Azure Portal – click Browse and look for Resource Explorer) because most often you will need to look at the JSON for your provisioned resource in order to find the specific property you seek.

In the JSON below I am passing the value of the current IP address of the NIC attached to a virtual machine into a nested template as a parameter.

"ipAddress": {
    "value": "[reference(concat(parameters('envPrefix'),parameters('vmName'),'nic')).ipConfigurations[0].properties.privateIPAddress]"
}

The markup looks complex but isn’t really. The concat bit is building the name of the resource, which I do based on parameters within the resource template. Basically, you specify reference in the same way as you would a variable or parameter. You then need to provide the name of the resource you want to reference (the concat markup here, but it could just be ‘mynic’) and then the property you want, using dot notation to work your way down the object tree.

I’ve used the example above for a reason because it covers all the bases you might hit:

  1. When you look at the JSON for the deployed resource you will see a properties section (just as you do in your template). You don’t need to include this in your reference (i.e. mynic.<the property I want>, not mynic.properties.<the property I want>).
  2. My nic can have multiple IP assignments – ipConfigurations is an array – so I am using [0] to look at the first item in that array.
  3. Within the ipConfiguration is another properties object. This time I need to include it in the markup.
  4. Within the properties of the ipConfiguration is an attribute called privateIPAddress, so I specify this.

It is important to remember that I can only use reference to access resources defined within my current template.

So what if I want to pass a value back out of my current template to the one I called it with? That’s what the Outputs section of my template is for, and by and large everything in there will be a reference to a property of a resource the current template has deployed. In the code below I am passing the same IP address back out of my template:

"outputs": {
    "ipAddress": {
        "value": "[reference(concat(parameters('envPrefix'),parameters('vmName'),'nic')).ipConfigurations[0].properties.privateIPAddress]",
        "type": "string"
    }
}

Within my parent template I access that output by using the reference keyword again, this time referencing an output from the template resource. In the example below I am passing the IP address from my domain controller template into another nested deployment that will reconfigure my virtual network.

"parameters": {
    "VirtualNetwork": {
        "value": "[variables('VirtualNetwork')]"
    },
    "DNSaddress": {
        "value": "[reference('DomainController').outputs.ipAddress.value]"
    }
}
Note that this markup requires me to specify .value on the end of the reference to pass the information correctly.

References and outputs are important because they allow you to pass information between resources and nested deployments. They allow you to keep your variable count low and understandable, and your templates small and well defined with nested deployments for complex environments.

Using Objects in Azure Resource Templates

Over the past few weeks I’ve been refactoring and improving the templates that I have been creating for Black Marble to deploy environments in Azure. This is the first post of a few talking about some of the more advanced stuff I’m now doing.

You will remember from my previous posts that within an Azure Resource Template you can define parameters and variables, then use those for the configuration values within your resources. I was finding after a while that the sheer number of parameters and variables I had made the templates hard to read and understand. This was particularly true when my colleagues started to work with these templates.

The solution I decided on was to collect individual parameters and variables into objects. These allow structures of information to be passed into and within a template. Importantly for me, this approach significantly reduces the number of items listed within the variables and parameters sections of my template, making them easier to read and understand.

Creating objects within the JSON is easy. You can simply declare variables within a hierarchy in your JSON. This is similar to using arrays, but each property can be individually referenced. Below is a sample from the variables section of my current deployment template:

"VirtualNetwork": {
   "Name": "[concat(parameters('envPrefix'), 'network')]",
   "Location": "[parameters('envLocation')]",
   "Prefix": "192.168.0.0/16",
   "Subnet1Name": "Subnet-1",
   "Subnet1Prefix": "192.168.1.0/24"
},

When passing this into a nested deployment I can simply push the entire object via the parameters block of the nested deployment JSON:
"parameters": {
    "VirtualNetwork": {
        "value": "[variables('VirtualNetwork')]"
    },
    "StorageAccount": {
        "value": "[variables('StorageAccount')]"
    }
}

Within the target template I declare the parameter to be of type Object:

"VirtualNetwork": {
  "type": "object",
  "metadata": {
    "description": "object containing virtual network params"
  }
}

Then to reference an individual property I specify it after the parameter itself using dot notation for the hierarchy of properties:

"subnets": [
  {
    "name": "[parameters('VirtualNetwork').Subnet1Name]",
    "properties": {
      "addressPrefix": "[parameters('VirtualNetwork').Subnet1Prefix]"
    }
  }
]
The end result is a much better structure to my templates, where I am passing blocks of related information around. It’s easier to read, understand and debug.

Useful links from The ART of Modern Azure Deployments

Within a few days of each other I spoke about Azure Resource Templates at both DDDNorth 2015 and Integration Mondays run by the Integration User Group. I’d like to thank all of you who attended both and have been very kind in your feedback afterwards.

As promised, this post contains the useful links from my final slide.

I’ve already written posts on much of the content covered in my talk. However, since I’m currently sat on a transatlantic flight you can expect a series of posts to follow this on topics such as objects in templates, outputs and references.

If you missed my Integration Monday session, the organisers recorded it and you can watch it online.

Azure PowerShell 1.0 Preview
https://azure.Microsoft.com/en-us/blog/azps-1-0-pre/

Azure QuickStart Templates
https://github.com/Azure/azure-quickstart-templates
http://azure.microsoft.com/en-us/documentation/templates/

ARM Template Documentation
https://msdn.microsoft.com/en-us/library/azure/dn835138.aspx

Azure Resource Explorer
https://resources.azure.com/

“Azure Resource Manager DevOps Jumpstart”
https://www.microsoftvirtualacademy.com/en-US/training-courses/azure-resource-manager-devops-jump-start-8413

Complex Azure Odyssey Part Four: WAP Server

Part One of this series covered the project itself and the overall template structure. Part Two went through how I deploy the Domain Controller in depth. Part Three talks about deploying my ADFS server and in this final part I will show you how to configure the WAP server that faces the outside world.

The Template

The WAP server is the only one in my environment that faces the internet. Because of this the deployment is more complex. I’ve also added further complexity because I want to be able to have more than one WAP server in future, so there’s a load balancer deployed too. You can see the resource outline in the screenshot below:

[Screenshot: WAP template JSON resource outline]

The internet-facing stuff means we need more things in our template. First up is our PublicIPAddress:

{
  "name": "[variables('vmWAPpublicipName')]",
  "type": "Microsoft.Network/publicIPAddresses",
  "location": "[parameters('resourceLocation')]",
  "apiVersion": "2015-05-01-preview",
  "dependsOn": [ ],
  "tags": {
    "displayName": "vmWAPpublicip"
  },
  "properties": {
    "publicIPAllocationMethod": "Dynamic",
    "dnsSettings": {
      "domainNameLabel": "[variables('vmWAPpublicipDnsName')]"
    }
  }
},

This is pretty straightforward stuff. The nature of my environment means that I am perfectly happy with a dynamic IP that changes if I stop and then start the environment. Access will be via the hostname assigned to that IP and I use that hostname in my ADFS service configuration and certificates. Azure builds the hostname based on a pattern and I can use that pattern in my templates, which is how I’ve created the certs when I deploy the DC and configure the ADFS service, all before I’ve deployed the WAP server.

That public IP address is then bound to our load balancer, which provides the internet endpoint for our services:

{
  "apiVersion": "2015-05-01-preview",
  "name": "[variables('vmWAPlbName')]",
  "type": "Microsoft.Network/loadBalancers",
  "location": "[parameters('resourceLocation')]",
  "dependsOn": [
    "[resourceId('Microsoft.Network/publicIPAddresses',variables('vmWAPpublicipName'))]"
  ],
  "properties": {
    "frontendIPConfigurations": [
      {
        "name": "[variables('LBFE')]",
        "properties": {
          "publicIPAddress": {
            "id": "[resourceId('Microsoft.Network/publicIPAddresses',variables('vmWAPpublicipName'))]"
          }
        }
      }
    ],
    "backendAddressPools": [
      {
        "name": "[variables('LBBE')]"
      }
    ],
    "inboundNatRules": [
      {
        "name": "[variables('RDPNAT')]",
        "properties": {
          "frontendIPConfiguration": {
            "id": "[variables('vmWAPLbfeConfigID')]"
          },
          "protocol": "tcp",
          "frontendPort": "[variables('rdpPort')]",
          "backendPort": 3389,
          "enableFloatingIP": false
        }
      },
      {
        "name": "[variables('httpsNAT')]",
        "properties": {
          "frontendIPConfiguration": {
            "id": "[variables('vmWAPLbfeConfigID')]"
          },
          "protocol": "tcp",
          "frontendPort": "[variables('httpsPort')]",
          "backendPort": 443,
          "enableFloatingIP": false
        }
      }
    ]
  }
}

There’s a lot going on in here so let’s work through it. First of all we connect our public IP address to the load balancer. We then create a back end configuration which we will later connect our VM to. Finally we create a set of NAT rules. I need to be able to RDP into the WAP server, which is the first block. The variables define the names of my resources. You can see that I specify the ports – external through a variable that I can change, and internal directly, because that needs to be the same each time – it’s what my VMs listen on. You can see that each NAT rule is associated with the frontendIPConfiguration – opening the port to the outside world.

The next step is to create a NIC that will hook our VM up to the existing virtual network and the load balancer:

{
  "name": "[variables('vmWAPNicName')]",
  "type": "Microsoft.Network/networkInterfaces",
  "location": "[parameters('resourceLocation')]",
  "apiVersion": "2015-05-01-preview",
  "dependsOn": [
    "[concat('Microsoft.Network/publicIPAddresses/', variables('vmWAPpublicipName'))]",
    "[concat('Microsoft.Network/loadBalancers/',variables('vmWAPlbName'))]"
  ],
  "tags": {
    "displayName": "vmWAPNic"
  },
  "properties": {
    "ipConfigurations": [
      {
        "name": "ipconfig1",
        "properties": {
          "privateIPAllocationMethod": "Static",
          "privateIPAddress": "[variables('vmWAPIPAddress')]",
          "subnet": {
            "id": "[variables('vmWAPSubnetRef')]"
          },
          "loadBalancerBackendAddressPools": [
            {
              "id": "[variables('vmWAPBEAddressPoolID')]"
            }
          ],
          "loadBalancerInboundNatRules": [
            {
              "id": "[variables('vmWAPRDPNATRuleID')]"
            },
            {
              "id": "[variables('vmWAPhttpsNATRuleID')]"
            }
          ]

        }
      }
    ]
  }
}

Here you can see that the NIC is connected to a subnet on our virtual network with a static IP that I specify in a variable. It is then added to the load balancer back end address pool and finally I need to specify which of the NAT rules I created in the load balancer are hooked up to my VM. If I don’t include the binding here, traffic won’t be passed to my VM (as I discovered when developing this lot – I forgot to wire up https and as a result couldn’t access the website published by WAP!).

The VM itself is basically the same as my ADFS server. I use the same Windows Server 2012 R2 image, have a single disk, and I’ve nested the extensions within the VM because that seems to work better than not doing so:

{
  "name": "[variables('vmWAPName')]",
  "type": "Microsoft.Compute/virtualMachines",
  "location": "[parameters('resourceLocation')]",
  "apiVersion": "2015-05-01-preview",
  "dependsOn": [
    "[concat('Microsoft.Network/networkInterfaces/', variables('vmWAPNicName'))]",
  ],
  "tags": {
    "displayName": "vmWAP"
  },
  "properties": {
    "hardwareProfile": {
      "vmSize": "[variables('vmWAPVmSize')]"
    },
    "osProfile": {
      "computername": "[variables('vmWAPName')]",
      "adminUsername": "[parameters('adminUsername')]",
      "adminPassword": "[parameters('adminPassword')]"
    },
    "storageProfile": {
      "imageReference": {
        "publisher": "[variables('windowsImagePublisher')]",
        "offer": "[variables('windowsImageOffer')]",
        "sku": "[variables('windowsImageSKU')]",
        "version": "latest"
      },
      "osDisk": {
        "name": "[concat(variables('vmWAPName'), '-os-disk')]",
        "vhd": {
          "uri": "[concat('http://', variables('storageAccountName'), '.blob.core.windows.net/', variables('vmStorageAccountContainerName'), '/', variables('vmWAPName'), 'os.vhd')]"
        },
        "caching": "ReadWrite",
        "createOption": "FromImage"
      }
    },
    "networkProfile": {
      "networkInterfaces": [
        {
          "id": "[resourceId('Microsoft.Network/networkInterfaces', variables('vmWAPNicName'))]"
        }
      ]
    }
  },
  "resources": [
    {
      "type": "extensions",
      "name": "IaaSDiagnostics",
      "apiVersion": "2015-06-15",
      "location": "[parameters('resourceLocation')]",
      "dependsOn": [
        "[concat('Microsoft.Compute/virtualMachines/', variables('vmWAPName'))]"
      ],
      "tags": {
        "displayName": "[concat(variables('vmWAPName'),'/vmDiagnostics')]"
      },
      "properties": {
        "publisher": "Microsoft.Azure.Diagnostics",
        "type": "IaaSDiagnostics",
        "typeHandlerVersion": "1.4",
        "autoUpgradeMinorVersion": "true",
        "settings": {
          "xmlCfg": "[base64(variables('wadcfgx'))]",
          "StorageAccount": "[variables('storageAccountName')]"
        },
        "protectedSettings": {
          "storageAccountName": "[variables('storageAccountName')]",
          "storageAccountKey": "[listKeys(variables('storageAccountid'),'2015-05-01-preview').key1]",
          "storageAccountEndPoint": "https://core.windows.net/"
        }
      }
    },
    {
      "type": "Microsoft.Compute/virtualMachines/extensions",
      "name": "[concat(variables('vmWAPName'),'/WAPserver')]",
      "apiVersion": "2015-05-01-preview",
      "location": "[parameters('resourceLocation')]",
      "dependsOn": [
        "[resourceId('Microsoft.Compute/virtualMachines', variables('vmWAPName'))]",
        "[concat('Microsoft.Compute/virtualMachines/', variables('vmWAPName'),'/extensions/IaaSDiagnostics')]"
      ],
      "properties": {
        "publisher": "Microsoft.Powershell",
        "type": "DSC",
        "typeHandlerVersion": "1.7",
        "settings": {
          "modulesURL": "[concat(variables('vmDSCmoduleUrl'), parameters('_artifactsLocationSasToken'))]",
          "configurationFunction": "[variables('vmWAPConfigurationFunction')]",
          "properties": {
            "domainName": "[variables('domainName')]",
            "adminCreds": {
              "userName": "[parameters('adminUsername')]",
              "password": "PrivateSettingsRef:adminPassword"
            }
          }
        },
        "protectedSettings": {
          "items": {
            "adminPassword": "[parameters('adminPassword')]"
          }
        }
      }
    },
    {
      "type": "Microsoft.Compute/virtualMachines/extensions",
      "name": "[concat(variables('vmWAPName'),'/wapScript')]",
      "apiVersion": "2015-05-01-preview",
      "location": "[parameters('resourceLocation')]",
      "dependsOn": [
        "[concat('Microsoft.Compute/virtualMachines/', variables('vmWAPName'))]",
        "[concat('Microsoft.Compute/virtualMachines/', variables('vmWAPName'),'/extensions/WAPserver')]"

      ],
      "properties": {
        "publisher": "Microsoft.Compute",
        "type": "CustomScriptExtension",
        "typeHandlerVersion": "1.4",
        "settings": {
          "fileUris": [
            "[concat(parameters('_artifactsLocation'),'/WapServer.ps1', parameters('_artifactsLocationSasToken'))]",
            "[concat(parameters('_artifactsLocation'),'/PSPKI.zip', parameters('_artifactsLocationSasToken'))]",
            "[concat(parameters('_artifactsLocation'),'/tuServDeployFunctions.ps1', parameters('_artifactsLocationSasToken'))]"
          ],
          "commandToExecute": "[concat('powershell.exe -file WAPServer.ps1',' -vmAdminUsername ',parameters('adminUsername'),' -vmAdminPassword ',parameters('adminPassword'),' -fsServiceName ',variables('vmWAPpublicipDnsName'),' -adfsServerName ',variables('vmADFSName'),' -vmDCname ',variables('vmDCName'), ' -resourceLocation \"', parameters('resourceLocation'),'\"')]"
        }
      }
    }
  ]
}

The DSC and custom script extensions are in the same vein as with ADFS: DSC gets the Windows features installed, and then my script configures the rest.

The DSC Modules

As with the other two servers, the files copied into the VM by the DSC extension are common. I then call the appropriate configuration for the WAP server, held within my common configuration file. The WAP server configuration is shown below:

configuration WAPserver
{
    param
    (
        [Parameter(Mandatory)]
        [String]$DomainName,

        [Parameter(Mandatory)]
        [System.Management.Automation.PSCredential]$Admincreds,

        [Int]$RetryCount=20,
        [Int]$RetryIntervalSec=30
    )

    Import-DscResource -ModuleName xComputerManagement,xActiveDirectory
    
    Node localhost
    {
        WindowsFeature WAPInstall 
        { 
            Ensure = "Present" 
            Name = "Web-Application-Proxy"
        }  
        WindowsFeature WAPMgmt 
        { 
            Ensure = "Present" 
            Name = "RSAT-RemoteAccess"
        }  
        WindowsFeature ADPS
        {
            Name = "RSAT-AD-PowerShell"
            Ensure = "Present"
        } 
        xWaitForADDomain DscForestWait 
        { 
            DomainName = $DomainName 
            DomainUserCredential= $Admincreds
            RetryCount = $RetryCount 
            RetryIntervalSec = $RetryIntervalSec 
            DependsOn = "[WindowsFeature]ADPS"      
        }
        xComputer DomainJoin
        {
            Name = $env:COMPUTERNAME
            DomainName = $DomainName
            Credential = New-Object System.Management.Automation.PSCredential ("${DomainName}\$($Admincreds.UserName)", $Admincreds.Password)
            DependsOn = "[xWaitForADDomain]DscForestWait"
        }

        LocalConfigurationManager 
        {
            DebugMode = $true    
            RebootNodeIfNeeded = $true
        }
    }     
}

As with ADFS, the configuration joins the domain and adds the features required for WAP. Note that I install the RSAT tools for Remote Access. If you don’t, you can’t configure WAP because the PowerShell modules aren’t installed!
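
A quick way to confirm those features landed is to run a couple of checks on the WAP server once the DSC extension has finished – just a sketch, run locally on the VM:

# Both features should show as Installed
Get-WindowsFeature Web-Application-Proxy, RSAT-RemoteAccess
# The WAP cmdlets only exist once RSAT-RemoteAccess is present
Get-Command -Module WebApplicationProxy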

The Custom Scripts

The WAP script performs much of the same work as the ADFS script. I need to install the certificate for my service, so the script copies that onto the server before it runs an invoke-command block. The main script runs as the local system account and can successfully connect to the DC as the computer account. I then run my invoke-command with domain admin credentials so I can configure WAP; once inside the invoke-command block network access gets tricky, so I don’t attempt it there!

#
# WapServer.ps1
#
param (
    $vmAdminUsername,
    $vmAdminPassword,
    $fsServiceName,
    $adfsServerName,
    $vmDCname,
    $resourceLocation
)

$password =  ConvertTo-SecureString $vmAdminPassword -AsPlainText -Force
$credential = New-Object System.Management.Automation.PSCredential("$env:USERDOMAIN\$vmAdminUsername", $password)

Write-Verbose -Verbose "Entering Domain Controller Script"
Write-Verbose -verbose "Script path: $PSScriptRoot"
Write-Verbose -Verbose "vmAdminUsername: $vmAdminUsername"
Write-Verbose -Verbose "vmAdminPassword: $vmAdminPassword"
Write-Verbose -Verbose "fsServiceName: $fsServiceName"
Write-Verbose -Verbose "adfsServerName: $adfsServerName"
Write-Verbose -Verbose "env:UserDomain: $env:USERDOMAIN"
Write-Verbose -Verbose "resourceLocation: $resourceLocation"
Write-Verbose -Verbose "==================================="


    # Write an event to the event log to say that the script has executed.
    $event = New-Object System.Diagnostics.EventLog("Application")
    $event.Source = "tuServEnvironment"
    $info_event = [System.Diagnostics.EventLogEntryType]::Information
    $event.WriteEntry("WAPserver Script Executed", $info_event, 5001)


    $srcPath = "\\"+ $vmDCname + "\src"
    $fsCertificateSubject = $fsServiceName + "."+($resourceLocation.Replace(" ","")).ToLower()+".cloudapp.azure.com"
    $fsCertFileName = $fsCertificateSubject+".pfx"
    $certPath = $srcPath + "\" + $fsCertFileName

    #Copy cert from DC
    Write-Verbose -Verbose "Copying $certPath to $PSScriptRoot"
    Copy-Item $certPath -Destination $PSScriptRoot -Verbose

Invoke-Command  -Credential $credential -ComputerName $env:COMPUTERNAME -ScriptBlock {

    param (
        $workingDir,
        $vmAdminPassword,
        $domainCredential,
        $adfsServerName,
        $fsServiceName,
        $vmDCname,
        $resourceLocation
    )
    # Working variables

    # Write an event to the event log to say that the script has executed.
    $event = New-Object System.Diagnostics.EventLog("Application")
    $event.Source = "tuServEnvironment"
    $info_event = [System.Diagnostics.EventLogEntryType]::Information
    $event.WriteEntry("In WAPserver scriptblock", $info_event, 5001)

    #go to our packages scripts folder
    Set-Location $workingDir
    
    $zipfile = $workingDir + "\PSPKI.zip"
    $destination = $workingDir
    [System.Reflection.Assembly]::LoadWithPartialName("System.IO.Compression.FileSystem") | Out-Null
    [System.IO.Compression.ZipFile]::ExtractToDirectory($zipfile, $destination)
    
    Import-Module .\tuServDeployFunctions.ps1

    $fsCertificateSubject = $fsServiceName + "."+($resourceLocation.Replace(" ","")).ToLower()+".cloudapp.azure.com"
    $fsCertFileName = $workingDir + "\" + $fsCertificateSubject+".pfx"

    Write-Verbose -Verbose "Importing sslcert $fsCertFileName"
    Import-SSLCertificate -certificateFileName $fsCertFileName -certificatePassword $vmAdminPassword

    $fsIpAddress = (Resolve-DnsName $adfsServerName -type a).ipaddress
    Add-HostsFileEntry -ip $fsIpAddress -domain $fsCertificateSubject


    Set-WapConfiguration -credential $domainCredential -fedServiceName $fsCertificateSubject -certificateSubject $fsCertificateSubject


} -ArgumentList $PSScriptRoot, $vmAdminPassword, $credential, $adfsServerName, $fsServiceName, $vmDCname, $resourceLocation

The script modifies the HOSTS file on the server so it can find the ADFS service, and then configures the Web Application Proxy for that ADFS service. It’s worth mentioning at this point the $fsCertificateSubject, which is also my service name. When we first worked on this environment using the old Azure PowerShell commands, the name of the public endpoint was always <something>.cloudapp.net. With the new Resource Manager model I discovered that it is now <something>.<Azure Location>.cloudapp.azure.com. The <something> is in our control – we specify it. The <Azure Location> isn’t quite – it’s the resource location for our deployment, converted to lowercase with no spaces. You’ll find that same line of code in the DC and ADFS scripts; it’s creating the hostname our service will use based on the resource location specified in the template, passed into the script as a parameter.
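
To make that concrete, here’s the same line of code with some made-up values plugged in:

# Example values only – the real ones come from template variables and parameters
$fsServiceName = "tuservfs"
$resourceLocation = "West Europe"
$fsCertificateSubject = $fsServiceName + "."+($resourceLocation.Replace(" ","")).ToLower()+".cloudapp.azure.com"
# $fsCertificateSubject is now "tuservfs.westeurope.cloudapp.azure.com"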

The functions called by that script are shown below:

function Import-SSLCertificate {
    [CmdletBinding()]
    param
    (
        $certificateFileName,
        $certificatePassword
    )    

        Write-Verbose -Verbose "Importing cert $certificateFileName with password $certificatePassword"
        Write-Verbose -Verbose "---"

        Import-Module .\PSPKI\pspki.psm1

        Write-Verbose -Verbose "Attempting to import certificate" $certificateFileName
        # import it
        $password = ConvertTo-SecureString $certificatePassword -AsPlainText -Force
        Import-PfxCertificate –FilePath ($certificateFileName) cert:\localMachine\my -Password $password

}

function Add-HostsFileEntry
{
    [CmdletBinding()]
    param
    (
        $ip,
        $domain
    )

    $hostsFile = "$env:windir\System32\drivers\etc\hosts"
    $newHostEntry = "`t$ip`t$domain"

    if ((Get-Content $hostsFile) -contains $newHostEntry)
    {
        Write-Verbose -Verbose "The hosts file already contains the entry: $newHostEntry. File not updated."
    }
    else
    {
        Add-Content -Path $hostsFile -Value $newHostEntry
    }
}

function Set-WapConfiguration
{
    [CmdletBinding()]
    param
    (
        $credential,
        $fedServiceName,
        $certificateSubject
    )

    Write-Verbose -Verbose "Configuring WAP Role"
    Write-Verbose -Verbose "---"

    # Find the newest certificate in the machine store matching our subject
    $certificateThumbprint = (Get-ChildItem Cert:\LocalMachine\My | Where-Object {$_.Subject -match $certificateSubject} | Sort-Object -Descending NotBefore)[0].Thumbprint

    # install WAP against the ADFS farm
    Install-WebApplicationProxy -CertificateThumbprint $certificateThumbprint -FederationServiceName $fedServiceName -FederationServiceTrustCredential $credential
}

What’s Left?

This sequence of posts has talked about Resource Templates and how I structure mine, based on my experience of developing and repeatedly deploying a pretty complex environment. It’s also given you specific configuration advice for doing the same as me: create a domain controller and certificate authority, create an ADFS server, and publish that server via a Web Application Proxy. If you only take things this far, you’ll have an isolated environment that you can access via the WAP server for remote management.

I’m still working on this, however. I have a SQL server to configure. It turns out that DSC modules for SQL are pretty rich and I’ll blog on those at some point. I am also adding a BizTalk server. I suspect that will involve more on the custom script side. I then need to deploy my application itself, which I haven’t even begun yet (although the guys have created a rich set of automation PowerShell scripts to deal with the deployment).

Overall, I hope you take away from this series of posts just how powerful Azure Resource Templates can be when pushing out IaaS solutions. I haven’t even touched on the PaaS components of Azure, but they can be dealt with in the same way. The need to learn this stuff is common across IT, Dev and DevOps, and it’s really interesting and fun to work on (if frustrating at times). I strongly encourage you to go play!

Credits

As with the previous posts, stuff I’ve talked about has been derived in part from existing resources:

  • The Azure Quick Start templates were invaluable in having something to look at in the early days of resource templates before the tooling was here.
  • PSPKI is a fantastic set of PowerShell modules for dealing with certs.
  • The individual VM scripts are derived from work that was done in Black Marble by Andrew Davidson and myself to build the exact same environment in the older Azure manner without resource templates.

Complex Azure Template Odyssey Part Three: ADFS Server

Part One of this series covered the project itself and the overall template structure. Part Two went through how I deploy the Domain Controller in depth. This post will focus on the next server in the chain: The ADFS server that is required to enable authentication in the application which will eventually be installed on this environment.

The Template

The nested deployment template for the ADFS server differs little from my DC template. If anything, it’s even simpler, because we don’t have to reconfigure the virtual network after deploying the VM. The screenshot below shows the JSON outline for the template.

[Screenshot: ADFS template JSON outline]

You can see that it follows the same pattern as the DC template in part two. I have a VM, a NIC that it depends on and which is attached to our virtual network, and I have VM extensions within the VM itself to enable diagnostics, push a DSC configuration to the VM and execute a custom PowerShell script.

I went through the template construction in detail with the DC, so here I’ll simply show the resources code for you. The VM uses the same Windows Server base image as the DC but doesn’t need the extra disk that we attached to the DC.

"resources": [
  {
    "apiVersion": "2015-05-01-preview",
    "dependsOn": [
    ],
    "location": "[parameters('resourceLocation')]",
    "name": "[variables('vmADFSNicName')]",
    "properties": {
      "ipConfigurations": [
        {
          "name": "ipconfig1",
          "properties": {
            "privateIPAllocationMethod": "Static",
            "privateIPAddress": "[variables('vmADFSIPAddress')]",
            "subnet": {
              "id": "[variables('vmADFSSubnetRef')]"
            }
          }
        }
      ]
    },
    "tags": {
      "displayName": "vmADFSNic"
    },
    "type": "Microsoft.Network/networkInterfaces"
  },
  {
    "name": "[variables('vmADFSName')]",
    "type": "Microsoft.Compute/virtualMachines",
    "location": "[parameters('resourceLocation')]",
    "apiVersion": "2015-05-01-preview",
    "dependsOn": [
      "[concat('Microsoft.Network/networkInterfaces/', variables('vmADFSNicName'))]",
    ],
    "tags": {
      "displayName": "vmADFS"
    },
    "properties": {
      "hardwareProfile": {
        "vmSize": "[variables('vmADFSVmSize')]"
      },
      "osProfile": {
        "computername": "[variables('vmADFSName')]",
        "adminUsername": "[parameters('adminUsername')]",
        "adminPassword": "[parameters('adminPassword')]"
      },
      "storageProfile": {
        "imageReference": {
          "publisher": "[variables('windowsImagePublisher')]",
          "offer": "[variables('windowsImageOffer')]",
          "sku": "[variables('windowsImageSKU')]",
          "version": "latest"
        },
        "osDisk": {
          "name": "[concat(variables('vmADFSName'), '-os-disk')]",
          "vhd": {
            "uri": "[concat('http://', variables('storageAccountName'), '.blob.core.windows.net/', variables('vmStorageAccountContainerName'), '/', variables('vmADFSName'), 'os.vhd')]"
          },
          "caching": "ReadWrite",
          "createOption": "FromImage"
        }
      },
      "networkProfile": {
        "networkInterfaces": [
          {
            "id": "[resourceId('Microsoft.Network/networkInterfaces', variables('vmADFSNicName'))]"
          }
        ]
      }
    },
    "resources": [
      {
        "type": "extensions",
        "name": "IaaSDiagnostics",
        "apiVersion": "2015-06-15",
        "location": "[parameters('resourceLocation')]",
        "dependsOn": [
          "[concat('Microsoft.Compute/virtualMachines/', variables('vmADFSName'))]"
        ],
        "tags": {
          "displayName": "[concat(variables('vmADFSName'),'/vmDiagnostics')]"
        },
        "properties": {
          "publisher": "Microsoft.Azure.Diagnostics",
          "type": "IaaSDiagnostics",
          "typeHandlerVersion": "1.4",
          "autoUpgradeMinorVersion": "true",
          "settings": {
            "xmlCfg": "[base64(variables('wadcfgx'))]",
            "StorageAccount": "[variables('storageAccountName')]"
          },
          "protectedSettings": {
            "storageAccountName": "[variables('storageAccountName')]",
            "storageAccountKey": "[listKeys(variables('storageAccountid'),'2015-05-01-preview').key1]",
            "storageAccountEndPoint": "https://core.windows.net/"
          }
        }
      },
      {
        "type": "Microsoft.Compute/virtualMachines/extensions",
        "name": "[concat(variables('vmADFSName'),'/ADFSserver')]",
        "apiVersion": "2015-05-01-preview",
        "location": "[parameters('resourceLocation')]",
        "dependsOn": [
          "[resourceId('Microsoft.Compute/virtualMachines', variables('vmADFSName'))]",
          "[concat('Microsoft.Compute/virtualMachines/', variables('vmADFSName'),'/extensions/IaaSDiagnostics')]"
        ],
        "properties": {
          "publisher": "Microsoft.Powershell",
          "type": "DSC",
          "typeHandlerVersion": "1.7",
          "settings": {
            "modulesURL": "[concat(variables('vmDSCmoduleUrl'), parameters('_artifactsLocationSasToken'))]",
            "configurationFunction": "[variables('vmADFSConfigurationFunction')]",
            "properties": {
              "domainName": "[variables('domainName')]",
              "vmDCName": "[variables('vmDCName')]",
              "adminCreds": {
                "userName": "[parameters('adminUsername')]",
                "password": "PrivateSettingsRef:adminPassword"
              }
            }
          },
          "protectedSettings": {
            "items": {
              "adminPassword": "[parameters('adminPassword')]"
            }
          }
        }
      },
      {
        "type": "Microsoft.Compute/virtualMachines/extensions",
        "name": "[concat(variables('vmADFSName'),'/adfsScript')]",
        "apiVersion": "2015-05-01-preview",
        "location": "[parameters('resourceLocation')]",
        "dependsOn": [
          "[concat('Microsoft.Compute/virtualMachines/', variables('vmADFSName'))]",
          "[concat('Microsoft.Compute/virtualMachines/', variables('vmADFSName'),'/extensions/ADFSserver')]"
        ],
        "properties": {
          "publisher": "Microsoft.Compute",
          "type": "CustomScriptExtension",
          "typeHandlerVersion": "1.4",
          "settings": {
            "fileUris": [
              "[concat(parameters('_artifactsLocation'),'/AdfsServer.ps1', parameters('_artifactsLocationSasToken'))]",
              "[concat(parameters('_artifactsLocation'),'/PSPKI.zip', parameters('_artifactsLocationSasToken'))]",
              "[concat(parameters('_artifactsLocation'),'/tuServDeployFunctions.ps1', parameters('_artifactsLocationSasToken'))]"
            ],
            "commandToExecute": "[concat('powershell.exe -file AdfsServer.ps1',' -vmAdminUsername ',parameters('adminUsername'),' -vmAdminPassword ',parameters('adminPassword'),' -fsServiceName ',variables('vmWAPpublicipDnsName'),' -vmDCname ',variables('vmDCName'), ' -resourceLocation \"', parameters('resourceLocation'),'\"')]"
          }
        }
      }
    ]
  }
]

The DSC Modules

All the DSC modules I need get zipped into the same archive file which is deployed by each DSC extension to the VMs. I showed you that in part one. For the ADFS server, the extension calls the configuration module DSCvmConfigs.ps1\\ADFSserver (note the escaped slash) – the ADFSserver configuration within my single DSCvmConfigs.ps1 file that holds all my configurations. As with the DC configuration, this is based on stuff held in the SharePoint farm template on GitHub.

configuration ADFSserver
{
    param
    (
        [Parameter(Mandatory)]
        [String]$DomainName,

        [Parameter(Mandatory)]
        [String]$vmDCName,

        [Parameter(Mandatory)]
        [System.Management.Automation.PSCredential]$Admincreds,

        [Int]$RetryCount=20,
        [Int]$RetryIntervalSec=30
    )

    Import-DscResource -ModuleName xComputerManagement,xActiveDirectory

    Node localhost
    {
        WindowsFeature ADFSInstall 
        { 
            Ensure = "Present" 
            Name = "ADFS-Federation"
        }  
        WindowsFeature ADPS
        {
            Name = "RSAT-AD-PowerShell"
            Ensure = "Present"

        } 
        xWaitForADDomain DscForestWait 
        { 
            DomainName = $DomainName 
            DomainUserCredential= $Admincreds
            RetryCount = $RetryCount 
            RetryIntervalSec = $RetryIntervalSec 
            DependsOn = "[WindowsFeature]ADPS"      
        }
        xComputer DomainJoin
        {
            Name = $env:COMPUTERNAME
            DomainName = $DomainName
            Credential = New-Object System.Management.Automation.PSCredential ("${DomainName}\$($Admincreds.UserName)", $Admincreds.Password)
            DependsOn = "[xWaitForADDomain]DscForestWait"
        }


        LocalConfigurationManager 
        {
            DebugMode = $true    
            RebootNodeIfNeeded = $true
        }
    }     
}

The DSC for my ADFS server does much less than that of the DC. It installs the Windows features I need (the RSAT-AD-PowerShell tools are needed by the xWaitForADDomain config), makes sure our domain is contactable, and joins the server to it. Unfortunately there are no DSC resources around to configure our ADFS server at the moment, and whilst I’m happy writing scripts to do that work, I’m less comfortable writing DSC modules right now!
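
Incidentally, you can exercise a configuration like this outside the ARM pipeline, which is far quicker than a full deployment when you’re debugging. A sketch – the domain and DC names are examples, and allowing plain-text passwords is only acceptable on a throwaway test VM:

# Dot-source the configurations, then compile and apply the ADFSserver config locally
. .\DSCvmConfigs.ps1
$cd = @{ AllNodes = @( @{ NodeName = "localhost"; PSDscAllowPlainTextPassword = $true } ) }
ADFSserver -DomainName "tuserv.local" -vmDCName "tuServDC" -Admincreds (Get-Credential) -ConfigurationData $cd -OutputPath .\ADFSserver
Start-DscConfiguration -Path .\ADFSserver -Wait -Verbose -Force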

The Custom Scripts

Once our DSC extension has joined the domain and added our features, it’s over to the custom script extension to configure the ADFS service. As with the DC, I copy down the script itself, a file containing my own functions, and the PSPKI module.

#
# AdfsServer.ps1
#
param (
    $vmAdminUsername,
    $vmAdminPassword,
    $fsServiceName,
    $vmDCname,
    $resourceLocation
)

$password =  ConvertTo-SecureString $vmAdminPassword -AsPlainText -Force
$credential = New-Object System.Management.Automation.PSCredential("$env:USERDOMAIN\$vmAdminUsername", $password)

Write-Verbose -Verbose "Entering Domain Controller Script"
Write-Verbose -verbose "Script path: $PSScriptRoot"
Write-Verbose -Verbose "vmAdminUsername: $vmAdminUsername"
Write-Verbose -Verbose "vmAdminPassword: $vmAdminPassword"
Write-Verbose -Verbose "fsServiceName: $fsServiceName"
Write-Verbose -Verbose "env:UserDomain: $env:USERDOMAIN"
Write-Verbose -Verbose "resourceLocation: $resourceLocation"
Write-Verbose -Verbose "==================================="

    # Write an event to the event log to say that the script has executed.
    $event = New-Object System.Diagnostics.EventLog("Application")
    $event.Source = "tuServEnvironment"
    $info_event = [System.Diagnostics.EventLogEntryType]::Information
    $event.WriteEntry("ADFSserver Script Executed", $info_event, 5001)


    $srcPath = "\\"+ $vmDCname + "\src"
    $fsCertificateSubject = $fsServiceName + "."+($resourceLocation.Replace(" ",[System.String]::Empty)).ToLower()+".cloudapp.azure.com"
    $fsCertFileName = $fsCertificateSubject+".pfx"
    $certPath = $srcPath + "\" + $fsCertFileName

    #Copy cert from DC
    Write-Verbose -Verbose "Copying $certPath to $PSScriptRoot"
    Copy-Item $certPath -Destination $PSScriptRoot -Verbose

Invoke-Command  -Credential $credential -ComputerName $env:COMPUTERNAME -ScriptBlock {

    param (
        $workingDir,
        $vmAdminPassword,
        $domainCredential,
        $fsServiceName,
        $vmDCname,
        $resourceLocation
    )
    # Working variables

    Write-Verbose -Verbose "Entering ADFS Script"
    Write-Verbose -verbose "workingDir: $workingDir"
    Write-Verbose -Verbose "vmAdminPassword: $vmAdminPassword"
    Write-Verbose -Verbose "fsServiceName: $fsServiceName"
    Write-Verbose -Verbose "env:UserDomain: $env:USERDOMAIN"
    Write-Verbose -Verbose "env:UserDNSDomain: $env:USERDNSDOMAIN"
    Write-Verbose -Verbose "env:ComputerName: $env:COMPUTERNAME"
    Write-Verbose -Verbose "resourceLocation: $resourceLocation"
    Write-Verbose -Verbose "==================================="

    # Write an event to the event log to say that the script has executed.
    $event = New-Object System.Diagnostics.EventLog("Application")
    $event.Source = "tuServEnvironment"
    $info_event = [System.Diagnostics.EventLogEntryType]::Information
    $event.WriteEntry("In ADFSserver scriptblock", $info_event, 5001)

    #go to our packages scripts folder
    Set-Location $workingDir
    
    $zipfile = $workingDir + "\PSPKI.zip"
    $destination = $workingDir
    [System.Reflection.Assembly]::LoadWithPartialName("System.IO.Compression.FileSystem") | Out-Null
    [System.IO.Compression.ZipFile]::ExtractToDirectory($zipfile, $destination)
    
    Write-Verbose -Verbose "Importing PSPKI"
    Import-Module .\tuServDeployFunctions.ps1


    $fsCertificateSubject = $fsServiceName + "."+($resourceLocation.Replace(" ","")).ToLower()+".cloudapp.azure.com"
    $fsCertFileName = $workingDir + "\" + $fsCertificateSubject+".pfx"

    Write-Verbose -Verbose "Importing sslcert $fsCertFileName"
    Import-SSLCertificate -certificateFileName $fsCertFileName -certificatePassword $vmAdminPassword


    $adfsServiceAccount = $env:USERDOMAIN+"\"+"svc_adfs"
    $adfsPassword = ConvertTo-SecureString $vmAdminPassword -AsPlainText -Force 
    $adfsCredentials = New-Object System.Management.Automation.PSCredential ($adfsServiceAccount, $adfsPassword) 
    $adfsDisplayName = "ADFS Service"

    Write-Verbose -Verbose "Creating ADFS Farm"
    Create-ADFSFarm -domainCredential $domainCredential -adfsName $fsCertificateSubject -adfsDisplayName $adfsDisplayName -adfsCredentials $adfsCredentials -certificateSubject $fsCertificateSubject


} -ArgumentList $PSScriptRoot, $vmAdminPassword, $credential, $fsServiceName, $vmDCname, $resourceLocation

The script starts by copying the certificate files from the DC. The script extension shells the script as the local system account, so it connects to the share on the DC as the computer account. I copy the files before I execute an invoke-command block that runs as the domain admin. I do this because once I’m in that invoke-command block, network access becomes a real pain!

As you can see, this script doesn’t do a huge amount. Once in the invoke-command it unzips the PSPKI modules, imports the certificate it needs into the computer cert store and then calls a function to configure the ADFS service. The functions called by the script are below:

function Import-SSLCertificate {
    [CmdletBinding()]
    param
    (
        $certificateFileName,
        $certificatePassword
    )    

        Write-Verbose -Verbose "Importing cert $certificateFileName with password $certificatePassword"
        Write-Verbose -Verbose "---"

        Import-Module .\PSPKI\pspki.psm1

        Write-Verbose -Verbose "Attempting to import certificate" $certificateFileName
        # import it
        $password = ConvertTo-SecureString $certificatePassword -AsPlainText -Force
        Import-PfxCertificate –FilePath ($certificateFileName) cert:\localMachine\my -Password $password

}

function Create-ADFSFarm
{
    [CmdletBinding()]
    param
    (
        $domainCredential,
        $adfsName,
        $adfsDisplayName,
        $adfsCredentials,
        $certificateSubject
    )

    Write-Verbose -Verbose "In Function Create-ADFS Farm"
    Write-Verbose -Verbose "Parameters:"
    Write-Verbose -Verbose "adfsName: $adfsName"
    Write-Verbose -Verbose "certificateSubject: $certificateSubject"
    Write-Verbose -Verbose "adfsDisplayName: $adfsDisplayName"
    Write-Verbose -Verbose "adfsCredentials: $adfsCredentials"
    Write-Verbose -Verbose "============================================"

    Write-Verbose -Verbose "Importing Module"
    Import-Module ADFS
    Write-Verbose -Verbose "Getting Thumbprint"
    $certificateThumbprint = (get-childitem Cert:\LocalMachine\My | where {$_.subject -match $certificateSubject} | Sort-Object -Descending NotBefore)[0].thumbprint
    Write-Verbose -Verbose "Thumprint is $certificateThumbprint"
    Write-Verbose -Verbose "Install ADFS Farm"

    Write-Verbose -Verbose "Echo command:"
    Write-Verbose -Verbose "Install-AdfsFarm -credential $domainCredential -CertificateThumbprint $certificateThumbprint -FederationServiceDisplayName '$adfsDisplayName' -FederationServiceName $adfsName -ServiceAccountCredential $adfsCredentials"
    Install-AdfsFarm -credential $domainCredential -CertificateThumbprint $certificateThumbprint -FederationServiceDisplayName "$adfsDisplayName" -FederationServiceName $adfsName -ServiceAccountCredential $adfsCredentials -OverwriteConfiguration

}

There’s still stuff to do on the ADFS server once I get to deploying my application: I need to define relying party trusts and custom claims, for example. However, this deployment creates a working ADFS server that will authenticate users against my domain. It’s then published to the outside world safely by the Web Application Proxy role on my WAP server.

Credit Where It’s Due

Same as before – I stand on the shoulders of others to bring you this stuff:

  • The Azure Quick Start templates were invaluable in having something to look at in the early days of resource templates before the tooling was here.
  • PSPKI is a fantastic set of PowerShell modules for dealing with certs.
  • The individual VM scripts are derived from work that was done in Black Marble by Andrew Davidson and myself to build the exact same environment in the older Azure manner without resource templates.

Complex Azure Template Odyssey Part Two: Domain Controller

In part one of this series of posts I talked about the project driving my creation of these Azure Resource Templates, the structure of the template and what resources I was deploying. This post will go through the deployment and configuration of the first VM, which will become my domain controller and certificate server. In order to achieve my goals I need to deploy the VM, the DSC extension and finally the custom script extension to perform actions that current DSC modules can’t. I’ll show you the template code, the DSC code and the final scripts, and talk about the gotchas I encountered on the way.

Further posts will detail the ADFS and WAP server deployments.

The Template

I’ve already talked about how I’ve structured this project: A core template calls a collection of nested templates – one per VM. The DC template differs from the rest in that it too calls a nested deployment to make changes to my virtual network. Other than that, it follows the same convention.

[Screenshot: DC template JSON outline view]

The screenshot above is the JSON outline view of the template. Each of my nested VM templates follows the same pattern: The parameters block in each template is exactly the same. I’m using a standard convention for naming all my resources, so providing I pass the envPrefix parameter between each one I can calculate the name of any resource in the project. That’s important, as we’ll see in a moment. The variables block contains all the variables that the current template needs – things like the IP address that should be assigned or the image we use as our base for the VM. Finally, the resources section holds the items we are deploying to create the domain controller. This VM is isolated from the outside world so we need the VM itself and a NIC to connect it to our virtual network, nothing more. The network is created by the core template before it calls the DC template.

The nested deployment needs explaining. Once we’ve created our domain controller we need to make sure that all our other VMs receive the correct IP address for their DNS. In order to do that we have to reconfigure the virtual network that we have already deployed. The nested deployment here is an artefact of the original approach with a single template – it could actually be fully contained in the DC template.

To explain: We can only define a resource with a given type and name in a template once. Templates are declarative and describe how we want a resource to be configured. With our virtual network we want to reconfigure it after we have deployed subsequent resources. If we describe the network for a second time, the new configuration is applied to our existing resource. The problem is that we have already got a resource in our template for our network. We get around the problem by calling a nested deployment. That deployment is a copy of the network configuration, with the differences we need for our reconfiguration. In my original template which contained all the resources, that nested deployment depended on the DC being deployed and was then called. It had to be a nested deployment because the network was already in there once.

With my new model I could actually just include the contents of the network reconfiguration deployment directly in the DC template. I am still calling the nested resource simply because of the way I split my original template. The end result is the same. The VM gets created, then the DSC and script extensions run to turn it into a domain controller. The network template is then called to set the DNS IP configuration of the network to be the IP address of the newly-minted DC.

{
  "name": "tuServUpdateVnet",
  "type": "Microsoft.Resources/deployments",
  "apiVersion": "2015-01-01",
  "dependsOn": [
    "[concat('Microsoft.Compute/virtualMachines/', variables('vmDCName'))]",
    "[concat('Microsoft.Compute/virtualMachines/', variables('vmDCName'),'/extensions/dcScript')]"
  ],
  "properties": {
    "mode": "Incremental",
    "templateLink": {
      "uri": "[concat(variables('updateVNetDNSTemplateURL'), parameters('_artifactsLocationSasToken'))]",
      "contentVersion": "1.0.0.0"
    },
    "parameters": {
      "resourceLocation": { "value": "[parameters('resourceLocation')]" },
      "virtualNetworkName": { "value": "[variables('virtualNetworkName')]" },
      "virtualNetworkPrefix": { "value": "[variables('virtualNetworkPrefix')]" },
      "virtualNetworkSubnet1Name": { "value": "[variables('virtualNetworkSubnet1Name')]" },
      "virtualNetworkSubnet1Prefix": { "value": "[variables('virtualNetworkSubnet1Prefix')]" },
      "virtualNetworkDNS": { "value": [ "[variables('vmDCIPAddress')]" ] }
    }
  }
}

The code above is contained in my DC template. It calls the nested deployment through a URI to the template. That points to an Azure storage container holding all the resources for my deployment. The template is called with a set of parameters that are mostly variables created in the DC template in accordance with the rules and patterns I’ve set. Everything is the same as the original network deployment with the exception of the DNS address, which is to be set to the DC address. Below is the network template. Note that the parameter block defines parameters that match those being passed in. All names are case sensitive.

{
    "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {
        "resourceLocation": {
            "type": "string",
            "defaultValue": "West US",
            "allowedValues": [
                "East US",
                "West US",
                "West Europe",
                "North Europe",
                "East Asia",
                "South East Asia"
            ],
            "metadata": {
                "description": "The region to deploy the storage resources into"
            }
        },
        "virtualNetworkName": {
            "type": "string"
        },
        "virtualNetworkDNS": {
            "type": "array"
        },
        "virtualNetworkPrefix": {
            "type": "string"
        },
        "virtualNetworkSubnet1Name": {
            "type": "string"
        },
        "virtualNetworkSubnet1Prefix": {
            "type": "string"
        }
    },
        "variables": {
        },
        "resources": [
            {
                "name": "[parameters('virtualNetworkName')]",
                "type": "Microsoft.Network/virtualNetworks",
                "location": "[parameters('resourceLocation')]",
                "apiVersion": "2015-05-01-preview",
                "tags": {
                    "displayName": "virtualNetworkUpdate"
                },
                "properties": {
                    "addressSpace": {
                        "addressPrefixes": [
                            "[parameters('virtualNetworkPrefix')]"
                        ]
                    },
                    "dhcpOptions": {
                        "dnsServers": "[parameters('virtualNetworkDNS')]"
                    },

                    "subnets": [
                        {
                            "name": "[parameters('virtualNetworkSubnet1Name')]",
                            "properties": {
                                "addressPrefix": "[parameters('virtualNetworkSubnet1Prefix')]"
                            }
                        }
                    ]
                }
            }
        ],
        "outputs": {
        }
    }
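
It’s worth noting that a nested template like this can be exercised on its own, which is handy when debugging: New-AzureRmResourceGroupDeployment surfaces the template’s parameters as dynamic cmdlet parameters. A sketch with example values:

# Deploy just the network update template against an existing resource group
New-AzureRmResourceGroupDeployment -ResourceGroupName "tuServRG" `
    -TemplateFile .\updateVNetDNS.json `
    -resourceLocation "West Europe" `
    -virtualNetworkName "tuServVNet" `
    -virtualNetworkPrefix "10.0.0.0/16" `
    -virtualNetworkSubnet1Name "Subnet-1" `
    -virtualNetworkSubnet1Prefix "10.0.0.0/24" `
    -virtualNetworkDNS @("10.0.0.4")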

The VM itself is pretty straightforward. The code below deploys a virtual NIC and then the VM. The NIC needs to be created first and is then bound to the VM when the latter is deployed. The snippet has the nested resources for the VM extensions removed. I’ll show you those in a bit.

{
  "apiVersion": "2015-05-01-preview",
  "dependsOn": [
  ],
  "location": "[parameters('resourceLocation')]",
  "name": "[variables('vmDCNicName')]",
  "properties": {
    "ipConfigurations": [
      {
        "name": "ipconfig1",
        "properties": {
          "privateIPAllocationMethod": "Static",
          "privateIPAddress": "[variables('vmDCIPAddress')]",
          "subnet": {
            "id": "[variables('vmDCSubnetRef')]"
          }
        }
      }
    ]
  },
  "tags": {
    "displayName": "vmDCNic"
  },
  "type": "Microsoft.Network/networkInterfaces"
},
{
  "name": "[variables('vmDCName')]",
  "type": "Microsoft.Compute/virtualMachines",
  "location": "[parameters('resourceLocation')]",
  "apiVersion": "2015-05-01-preview",
  "dependsOn": [
    "[concat('Microsoft.Network/networkInterfaces/', variables('vmDCNicName'))]"
  ],
  "tags": {
    "displayName": "vmDC"
  },
  "properties": {
    "hardwareProfile": {
      "vmSize": "[variables('vmDCVmSize')]"
    },
    "osProfile": {
      "computername": "[variables('vmDCName')]",
      "adminUsername": "[parameters('adminUsername')]",
      "adminPassword": "[parameters('adminPassword')]"
    },
    "storageProfile": {
      "imageReference": {
        "publisher": "[variables('windowsImagePublisher')]",
        "offer": "[variables('windowsImageOffer')]",
        "sku": "[variables('windowsImageSKU')]",
        "version": "latest"
      },
      "osDisk": {
        "name": "[concat(variables('vmDCName'), '-os-disk')]",
        "vhd": {
          "uri": "[concat('http://', variables('storageAccountName'), '.blob.core.windows.net/', variables('vmStorageAccountContainerName'), '/', variables('vmDCName'), 'os.vhd')]"
        },
        "caching": "ReadWrite",
        "createOption": "FromImage"
      },
      "dataDisks": [
        {
          "vhd": {
            "uri": "[concat('http://', variables('storageAccountName'), '.blob.core.windows.net/', variables('vmStorageAccountContainerName'),'/', variables('vmDCName'),'data-1.vhd')]"
          },
          "name": "[concat(variables('vmDCName'),'datadisk1')]",
          "createOption": "empty",
          "caching": "None",
          "diskSizeGB": "[variables('windowsDiskSize')]",
          "lun": 0
        }
      ]
    },
    "networkProfile": {
      "networkInterfaces": [
        {
          "id": "[resourceId('Microsoft.Network/networkInterfaces', variables('vmDCNicName'))]"
        }
      ]
    }
  },
  "resources": [
  ]
}

The NIC is pretty simple. I tell it the name of the subnet on my network I want it to connect to and I tell it that I want to use a static private IP address, and what that address is. The VM resource then references the NIC in the networkProfile section.

The VM itself is built using the Windows Server 2012 R2 Datacentre image provided by Microsoft. That is specified in the imageReference section. There are lots of VM images, and each is referenced by publisher (in this case MicrosoftWindowsServer), offer (WindowsServer) and SKU (2012-R2-Datacenter). I’m specifying ‘latest’ as the version, but you can be specific if you have built your deployment around a particular version of an image. They are updated regularly to include patches… There is a wide range of images available to save you time. My full deployment makes use of a SQL Server image and I’m also playing with a BizTalk image right now. It’s much easier than trying to sort out the install of products yourself, and the licence cost of the software gets rolled into the VM charge.
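
If you want to browse what’s available, the AzureRM cmdlets will list publishers, offers and SKUs for a region. A quick sketch (cmdlet names as they stand at the time of writing):

# Walk the publisher -> offer -> SKU hierarchy for Windows Server images
Get-AzureRmVMImagePublisher -Location "West Europe" | Where-Object PublisherName -Like "MicrosoftWindowsServer*"
Get-AzureRmVMImageOffer -Location "West Europe" -PublisherName "MicrosoftWindowsServer"
Get-AzureRmVMImageSku -Location "West Europe" -PublisherName "MicrosoftWindowsServer" -Offer "WindowsServer"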

We need to add a second disk to our VM to hold the domain databases. The primary disk on a VM has read and write caching enabled. Write caching exposes us to risk of corrupting our domain database in the event of a failure, so I’m adding a second disk and setting the caching on that to none. It’s all standard stuff at this point.

I’m not going to describe the IaaSDiagnostics extension. The markup for that is completely default as provided by the tooling when you add the resource. Let’s move on to the DSC extension.

{
  "type": "Microsoft.Compute/virtualMachines/extensions",
  "name": "[concat(variables('vmDCName'),'/InstallDomainController')]",
  "apiVersion": "2015-05-01-preview",
  "location": "[parameters('resourceLocation')]",
  "dependsOn": [
    "[resourceId('Microsoft.Compute/virtualMachines', variables('vmDCName'))]",
    "[concat('Microsoft.Compute/virtualMachines/', variables('vmDCName'),'/extensions/IaaSDiagnostics')]"
  ],
  "properties": {
    "publisher": "Microsoft.Powershell",
    "type": "DSC",
    "typeHandlerVersion": "1.7",
    "settings": {
      "modulesURL": "[concat(variables('vmDSCmoduleUrl'), parameters('_artifactsLocationSasToken'))]",
      "configurationFunction": "[variables('vmDCConfigurationFunction')]",
      "properties": {
        "domainName": "[variables('domainName')]",
        "adminCreds": {
          "userName": "[parameters('adminUsername')]",
          "password": "PrivateSettingsRef:adminPassword"
        }
      }
    },
    "protectedSettings": {
      "items": {
        "adminPassword": "[parameters('adminPassword')]"
      }
    }
  }
}

I should mention at this point that I am nesting the extensions within the VM resources section. You don’t need to do this – they can be resources at the same level as the VM. However, my experience from deploying this lot a gazillion times is that if I nest the extensions I get a more robust deployment. Pulling them out of the VM appears to increase the chance of the extension failing to deploy.

The DSC extension will do different things depending on the OS version of Windows you are using. For my 2012 R2 VM it will install the necessary required software to use Desired State Configuration and it will then reboot the VM before applying any config. On the current Server 2016 preview images that installation and reboot isn’t needed as the pre-reqs are already installed.

The DSC extension needs to copy your DSC modules and configuration onto the VM. That’s specified in the modulesURL setting and it expects a zip archive with your stuff in it. I’ll show you that when we look at the DSC config in detail later. The configurationFunction setting specifies the PowerShell file that contains the function and the name of the configuration in that file to use. I have all the DSC configs in one file so I pass in DSCvmConfigs.ps1\\DomainController (note the escaped slash).

Finally, we specify the parameters that we want to pass into our PowerShell DSC function. We’re specifying the name of our Domain and the credentials for our admin account.

Once the DSC module has completed I need to do final configuration with standard PowerShell scripts. The customScript Extension is our friend here. Documentation on this is somewhat sparse and I’ve already blogged on the subject to help you. The template code is below:

{
  "type": "Microsoft.Compute/virtualMachines/extensions",
  "name": "[concat(variables('vmDCName'),'/dcScript')]",
  "apiVersion": "2015-05-01-preview",
  "location": "[parameters('resourceLocation')]",
  "dependsOn": [
    "[concat('Microsoft.Compute/virtualMachines/', variables('vmDCName'))]",
    "[concat('Microsoft.Compute/virtualMachines/', variables('vmDCName'),'/extensions/InstallDomainController')]"
  ],
  "properties": {
    "publisher": "Microsoft.Compute",
    "type": "CustomScriptExtension",
    "typeHandlerVersion": "1.4",
    "settings": {
      "fileUris": [
        "[concat(parameters('_artifactsLocation'),'/DomainController.ps1', parameters('_artifactsLocationSasToken'))]",
        "[concat(parameters('_artifactsLocation'),'/PSPKI.zip', parameters('_artifactsLocationSasToken'))]",
        "[concat(parameters('_artifactsLocation'),'/tuServDeployFunctions.ps1', parameters('_artifactsLocationSasToken'))]"
      ],
      "commandToExecute": "[concat('powershell.exe -file DomainController.ps1',' -vmAdminUsername ',parameters('adminUsername'),' -vmAdminPassword ',parameters('adminPassword'),' -fsServiceName ',variables('vmWAPpublicipDnsName'),' -tsServiceName ',variables('vmTWAPpublicipDnsName'), ' -resourceLocation \"', parameters('resourceLocation'),'\"')]"
    }
  }
}

The extension downloads the files I need: in this case a zip containing the PSPKI PowerShell modules that I use to perform a bunch of certificate functions, a file of my own functions, and finally the DomainController.ps1 script that is executed by the extension. You can’t specify parameters for your script in the extension (and in fact you can’t call the script directly – you have to execute the powershell.exe command yourself), so you can see that I build the commandToExecute using a bunch of variables and string concatenation.
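
For illustration, with some example parameter values the concatenated command comes out looking something like this (all one line):

powershell.exe -file DomainController.ps1 -vmAdminUsername labadmin -vmAdminPassword P@ssw0rd -fsServiceName tuservfs -tsServiceName tuservts -resourceLocation "West Europe"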

The DSC Modules

I need to get the DSC modules I use onto the VM. To save me going mad, that means I include the module source in the Visual Studio solution. Over time I’ve evolved a folder structure within the solution to separate templates, DSC files and script files. You can see this structure in the screenshot below.

[Screenshot: DSC folder structure in the Visual Studio solution]

I keep all the DSC together like this because I can then simply zip all the files in the DSC folder structure to give me the archive that is deployed by the DSC extension. In the picture you will see that there are a number of .ps1 files in the root. Originally I created separate files for the DSC configuration of each of my VMs. I then collapsed those into the DSCvmConfigs.ps1 file and simply haven’t removed the others from the project.
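
Producing that archive, and pushing it into the storage account alongside the other artefacts, is easily scripted. A sketch, assuming the folder layout above; the zip and storage account names are examples:

# Zip the DSC folder and upload it to the artifacts container
Add-Type -AssemblyName System.IO.Compression.FileSystem
[System.IO.Compression.ZipFile]::CreateFromDirectory(".\DSC", ".\DSCvmConfigs.ps1.zip")
# $storageKey holds the account key (e.g. from Get-AzureRmStorageAccountKey)
$ctx = New-AzureStorageContext -StorageAccountName "tuservartifacts" -StorageAccountKey $storageKey
Set-AzureStorageBlobContent -Context $ctx -Container "artifacts" -File ".\DSCvmConfigs.ps1.zip" -Force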

My DomainController configuration function began life as the example code from the three server SharePoint template on Github and I have since extended and modified it. The code is shown below:

configuration DomainController 
{ 
   param 
   ( 
        [Parameter(Mandatory)]
        [String]$DomainName,

        [Parameter(Mandatory)]
        [System.Management.Automation.PSCredential]$Admincreds,

        [String]$DomainNetbiosName=(Get-NetBIOSName -DomainName $DomainName),
        [Int]$RetryCount=20,
        [Int]$RetryIntervalSec=30
    ) 
    
    Import-DscResource -ModuleName xComputerManagement, cDisk, xDisk, xNetworking, xActiveDirectory, xSmbShare, xAdcsDeployment
    [System.Management.Automation.PSCredential ]$DomainCreds = New-Object System.Management.Automation.PSCredential ("${DomainName}\$($Admincreds.UserName)", $Admincreds.Password)
    $Interface = Get-NetAdapter | Where-Object Name -Like "Ethernet*" | Select-Object -First 1
    $InterfaceAlias = $($Interface.Name)

    Node localhost
    {
        WindowsFeature DNS 
        { 
            Ensure = "Present" 
            Name = "DNS"
        }
        xDnsServerAddress DnsServerAddress 
        { 
            Address        = '127.0.0.1' 
            InterfaceAlias = $InterfaceAlias
            AddressFamily  = 'IPv4'
        }
        xWaitforDisk Disk2
        {
             DiskNumber = 2
             RetryIntervalSec =$RetryIntervalSec
             RetryCount = $RetryCount
        }
        cDiskNoRestart ADDataDisk
        {
            DiskNumber = 2
            DriveLetter = "F"
        }
        WindowsFeature ADDSInstall 
        { 
            Ensure = "Present" 
            Name = "AD-Domain-Services"
        }  
        xADDomain FirstDS 
        {
            DomainName = $DomainName
            DomainAdministratorCredential = $DomainCreds
            SafemodeAdministratorPassword = $DomainCreds
            DatabasePath = "F:\NTDS"
            LogPath = "F:\NTDS"
            SysvolPath = "F:\SYSVOL"
        }
        WindowsFeature ADCS-Cert-Authority
        {
               Ensure = 'Present'
               Name = 'ADCS-Cert-Authority'
               DependsOn = '[xADDomain]FirstDS'
        }
        WindowsFeature RSAT-ADCS-Mgmt
        {
               Ensure = 'Present'
               Name = 'RSAT-ADCS-Mgmt'
               DependsOn = '[xADDomain]FirstDS'
        }
        File SrcFolder
        {
            DestinationPath = "C:\src"
            Type = "Directory"
            Ensure = "Present"
            DependsOn = "[xADDomain]FirstDS"
        }
        xSmbShare SrcShare
        {
            Ensure = "Present"
            Name = "src"
            Path = "C:\src"
            FullAccess = @("Domain Admins","Domain Computers")
            ReadAccess = "Authenticated Users"
            DependsOn = "[File]SrcFolder"
        }
        xADCSCertificationAuthority ADCS
        {
            Ensure = 'Present'
            Credential = $DomainCreds
            CAType = 'EnterpriseRootCA'
            DependsOn = '[WindowsFeature]ADCS-Cert-Authority'              
        }
        WindowsFeature ADCS-Web-Enrollment
        {
            Ensure = 'Present'
            Name = 'ADCS-Web-Enrollment'
            DependsOn = '[WindowsFeature]ADCS-Cert-Authority'
        }
        xADCSWebEnrollment CertSrv
        {
            Ensure = 'Present'
            Name = 'CertSrv'
            Credential = $DomainCreds
            DependsOn = '[WindowsFeature]ADCS-Web-Enrollment','[xADCSCertificationAuthority]ADCS'
        } 
         
        LocalConfigurationManager 
        {
            DebugMode = $true
            RebootNodeIfNeeded = $true
        }
   }
} 

The .ps1 file contains all the DSC configurations for my environment. The DomainController configuration starts with a list of parameters. These match the ones being passed in by the DSC extension, or have default or calculated values. The Import-DscResource command specifies the DSC modules that the configuration needs. I have to ensure that any I am using are included in the zip file downloaded by the extension. I am using modules that configure disks, network shares, Active Directory domains and certificate services.

The node section then declares my configuration. You can set configurations for multiple hosts in a single DSC configuration block, but I’m only concerned with the host I’m on – localhost. Within the block I then declare what I want the configuration of the host to be. It’s the job of the DSC modules to apply whatever actions are necessary to set the configuration to that which I specify. Just like in our resource template, DSC settings can depend on one another if something needs to be done before something else.

This DSC configuration installs the windows features needed for creating a domain controller. It looks for the additional drive on the VM and assigns it the drive letter F. It creates the new Active Directory domain and places the domain database files on drive F. Once the domain is up and running I create a folder on drive C called src and share that folder. I’m doing that because I create two certificates later and I need to make them available to other machines in the domain. More on that in a bit. Finally, we install the certificate services features and configure a certificate authority. The LocalConfigurationManager settings turn on as much debug output as I can and tell the system that if any of the actions in my config demand a reboot that’s OK – restart as and when required rather than waiting until the end.

I’d love to do all my configuration with DSC but sadly there just aren’t the modules yet. There are some things I just can’t do, like creating a new certificate template in my CA and then generating some specific templates for my ADFS services that are on other VMs. I also can’t set file rights on a folder, although I can set rights on a share. Notice that I grant access to my share to Domain Computers. Both the DSC modules and the custom script extension command are run as the local system account. When I try to read files over the network that means I am connecting to the share as the Computer account and I need to grant access. When I create the DC there are no other VMs in the domain, so I use the Domain Computers group to make sure all my servers will be able to access the files.

Once the DC module completes I have a working domain with a certificate authority.

The Custom Scripts

As with my DSC modules, I keep all the custom scripts for my VMs in one folder within the solution. All of these need to be uploaded to Azure storage so I can access them with the extension and copy them to my VMs. The screenshot below shows the files in the solution. I have a script for each VM that needs one, which is executed by the extension. I then have a file of shared functions and a zip with supporting modules that I need.

[Screenshot: the custom scripts folder in the solution]
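Getting those files into blob storage is straightforward with the Azure PowerShell cmdlets; something along these lines, where the storage account name, key variable and container are all made up for the example:

# A sketch of pushing the custom scripts up to blob storage; names are illustrative
$ctx = New-AzureStorageContext -StorageAccountName 'tuservdeploy' -StorageAccountKey $storageKey
Get-ChildItem .\CustomScripts -File | ForEach-Object {
    Set-AzureStorageBlobContent -File $_.FullName -Container 'scripts' -Blob $_.Name -Context $ctx -Force
}

With the files in place, the script below is what the extension runs on the domain controller.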

#
# DomainController.ps1
#
param (
    $vmAdminUsername,
    $vmAdminPassword,
    $fsServiceName,
    $tsServiceName,
    $resourceLocation
)

$password = ConvertTo-SecureString $vmAdminPassword -AsPlainText -Force
$credential = New-Object System.Management.Automation.PSCredential("$env:USERDOMAIN\$vmAdminUsername", $password)
Write-Verbose -Verbose "Entering Domain Controller Script"
Write-Verbose -verbose "Script path: $PSScriptRoot"
Write-Verbose -Verbose "vmAdminUsername: $vmAdminUsername"
Write-Verbose -Verbose "vmAdminPassword: $vmAdminPassword"
Write-Verbose -Verbose "fsServiceName: $fsServiceName"
Write-Verbose -Verbose "tsServiceName: $tsServiceName"
Write-Verbose -Verbose "env:UserDomain: $env:USERDOMAIN"
Write-Verbose -Verbose "resourceLocation: $resourceLocation"
Write-Verbose -Verbose "==================================="

# Write an event to the event log to say that the script has executed.
$event = New-Object System.Diagnostics.EventLog("Application")
$event.Source = "tuServEnvironment"
$info_event = [System.Diagnostics.EventLogEntryType]::Information
$event.WriteEntry("DomainController Script Executed", $info_event, 5001)


Invoke-Command -Credential $credential -ComputerName $env:COMPUTERNAME -ScriptBlock {

    param (
        $workingDir,
        $vmAdminPassword,
        $fsServiceName,
        $tsServiceName,
        $resourceLocation
    )
    # Working variables
    $serviceAccountOU = "Service Accounts"
    Write-Verbose -Verbose "Entering Domain Controller Script"
    Write-Verbose -verbose "workingDir: $workingDir"
    Write-Verbose -Verbose "vmAdminPassword: $vmAdminPassword"
    Write-Verbose -Verbose "fsServiceName: $fsServiceName"
    Write-Verbose -Verbose "tsServiceName: $tsServiceName"
    Write-Verbose -Verbose "env:UserDomain: $env:USERDOMAIN"
    Write-Verbose -Verbose "env:UserDNSDomain: $env:USERDNSDOMAIN"
    Write-Verbose -Verbose "env:ComputerName: $env:COMPUTERNAME"
    Write-Verbose -Verbose "resourceLocation: $resourceLocation"
    Write-Verbose -Verbose "==================================="


    # Write an event to the event log to say that the script has executed.
    $event = New-Object System.Diagnostics.EventLog("Application")
    $event.Source = "tuServEnvironment"
    $info_event = [System.Diagnostics.EventLogEntryType]::Information
    $event.WriteEntry("In DomainController scriptblock", $info_event, 5001)

    #go to the extension's script folder (passed in as $workingDir)
    Set-Location $workingDir
    
    $zipfile = $workingDir + "\PSPKI.zip"
    $destination = $workingDir
    [System.Reflection.Assembly]::LoadWithPartialName("System.IO.Compression.FileSystem") | Out-Null
    [System.IO.Compression.ZipFile]::ExtractToDirectory($zipfile, $destination)

    Import-Module .\tuServDeployFunctions.ps1

    #Enable CredSSP in server role for delegated credentials
    Enable-WSManCredSSP -Role Server -Force

    #Create OU for service accounts, computer group; create service accounts
    Add-ADServiceAccounts -domain $env:USERDNSDOMAIN -serviceAccountOU $serviceAccountOU -password $vmAdminPassword
    Add-ADComputerGroup -domain $env:USERDNSDOMAIN -serviceAccountOU $serviceAccountOU
    Add-ADComputerGroupMember -group "tuServ Computers" -member ($env:COMPUTERNAME + '$')

    #Create new web server cert template
    $certificateTemplate = ($env:USERDOMAIN + "_WebServer")
    Generate-NewCertificateTemplate -certificateTemplateName $certificateTemplate -certificateSourceTemplateName "WebServer"
    Set-tsCertificateTemplateAcl -certificateTemplate $certificateTemplate -computers "tuServComputers"

    # Generate SSL Certificates

    $fsCertificateSubject = $fsServiceName + "."+($resourceLocation.Replace(" ","")).ToLower()+".cloudapp.azure.com"
    Generate-SSLCertificate -certificateSubject $fsCertificateSubject -certificateTemplate $certificateTemplate
    $tsCertificateSubject = $tsServiceName + "."+($resourceLocation.Replace(" ","")).ToLower()+".cloudapp.azure.com"
    Generate-SSLCertificate -certificateSubject $tsCertificateSubject -certificateTemplate $certificateTemplate

    # Export Certificates
    $fsCertExportFileName = $fsCertificateSubject+".pfx"
    $fsCertExportFile = $workingDir+"\"+$fsCertExportFileName
    Export-SSLCertificate -certificateSubject $fsCertificateSubject -certificateExportFile $fsCertExportFile -certificatePassword $vmAdminPassword
    $tsCertExportFileName = $tsCertificateSubject+".pfx"
    $tsCertExportFile = $workingDir+"\"+$tsCertExportFileName
    Export-SSLCertificate -certificateSubject $tsCertificateSubject -certificateExportFile $tsCertExportFile -certificatePassword $vmAdminPassword

    #Set permissions on the src folder
    $acl = Get-Acl c:\src
    $acl.SetAccessRuleProtection($True, $True)
    $rule = New-Object System.Security.AccessControl.FileSystemAccessRule("Domain Computers","FullControl", "ContainerInherit, ObjectInherit", "None", "Allow")
    $acl.AddAccessRule($rule)
    $rule = New-Object System.Security.AccessControl.FileSystemAccessRule("Authenticated Users","FullControl", "ContainerInherit, ObjectInherit", "None", "Allow")
    $acl.AddAccessRule($rule)
    Set-Acl c:\src $acl


    #Copy the certs into the src folder (created earlier by DSC) so other machines can fetch them
    Copy-Item -Path "$workingDir\*.pfx" c:\src

} -ArgumentList $PSScriptRoot, $vmAdminPassword, $fsServiceName, $tsServiceName, $resourceLocation

The domain controller script is shown above. There are a whole bunch of Write-Verbose commands that emit debug information, which I can watch through the Azure Resource Explorer as the script runs.

Pretty much the first thing I do here is an Invoke-Command. The script is running as local system and there’s not much I can actually do as that account, so my Invoke-Command block runs as the domain administrator in order to get stuff done. Worth noting is that the Invoke-Command approach makes accessing network resources tricky. It’s not an issue here, but it bit me with the ADFS and WAP servers.
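For reference, the textbook answer to that remoting ‘second hop’ problem is CredSSP: enable delegation on both ends and pass -Authentication CredSSP to Invoke-Command, roughly as sketched below with made-up names. As I mention later, I never got this working reliably in this deployment, so take it as background rather than a recommendation:

# The usual CredSSP second-hop pattern; computer and path names are illustrative
Enable-WSManCredSSP -Role Client -DelegateComputer '*.tuserv.local' -Force   # on the calling machine
Enable-WSManCredSSP -Role Server -Force                                      # on the target machine
Invoke-Command -ComputerName $env:COMPUTERNAME -Credential $credential -Authentication CredSSP -ScriptBlock {
    # Network access inside the block now uses the delegated credentials,
    # not the computer account
    Copy-Item -Path '\\dc\src\*.pfx' -Destination 'c:\certs'
}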

I unzip the PSPKI archive that has been copied onto the server and load the modules therein. The files are downloaded to a folder whose path includes the version number of the script extension, so I can’t be explicit about the location. Fortunately I can use the $PSScriptRoot variable to work out where I am, and I pass that into the Invoke-Command as $workingDir. The PSPKI modules allow me to create a new certificate template on my CA so I can generate new certs with exportable private keys. I need the same certs on more than one of my servers, so I need to be able to copy them around. I generate the certs and drop them into the src folder I created with DSC. I also set the rights on that src folder to grant Domain Computers and Authenticated Users access. The latter is probably overdoing it, since the former should do what I need, but I spent a good deal of time being stymied by this so I’m taking a belt and braces approach.

The key functions called by the script above are shown below. Held in my modules file, these are all focused on certificate functions and pretty much all depend on the PSPKI modules.

function Generate-NewCertificateTemplate
{
    [CmdletBinding()]
    # note can only be run on the server with PSPKI eg the ActiveDirectory domain controller
    param
    (
        $certificateTemplateName,
        $certificateSourceTemplateName        
    )

    Write-Verbose -Verbose "Generating New Certificate Template" 

        Import-Module .\PSPKI\pspki.psm1
        
        $certificateCnName = "CN="+$certificateTemplateName

        $ConfigContext = ([ADSI]"LDAP://RootDSE").ConfigurationNamingContext 
        $ADSI = [ADSI]"LDAP://CN=Certificate Templates,CN=Public Key Services,CN=Services,$ConfigContext" 

        $NewTempl = $ADSI.Create("pKICertificateTemplate", $certificateCnName) 
        $NewTempl.put("distinguishedName","$certificateCnName,CN=Certificate Templates,CN=Public Key Services,CN=Services,$ConfigContext") 

        $NewTempl.put("flags","66113")
        $NewTempl.put("displayName",$certificateTemplateName)
        $NewTempl.put("revision","4")
        $NewTempl.put("pKIDefaultKeySpec","1")
        $NewTempl.SetInfo()

        $NewTempl.put("pKIMaxIssuingDepth","0")
        $NewTempl.put("pKICriticalExtensions","2.5.29.15")
        $NewTempl.put("pKIExtendedKeyUsage","1.3.6.1.5.5.7.3.1")
        $NewTempl.put("pKIDefaultCSPs","2,Microsoft DH SChannel Cryptographic Provider, 1,Microsoft RSA SChannel Cryptographic Provider")
        $NewTempl.put("msPKI-RA-Signature","0")
        $NewTempl.put("msPKI-Enrollment-Flag","0")
        $NewTempl.put("msPKI-Private-Key-Flag","16842768")
        $NewTempl.put("msPKI-Certificate-Name-Flag","1")
        $NewTempl.put("msPKI-Minimal-Key-Size","2048")
        $NewTempl.put("msPKI-Template-Schema-Version","2")
        $NewTempl.put("msPKI-Template-Minor-Revision","2")
        $NewTempl.put("msPKI-Cert-Template-OID","1.3.6.1.4.1.311.21.8.287972.12774745.2574475.3035268.16494477.77.11347877.1740361")
        $NewTempl.put("msPKI-Certificate-Application-Policy","1.3.6.1.5.5.7.3.1")
        $NewTempl.SetInfo()

        $WATempl = $ADSI.psbase.children | where {$_.Name -eq $certificateSourceTemplateName}
        $NewTempl.pKIKeyUsage = $WATempl.pKIKeyUsage
        $NewTempl.pKIExpirationPeriod = $WATempl.pKIExpirationPeriod
        $NewTempl.pKIOverlapPeriod = $WATempl.pKIOverlapPeriod
        $NewTempl.SetInfo()
        
        $certTemplate = Get-CertificateTemplate -Name $certificateTemplateName
        Get-CertificationAuthority | Get-CATemplate | Add-CATemplate -Template $certTemplate | Set-CATemplate
}

function Set-tsCertificateTemplateAcl
{
    [CmdletBinding()]
    param
    (
    $certificateTemplate,
    $computers
    )

    Write-Verbose -Verbose "Setting ACL for cert $certificateTemplate to allow $computers"
    Write-Verbose -Verbose "---"

        Import-Module .\PSPKI\pspki.psm1
        
        Write-Verbose -Verbose "Adding group $computers to acl for cert $certificateTemplate"
        Get-CertificateTemplate -Name $certificateTemplate | Get-CertificateTemplateAcl | Add-CertificateTemplateAcl -User $computers -AccessType Allow -AccessMask Read, Enroll | Set-CertificateTemplateAcl

}

function Generate-SSLCertificate
{
    [CmdletBinding()]
    param
    (
    $certificateSubject,
    $certificateTemplate
    )

    Write-Verbose -Verbose "Creating SSL cert using $certificateTemplate for $certificateSubject"
    Write-Verbose -Verbose "---"
    
    Import-Module .\PSPKI\pspki.psm1

    Write-Verbose -Verbose "Generating Certificate (Single)"
        $certificateSubjectCN = "CN=" + $certificateSubject
        # Build the Get-Certificate call as a string, to be run in a fresh powershell.exe process
        $powershellCommand = "& {get-certificate -Template " + $certificateTemplate + " -CertStoreLocation Cert:\LocalMachine\My -DnsName " + $certificateSubject + " -SubjectName " + $certificateSubjectCN + " -Url ldap:}"
        Write-Verbose -Verbose $powershellCommand
        $bytes = [System.Text.Encoding]::Unicode.GetBytes($powershellCommand)
        $encodedCommand = [Convert]::ToBase64String($bytes)

        Start-Process -wait "powershell.exe" -ArgumentList "-encodedcommand $encodedCommand"
}

function Export-SSLCertificate
{
    [CmdletBinding()]
    param
    (
    $certificateSubject,
    $certificateExportFile,
    $certificatePassword
    )

    Write-Verbose -Verbose "Exporting cert $certificateSubject to $certificateExportFile with password $certificatePassword"
    Write-Verbose -Verbose "---"

    Import-Module .\PSPKI\pspki.psm1

    Write-Verbose -Verbose "Exporting Certificate (Single)"
    
        $password = ConvertTo-SecureString $certificatePassword -AsPlainText -Force
        Get-ChildItem Cert:\LocalMachine\My | where {$_.subject -match $certificateSubject -and $_.Subject -ne $_.Issuer} | Export-PfxCertificate -FilePath $certificateExportFile -Password $password

}
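A quick note on the Start-Process dance in Generate-SSLCertificate: the command string is Base64-encoded as UTF-16 – which is what powershell.exe’s -EncodedCommand parameter expects, hence [System.Text.Encoding]::Unicode – and the enrolment then runs in a fresh child process, with -Wait making the script block until the certificate has landed in Cert:\LocalMachine\My.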

Making sure it’s reusable

One of the things I’m trying to do here is create a collection of reusable configurations. I can take my DC virtual machine config and make it the core of any number of deployments in future. Key items like domain names and machine names are parameterised all the way through the template, the DSC configuration and the scripts. When Azure Stack arrives I should be able to use the same configuration on-prem and in Azure itself, and we can use the same building blocks for any number of customer projects, even though they were originally built for an internal project.

There’s still work to do here: I need to pull the vNet template directly into the DC template, as there’s no need for it to be separate; I could trim back some of the unnecessary access rights I grant on the folders and shares; and you’ll also notice that I’m still configuring CredSSP, part of my original attempt to sort out file access from within the Invoke-Command blocks, which failed miserably.

A quick round of credits

Whilst most of this work has been my own, bashing my head against the desk for a while, it is built upon code created by other people who deserve a mention:

  • The Azure Quick Start templates were invaluable, giving me something to learn from in the early days of resource templates, before the tooling caught up.
  • PSPKI is a fantastic set of PowerShell modules for dealing with certs.
  • The individual VM scripts are derived from work that was done in Black Marble by Andrew Davidson and myself to build the exact same environment in the older Azure manner without resource templates.

Speaking at CloudBurst in September

I’ve never been to Sweden, so I’m really looking forward to September, when I’ll be speaking at CloudBurst. Organised by the Swedish Azure User Group (SWAG – love it!), the conference is also streamed and recorded, with sessions available afterwards on Channel 9. The list of speakers and topics promises some high-quality, interesting sessions, and I urge you to attend if you can – and tune in to the live stream if you can’t.

I’ll be spending an hour telling you about Azure Resource Templates: what they are and why you should use them, and I’ll show the work I’ve described here as an example of a complex deployment.