Complex Azure Template Odyssey Part Four: WAP Server

Part One of this series covered the project itself and the overall template structure. Part Two went through how I deploy the Domain Controller in depth. Part Three talked about deploying my ADFS server, and in this final part I will show you how to configure the WAP server that faces the outside world.

The Template

The WAP server is the only one in my environment that faces the internet. Because of this the deployment is more complex. I’ve also added further complexity because I want to be able to have more than one WAP server in future, so there’s a load balancer deployed too. You can see the resource outline in the screenshot below:

wap template json

The internet-facing stuff means we need more things in our template. First up is our PublicIPAddress:

{
  "name": "[variables('vmWAPpublicipName')]",
  "type": "Microsoft.Network/publicIPAddresses",
  "location": "[parameters('resourceLocation')]",
  "apiVersion": "2015-05-01-preview",
  "dependsOn": [ ],
  "tags": {
    "displayName": "vmWAPpublicip"
  },
  "properties": {
    "publicIPAllocationMethod": "Dynamic",
    "dnsSettings": {
      "domainNameLabel": "[variables('vmWAPpublicipDnsName')]"
    }
  }
},

This is pretty straightforward stuff. The nature of my environment means that I am perfectly happy with a dynamic IP that changes if I stop and then start the environment. Access will be via the hostname assigned to that IP, and I use that hostname in my ADFS service configuration and certificates. Azure builds the hostname based on a pattern and I can use that pattern in my templates, which is how I can create the certs when I deploy the DC and configure the ADFS service, all before the WAP server exists.

That public IP address is then bound to our load balancer, which provides the internet endpoint for our services:

{
  "apiVersion": "2015-05-01-preview",
  "name": "[variables('vmWAPlbName')]",
  "type": "Microsoft.Network/loadBalancers",
  "location": "[parameters('resourceLocation')]",
  "dependsOn": [
    "[resourceId('Microsoft.Network/publicIPAddresses',variables('vmWAPpublicipName'))]"
  ],
  "properties": {
    "frontendIPConfigurations": [
      {
        "name": "[variables('LBFE')]",
        "properties": {
          "publicIPAddress": {
            "id": "[resourceId('Microsoft.Network/publicIPAddresses',variables('vmWAPpublicipName'))]"
          }
        }
      }
    ],
    "backendAddressPools": [
      {
        "name": "[variables('LBBE')]"
      }
    ],
    "inboundNatRules": [
      {
        "name": "[variables('RDPNAT')]",
        "properties": {
          "frontendIPConfiguration": {
            "id": "[variables('vmWAPLbfeConfigID')]"
          },
          "protocol": "tcp",
          "frontendPort": "[variables('rdpPort')]",
          "backendPort": 3389,
          "enableFloatingIP": false
        }
      },
      {
        "name": "[variables('httpsNAT')]",
        "properties": {
          "frontendIPConfiguration": {
            "id": "[variables('vmWAPLbfeConfigID')]"
          },
          "protocol": "tcp",
          "frontendPort": "[variables('httpsPort')]",
          "backendPort": 443,
          "enableFloatingIP": false
        }
      }
    ]
  }
}

There’s a lot going on in here so let’s work through it. First of all we connect our public IP address to the load balancer. We then create a back end configuration which we will later connect our VM to. Finally we create a set of NAT rules. I need to be able to RDP into the WAP server, which is the first block. The variables define the names of my resources. You can see that I specify the ports – the external one through a variable that I can change, and the internal one directly, because it has to be the same each time: it’s what my VMs listen on. You can see that each NAT rule is associated with the frontendIPConfiguration, opening the port to the outside world.
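
If you want a quick way to confirm the NAT rules are actually passing traffic once everything is up, something like the following works from any machine with PowerShell 4 or later. The hostname and external RDP port here are hypothetical examples rather than values from my templates:

# Check the load balancer is passing the NAT'd ports through to the WAP server.
$publicHost = "myenvwap.northeurope.cloudapp.azure.com"
Test-NetConnection -ComputerName $publicHost -Port 443    # httpsNAT rule, backend port 443
Test-NetConnection -ComputerName $publicHost -Port 50001  # whatever rdpPort holds, backend port 3389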

The next step is to create a NIC that will hook our VM up to the existing virtual network and the load balancer:

{
  "name": "[variables('vmWAPNicName')]",
  "type": "Microsoft.Network/networkInterfaces",
  "location": "[parameters('resourceLocation')]",
  "apiVersion": "2015-05-01-preview",
  "dependsOn": [
    "[concat('Microsoft.Network/publicIPAddresses/', variables('vmWAPpublicipName'))]",
    "[concat('Microsoft.Network/loadBalancers/',variables('vmWAPlbName'))]"
  ],
  "tags": {
    "displayName": "vmWAPNic"
  },
  "properties": {
    "ipConfigurations": [
      {
        "name": "ipconfig1",
        "properties": {
          "privateIPAllocationMethod": "Static",
          "privateIPAddress": "[variables('vmWAPIPAddress')]",
          "subnet": {
            "id": "[variables('vmWAPSubnetRef')]"
          },
          "loadBalancerBackendAddressPools": [
            {
              "id": "[variables('vmWAPBEAddressPoolID')]"
            }
          ],
          "loadBalancerInboundNatRules": [
            {
              "id": "[variables('vmWAPRDPNATRuleID')]"
            },
            {
              "id": "[variables('vmWAPhttpsNATRuleID')]"
            }
          ]

        }
      }
    ]
  }
}

Here you can see that the NIC is connected to a subnet on our virtual network with a static IP that I specify in a variable. It is then added to the load balancer back end address pool and finally I need to specify which of the NAT rules I created in the load balancer are hooked up to my VM. If I don’t include the binding here, traffic won’t be passed to my VM (as I discovered when developing this lot – I forgot to wire up https and as a result couldn’t access the website published by WAP!).
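
A cheap way to catch that mistake is to query the NIC after deployment and check the bindings are present. This is only a sketch, assuming the Azure Resource Manager PowerShell cmdlets and hypothetical resource names:

# Substitute whatever names your envPrefix convention produces.
$nic = Get-AzureRmNetworkInterface -Name "myenvWAPNic" -ResourceGroupName "myenvRG"
# Should list the IDs of both the RDP and https inbound NAT rules:
$nic.IpConfigurations[0].LoadBalancerInboundNatRules | Select-Object Id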

The VM itself is basically the same as my ADFS server. I use the same Windows Server 2012 R2 image, have a single disk and I’ve nested the extensions within the VM because that seems to work better than not doing so:

{
  "name": "[variables('vmWAPName')]",
  "type": "Microsoft.Compute/virtualMachines",
  "location": "[parameters('resourceLocation')]",
  "apiVersion": "2015-05-01-preview",
  "dependsOn": [
    "[concat('Microsoft.Network/networkInterfaces/', variables('vmWAPNicName'))]",
  ],
  "tags": {
    "displayName": "vmWAP"
  },
  "properties": {
    "hardwareProfile": {
      "vmSize": "[variables('vmWAPVmSize')]"
    },
    "osProfile": {
      "computername": "[variables('vmWAPName')]",
      "adminUsername": "[parameters('adminUsername')]",
      "adminPassword": "[parameters('adminPassword')]"
    },
    "storageProfile": {
      "imageReference": {
        "publisher": "[variables('windowsImagePublisher')]",
        "offer": "[variables('windowsImageOffer')]",
        "sku": "[variables('windowsImageSKU')]",
        "version": "latest"
      },
      "osDisk": {
        "name": "[concat(variables('vmWAPName'), '-os-disk')]",
        "vhd": {
          "uri": "[concat('http://', variables('storageAccountName'), '.blob.core.windows.net/', variables('vmStorageAccountContainerName'), '/', variables('vmWAPName'), 'os.vhd')]"
        },
        "caching": "ReadWrite",
        "createOption": "FromImage"
      }
    },
    "networkProfile": {
      "networkInterfaces": [
        {
          "id": "[resourceId('Microsoft.Network/networkInterfaces', variables('vmWAPNicName'))]"
        }
      ]
    }
  },
  "resources": [
    {
      "type": "extensions",
      "name": "IaaSDiagnostics",
      "apiVersion": "2015-06-15",
      "location": "[parameters('resourceLocation')]",
      "dependsOn": [
        "[concat('Microsoft.Compute/virtualMachines/', variables('vmWAPName'))]"
      ],
      "tags": {
        "displayName": "[concat(variables('vmWAPName'),'/vmDiagnostics')]"
      },
      "properties": {
        "publisher": "Microsoft.Azure.Diagnostics",
        "type": "IaaSDiagnostics",
        "typeHandlerVersion": "1.4",
        "autoUpgradeMinorVersion": "true",
        "settings": {
          "xmlCfg": "[base64(variables('wadcfgx'))]",
          "StorageAccount": "[variables('storageAccountName')]"
        },
        "protectedSettings": {
          "storageAccountName": "[variables('storageAccountName')]",
          "storageAccountKey": "[listKeys(variables('storageAccountid'),'2015-05-01-preview').key1]",
          "storageAccountEndPoint": "https://core.windows.net/"
        }
      }
    },
    {
      "type": "Microsoft.Compute/virtualMachines/extensions",
      "name": "[concat(variables('vmWAPName'),'/WAPserver')]",
      "apiVersion": "2015-05-01-preview",
      "location": "[parameters('resourceLocation')]",
      "dependsOn": [
        "[resourceId('Microsoft.Compute/virtualMachines', variables('vmWAPName'))]",
        "[concat('Microsoft.Compute/virtualMachines/', variables('vmWAPName'),'/extensions/IaaSDiagnostics')]"
      ],
      "properties": {
        "publisher": "Microsoft.Powershell",
        "type": "DSC",
        "typeHandlerVersion": "1.7",
        "settings": {
          "modulesURL": "[concat(variables('vmDSCmoduleUrl'), parameters('_artifactsLocationSasToken'))]",
          "configurationFunction": "[variables('vmWAPConfigurationFunction')]",
          "properties": {
            "domainName": "[variables('domainName')]",
            "adminCreds": {
              "userName": "[parameters('adminUsername')]",
              "password": "PrivateSettingsRef:adminPassword"
            }
          }
        },
        "protectedSettings": {
          "items": {
            "adminPassword": "[parameters('adminPassword')]"
          }
        }
      }
    },
    {
      "type": "Microsoft.Compute/virtualMachines/extensions",
      "name": "[concat(variables('vmWAPName'),'/wapScript')]",
      "apiVersion": "2015-05-01-preview",
      "location": "[parameters('resourceLocation')]",
      "dependsOn": [
        "[concat('Microsoft.Compute/virtualMachines/', variables('vmWAPName'))]",
        "[concat('Microsoft.Compute/virtualMachines/', variables('vmWAPName'),'/extensions/WAPserver')]"

      ],
      "properties": {
        "publisher": "Microsoft.Compute",
        "type": "CustomScriptExtension",
        "typeHandlerVersion": "1.4",
        "settings": {
          "fileUris": [
            "[concat(parameters('_artifactsLocation'),'/WapServer.ps1', parameters('_artifactsLocationSasToken'))]",
            "[concat(parameters('_artifactsLocation'),'/PSPKI.zip', parameters('_artifactsLocationSasToken'))]",
            "[concat(parameters('_artifactsLocation'),'/tuServDeployFunctions.ps1', parameters('_artifactsLocationSasToken'))]"
          ],
          "commandToExecute": "[concat('powershell.exe -file WAPServer.ps1',' -vmAdminUsername ',parameters('adminUsername'),' -vmAdminPassword ',parameters('adminPassword'),' -fsServiceName ',variables('vmWAPpublicipDnsName'),' -adfsServerName ',variables('vmADFSName'),' -vmDCname ',variables('vmDCName'), ' -resourceLocation \"', parameters('resourceLocation'),'\"')]"
        }
      }
    }
  ]
}

The DSC and custom script extension are in the same vein as with ADFS. I can get the features on with DSC and then I need to configure stuff with my script.

The DSC Modules

As with the other two servers, the files copied into the VM by the DSC extension are common. I then call the appropriate configuration for the WAP server, held within my common configuration file. The WAP server configuration is shown below:

configuration WAPserver
{
    param
    (
        [Parameter(Mandatory)]
        [String]$DomainName,

        [Parameter(Mandatory)]
        [System.Management.Automation.PSCredential]$Admincreds,

        [Int]$RetryCount=20,
        [Int]$RetryIntervalSec=30
    )

    Import-DscResource -ModuleName xComputerManagement,xActiveDirectory
    
    Node localhost
    {
        WindowsFeature WAPInstall 
        { 
            Ensure = "Present" 
            Name = "Web-Application-Proxy"
        }  
        WindowsFeature WAPMgmt 
        { 
            Ensure = "Present" 
            Name = "RSAT-RemoteAccess"
        }  
        WindowsFeature ADPS
        {
            Name = "RSAT-AD-PowerShell"
            Ensure = "Present"
        } 
        xWaitForADDomain DscForestWait 
        { 
            DomainName = $DomainName 
            DomainUserCredential= $Admincreds
            RetryCount = $RetryCount 
            RetryIntervalSec = $RetryIntervalSec 
            DependsOn = "[WindowsFeature]ADPS"      
        }
        xComputer DomainJoin
        {
            Name = $env:COMPUTERNAME
            DomainName = $DomainName
            Credential = New-Object System.Management.Automation.PSCredential ("${DomainName}\$($Admincreds.UserName)", $Admincreds.Password)
            DependsOn = "[xWaitForADDomain]DscForestWait"
        }

        LocalConfigurationManager 
        {
            DebugMode = $true    
            RebootNodeIfNeeded = $true
        }
    }     
}

As with ADFS, the configuration joins the domain and adds the required features for WAP. Note that I install the RSAT tools for Remote Access. If you don’t do this, you can’t configure WAP because the PowerShell modules aren’t installed!
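
A quick way to check a server has what it needs is to run something like this on the box itself; these are standard cmdlets, nothing specific to my environment:

# Both features should report Installed before you try to configure WAP.
Get-WindowsFeature Web-Application-Proxy, RSAT-RemoteAccess | Format-Table Name, InstallState
# And the WAP cmdlets themselves should be available:
Get-Command -Module WebApplicationProxy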

The Custom Scripts

The WAP script performs much of the same work as the ADFS script. I need to install the certificate for my service, so that’s copied onto the server by the script before it runs an invoke-command block. The main script is run as the local system account and can successfully connect to the DC as the computer account. I then run my invoke-command with domain admin credentials so I can configure WAP; once inside the invoke-command block network access gets tricky, so I avoid doing any more of it there!
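
To illustrate what tricky means here, this is the classic PowerShell double-hop problem. A hedged sketch with a hypothetical share name:

# The explicit credential authenticates us to the local machine, but it cannot
# be delegated onwards to a file share on the DC from inside the scriptblock.
Invoke-Command -ComputerName $env:COMPUTERNAME -Credential $credential -ScriptBlock {
    Test-Path "\\MYDC\src"   # hypothetical share; typically returns False in here
}
# That is why the copy-item in the script below runs before the scriptblock, as local system.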

#
# WapServer.ps1
#
param (
    $vmAdminUsername,
    $vmAdminPassword,
    $fsServiceName,
    $adfsServerName,
    $vmDCname,
    $resourceLocation
)

$password =  ConvertTo-SecureString $vmAdminPassword -AsPlainText -Force
$credential = New-Object System.Management.Automation.PSCredential("$env:USERDOMAIN\$vmAdminUsername", $password)

Write-Verbose -Verbose "Entering Domain Controller Script"
Write-Verbose -verbose "Script path: $PSScriptRoot"
Write-Verbose -Verbose "vmAdminUsername: $vmAdminUsername"
Write-Verbose -Verbose "vmAdminPassword: $vmAdminPassword"
Write-Verbose -Verbose "fsServiceName: $fsServiceName"
Write-Verbose -Verbose "adfsServerName: $adfsServerName"
Write-Verbose -Verbose "env:UserDomain: $env:USERDOMAIN"
Write-Verbose -Verbose "resourceLocation: $resourceLocation"
Write-Verbose -Verbose "==================================="


    # Write an event to the event log to say that the script has executed.
    $event = New-Object System.Diagnostics.EventLog("Application")
    $event.Source = "tuServEnvironment"
    $info_event = [System.Diagnostics.EventLogEntryType]::Information
    $event.WriteEntry("WAPserver Script Executed", $info_event, 5001)


    $srcPath = "\\"+ $vmDCname + "\src"
    $fsCertificateSubject = $fsServiceName + "."+($resourceLocation.Replace(" ","")).ToLower()+".cloudapp.azure.com"
    $fsCertFileName = $fsCertificateSubject+".pfx"
    $certPath = $srcPath + "\" + $fsCertFileName

    #Copy cert from DC
    write-verbose -Verbose "Copying $certpath to $PSScriptRoot"
    copy-item $certPath -Destination $PSScriptRoot -Verbose

Invoke-Command  -Credential $credential -ComputerName $env:COMPUTERNAME -ScriptBlock {

    param (
        $workingDir,
        $vmAdminPassword,
        $domainCredential,
        $adfsServerName,
        $fsServiceName,
        $vmDCname,
        $resourceLocation
    )
    # Working variables

    # Write an event to the event log to say that the script has executed.
    $event = New-Object System.Diagnostics.EventLog("Application")
    $event.Source = "tuServEnvironment"
    $info_event = [System.Diagnostics.EventLogEntryType]::Information
    $event.WriteEntry("In WAPserver scriptblock", $info_event, 5001)

    #go to our packages scripts folder
    Set-Location $workingDir
    
    $zipfile = $workingDir + "\PSPKI.zip"
    $destination = $workingDir
    [System.Reflection.Assembly]::LoadWithPartialName("System.IO.Compression.FileSystem") | Out-Null
    [System.IO.Compression.ZipFile]::ExtractToDirectory($zipfile, $destination)
    
    Import-Module .\tuServDeployFunctions.ps1

    $fsCertificateSubject = $fsServiceName + "."+($resourceLocation.Replace(" ","")).ToLower()+".cloudapp.azure.com"
    $fsCertFileName = $workingDir + "\" + $fsCertificateSubject+".pfx"

    Write-Verbose -Verbose "Importing sslcert $fsCertFileName"
    Import-SSLCertificate -certificateFileName $fsCertFileName -certificatePassword $vmAdminPassword

    $fsIpAddress = (Resolve-DnsName $adfsServerName -type a).ipaddress
    Add-HostsFileEntry -ip $fsIpAddress -domain $fsCertificateSubject


    Set-WapConfiguration -credential $domainCredential -fedServiceName $fsCertificateSubject -certificateSubject $fsCertificateSubject


} -ArgumentList $PSScriptRoot, $vmAdminPassword, $credential, $adfsServerName, $fsServiceName, $vmDCname, $resourceLocation

The script modifies the HOSTS file on the server so it can find the ADFS service and then configures the Web Application Proxy for that ADFS service. It’s worth mentioning at this point the $fsCertificateSubject, which is also my service name. When we first worked on this environment using the old Azure PowerShell commands, the name of the public endpoint was always <something>.cloudapp.net. With the new Resource Manager model I discovered that it is now <something>.<Azure Location>.cloudapp.azure.com. The <something> is in our control – we specify it. The <Azure Location> isn’t, quite: it’s the resource location for our deployment, converted to lowercase with no spaces. You’ll find that same line of code in the DC and ADFS scripts; it’s creating the hostname our service will use based on the resource location specified in the template, passed into the script as a parameter.
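
Pulled out of context with hypothetical values, that line is easy to follow:

$fsServiceName    = "myenvwap"      # the domainNameLabel we control
$resourceLocation = "North Europe"  # the resource location from the template
$fsCertificateSubject = $fsServiceName + "." + ($resourceLocation.Replace(" ","")).ToLower() + ".cloudapp.azure.com"
# gives: myenvwap.northeurope.cloudapp.azure.com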

The functions called by that script are shown below:

function Import-SSLCertificate {
    [CmdletBinding()]
    param
    (
        $certificateFileName,
        $certificatePassword
    )    

        Write-Verbose -Verbose "Importing cert $certificateFileName with password $certificatePassword"
        Write-Verbose -Verbose "---"

        Import-Module .\PSPKI\pspki.psm1

        Write-Verbose -Verbose "Attempting to import certificate" $certificateFileName
        # import it
        $password = ConvertTo-SecureString $certificatePassword -AsPlainText -Force
        Import-PfxCertificate -FilePath ($certificateFileName) cert:\localMachine\my -Password $password

}

function Add-HostsFileEntry
{
[CmdletBinding()]
    param
    (
        $ip,
        $domain
    )

    $hostsFile = "$env:windir\System32\drivers\etc\hosts"
    $newHostEntry = "`t$ip`t$domain";

        if((gc $hostsFile) -contains $NewHostEntry)
        {
            Write-Verbose -Verbose "The hosts file already contains the entry: $newHostEntry.  File not updated.";
        }
        else
        {
            Add-Content -Path $hostsFile -Value $NewHostEntry;
        }
}

function Set-WapConfiguration
{
[CmdletBinding()]
Param(
$credential,
$fedServiceName,
$certificateSubject
)

Write-Verbose -Verbose "Configuring WAP Role"
Write-Verbose -Verbose "---"

    #$certificate = (dir Cert:\LocalMachine\My | where {$_.subject -match $certificateSubject}).thumbprint
    $certificateThumbprint = (get-childitem Cert:\LocalMachine\My | where {$_.subject -match $certificateSubject} | Sort-Object -Descending NotBefore)[0].thumbprint

    # install WAP
    Install-WebApplicationProxy -CertificateThumbprint $certificateThumbprint -FederationServiceName $fedServiceName -FederationServiceTrustCredential $credential

}
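
Once Install-WebApplicationProxy has done its work, the standard WebApplicationProxy cmdlets give you a quick way to verify the result. A minimal check, run on the WAP server itself:

# Confirm the proxy configuration and list any published applications.
Get-WebApplicationProxyConfiguration
Get-WebApplicationProxyApplication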

What’s Left?

This sequence of posts has talked about Resource Templates and how I structure mine based on my experience of developing and repeatedly deploying a pretty complex environment. It’s also given you specific config advice for doing the same as me: Create a Domain Controller and Certificate Authority, create an ADFS server and publish that server via a Web Application Proxy. If you only copy the stuff so far you’ll have an isolated environment that you can access via the WAP server for remote management.

I’m still working on this, however. I have a SQL server to configure. It turns out that DSC modules for SQL are pretty rich and I’ll blog on those at some point. I am also adding a BizTalk server. I suspect that will involve more on the custom script side. I then need to deploy my application itself, which I haven’t even begun yet (although the guys have created a rich set of automation PowerShell scripts to deal with the deployment).

Overall, I hope you take away from this series of posts just how powerful Azure Resource Templates can be when pushing out IaaS solutions. I haven’t even touched on the PaaS components of Azure, but they can be dealt with in the same way. The need to learn this stuff is common across IT, Dev and DevOps, and it’s really interesting and fun to work on (if frustrating at times). I strongly encourage you to go play!

Credits

As with the previous posts, stuff I’ve talked about has been derived in part from existing resources:

  • The Azure Quick Start templates were invaluable in having something to look at in the early days of resource templates before the tooling was here.
  • PSPKI is a fantastic set of PowerShell modules for dealing with certs.
  • The individual VM scripts are derived from work that was done in Black Marble by Andrew Davidson and myself to build the exact same environment in the older Azure manner without resource templates.

Complex Azure Template Odyssey Part Three: ADFS Server

Part One of this series covered the project itself and the overall template structure. Part Two went through how I deploy the Domain Controller in depth. This post will focus on the next server in the chain: The ADFS server that is required to enable authentication in the application which will eventually be installed on this environment.

The Template

The nested deployment template for the ADFS server differs little from my DC template. If anything, it’s even simpler because we don’t have to reconfigure the virtual network after deploying the VM. The screenshot below shows the JSON outline for the template.

adfs template json

You can see that it follows the same pattern as the DC template in part two. I have a VM, a NIC that it depends on and which is attached to our virtual network, and I have VM extensions within the VM itself to enable diagnostics, push a DSC configuration to the VM and execute a custom PowerShell script.

I went through the template construction in detail with the DC, so here I’ll simply show the resources code for you. The VM uses the same Windows Server base image as the DC but doesn’t need the extra disk that we attached to the DC.

"resources": [
  {
    "apiVersion": "2015-05-01-preview",
    "dependsOn": [
    ],
    "location": "[parameters('resourceLocation')]",
    "name": "[variables('vmADFSNicName')]",
    "properties": {
      "ipConfigurations": [
        {
          "name": "ipconfig1",
          "properties": {
            "privateIPAllocationMethod": "Static",
            "privateIPAddress": "[variables('vmADFSIPAddress')]",
            "subnet": {
              "id": "[variables('vmADFSSubnetRef')]"
            }
          }
        }
      ]
    },
    "tags": {
      "displayName": "vmADFSNic"
    },
    "type": "Microsoft.Network/networkInterfaces"
  },
  {
    "name": "[variables('vmADFSName')]",
    "type": "Microsoft.Compute/virtualMachines",
    "location": "[parameters('resourceLocation')]",
    "apiVersion": "2015-05-01-preview",
    "dependsOn": [
      "[concat('Microsoft.Network/networkInterfaces/', variables('vmADFSNicName'))]",
    ],
    "tags": {
      "displayName": "vmADFS"
    },
    "properties": {
      "hardwareProfile": {
        "vmSize": "[variables('vmADFSVmSize')]"
      },
      "osProfile": {
        "computername": "[variables('vmADFSName')]",
        "adminUsername": "[parameters('adminUsername')]",
        "adminPassword": "[parameters('adminPassword')]"
      },
      "storageProfile": {
        "imageReference": {
          "publisher": "[variables('windowsImagePublisher')]",
          "offer": "[variables('windowsImageOffer')]",
          "sku": "[variables('windowsImageSKU')]",
          "version": "latest"
        },
        "osDisk": {
          "name": "[concat(variables('vmADFSName'), '-os-disk')]",
          "vhd": {
            "uri": "[concat('http://', variables('storageAccountName'), '.blob.core.windows.net/', variables('vmStorageAccountContainerName'), '/', variables('vmADFSName'), 'os.vhd')]"
          },
          "caching": "ReadWrite",
          "createOption": "FromImage"
        }
      },
      "networkProfile": {
        "networkInterfaces": [
          {
            "id": "[resourceId('Microsoft.Network/networkInterfaces', variables('vmADFSNicName'))]"
          }
        ]
      }
    },
    "resources": [
      {
        "type": "extensions",
        "name": "IaaSDiagnostics",
        "apiVersion": "2015-06-15",
        "location": "[parameters('resourceLocation')]",
        "dependsOn": [
          "[concat('Microsoft.Compute/virtualMachines/', variables('vmADFSName'))]"
        ],
        "tags": {
          "displayName": "[concat(variables('vmADFSName'),'/vmDiagnostics')]"
        },
        "properties": {
          "publisher": "Microsoft.Azure.Diagnostics",
          "type": "IaaSDiagnostics",
          "typeHandlerVersion": "1.4",
          "autoUpgradeMinorVersion": "true",
          "settings": {
            "xmlCfg": "[base64(variables('wadcfgx'))]",
            "StorageAccount": "[variables('storageAccountName')]"
          },
          "protectedSettings": {
            "storageAccountName": "[variables('storageAccountName')]",
            "storageAccountKey": "[listKeys(variables('storageAccountid'),'2015-05-01-preview').key1]",
            "storageAccountEndPoint": "https://core.windows.net/"
          }
        }
      },
      {
        "type": "Microsoft.Compute/virtualMachines/extensions",
        "name": "[concat(variables('vmADFSName'),'/ADFSserver')]",
        "apiVersion": "2015-05-01-preview",
        "location": "[parameters('resourceLocation')]",
        "dependsOn": [
          "[resourceId('Microsoft.Compute/virtualMachines', variables('vmADFSName'))]",
          "[concat('Microsoft.Compute/virtualMachines/', variables('vmADFSName'),'/extensions/IaaSDiagnostics')]"
        ],
        "properties": {
          "publisher": "Microsoft.Powershell",
          "type": "DSC",
          "typeHandlerVersion": "1.7",
          "settings": {
            "modulesURL": "[concat(variables('vmDSCmoduleUrl'), parameters('_artifactsLocationSasToken'))]",
            "configurationFunction": "[variables('vmADFSConfigurationFunction')]",
            "properties": {
              "domainName": "[variables('domainName')]",
              "vmDCName": "[variables('vmDCName')]",
              "adminCreds": {
                "userName": "[parameters('adminUsername')]",
                "password": "PrivateSettingsRef:adminPassword"
              }
            }
          },
          "protectedSettings": {
            "items": {
              "adminPassword": "[parameters('adminPassword')]"
            }
          }
        }
      },
      {
        "type": "Microsoft.Compute/virtualMachines/extensions",
        "name": "[concat(variables('vmADFSName'),'/adfsScript')]",
        "apiVersion": "2015-05-01-preview",
        "location": "[parameters('resourceLocation')]",
        "dependsOn": [
          "[concat('Microsoft.Compute/virtualMachines/', variables('vmADFSName'))]",
          "[concat('Microsoft.Compute/virtualMachines/', variables('vmADFSName'),'/extensions/ADFSserver')]"
        ],
        "properties": {
          "publisher": "Microsoft.Compute",
          "type": "CustomScriptExtension",
          "typeHandlerVersion": "1.4",
          "settings": {
            "fileUris": [
              "[concat(parameters('_artifactsLocation'),'/AdfsServer.ps1', parameters('_artifactsLocationSasToken'))]",
              "[concat(parameters('_artifactsLocation'),'/PSPKI.zip', parameters('_artifactsLocationSasToken'))]",
              "[concat(parameters('_artifactsLocation'),'/tuServDeployFunctions.ps1', parameters('_artifactsLocationSasToken'))]"
            ],
            "commandToExecute": "[concat('powershell.exe -file AdfsServer.ps1',' -vmAdminUsername ',parameters('adminUsername'),' -vmAdminPassword ',parameters('adminPassword'),' -fsServiceName ',variables('vmWAPpublicipDnsName'),' -vmDCname ',variables('vmDCName'), ' -resourceLocation \"', parameters('resourceLocation'),'\"')]"
          }
        }
      }
    ]
  }
]

The DSC Modules

All the DSC modules I need get zipped into the same archive file which is deployed by each DSC extension to the VMs. I showed you that in part one. For the ADFS server, the extension calls the configuration module DSCvmConfigs.ps1\\ADFSserver (note the escaped slash) – the ADFSserver configuration within my single DSCvmConfigs.ps1 file that holds all my configurations. As with the DC configuration, this is based on stuff held in the SharePoint farm template on GitHub.

configuration ADFSserver
{
    param
    (
        [Parameter(Mandatory)]
        [String]$DomainName,

        [Parameter(Mandatory)]
        [String]$vmDCName,

        [Parameter(Mandatory)]
        [System.Management.Automation.PSCredential]$Admincreds,

        [Int]$RetryCount=20,
        [Int]$RetryIntervalSec=30
    )

    Import-DscResource -ModuleName xComputerManagement,xActiveDirectory

    Node localhost
    {
        WindowsFeature ADFSInstall 
        { 
            Ensure = "Present" 
            Name = "ADFS-Federation"
        }  
        WindowsFeature ADPS
        {
            Name = "RSAT-AD-PowerShell"
            Ensure = "Present"

        } 
        xWaitForADDomain DscForestWait 
        { 
            DomainName = $DomainName 
            DomainUserCredential= $Admincreds
            RetryCount = $RetryCount 
            RetryIntervalSec = $RetryIntervalSec 
            DependsOn = "[WindowsFeature]ADPS"      
        }
        xComputer DomainJoin
        {
            Name = $env:COMPUTERNAME
            DomainName = $DomainName
            Credential = New-Object System.Management.Automation.PSCredential ("${DomainName}\$($Admincreds.UserName)", $Admincreds.Password)
            DependsOn = "[xWaitForADDomain]DscForestWait"
        }


        LocalConfigurationManager 
        {
            DebugMode = $true    
            RebootNodeIfNeeded = $true
        }
    }     
}

The DSC for my ADFS server does much less than that of the DC. It installs the Windows features I need (the RSAT-AD-PowerShell tools are needed by the xWaitForADDomain config), makes sure our domain is contactable and joins the server to it. Unfortunately there are no DSC resources around to configure our ADFS server at the moment, and whilst I’m happy writing scripts to do that work, I’m less comfortable writing DSC modules right now!

The Custom Scripts

Once our DSC extension has joined the domain and added our features, it’s over to the custom script extension to configure the ADFS service. As with the DC, I copy down the script itself, a file with my own functions in and the PSPKI module.

#
# AdfsServer.ps1
#
param (
    $vmAdminUsername,
    $vmAdminPassword,
    $fsServiceName,
    $vmDCname,
    $resourceLocation
)

$password =  ConvertTo-SecureString $vmAdminPassword -AsPlainText -Force
$credential = New-Object System.Management.Automation.PSCredential("$env:USERDOMAIN\$vmAdminUsername", $password)

Write-Verbose -Verbose "Entering Domain Controller Script"
Write-Verbose -verbose "Script path: $PSScriptRoot"
Write-Verbose -Verbose "vmAdminUsername: $vmAdminUsername"
Write-Verbose -Verbose "vmAdminPassword: $vmAdminPassword"
Write-Verbose -Verbose "fsServiceName: $fsServiceName"
Write-Verbose -Verbose "env:UserDomain: $env:USERDOMAIN"
Write-Verbose -Verbose "resourceLocation: $resourceLocation"
Write-Verbose -Verbose "==================================="

    # Write an event to the event log to say that the script has executed.
    $event = New-Object System.Diagnostics.EventLog("Application")
    $event.Source = "tuServEnvironment"
    $info_event = [System.Diagnostics.EventLogEntryType]::Information
    $event.WriteEntry("ADFSserver Script Executed", $info_event, 5001)


    $srcPath = "\\"+ $vmDCname + "\src"
    $fsCertificateSubject = $fsServiceName + "."+($resourceLocation.Replace(" ",[System.String]::Empty)).ToLower()+".cloudapp.azure.com"
    $fsCertFileName = $fsCertificateSubject+".pfx"
    $certPath = $srcPath + "\" + $fsCertFileName

    #Copy cert from DC
    write-verbose -Verbose "Copying $certpath to $PSScriptRoot"
    copy-item $certPath -Destination $PSScriptRoot -Verbose

Invoke-Command  -Credential $credential -ComputerName $env:COMPUTERNAME -ScriptBlock {

    param (
        $workingDir,
        $vmAdminPassword,
        $domainCredential,
        $fsServiceName,
        $vmDCname,
        $resourceLocation
    )
    # Working variables

    Write-Verbose -Verbose "Entering ADFS Script"
    Write-Verbose -verbose "workingDir: $workingDir"
    Write-Verbose -Verbose "vmAdminPassword: $vmAdminPassword"
    Write-Verbose -Verbose "fsServiceName: $fsServiceName"
    Write-Verbose -Verbose "env:UserDomain: $env:USERDOMAIN"
    Write-Verbose -Verbose "env:UserDNSDomain: $env:USERDNSDOMAIN"
    Write-Verbose -Verbose "env:ComputerName: $env:COMPUTERNAME"
    Write-Verbose -Verbose "resourceLocation: $resourceLocation"
    Write-Verbose -Verbose "==================================="

    # Write an event to the event log to say that the script has executed.
    $event = New-Object System.Diagnostics.EventLog("Application")
    $event.Source = "tuServEnvironment"
    $info_event = [System.Diagnostics.EventLogEntryType]::Information
    $event.WriteEntry("In ADFSserver scriptblock", $info_event, 5001)

    #go to our packages scripts folder
    Set-Location $workingDir
    
    $zipfile = $workingDir + "\PSPKI.zip"
    $destination = $workingDir
    [System.Reflection.Assembly]::LoadWithPartialName("System.IO.Compression.FileSystem") | Out-Null
    [System.IO.Compression.ZipFile]::ExtractToDirectory($zipfile, $destination)
    
    Write-Verbose -Verbose "Importing PSPKI"
    Import-Module .\tuServDeployFunctions.ps1


    $fsCertificateSubject = $fsServiceName + "."+($resourceLocation.Replace(" ","")).ToLower()+".cloudapp.azure.com"
    $fsCertFileName = $workingDir + "\" + $fsCertificateSubject+".pfx"

    Write-Verbose -Verbose "Importing sslcert $fsCertFileName"
    Import-SSLCertificate -certificateFileName $fsCertFileName -certificatePassword $vmAdminPassword


    $adfsServiceAccount = $env:USERDOMAIN+"\"+"svc_adfs"
    $adfsPassword = ConvertTo-SecureString $vmAdminPassword -AsPlainText -Force 
    $adfsCredentials = New-Object System.Management.Automation.PSCredential ($adfsServiceAccount, $adfsPassword) 
    $adfsDisplayName = "ADFS Service"

    Write-Verbose -Verbose "Creating ADFS Farm"
    Create-ADFSFarm -domainCredential $domainCredential -adfsName $fsCertificateSubject -adfsDisplayName $adfsDisplayName -adfsCredentials $adfsCredentials -certificateSubject $fsCertificateSubject


} -ArgumentList $PSScriptRoot, $vmAdminPassword, $credential, $fsServiceName, $vmDCname, $resourceLocation

The script starts by copying the certificate files from the DC. The script extension shells the script as the local system account, so it connects to the share on the DC as the computer account. I copy the files before I execute an invoke-command block that run as the domain admin. I do this because once I’m in that invoke-command block, network access becomes a real pain!

As you can see, this script doesn’t do a huge amount. Once in the invoke-command it unzips the PSPKI modules, imports the certificate it needs into the computer cert store and then calls a function to configure the ADFS service. The functions called by the script are below:

function Import-SSLCertificate {
    [CmdletBinding()]
    param
    (
        $certificateFileName,
        $certificatePassword
    )    

        Write-Verbose -Verbose "Importing cert $certificateFileName with password $certificatePassword"
        Write-Verbose -Verbose "---"

        Import-Module .\PSPKI\pspki.psm1

        Write-Verbose -Verbose "Attempting to import certificate" $certificateFileName
        # import it
        $password = ConvertTo-SecureString $certificatePassword -AsPlainText -Force
        Import-PfxCertificate -FilePath ($certificateFileName) cert:\localMachine\my -Password $password

}

function Create-ADFSFarm
{
[CmdletBinding()]
param
(
$domainCredential,
$adfsName, 
$adfsDisplayName, 
$adfsCredentials,
$certificateSubject
)

    Write-Verbose -Verbose "In Function Create-ADFS Farm"
    Write-Verbose -Verbose "Parameters:"
    Write-Verbose -Verbose "adfsName: $adfsName"
    Write-Verbose -Verbose "certificateSubject: $certificateSubject"
    Write-Verbose -Verbose "adfsDisplayName: $adfsDisplayName"
    Write-Verbose -Verbose "adfsCredentials: $adfsCredentials"
    Write-Verbose -Verbose "============================================"

    Write-Verbose -Verbose "Importing Module"
    Import-Module ADFS
    Write-Verbose -Verbose "Getting Thumbprint"
    $certificateThumbprint = (get-childitem Cert:\LocalMachine\My | where {$_.subject -match $certificateSubject} | Sort-Object -Descending NotBefore)[0].thumbprint
    Write-Verbose -Verbose "Thumprint is $certificateThumbprint"
    Write-Verbose -Verbose "Install ADFS Farm"

    Write-Verbose -Verbose "Echo command:"
    Write-Verbose -Verbose "Install-AdfsFarm -credential $domainCredential -CertificateThumbprint $certificateThumbprint -FederationServiceDisplayName '$adfsDisplayName' -FederationServiceName $adfsName -ServiceAccountCredential $adfsCredentials"
    Install-AdfsFarm -credential $domainCredential -CertificateThumbprint $certificateThumbprint -FederationServiceDisplayName "$adfsDisplayName" -FederationServiceName $adfsName -ServiceAccountCredential $adfsCredentials -OverwriteConfiguration

}

There’s still stuff to do on the ADFS server once I get to deploying my application: I need to define relying party trusts and custom claims, for example. However, this deployment creates a working ADFS server that will authenticate users against my domain. It’s then published to the outside world safely by the Web Application Proxy role on my WAP server.
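
For example, a relying party trust will eventually be added with something along these lines. The name and URLs are hypothetical placeholders, not my application’s real configuration:

# Hypothetical relying party trust with a permit-everyone authorisation rule.
Add-AdfsRelyingPartyTrust -Name "My Application" `
    -Identifier "https://myapp.mydomain.com/" `
    -WSFedEndpoint "https://myapp.mydomain.com/" `
    -IssuanceAuthorizationRules '@RuleTemplate = "AllowAllAuthzRule" => issue(Type = "http://schemas.microsoft.com/authorization/claims/permit", Value = "true");'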

Credit Where It’s Due

Same as before – I stand on the shoulders of others to bring you this stuff:

  • The Azure Quick Start templates were invaluable in having something to look at in the early days of resource templates before the tooling was here.
  • PSPKI is a fantastic set of PowerShell modules for dealing with certs.
  • The individual VM scripts are derived from work that was done in Black Marble by Andrew Davidson and myself to build the exact same environment in the older Azure manner without resource templates.

Complex Azure Template Odyssey Part Two: Domain Controller

In part one of this series of posts I talked about the project driving my creation of these Azure Resource Templates, the structure of the template and what resources I was deploying. This post will go through the deployment and configuration of the first VM, which will become my domain controller and certificate server. In order to achieve my goals I need to deploy the VM, the DSC extension and finally the custom script extension to perform actions that current DSC modules can’t. I’ll show you the template code, the DSC code and the final scripts, and talk about the gotchas I encountered on the way.

Further posts will detail the ADFS and WAP server deployments.

The Template

I’ve already talked about how I’ve structured this project: A core template calls a collection of nested templates – one per VM. The DC template differs from the rest in that it too calls a nested deployment to make changes to my virtual network. Other than that, it follows the same convention.

dc template json view

The screenshot above is the JSON outline view of the template. Each of my nested VM templates follows the same pattern: The parameters block in each template is exactly the same. I’m using a standard convention for naming all my resources, so providing I pass the envPrefix parameter between each one I can calculate the name of any resource in the project. That’s important, as we’ll see in a moment. The variables block contains all the variables that the current template needs – things like the IP address that should be assigned or the image we use as our base for the VM. Finally, the resources section holds the items we are deploying to create the domain controller. This VM is isolated from the outside world so we need the VM itself and a NIC to connect it to our virtual network, nothing more. The network is created by the core template before it calls the DC template.

The nested deployment needs explaining. Once we’ve created our domain controller we need to make sure that all our other VMs receive the correct IP address for their DNS. In order to do that we have to reconfigure the virtual network that we have already deployed. The nested deployment here is an artefact of the original approach with a single template – it could actually be fully contained in the DC template.

To explain: We can only define a resource with a given type and name in a template once. Templates are declarative and describe how we want a resource to be configured. With our virtual network we want to reconfigure it after we have deployed subsequent resources. If we describe the network for a second time, the new configuration is applied to our existing resource. The problem is that we have already got a resource in our template for our network. We get around the problem by calling a nested deployment. That deployment is a copy of the network configuration, with the differences we need for our reconfiguration. In my original template which contained all the resources, that nested deployment depended on the DC being deployed and was then called. It had to be a nested deployment because the network was already in there once.

With my new model I could actually just include the contents of the network reconfiguration deployment directly in the DC template. I am still calling the nested resource simply because of the way I split my original template. The end result is the same. The VM gets created, then the DSC and script extensions run to turn it into a domain controller. The network template is then called to set the DNS IP configuration of the network to be the IP address of the newly-minted DC.

{
  "name": "tuServUpdateVnet",
  "type": "Microsoft.Resources/deployments",
  "apiVersion": "2015-01-01",
  "dependsOn": [
    "[concat('Microsoft.Compute/virtualMachines/', variables('vmDCName'))]",
    "[concat('Microsoft.Compute/virtualMachines/', variables('vmDCName'),'/extensions/dcScript')]"
  ],
  "properties": {
    "mode": "Incremental",
    "templateLink": {
      "uri": "[concat(variables('updateVNetDNSTemplateURL'), parameters('_artifactsLocationSasToken'))]",
      "contentVersion": "1.0.0.0"
    },
    "parameters": {
      "resourceLocation": { "value": "[parameters('resourceLocation')]" },
      "virtualNetworkName": { "value": "[variables('virtualNetworkName')]" },
      "virtualNetworkPrefix": { "value": "[variables('virtualNetworkPrefix')]" },
      "virtualNetworkSubnet1Name": { "value": "[variables('virtualNetworkSubnet1Name')]" },
      "virtualNetworkSubnet1Prefix": { "value": "[variables('virtualNetworkSubnet1Prefix')]" },
      "virtualNetworkDNS": { "value": [ "[variables('vmDCIPAddress')]" ] }
    }
  }
}

The code above is contained in my DC template. It calls the nested deployment through a URI to the template. That points to an Azure storage container with all the resources for my deployment held in it. The template is called with a set of parameters that are mostly variables created in the DC template in accordance with the rules and patterns I’ve set. Everything is the same as the original network deployment with the exception of the DNS address, which is to be set to the DC address. Below is the network template. Note that the parameter block defines parameters that match those being passed in. All names are case sensitive.

{
    "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {
        "resourceLocation": {
            "type": "string",
            "defaultValue": "West US",
            "allowedValues": [
                "East US",
                "West US",
                "West Europe",
                "North Europe",
                "East Asia",
                "South East Asia"
            ],
            "metadata": {
                "description": "The region to deploy the storage resources into"
            }
        },
        "virtualNetworkName": {
            "type": "string"
        },
        "virtualNetworkDNS": {
            "type": "array"
        },
        "virtualNetworkPrefix": {
            "type": "string"
        },
        "virtualNetworkSubnet1Name": {
            "type": "string"
        },
        "virtualNetworkSubnet1Prefix": {
            "type": "string"
        }
    },
        "variables": {
        },
        "resources": [
            {
                "name": "[parameters('virtualNetworkName')]",
                "type": "Microsoft.Network/virtualNetworks",
                "location": "[parameters('resourceLocation')]",
                "apiVersion": "2015-05-01-preview",
                "tags": {
                    "displayName": "virtualNetworkUpdate"
                },
                "properties": {
                    "addressSpace": {
                        "addressPrefixes": [
                            "[parameters('virtualNetworkPrefix')]"
                        ]
                    },
                    "dhcpOptions": {
                        "dnsServers": "[parameters('virtualNetworkDNS')]"
                    },

                    "subnets": [
                        {
                            "name": "[parameters('virtualNetworkSubnet1Name')]",
                            "properties": {
                                "addressPrefix": "[parameters('virtualNetworkSubnet1Prefix')]"
                            }
                        }
                    ]
                }
            }
        ],
        "outputs": {
        }
    }

The VM itself is pretty straightforward. The code below deploys a virtual NIC and then the VM. The NIC needs to be created first and is then bound to the VM when the latter is deployed. The snippet has the nested resources for the VM extensions removed. I’ll show you those in a bit.

{
  "apiVersion": "2015-05-01-preview",
  "dependsOn": [
  ],
  "location": "[parameters('resourceLocation')]",
  "name": "[variables('vmDCNicName')]",
  "properties": {
    "ipConfigurations": [
      {
        "name": "ipconfig1",
        "properties": {
          "privateIPAllocationMethod": "Static",
          "privateIPAddress": "[variables('vmDCIPAddress')]",
          "subnet": {
            "id": "[variables('vmDCSubnetRef')]"
          }
        }
      }
    ]
  },
  "tags": {
    "displayName": "vmDCNic"
  },
  "type": "Microsoft.Network/networkInterfaces"
},
{
  "name": "[variables('vmDCName')]",
  "type": "Microsoft.Compute/virtualMachines",
  "location": "[parameters('resourceLocation')]",
  "apiVersion": "2015-05-01-preview",
  "dependsOn": [
    "[concat('Microsoft.Network/networkInterfaces/', variables('vmDCNicName'))]"
  ],
  "tags": {
    "displayName": "vmDC"
  },
  "properties": {
    "hardwareProfile": {
      "vmSize": "[variables('vmDCVmSize')]"
    },
    "osProfile": {
      "computername": "[variables('vmDCName')]",
      "adminUsername": "[parameters('adminUsername')]",
      "adminPassword": "[parameters('adminPassword')]"
    },
    "storageProfile": {
      "imageReference": {
        "publisher": "[variables('windowsImagePublisher')]",
        "offer": "[variables('windowsImageOffer')]",
        "sku": "[variables('windowsImageSKU')]",
        "version": "latest"
      },
      "osDisk": {
        "name": "[concat(variables('vmDCName'), '-os-disk')]",
        "vhd": {
          "uri": "[concat('http://', variables('storageAccountName'), '.blob.core.windows.net/', variables('vmStorageAccountContainerName'), '/', variables('vmDCName'), 'os.vhd')]"
        },
        "caching": "ReadWrite",
        "createOption": "FromImage"
      },
      "dataDisks": [
        {
          "vhd": {
            "uri": "[concat('http://', variables('storageAccountName'), '.blob.core.windows.net/', variables('vmStorageAccountContainerName'),'/', variables('vmDCName'),'data-1.vhd')]"
          },
          "name": "[concat(variables('vmDCName'),'datadisk1')]",
          "createOption": "empty",
          "caching": "None",
          "diskSizeGB": "[variables('windowsDiskSize')]",
          "lun": 0
        }
      ]
    },
    "networkProfile": {
      "networkInterfaces": [
        {
          "id": "[resourceId('Microsoft.Network/networkInterfaces', variables('vmDCNicName'))]"
        }
      ]
    }
  },
  "resources": [
  ]
}

The NIC is pretty simple. I tell it the name of the subnet on my network I want it to connect to and I tell it that I want to use a static private IP address, and what that address is. The VM resource then references the NIC in the networkProfile section.

The VM itself is built using the Windows Server 2012 R2 Datacentre image provided by Microsoft. That is specified in the imageReference section. There are lots of VM images and each is referenced by publisher (in this case MicrosoftWindowsServer), offer (WindowsServer) and SKU (2012-R2-Datacenter). I’m specifying ‘latest’ as the version but you can be specific if you have built your deployment around a specific version of an image. They are updated regularly to include patches… There are a wide range of images available to save you time. My full deployment makes use of a SQL Server image and I’m also playing with a BizTalk image right now. It’s much easier than trying to sort out the install of products yourself, and the licence cost of the software gets rolled into the VM charge.
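
If you want to browse what’s available, the Resource Manager cmdlets will walk the catalogue. A sketch, assuming the AzureRM module and using the image referenced above:

# List the SKUs for the Windows Server offer, then the versions of the one we use.
Get-AzureRmVMImageSku -Location "West US" -PublisherName "MicrosoftWindowsServer" -Offer "WindowsServer" | Select-Object Skus
Get-AzureRmVMImage -Location "West US" -PublisherName "MicrosoftWindowsServer" -Offer "WindowsServer" -Skus "2012-R2-Datacenter" | Select-Object Version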

We need to add a second disk to our VM to hold the domain databases. The primary disk on a VM has read and write caching enabled. Write caching exposes us to risk of corrupting our domain database in the event of a failure, so I’m adding a second disk and setting the caching on that to none. It’s all standard stuff at this point.

I’m not going to describe the IaaSDiagnostics extension. The markup for that is completely default as provided by the tooling when you add the resource. Let’s move on to the DSC extension.

{
  "type": "Microsoft.Compute/virtualMachines/extensions",
  "name": "[concat(variables('vmDCName'),'/InstallDomainController')]",
  "apiVersion": "2015-05-01-preview",
  "location": "[parameters('resourceLocation')]",
  "dependsOn": [
    "[resourceId('Microsoft.Compute/virtualMachines', variables('vmDCName'))]",
    "[concat('Microsoft.Compute/virtualMachines/', variables('vmDCName'),'/extensions/IaaSDiagnostics')]"
  ],
  "properties": {
    "publisher": "Microsoft.Powershell",
    "type": "DSC",
    "typeHandlerVersion": "1.7",
    "settings": {
      "modulesURL": "[concat(variables('vmDSCmoduleUrl'), parameters('_artifactsLocationSasToken'))]",
      "configurationFunction": "[variables('vmDCConfigurationFunction')]",
      "properties": {
        "domainName": "[variables('domainName')]",
        "adminCreds": {
          "userName": "[parameters('adminUsername')]",
          "password": "PrivateSettingsRef:adminPassword"
        }
      }
    },
    "protectedSettings": {
      "items": {
        "adminPassword": "[parameters('adminPassword')]"
      }
    }
  }
}

I should mention at this point that I am nesting the extensions within the VM resources section. You don’t need to do this – they can be resources at the same level as the VM. However, my experience from deploying this lot a gazillion times is that if I nest the extensions I get a more robust deployment. Pulling them out of the VM appears to increase the chance of the extension failing to deploy.

The DSC extension will do different things depending on the OS version of Windows you are using. For my 2012 R2 VM it will install the necessary required software to use Desired State Configuration and it will then reboot the VM before applying any config. On the current Server 2016 preview images that installation and reboot isn’t needed as the pre-reqs are already installed.

The DSC extension needs to copy your DSC modules and configuration onto the VM. That’s specified in the modulesURL setting and it expects a zip archive with your stuff in it. I’ll show you that when we look at the DSC config in detail later. The configurationFunction setting specifies the PowerShell file that contains the function and the name of the configuration in that file to use. I have all the DSC configs in one file so I pass in DSCvmConfigs.ps1\\DomainController (note the escaped slash).

Finally, we specify the parameters that we want to pass into our PowerShell DSC function. We’re specifying the name of our Domain and the credentials for our admin account.

Once the DSC module has completed I need to do final configuration with standard PowerShell scripts. The customScript Extension is our friend here. Documentation on this is somewhat sparse and I’ve already blogged on the subject to help you. The template code is below:

{
  "type": "Microsoft.Compute/virtualMachines/extensions",
  "name": "[concat(variables('vmDCName'),'/dcScript')]",
  "apiVersion": "2015-05-01-preview",
  "location": "[parameters('resourceLocation')]",
  "dependsOn": [
    "[concat('Microsoft.Compute/virtualMachines/', variables('vmDCName'))]",
    "[concat('Microsoft.Compute/virtualMachines/', variables('vmDCName'),'/extensions/InstallDomainController')]"
  ],
  "properties": {
    "publisher": "Microsoft.Compute",
    "type": "CustomScriptExtension",
    "typeHandlerVersion": "1.4",
    "settings": {
      "fileUris": [
        "[concat(parameters('_artifactsLocation'),'/DomainController.ps1', parameters('_artifactsLocationSasToken'))]",
        "[concat(parameters('_artifactsLocation'),'/PSPKI.zip', parameters('_artifactsLocationSasToken'))]",
        "[concat(parameters('_artifactsLocation'),'/tuServDeployFunctions.ps1', parameters('_artifactsLocationSasToken'))]"
      ],
      "commandToExecute": "[concat('powershell.exe -file DomainController.ps1',' -vmAdminUsername ',parameters('adminUsername'),' -vmAdminPassword ',parameters('adminPassword'),' -fsServiceName ',variables('vmWAPpublicipDnsName'),' -tsServiceName ',variables('vmTWAPpublicipDnsName'), ' -resourceLocation \"', parameters('resourceLocation'),'\"')]"
    }
  }
}

The extension downloads the files I need, which in this case is a zip containing the PSPKI PowerShell modules that I reference to perform a bunch of certificate functions, a module of my own functions and finally the DomainController.ps1 script that is executed by the extension. You can’t specify parameters for your script in the extension (and in fact you can’t call the script directly: you have to execute the powershell.exe command yourself) so you can see that I build the commandToExecute using a bunch of variables and string concatenation.
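
To make that concrete: with hypothetical parameter values, the concatenation expands to a single command line along these lines:

powershell.exe -file DomainController.ps1 -vmAdminUsername labadmin -vmAdminPassword P@ssw0rd! -fsServiceName myenvwap -tsServiceName myenvtwap -resourceLocation "North Europe"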

The DSC Modules

I need to get the DSC modules I use onto the VM. To save my going mad, that means I include the module source in the Visual Studio solution. Over time I’ve evolved a folder structure within the solution to separate templates, DSC files and script files. You can see this structure in the screenshot below.

dsc modules

I keep all the DSC together like this because I can then simply zip all the files in the DSC folder structure to give me the archive that is deployed by the DSC extension. In the picture you will see that there are a number of .ps1 files in the root. Originally I created separate files for the DSC configuration of each of my VMs. I then collapsed those into the DSCvmConfigs.ps1 files and I simply haven’t removed the others from the project.
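
If you'd rather script that zipping step than do it by hand, the same .NET compression class I use later to unzip things on the VM will happily build the archive. A minimal sketch, assuming the DSC files live in a DSC folder next to the script and using an archive name of my own invention:

Add-Type -AssemblyName System.IO.Compression.FileSystem

# Zip the whole DSC folder structure into the archive that the DSC
# extension's modulesURL setting will point at.
$dscFolder  = Join-Path $PSScriptRoot "DSC"
$dscArchive = Join-Path $PSScriptRoot "DSCvmConfigs.ps1.zip"

# CreateFromDirectory throws if the target file already exists, so clear it first.
if (Test-Path $dscArchive) { Remove-Item $dscArchive }

[System.IO.Compression.ZipFile]::CreateFromDirectory($dscFolder, $dscArchive)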

My DomainController configuration function began life as the example code from the three-server SharePoint template on GitHub and I have since extended and modified it. The code is shown below:

configuration DomainController 
{ 
   param 
   ( 
        [Parameter(Mandatory)]
        [String]$DomainName,

        [Parameter(Mandatory)]
        [System.Management.Automation.PSCredential]$Admincreds,

        [String]$DomainNetbiosName=(Get-NetBIOSName -DomainName $DomainName),
        [Int]$RetryCount=20,
        [Int]$RetryIntervalSec=30
    ) 
    
    Import-DscResource -ModuleName xComputerManagement, cDisk, xDisk, xNetworking, xActiveDirectory, xSmbShare, xAdcsDeployment
    [System.Management.Automation.PSCredential]$DomainCreds = New-Object System.Management.Automation.PSCredential ("${DomainName}\$($Admincreds.UserName)", $Admincreds.Password)
    $Interface=Get-NetAdapter|Where Name -Like "Ethernet*"|Select-Object -First 1
    $InterfaceAlias=$($Interface.Name)

    Node localhost
    {
        WindowsFeature DNS 
        { 
            Ensure = "Present" 
            Name = "DNS"
        }
        xDnsServerAddress DnsServerAddress 
        { 
            Address        = '127.0.0.1' 
            InterfaceAlias = $InterfaceAlias
            AddressFamily  = 'IPv4'
        }
        xWaitforDisk Disk2
        {
             DiskNumber = 2
             RetryIntervalSec =$RetryIntervalSec
             RetryCount = $RetryCount
        }
        cDiskNoRestart ADDataDisk
        {
            DiskNumber = 2
            DriveLetter = "F"
        }
        WindowsFeature ADDSInstall 
        { 
            Ensure = "Present" 
            Name = "AD-Domain-Services"
        }  
        xADDomain FirstDS 
        {
            DomainName = $DomainName
            DomainAdministratorCredential = $DomainCreds
            SafemodeAdministratorPassword = $DomainCreds
            DatabasePath = "F:\NTDS"
            LogPath = "F:\NTDS"
            SysvolPath = "F:\SYSVOL"
        }
        WindowsFeature ADCS-Cert-Authority
        {
               Ensure = 'Present'
               Name = 'ADCS-Cert-Authority'
               DependsOn = '[xADDomain]FirstDS'
        }
        WindowsFeature RSAT-ADCS-Mgmt
        {
               Ensure = 'Present'
               Name = 'RSAT-ADCS-Mgmt'
               DependsOn = '[xADDomain]FirstDS'
        }
        File SrcFolder
        {
            DestinationPath = "C:\src"
            Type = "Directory"
            Ensure = "Present"
            DependsOn = "[xADDomain]FirstDS"
        }
        xSmbShare SrcShare
        {
            Ensure = "Present"
            Name = "src"
            Path = "C:\src"
            FullAccess = @("Domain Admins","Domain Computers")
            ReadAccess = "Authenticated Users"
            DependsOn = "[File]SrcFolder"
        }
        xADCSCertificationAuthority ADCS
        {
            Ensure = 'Present'
            Credential = $DomainCreds
            CAType = 'EnterpriseRootCA'
            DependsOn = '[WindowsFeature]ADCS-Cert-Authority'              
        }
        WindowsFeature ADCS-Web-Enrollment
        {
            Ensure = 'Present'
            Name = 'ADCS-Web-Enrollment'
            DependsOn = '[WindowsFeature]ADCS-Cert-Authority'
        }
        xADCSWebEnrollment CertSrv
        {
            Ensure = 'Present'
            Name = 'CertSrv'
            Credential = $DomainCreds
            DependsOn = '[WindowsFeature]ADCS-Web-Enrollment','[xADCSCertificationAuthority]ADCS'
        } 
         
        LocalConfigurationManager 
        {
            DebugMode = $true
            RebootNodeIfNeeded = $true
        }
   }
} 

The .ps1 file contains all the DSC configurations for my environment. The DomainController configuration starts with a list of parameters. These match the ones being passed in by the DSC extension, or have default or calculated values. The Import-DscResource command specifies the DSC modules that the configuration needs. I have to ensure that any I am using are included in the zip file downloaded by the extension. I am using modules that configure disks, network shares, Active Directory domains and certificate services.

The node section then declares my configuration. You can set configurations for multiple hosts in a single DSC configuration block, but I’m only concerned with the host I’m on – localhost. Within the block I then declare what I want the configuration of the host to be. It’s the job of the DSC modules to apply whatever actions are necessary to set the configuration to that which I specify. Just like in our resource template, DSC settings can depend on one another if something needs to be done before something else.
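
As an aside, if you did want one configuration block to cover several machines, the Node keyword will take a list of names and DSC compiles a .mof file for each. A trivial sketch, with the node names invented for illustration:

configuration MultiNodeExample
{
    # One Node block targeting two machines; DSC emits one .mof per name.
    Node @('env1web','env1sql')
    {
        WindowsFeature SNMP
        {
            Ensure = "Present"
            Name   = "SNMP-Service"
        }
    }
}

# Compiling the configuration produces env1web.mof and env1sql.mof.
MultiNodeExample -OutputPath C:\DscOutput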

This DSC configuration installs the Windows features needed for creating a domain controller. It looks for the additional drive on the VM and assigns it the drive letter F. It creates the new Active Directory domain and places the domain database files on drive F. Once the domain is up and running I create a folder on drive C called src and share that folder. I'm doing that because I create two certificates later and I need to make them available to other machines in the domain. More on that in a bit. Finally, we install the certificate services features and configure a certificate authority. The LocalConfigurationManager settings turn on as much debug output as I can and tell the system that if any of the actions in my config demand a reboot that's OK – restart as and when required rather than waiting until the end.

I’d love to do all my configuration with DSC but sadly there just aren’t the modules yet. There are some things I just can’t do, like creating a new certificate template in my CA and then generating some specific templates for my ADFS services that are on other VMs. I also can’t set file rights on a folder, although I can set rights on a share. Notice that I grant access to my share to Domain Computers. Both the DSC modules and the custom script extension command are run as the local system account. When I try to read files over the network that means I am connecting to the share as the Computer account and I need to grant access. When I create the DC there are no other VMs in the domain, so I use the Domain Computers group to make sure all my servers will be able to access the files.

Once the DC module completes I have a working domain with a certificate authority.

The Custom Scripts

As with my DSC modules, I keep all the custom scripts for my VMs in one folder within the solution. All of these need to be uploaded to Azure storage so I can access them with the extension and copy them to my VMs. The screenshot below shows the files in the solution. I have a script for each VM that needs one, which is executed by the extension. I then have a file of shared functions and a zip with supporting modules that I need.

custom scripts

#
# DomainController.ps1
#
param (
    $vmAdminUsername,
    $vmAdminPassword,
    $fsServiceName,
    $tsServiceName,
    $resourceLocation
)

$password =  ConvertTo-SecureString $vmAdminPassword -AsPlainText -Force
$credential = New-Object System.Management.Automation.PSCredential("$env:USERDOMAIN\$vmAdminUsername", $password)
Write-Verbose -Verbose "Entering Domain Controller Script"
Write-Verbose -verbose "Script path: $PSScriptRoot"
Write-Verbose -Verbose "vmAdminUsername: $vmAdminUsername"
Write-Verbose -Verbose "vmAdminPassword: $vmAdminPassword"
Write-Verbose -Verbose "fsServiceName: $fsServiceName"
Write-Verbose -Verbose "tsServiceName: $tsServiceName"
Write-Verbose -Verbose "env:UserDomain: $env:USERDOMAIN"
Write-Verbose -Verbose "resourceLocation: $resourceLocation"
Write-Verbose -Verbose "==================================="

# Write an event to the event log to say that the script has executed.
$event = New-Object System.Diagnostics.EventLog("Application")
$event.Source = "tuServEnvironment"
$info_event = [System.Diagnostics.EventLogEntryType]::Information
$event.WriteEntry("DomainController Script Executed", $info_event, 5001)


Invoke-Command  -Credential $credential -ComputerName $env:COMPUTERNAME -ScriptBlock {

    param (
        $workingDir,
        $vmAdminPassword,
        $fsServiceName,
        $tsServiceName,
        $resourceLocation
    )
    # Working variables
    $serviceAccountOU = "Service Accounts"
    Write-Verbose -Verbose "Entering Domain Controller Script"
    Write-Verbose -verbose "workingDir: $workingDir"
    Write-Verbose -Verbose "vmAdminPassword: $vmAdminPassword"
    Write-Verbose -Verbose "fsServiceName: $fsServiceName"
    Write-Verbose -Verbose "tsServiceName: $tsServiceName"
    Write-Verbose -Verbose "env:UserDomain: $env:USERDOMAIN"
    Write-Verbose -Verbose "env:UserDNSDomain: $env:USERDNSDOMAIN"
    Write-Verbose -Verbose "env:ComputerName: $env:COMPUTERNAME"
    Write-Verbose -Verbose "resourceLocation: $resourceLocation"
    Write-Verbose -Verbose "==================================="


    # Write an event to the event log to say that the script has executed.
    $event = New-Object System.Diagnostics.EventLog("Application")
    $event.Source = "tuServEnvironment"
    $info_event = [System.Diagnostics.EventLogEntryType]::Information
    $event.WriteEntry("In DomainController scriptblock", $info_event, 5001)

    #go to our packages scripts folder
    Set-Location $workingDir
    
    $zipfile = $workingDir + "\PSPKI.zip"
    $destination = $workingDir
    [System.Reflection.Assembly]::LoadWithPartialName("System.IO.Compression.FileSystem") | Out-Null
    [System.IO.Compression.ZipFile]::ExtractToDirectory($zipfile, $destination)

    Import-Module .\tuServDeployFunctions.ps1

    #Enable CredSSP in server role for delegated credentials
    Enable-WSManCredSSP -Role Server -Force

    #Create OU for service accounts, computer group; create service accounts
    Add-ADServiceAccounts -domain $env:USERDNSDOMAIN -serviceAccountOU $serviceAccountOU -password $vmAdminPassword
    Add-ADComputerGroup -domain $env:USERDNSDOMAIN -serviceAccountOU $serviceAccountOU
    Add-ADComputerGroupMember -group "tuServ Computers" -member ($env:COMPUTERNAME + '$')

    #Create new web server cert template
    $certificateTemplate = ($env:USERDOMAIN + "_WebServer")
    Generate-NewCertificateTemplate -certificateTemplateName $certificateTemplate -certificateSourceTemplateName "WebServer"
    Set-tsCertificateTemplateAcl -certificateTemplate $certificateTemplate -computers "tuServComputers"

    # Generate SSL Certificates

    $fsCertificateSubject = $fsServiceName + "."+($resourceLocation.Replace(" ","")).ToLower()+".cloudapp.azure.com"
    Generate-SSLCertificate -certificateSubject $fsCertificateSubject -certificateTemplate $certificateTemplate
    $tsCertificateSubject = $tsServiceName + ".northeurope.cloudapp.azure.com"
    Generate-SSLCertificate -certificateSubject $tsCertificateSubject -certificateTemplate $certificateTemplate

    # Export Certificates
    $fsCertExportFileName = $fsCertificateSubject+".pfx"
    $fsCertExportFile = $workingDir+"\"+$fsCertExportFileName
    Export-SSLCertificate -certificateSubject $fsCertificateSubject -certificateExportFile $fsCertExportFile -certificatePassword $vmAdminPassword
    $tsCertExportFileName = $tsCertificateSubject+".pfx"
    $tsCertExportFile = $workingDir+"\"+$tsCertExportFileName
    Export-SSLCertificate -certificateSubject $tsCertificateSubject -certificateExportFile $tsCertExportFile -certificatePassword $vmAdminPassword

    #Set permissions on the src folder
    $acl = Get-Acl c:\src
    $acl.SetAccessRuleProtection($True, $True)
    $rule = New-Object System.Security.AccessControl.FileSystemAccessRule("Domain Computers","FullControl", "ContainerInherit, ObjectInherit", "None", "Allow")
    $acl.AddAccessRule($rule)
    $rule = New-Object System.Security.AccessControl.FileSystemAccessRule("Authenticated Users","FullControl", "ContainerInherit, ObjectInherit", "None", "Allow")
    $acl.AddAccessRule($rule)
    Set-Acl c:\src $acl


    #Copy the certs into the src folder created earlier by DSC so other VMs can fetch them
    Copy-Item -Path "$workingDir\*.pfx" c:\src

} -ArgumentList $PSScriptRoot, $vmAdminPassword, $fsServiceName, $tsServiceName, $resourceLocation

The domain controller script is shown above. There are a whole bunch of Write-Verbose commands that output debug information which I can see through the Azure Resource Explorer as the script runs.

Pretty much the first thing I do here is an invoke-command. The script is running as local system and there’s not much I can actually do as that account. My invoke-command block runs as the domain administrator so I can get stuff done. Worth noting is that the invoke-command approach makes accessing network resources tricky. It’s not an issue here but it bit me with the ADFS and WAP servers.

I unzip the PSPKI archive that has been copied onto the server and load the modules therein. The files are downloaded to a folder whose path includes the version number of the script extension, so I can't be explicit about the location. Fortunately I can use the $PSScriptRoot variable to work out that location and I pass it into the invoke-command as $workingDir. The PSPKI modules allow me to create a new certificate template on my CA so I can generate new certs with exportable private keys. I need the same certs on more than one of my servers so I need to be able to copy them around. I generate the certs and drop them into the src folder I created with DSC. I also set the rights on that src folder to grant Domain Computers and Authenticated Users access. The latter is probably overdoing it, since the former should do what I need, but I spent a good deal of time being stymied by this so I'm taking a belt and braces approach.

The key functions called by the script above are shown below. Held in my modules file, these are all focused on certificate functions and pretty much all depend on the PSPKI modules.

function Generate-NewCertificateTemplate
{
    [CmdletBinding()]
    # note can only be run on the server with PSPKI eg the ActiveDirectory domain controller
    param
    (
        $certificateTemplateName,
        $certificateSourceTemplateName        
    )

    Write-Verbose -Verbose "Generating New Certificate Template" 

        Import-Module .\PSPKI\pspki.psm1
        
        $certificateCnName = "CN="+$certificateTemplateName

        $ConfigContext = ([ADSI]"LDAP://RootDSE").ConfigurationNamingContext 
        $ADSI = [ADSI]"LDAP://CN=Certificate Templates,CN=Public Key Services,CN=Services,$ConfigContext" 

        $NewTempl = $ADSI.Create("pKICertificateTemplate", $certificateCnName) 
        $NewTempl.put("distinguishedName","$certificateCnName,CN=Certificate Templates,CN=Public Key Services,CN=Services,$ConfigContext") 

        $NewTempl.put("flags","66113")
        $NewTempl.put("displayName",$certificateTemplateName)
        $NewTempl.put("revision","4")
        $NewTempl.put("pKIDefaultKeySpec","1")
        $NewTempl.SetInfo()

        $NewTempl.put("pKIMaxIssuingDepth","0")
        $NewTempl.put("pKICriticalExtensions","2.5.29.15")
        $NewTempl.put("pKIExtendedKeyUsage","1.3.6.1.5.5.7.3.1")
        $NewTempl.put("pKIDefaultCSPs","2,Microsoft DH SChannel Cryptographic Provider, 1,Microsoft RSA SChannel Cryptographic Provider")
        $NewTempl.put("msPKI-RA-Signature","0")
        $NewTempl.put("msPKI-Enrollment-Flag","0")
        $NewTempl.put("msPKI-Private-Key-Flag","16842768")
        $NewTempl.put("msPKI-Certificate-Name-Flag","1")
        $NewTempl.put("msPKI-Minimal-Key-Size","2048")
        $NewTempl.put("msPKI-Template-Schema-Version","2")
        $NewTempl.put("msPKI-Template-Minor-Revision","2")
        $NewTempl.put("msPKI-Cert-Template-OID","1.3.6.1.4.1.311.21.8.287972.12774745.2574475.3035268.16494477.77.11347877.1740361")
        $NewTempl.put("msPKI-Certificate-Application-Policy","1.3.6.1.5.5.7.3.1")
        $NewTempl.SetInfo()

        $WATempl = $ADSI.psbase.children | where {$_.Name -eq $certificateSourceTemplateName}
        $NewTempl.pKIKeyUsage = $WATempl.pKIKeyUsage
        $NewTempl.pKIExpirationPeriod = $WATempl.pKIExpirationPeriod
        $NewTempl.pKIOverlapPeriod = $WATempl.pKIOverlapPeriod
        $NewTempl.SetInfo()
        
        $certTemplate = Get-CertificateTemplate -Name $certificateTemplateName
        Get-CertificationAuthority | Get-CATemplate | Add-CATemplate -Template $certTemplate | Set-CATemplate
}

function Set-tsCertificateTemplateAcl
{
    [CmdletBinding()]
    param
    (
    $certificateTemplate,
    $computers
    )

    Write-Verbose -Verbose "Setting ACL for cert $certificateTemplate to allow $computers"
    Write-Verbose -Verbose "---"

        Import-Module .\PSPKI\pspki.psm1
        
        Write-Verbose -Verbose "Adding group $computers to acl for cert $certificateTemplate"
        Get-CertificateTemplate -Name $certificateTemplate | Get-CertificateTemplateAcl | Add-CertificateTemplateAcl -User $computers -AccessType Allow -AccessMask Read, Enroll | Set-CertificateTemplateAcl

}

function Generate-SSLCertificate
{
    [CmdletBinding()]
    param
    (
    $certificateSubject,
    $certificateTemplate
    )

    Write-Verbose -Verbose "Creating SSL cert using $certificateTemplate for $certificateSubject"
    Write-Verbose -Verbose "---"
    
    Import-Module .\PSPKI\pspki.psm1

    Write-Verbose -Verbose "Generating Certificate (Single)"
        $certificateSubjectCN = "CN=" + $certificateSubject
        # Version #1
        $powershellCommand = "& {get-certificate -Template " + $certificateTemplate + " -CertStoreLocation Cert:\LocalMachine\My -DnsName " + $certificateSubject + " -SubjectName " + $certificateSubjectCN + " -Url ldap:}"
        Write-Verbose -Verbose $powershellCommand
        $bytes = [System.Text.Encoding]::Unicode.GetBytes($powershellCommand)
        $encodedCommand = [Convert]::ToBase64String($bytes)

        Start-Process -wait "powershell.exe" -ArgumentList "-encodedcommand $encodedCommand"
}

function Export-SSLCertificate
{
    [CmdletBinding()]
    param
    (
    $certificateSubject,
    $certificateExportFile,
    $certificatePassword
    )

    Write-Verbose -Verbose "Exporting cert $certificateSubject to $certificateExportFile with password $certificatePassword"
    Write-Verbose -Verbose "---"

    Import-Module .\PSPKI\pspki.psm1

    Write-Verbose -Verbose "Exporting Certificate (Single)"
    
        $password = ConvertTo-SecureString $certificatePassword -AsPlainText -Force
        Get-ChildItem Cert:\LocalMachine\My | where {$_.subject -match $certificateSubject -and $_.Subject -ne $_.Issuer} | Export-PfxCertificate -FilePath $certificateExportFile -Password $password

}

Making sure it’s reusable

One of the things I’m trying to do here is create a collection of reusable configurations. I can take my DC virtual machine config and make it the core of any number of deployments in future. Key stuff like domain names and machine names are always parameterised all the way through template, DSC and scripts. When Azure Stack arrives I should be able to use the same configuration on-prem and in Azure itself and we can use the same building blocks for any number of customer projects, even though it was originally built for an internal project.

There’s stuff I need to do here: I need to pull the vNet template directly into the DC template – there’s no need for it to be separate; I could do with trimming back some of the access rights I grant on the folders and shares that are unnecessary; you’ll also notice that I am configuring CredSSP which was part of my original attempt to sort out file access from within the invoke-command blocks and failed miserably.

A quick round of credits

Whilst most of this work has been my own, bashing my head against the desk for a while, it is built upon code created by other people who need to be credited:

  • The Azure Quick Start templates were invaluable in having something to look at in the early days of resource templates before the tooling was here.
  • PSPKI is a fantastic set of PowerShell modules for dealing with certs.
  • The individual VM scripts are derived from work that was done in Black Marble by Andrew Davidson and myself to build the exact same environment in the older Azure manner without resource templates.

Speaking at CloudBurst in September

I’ve never been to Sweden, so I’m really looking forward to September, when I’ll be speaking at CloudBurst. Organised by the Swedish Azure User Group (SWAG – love it!), this conference is also streamed and recorded and the sessions will be available on Channel 9. The list of speakers and topics promise some high-quality and interesting sessions and I urge you to attend if you can, and tune in to the live stream if you can’t.

I’ll be spending an hour telling you about Azure Resource Templates: What they are, why you should use them, and I’ll show the work I’ve been doing as an example of a complex deployment.

Complex Azure Template Odyssey Part One: The Environment

Part Two | Part Three | Part Four

Over the past month or two I've been creating an Azure Resource Template to deploy an environment which, previously, we'd deployed with old-style PowerShell scripts. In theory, the Resource Template approach would make the deployment quicker, easier to trigger from tooling like Release Manager, and make the code easier to read.

The aim is to deploy a number of servers that will host an application we are developing. This will allow us to easily provision test or demo environments into Azure making as much use of automation as possible. The application itself has a set of system requirements that means I have a good number of tasks to work through:

  1. We need our servers to be domain joined so we can manage security, service accounts etc.
  2. The application uses ADFS for authentication. You don’t just expose ADFS to the internet, so that means we need a Web Application Proxy (WAP) server too.
  3. ADFS, WAP and our application need to use secure connections. We want to be able to deploy lots of these, so things like hostnames and FQDNs for services need to be flexible. That means using our own Certificate Services which we need to deploy.
  4. We need a SQL server for our application’s data. We’ll need some additional drives on this to store data and we need to make sure our service accounts have appropriate access.
  5. Our application is hosted in IIS, so we need a web server as well.
  6. Only servers that host internet-accessible services will get public IP addresses.

We already had scripts to do this the old way. I planned to reuse some of that code, and follow the decisions we made around the environment:

  • All VMs would use standard naming, with an environment-specific prefix. The same prefix would be used for other resources. For example, a prefix of env1 means the storage account is env1storage, the network is env1vnet, the Domain Controller VM is env1dc, etc. The AD domain we created would use the prefix in its name (so env1.local).
  • All public IPs would use the Azure-assigned DNS name for our services – no corporate DNS. The prefix would be used in conjunction with the role when specifying the name for the cloud service.
  • DSC would be used wherever possible. After that, custom PowerShell scripts would be used. The aim was to configure each machine individually and not use remote PowerShell between servers unless absolutely necessary.

We’d also hit a few problems when creating the old approach, so I hoped to reuse the same solutions:

  • There is very little PowerShell to manage certificate services and certificates. There is an incredibly useful set of modules known as PSPKI which we utilise to create certificate templates and cert requests. This would need to be used in conjunction with our own custom scripts, so it had to be deployed to the VMs somehow.

Azure Resources In The Deployment

Things have actually moved on in terms of the servers I am now deploying (only to get more complex!) but it’s easier to detail the environment as originally planned and successfully deployed.

  • Storage Account. Needed for the hard drives of the multiple virtual machines.
  • Virtual Network. A single subnet for all VMs.
  • Domain Controller
    • NetworkInterface. Will attach to the virtual network.
    • Virtual Machine. Requires an additional virtual hard disk to store domain databases.
      • Diagnostics Extension. Will enable the diagnostics on the Azure VM so we can see things like CPU and disk access through the Azure Portal.
      • DSC Extension. Will do as much configuration as we can, declaratively. It will add the ADDS and ADCS roles and create the domain.
      • CustomScriptExtension. Will deploy PowerShell scripts to perform additional configuration. It will create certificate templates and generate certs for services.
  • ADFS Server
    • NetworkInterface. Will attach to the virtual network.
    • Virtual Machine.
      • Diagnostics Extension. Will enable the diagnostics on the Azure VM so we can see things like CPU and disk access through the Azure Portal.
      • DSC Extension. Add the ADFS role and domain-join the VM.
      • CustomScriptExtension. Will configure ADFS – copying the cert from the DC and creating the federation service.
  • WAP Server
    • NetworkInterface. Will attach to the virtual network.
    • Public IP Address. Will publish the WAP service to the internet.
    • Load Balancer. Will connect the server to the public IP address, allowing us to add other servers to the service later if needed.
    • Virtual Machine.
      • Diagnostics Extension. Will enable the diagnostics on the Azure VM so we can see things like CPU and disk access through the Azure Portal.
      • DSC Extension. Will do as much configuration as we can, declaratively.
      • CustomScriptExtension. Copy the cert from the DC and configure WAP to publish the federation service hosted on the ADFS server.
  • SQL Server
    • NetworkInterface. Will attach to the virtual network.
    • Virtual Machine. Requires two additional virtual hard disks to store DBs and logs.
      • Diagnostics Extension. Will enable the diagnostics on the Azure VM so we can see things like CPU and disk access through the Azure Portal.
      • DSC Extension. It turns out the DSC for SQL is pretty good. We can do lots of configuration with it, to the extent of not needing the custom script extension.
  • Web Server
    • NetworkInterface. Will attach to the virtual network.
    • Virtual Machine.
      • Diagnostics Extension. Will enable the diagnostics on the Azure VM so we can see things like CPU and disk access through the Azure Portal.
      • DSC Extension. Will do as much configuration as we can, declaratively.
      • CustomScriptExtension. Will deploy PowerShell scripts to perform additional configuration.
  • WAP Server 2
    • NetworkInterface. Will attach to the virtual network.
    • Public IP Address. Will publish the web server-hosted services to the internet.
    • Load Balancer. Will connect the server to the public IP address, allowing us to add other servers to the service later if needed.
    • Virtual Machine.
      • Diagnostics Extension. Will enable the diagnostics on the Azure VM so we can see things like CPU and disk access through the Azure Portal.
      • DSC Extension. Will do as much configuration as we can, declaratively.
      • CustomScriptExtension. Will deploy PowerShell scripts to perform additional configuration.

Environment-specific Settings

The nice thing about having a cookie-cutter environment is that there are very few things that will vary between deployments and that means very few parameters in our template. We will need to set the prefix for all our resource names, the location for our resources, and because we want to be flexible we will set the admin username and password.

An Immediate Issue: Network Configuration

Right from the gate we have a problem to solve. When you create a virtual network in Azure, it provides IP addresses to the VMs attached to it. As part of that, the DNS server address is given to the VMs. By default that is an Azure DNS service that allows VMs to resolve external domain names. Our environment will need the servers to be told the IP address of the domain controller as it will provide local DNS services essential to the working of the Active Directory domain. In our old scripts we simply reconfigured the network to specify the DC's address after we configured the DC.

In the Resource Template world we can reconfigure the vNet by applying new settings to the resource from our template. However, once we have created the vNet in our template we can’t have another resource with the same name in the same template. The solution is to create another template with our new settings and to call that from our main template as a nested deployment. We can pass the IP address of the DC into that template as a parameter and we can make the nested deployment depend on the DC being deployed, which means it will happen after the DC has been promoted to be the domain controller.

Deployment Order

One of the nicest things about Resource Templates is that when you trigger a deployment, the Azure Resource Manager parses your template and tries to deploy the resources as efficiently as possible. If you need things to deploy in a sequence you need to specify dependencies in your resources, otherwise they will all deploy in parallel.

In this environment, I need to deploy the storage account and virtual network before any of the VMs. They don’t depend on each other however, so can be pushed out first, in parallel.

The DC gets deployed next. I need to fully configure this before any other VMs are created because they need to join our domain, and our network has to be reconfigured to hand out the IP address of the DC.

Once the DC is done, the network gets reconfigured with a nested deployment.

In theory, we should be able to deploy all our other VMs in parallel, providing we can apply our configuration in sequence which should be possible if we set the dependencies correctly for our extension resources (DSC and customScriptExtension).

Configuration for VMs can be mostly in parallel except: The WAP server configuration depends on the ADFS server being fully configured.

Attempt One: Single Template

I spent a long time creating, testing and attempting to debug this as a single template (except for our nested deployment to reconfigure the vNet). Let me spare you the pain by listing the problems:

  • The template is huge: Many hundreds of lines. Apart from being hard to work with, that really slows down the Visual Studio tooling.
  • Right now a single template with lots and lots of resources seems unreliable. I could use an identical template for multiple deployments and I would get random failures deploying different VMs or get a successful deploy with no rhyme or reason to it.
  • Creating VM extension resources with complex dependencies seems to cause deployment failures. At first I used dependencies in the extensions for the VMs outside of the DC to define my deployment order. I realised after some pain that this was much more prone to failure than if I treated the whole VM as a block. I also discovered that placing the markup for the extensions within the resources block of the VM itself improved reliability.
  • A single deployment takes over an hour. That makes debugging individual parts difficult and time-consuming.

Attempt Two: Multiple Nested Deployments

I now have a rock-solid, reliable deployment. I've achieved this by moving each VM and its linked resources (NIC, Load Balancer, Public IP) into separate templates. I have a master template that calls the 'children' with dependencies limited to one or more of the other nested deployments. The storage account and initial vNet deploy are part of the master template.

The upside of this has been manifold: Each template is shorter and simpler, with far fewer variables, now that each only deploys a single VM. I can also choose to deploy a single 'child' if I want to, within an already deployed environment, as the sketch below shows. This allows me to test and debug more quickly and easily.
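
Re-deploying a single child looks something like this – a sketch using the deployment cmdlet from the current Azure PowerShell module, with the template file name assumed rather than lifted from my solution:

$adminPassword = ConvertTo-SecureString "MyPassword" -AsPlainText -Force

# Deploy just one child template into an existing resource group; only the
# resources declared in this template are touched.
New-AzureResourceGroupDeployment -ResourceGroupName "tuservtesting1" `
                                 -TemplateFile "$pwd\WapServer.json" `
                                 -envPrefix "myenv" `
                                 -adminUsername "env-admin" `
                                 -adminPassword $adminPassword `
                                 -Verbose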

Problem One: CustomScriptExtension

When I started this journey there was little documentation around and Resource Manager was in preview. I really struggled to get the CustomScriptExtension for VMs working. All I had to work with were examples using PowerShell to add VM extensions and they were just plain wrong for the Resource Template approach. Leaning on the Linux equivalent and a lot of testing and poking got things sorted and I've written up how the extension currently works.

Problem Two: IaaSDiagnostics

Right now, this one isn’t fixed. I am correctly deploying the IaaSDiagnostics extension into the VMs, and it appears to be correctly configured and working properly. However, the VM blades in the Azure Portal are adamant that diagnostics are not configured. This looks like a bug in the Portal and I’m hoping it will be resolved by the team soon.

Configuring the Virtual Machines

That’s about it for talking about the environment as a whole. I’m going to write up some of the individual servers separately as there were multiple hurdles to jump in configuring them. Stay tuned.

An Introduction To Azure Resource Templates

I have spent a good deal of time over the last month or two building an Azure Resource Template to deploy a relatively complicated IaaS environment. In doing so I've hit a variety of problems along the way and I thought that a number of blog posts were in order to share what I've learned. I will write detailed posts on certain specific servers within the environment shortly. This post will describe Azure Resource Template basics, problems I hit and some decisions I made to overcome issues. Further posts will detail my environment and specific solutions to creating my configuration.

Tooling

  • I started this project using Visual Studio 2013 and the Azure 2.5 .Net SDK. I am now using Visual Studio 2015 and the 2.7 SDK. The SDK is the key – the tooling has improved dramatically, although there are still things it doesn’t do that I would like it to (like proper error checking, for a start). You can find the SDKs on the Azure Downloads site.
  • You will also need the latest Azure PowerShell module. It’s important to keep the SDK and PowerShell current. There is a big change coming in the PowerShell soon when the current situation of switching between service management commands and resource management commands will be removed.
  • Debugging templates is extremely hard. It's impossible without using the Azure Resource Explorer (https://resources.azure.com). This is a fantastic tool and you absolutely need to use it.

Documentation

  • The Azure Resource Template documentation is growing steadily and should be your first point of reference to see how things are done.
  • The Azure Quickstart Templates are a great source of inspiration and code if you are starting out. You need to be careful though – some of the samples I started with were clunky and a couple plain didn’t work. More importantly, they don’t necessarily reflect changes in the API. Adding resources should always be done through the tooling (more on that in a bit). If you just want to leap straight to the source code, it’s on GitHub.

Getting Started With Your Deployment Project

Creating a new deployment project is pretty straightforward. In the New Project dialog in Visual Studio you will find the Azure Resource Group project type under Cloud within Visual C#.

new resource project

When you create a new Azure Resource Group project, the tooling helpfully connects to Azure to offer you a bunch of starting templates. If you want something that’s on the list, simply choose it and your template will be created pre-populated. If you want to start clean, as I normally do, choose Blank Template from the bottom of the list.

new project template

The new project contains a small number of files. My advice is to ignore the Deploy-AzureResourceGroup.ps1 script. It contains some useful snippets, but only works if you run it in a very specific way. The ones you care about are the DeploymentTemplate.json and DeploymentTemplate.param.dev.json files.

solution explorer

The DeploymentTemplate.json is (oddly) your template file where you detail your resources and stuff. The .param.dev.json file is a companion parameter file for the template, for when you want to run the deployment (more on that later).

If you open the deployment template you will see the new JSON Outline window appear.

json outline

I’ll come onto the contents of the template in a moment. For now let’s focus on the JSON Outline. It’s your friend for moving around your template, adding and remove resources. To add a new resource, click the little package icon with a plus on it, top left of the window.

new resource dialog

When you click the icon, the tooling talks to Azure to get the latest versions of resources. The tooling here is intelligent. In the screenshot above you can see I’m adding a Virtual Machine. As a resource, this depends on things like a storage account (to hold the hard drive blobs) and a network. If you already have these defined in your template, they will be listed in the dropdowns. If not, or if you don’t want to use them, you can add new resources and the tooling will step you through answering the questions necessary to specify your resources.

The image below shows the JSON outline after I’ve added a new VM, plus the required storage and network resources. You can see that the tooling has added parameters and variables into the template as well.

json outline with stuff

You can build your template using only the tooling if you like. However, if you want to do something complex or clever you’re going to be hacking this around by hand.

A Few Template Fundamentals

There are a few key points that you need to know about templates and the resources they contain:

  • There is a one to one relationship between a template and a deployment. If you look in the Azure Portal at a Resource Group you will see Last Deployment listed in the Essentials panel at the top of the blade.
    resource blade essentials
    Clicking the link will show the deployments themselves. The history of deployments is kept for a resource group and each deployment can be inspected to see what parameters were specified and what was done.
    deployment history
    deployment details
  • A resource in a template can be specified as being dependent on another resource in the same template. I have tried external dependencies – the templates fail. This is important because you have no control of the execution order of a template other than through dependencies. If you don’t specify any, Azure Resource Manager will try to deploy all the resources in parallel. This is actually a good thing – in the old world of Azure PowerShell it was hard to push out multiple resources in parallel. When you upload a template for deployment, Azure Resource Manager will parse it and work out the deployment order based on the dependencies you prescribe. This means that most deployments will be quicker in the new model.
  • Resources in a template must have unique names. You can only have one resource of a given type with a given name. This is important and has implications for how you achieve certain things.
  • You can nest deployments. What does that mean? You can call a template from another template, passing in parameters. This is really useful. It’s important to remember that template-deployment relationship. If you do nest these things, you’ll see multiple deployments in your Resource Group blade – one per template.
  • If a resource already exists then you can reconfigure it through your template. I've not tried this on anything other than a virtual network, but templates define the desired configuration and Azure Resource Manager will try to set it, even if that means changing what's there already. That's actually really useful. It means that we can use a nested deployment in our template to reconfigure something part way through our overall deployment.
  • Everything is case-sensitive. This one just keeps on biting me, because I’m a crap typist and the tooling isn’t great at telling me I’ve mistyped something. There’s no IntelliSense in templates yet.

Deploying Your Template to Azure

Right now, deploying your template means using PowerShell to execute the deployment. The New-AzureResourceGroup cmdlet will create a new Resource Group in your subscription. You tell it the name and location of the resource group, the deployment template you want to use, and the values for the template parameters. That last bit can be done in three different ways – take your pick:

  • Using the -TemplateParameterFile switch allows you to specify a JSON-format parameters file that provides the required values.
  • PowerShell allows you to specify the parameters as options on the command. For example, if I have a parameter of AdminUsername in my template I can add the -AdminUsername switch to the command and set my value.
  • You can create an array of the parameters and their values and pass it into the command. Go read up on PowerShell splatting to find out more about this.

Being old, my preference is to use the second option – it means I don’t need to keep updating a parameters file and I can read the command I’m executing more easily. PowerShell ninjas would doubtless prefer choice number three!
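
For the curious, option three looks something like this – a minimal splatting sketch using the same values as the script below:

# Splatting: gather the parameters into a hashtable and pass it with @.
$deployParams = @{
    Name          = "tuservtesting1"
    Location      = "North Europe"
    TemplateFile  = "$pwd\DeploymentTemplate.json"
    envPrefix     = "myenv"
    adminUsername = "env-admin"
    adminPassword = (ConvertTo-SecureString "MyPassword" -AsPlainText -Force)
}

New-AzureResourceGroup @deployParams -Force -Verbose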

The code below shows how I deploy my resource template:

$ResourceGroupName = "tuservtesting1"
$ResourceGroupLocation = "North Europe"
$TemplateFile = "$pwd\DeploymentTemplate.json"
$envPrefix = "myenv"
$adminUsername = "env-admin"
$adminPassword = "MyPassword"
$adminPassword = ConvertTo-SecureString $adminPassword -AsPlainText -Force
$resourceLocation = "North Europe"
$storageAccountType = "Standard_LRS"
$artifactsLocation = "http://mystorage.blob.core.windows.net/templates"
# SaS token granting the deployment read access to the templates container
$ArtifactsLocationSasToken = "<your SaS token>"


# create a new resource group and deploy our template to it, with our params
New-AzureResourceGroup -Name $ResourceGroupName `
                       -Location $ResourceGroupLocation `
                       -TemplateFile $TemplateFile `
                       -storageAccountType $storageAccountType `
                       -resourceLocation $resourceLocation `
                       -adminUsername $adminUsername `
                       -adminPassword $adminPassword `
                       -envPrefix $envPrefix `
                       -_artifactsLocation $ArtifactsLocation `
                       -_artifactsLocationSasToken $ArtifactsLocationSasToken `
                       -Force -Verbose

I like this approach because I can create scripts that can be used by our TFS Build and Release Management systems that can automatically deploy my environments.

Stuff I’ve Found Out The Hard Way

The environment I’m deploying is complex. If has multiple virtual machines on a shared network. Some of those machines have public IP addresses; most don’t. I need a domain controller, ADFs server and Web Application Proxy (WAP) server and each of those depends on the other, and I need to get files between them. My original template was many hundreds of lines, nearly a hundred variables and half a dozen parameters. It tool over an hour to deploy (if it did) and testing was a nightmare. As a result, I’ve refined my approach to improve readability, testability and deployability:

  • Virtual machine extension resources seem to deploy more reliably if they are within the Virtual Machine markup. No, I don’t know why. You can specify VM extensions at the same level in the template as your Virtual Machines themselves. However, you can choose to declare them in the resources section of the VM itself. My experience is that the latter reliably deploys the VM and extensions. Before I did this I would get random deployment failures of the extensions.
  • Moving the VMs into nested deployments helps readability, testability and reliability. Again, I don't know why, but my experience is that very large templates suffer random deployment failures. Pulling each VM and its linked resources out into separate templates has completely eliminated random failures. I now have a 'master template' which creates the core resources (storage account and virtual network in my case) and then nested templates for each VM that contain the VM, the NIC, the VM extensions and, if exposed to the outside world, load balancer and public IP.
    There are pros and cons to this approach. Reliability is a huge pro, with readability a close second – there are far fewer resources and variables to parse. I can also work on a single VM at once, removing the VM from the resource group and re-running the deployment for just that machine – that's saved me so much time! On the con side, I can't make resources in one nested deployment depend on those in another. That means I end up deploying my VMs much more in sequence than I necessarily would have otherwise, because I can only have one nested deployment depend on another. I can't get clever and deploy the VMs in parallel but have individual extensions depend on each other to ensure my configuration works. The other con is that I have many more files to upload to Azure storage so the deployment can access them – the PowerShell won't bundle up all the files that are part of a deployment and push them up as a package.
  • Even if you find something useful in a quickstart template, add the resources cleanly through the tooling and then modify. The API moves forwards and a good chunk of the code in the templates is out of date.
  • The JSON tooling doesn’t do much error checking Copy and paste is your friend to make sure things like variable names match.
  • The only way to test this stuff is to deploy it. When the template is uploaded to Azure, Resource Manager parses it for validity before executing the deployment. That’s the only reliable way to check the validity of the template.
  • The only way to see what’s happening with any detail is to use Azure Resource Explorer. With a VM, for example, you can see an InstanceView that shows the current output from the deployment and extensions. I’ll talk more about this when I start documenting each of the VMs in my environment and how I got them working.

Using the customScriptExtension in Azure Resource Templates

Documentation for using the customScriptExtension for Virtual Machines in Azure through Resource Templates is pretty much non-existent at time of writing, and the articles on using it through PowerShell are just plain wrong when it comes to templates. This post is accurate at time of writing and will show you how to deploy PowerShell scripts and resources to an Azure Virtual Machine through a Resource Template.

The code snippet below shows a customScriptExtension pulled from one of my templates.

        {
          "type": "Microsoft.Compute/virtualMachines/extensions",
          "name": "[concat(variables('vmADFSName'),'/adfsScript')]",
          "apiVersion": "2015-05-01-preview",
          "location": "[parameters('resourceLocation')]",
          "dependsOn": [
            "[concat('Microsoft.Compute/virtualMachines/', variables('vmADFSName'))]",
            "[concat('Microsoft.Compute/virtualMachines/', variables('vmADFSName'),'/extensions/ADFSserver')]"
          ],
          "properties": {
            "publisher": "Microsoft.Compute",
            "type": "CustomScriptExtension",
            "typeHandlerVersion": "1.4",
            "settings": {
              "fileUris": [
                "[concat(parameters('_artifactsLocation'),'/AdfsServer.ps1', parameters('_artifactsLocationSasToken'))]",
                "[concat(parameters('_artifactsLocation'),'/PSPKI.zip', parameters('_artifactsLocationSasToken'))]",
                "[concat(parameters('_artifactsLocation'),'/tuServDeployFunctions.ps1', parameters('_artifactsLocationSasToken'))]"
              ],
              "commandToExecute": "[concat('powershell.exe -file AdfsServer.ps1',' -vmAdminUsername ',parameters('adminUsername'),' -vmAdminPassword ',parameters('adminPassword'))]"
            }
          }
        }

The most important part is the commandToExecute. The documentation tells you to simply list the PowerShell script (something.ps1) you want to run. This won’t work at all! All the extension does is shell whatever you put in commandToExecute. The default association for .ps1 is notepad. All that will do is run up an instance of our favourite text editor as the system account, so you can’t see it.

The solution is to build a command line for powershell.exe, as you can see in my example. I am launching powershell.exe and telling it to load the AdfsServer.ps1 script file. I then specify parameters for the script within the command line. There is no option to pass parameters in through the extension itself.

The fileUris settings list the resources I want to push into the VM. This must include the script you want to run, along with any other supporting files/modules etc. The markup in my example loads the files from an Azure storage account. I specify the base url in the _artifactsLocation parameter and pass in a SaS token for the storage in the parameter _artifactsLocationSasToken. You could just put a url to a world-readable location in there and drop the access token param.
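
If you're wondering where that SaS token comes from, the Azure storage cmdlets will mint one for you. A sketch, assuming a container called templates and a storage key already sitting in $storageAccountKey:

# Build a context for the storage account, then generate a short-lived,
# read-only SaS token for the container holding the deployment artifacts.
$storageContext = New-AzureStorageContext -StorageAccountName "mystorage" `
                                          -StorageAccountKey $storageAccountKey

$artifactsLocationSasToken = New-AzureStorageContainerSASToken -Name "templates" `
                                                               -Permission r `
                                                               -ExpiryTime (Get-Date).AddHours(4) `
                                                               -Context $storageContext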

The dependsOn setting allows us to tell the extension to wait until other items in the resource template have been deployed. In this case I push two other extensions into the VM first.

Be aware that the process executed by the extension runs as the system account. I found very quickly that if I wanted to do anything useful with my PowerShell, I needed to use invoke-command within my script. To do that I need the admin credentials, which you can see I pass into the command line as parameters.
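
In practice that means the skeleton of every script I push through the extension looks something like this stripped-down sketch of the pattern:

param (
    $vmAdminUsername,
    $vmAdminPassword
)

# The extension runs this script as local system, so build the admin
# credential and hop into a local session that has real rights.
$password   = ConvertTo-SecureString $vmAdminPassword -AsPlainText -Force
$credential = New-Object System.Management.Automation.PSCredential("$env:USERDOMAIN\$vmAdminUsername", $password)

Invoke-Command -Credential $credential -ComputerName $env:COMPUTERNAME -ScriptBlock {
    # Do the real configuration work here as the admin account.
    Write-Verbose -Verbose "Running as $env:USERNAME"
}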

Want to know what’s going on? Come to the Black Marble Tech Update

It’s January, which can only mean one thing. It’s time for the annual Black Marble Tech Update. If you’re an IT or development manager, come along to hear the stuff you need to know about Microsoft’s releases and updates last year and what we know so far about what is coming this year.

Tech Updates are hard work to prepare for, but they're quite exhilarating to present. In the morning, myself, Andy and Andrew will run through the key moves and changes in the Microsoft ecosystem to help IT managers with their strategic planning: what's coming out of support, what's got a new release due; in short, what you need to pay attention to for your organisation's IT.

In the afternoon Robert, Richard and Steve will be covering what’s important to know in the Microsoft developer landscape and this year they are joined by Martin Beeby from the Microsoft DX team.

You can find the Tech Update on our website as two events (IT Managers in the morning, Developers in the afternoon). Feel free to register for the one that most interests you. Most people stay for the whole day and leave with their heads buzzing.