Successful Software Delivery with DevOps

With DevOps best practices and Microsoft’s DevOps tooling, Black Marble can deliver agile planning, source code control, package management, build, testing and release automation to continuously integrate, test, deliver and monitor your application.

It is crucial to not only have the right people in place for your cloud adoption journey, but also to use the right processes and the right tools. A typical DevOps approach consists of cross-functional teams provisioning their own infrastructure, with high degrees of automation using templates, codified rules for security controls and cloud-native architecture.

This is where the core aspects of continuous value delivery meet the demands currently driving companies: an integrated team approach combining enterprise agile and cloud computing.


Delivering an Enterprise Cloud Operating Model

There have been some major paradigm shifts in the history of computing, with some of the most notable marked not only by changes in technology but by changes in how that technology is staffed. When the computing standard shifted from mainframe to client/server, the staff model moved from computer operator to system administrator.

The same is true with a move to the cloud.

The cloud fundamentally changes how businesses procure and use technology resources. Traditionally, a business owned and was responsible for every aspect of its technology, from infrastructure to software; the cloud allows businesses to provision and consume resources only as needed. Moving to the cloud can bring increased business agility and significant cost benefits.

However, the journey to the cloud needs to be managed carefully at each stage; not just for delivery but for expectations and ROI. Even more significantly, the cloud opens up access to a range of on-demand cloud services that were unavailable just 10 years ago. These include hyper-scaling, AI services and raw computing power, with short-term consumption providing significant benefits.

All these services combined provide business benefits that only the cloud can offer.

Transforming your business into a cloud business is more than simply moving your systems and infrastructure into the cloud – your organisation needs a Cloud Operating Model (COM) to adopt a cloud-first mentality. It is important to guide your people away from traditional IT thinking, to ensure they realise business benefits and harness the true potential of the cloud, where adoption drives innovation. This white paper will cover how this can be achieved with the assistance of Black Marble.

For more information on Delivering an Enterprise Cloud Operating Model, get in touch for a copy of the white paper I put together with our CCO, Rik Hepworth.

Delivering an Enterprise Cloud Operating Model White Paper, 2nd Edition.

Postmortem published by the Microsoft VSTS Team on last week’s Azure outage

The Azure DevOps (VSTS) team have published the promised postmortem on the outage of the 4th of September.

It gives good detail on what actually happened to the South Central Azure Datacenter and how it affected VSTS (as it was then called).

More interestingly, it provides a discussion of the mitigations they plan to put in place to stop a single datacentre failure having such a serious effect in the future.

Great openness as always from the team.

Versioning your ARM templates within a VSTS CI/CD pipeline

Updated 3 Feb 2018: also see Versioning your ARM templates within a VSTS CI/CD pipeline with Semantic Versioning.

Azure Resource Manager (ARM) templates allow your DevOps infrastructure deployments to be treated as code, so infrastructure definitions can be stored in source control.

As with any code, it is really useful to know which version you have out in production. A CI/CD process and its usage logs can help here, but having a version string stored somewhere accessible on the production systems is always useful.

In an ARM template this can be achieved using the 'contentVersion' field in the template (see the documentation for more detail on this field). The question becomes: how best to update this field with a version number?

The solution I used was a VSTS JSON Versioning Task I had already created, which updates the template's .JSON definition file. I popped this task at the start of my ARM template CI build process so it sets the value prior to the storage of the template as a build artifact used within the CD pipeline.
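As a minimal sketch (the version number shown is purely illustrative; the actual value would be derived from the build's version variables the task is configured with), the relevant field sits at the top of every ARM template, so after the versioning task runs the stored artifact would look something like:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.17.3",
  "parameters": { },
  "variables": { },
  "resources": [ ]
}
```

Because contentVersion is part of the template itself, the value travels with the artifact through the CD pipeline and can be inspected on whatever deployed the template, answering the "which version is in production?" question directly.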


Complex Azure Odyssey Part Four: WAP Server

Part One of this series covered the project itself and the overall template structure. Part Two went through how I deploy the Domain Controller in depth. Part Three talks about deploying my ADFS server and in this final part I will show you how to configure the WAP server that faces the outside world.

The Template

The WAP server is the only one in my environment that faces the internet. Because of this the deployment is more complex. I’ve also added further complexity because I want to be able to have more than one WAP server in future, so there’s a load balancer deployed too. You can see the resource outline in the screenshot below:

wap template json

The internet-facing stuff means we need more things in our template. First up is our PublicIPAddress:

{
  "name": "[variables('vmWAPpublicipName')]",
  "type": "Microsoft.Network/publicIPAddresses",
  "location": "[parameters('resourceLocation')]",
  "apiVersion": "2015-05-01-preview",
  "dependsOn": [ ],
  "tags": {
    "displayName": "vmWAPpublicip"
  },
  "properties": {
    "publicIPAllocationMethod": "Dynamic",
    "dnsSettings": {
      "domainNameLabel": "[variables('vmWAPpublicipDnsName')]"
    }
  }
},

This is pretty straightforward stuff. The nature of my environment means that I am perfectly happy with a dynamic IP that changes if I stop and then start the environment. Access will be via the hostname assigned to that IP and I use that hostname in my ADFS service configuration and certificates. Azure builds the hostname based on a pattern and I can use that pattern in my templates, which is how I’ve created the certs when I deploy the DC and configure the ADFS service all before I’ve deployed the WAP server.
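To make that pattern concrete: an ARM public IP with a domainNameLabel is assigned an FQDN of the form <label>.<region>.cloudapp.azure.com, so the hostname can be computed up front in the variables block rather than read back after deployment. The snippet below is a hypothetical sketch; the envPrefix parameter and the vmWAPpublicipFqdn variable name are assumptions for illustration, not lifted from my template:

```json
{
  "vmWAPpublicipDnsName": "[concat(parameters('envPrefix'), 'wap')]",
  "vmWAPpublicipFqdn": "[concat(parameters('envPrefix'), 'wap', '.', toLower(replace(parameters('resourceLocation'), ' ', '')), '.cloudapp.azure.com')]"
}
```

Because the FQDN is deterministic, the DC can mint certificates for it, and the ADFS service can be configured against it, before the WAP server even exists.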

That public IP address is then bound to our load balancer which provides the internet-endpoint for our services:

{
  "apiVersion": "2015-05-01-preview",
  "name": "[variables('vmWAPlbName')]",
  "type": "Microsoft.Network/loadBalancers",
  "location": "[parameters('resourceLocation')]",
  "dependsOn": [
    "[resourceId('Microsoft.Network/publicIPAddresses',variables('vmWAPpublicipName'))]"
  ],
  "properties": {
    "frontendIPConfigurations": [
      {
        "name": "[variables('LBFE')]",
        "properties": {
          "publicIPAddress": {
            "id": "[resourceId('Microsoft.Network/publicIPAddresses',variables('vmWAPpublicipName'))]"
          }
        }
      }
    ],
    "backendAddressPools": [
      {
        "name": "[variables('LBBE')]"
      }
    ],
    "inboundNatRules": [
      {
        "name": "[variables('RDPNAT')]",
        "properties": {
          "frontendIPConfiguration": {
            "id": "[variables('vmWAPLbfeConfigID')]"
          },
          "protocol": "tcp",
          "frontendPort": "[variables('rdpPort')]",
          "backendPort": 3389,
          "enableFloatingIP": false
        }
      },
      {
        "name": "[variables('httpsNAT')]",
        "properties": {
          "frontendIPConfiguration": {
            "id": "[variables('vmWAPLbfeConfigID')]"
          },
          "protocol": "tcp",
          "frontendPort": "[variables('httpsPort')]",
          "backendPort": 443,
          "enableFloatingIP": false
        }
      }
    ]
  }
}

There’s a lot going on in here, so let’s work through it. First of all we connect our public IP address to the load balancer. We then create a back end configuration, which we will later connect our VM to. Finally we create a set of NAT rules. I need to be able to RDP into the WAP server, which is the first block. The variables define the names of my resources. You can see that I specify the ports: the external port through a variable that I can change, and the internal port directly, because that needs to be the same each time; it’s what my VMs listen on. You can see that each NAT rule is associated with the frontendIPConfiguration, opening the port to the outside world.

The next step is to create a NIC that will hook our VM up to the existing virtual network and the load balancer:

{
  "name": "[variables('vmWAPNicName')]",
  "type": "Microsoft.Network/networkInterfaces",
  "location": "[parameters('resourceLocation')]",
  "apiVersion": "2015-05-01-preview",
  "dependsOn": [
    "[concat('Microsoft.Network/publicIPAddresses/', variables('vmWAPpublicipName'))]",
    "[concat('Microsoft.Network/loadBalancers/',variables('vmWAPlbName'))]"
  ],
  "tags": {
    "displayName": "vmWAPNic"
  },
  "properties": {
    "ipConfigurations": [
      {
        "name": "ipconfig1",
        "properties": {
          "privateIPAllocationMethod": "Static",
          "privateIPAddress": "[variables('vmWAPIPAddress')]",
          "subnet": {
            "id": "[variables('vmWAPSubnetRef')]"
          },
          "loadBalancerBackendAddressPools": [
            {
              "id": "[variables('vmWAPBEAddressPoolID')]"
            }
          ],
          "loadBalancerInboundNatRules": [
            {
              "id": "[variables('vmWAPRDPNATRuleID')]"
            },
            {
              "id": "[variables('vmWAPhttpsNATRuleID')]"
            }
          ]
        }
      }
    ]
  }
}

Here you can see that the NIC is connected to a subnet on our virtual network with a static IP that I specify in a variable. It is then added to the load balancer back end address pool and finally I need to specify which of the NAT rules I created in the load balancer are hooked up to my VM. If I don’t include the binding here, traffic won’t be passed to my VM (as I discovered when developing this lot – I forgot to wire up https and as a result couldn’t access the website published by WAP!).

The VM itself is basically the same as my ADFS server. I use the same Windows Server 2012 R2 image, have a single disk, and I’ve nested the extensions within the VM because that seems to work better than not doing so:

{
  "name": "[variables('vmWAPName')]",
  "type": "Microsoft.Compute/virtualMachines",
  "location": "[parameters('resourceLocation')]",
  "apiVersion": "2015-05-01-preview",
  "dependsOn": [
    "[concat('Microsoft.Network/networkInterfaces/', variables('vmWAPNicName'))]"
  ],
  "tags": {
    "displayName": "vmWAP"
  },
  "properties": {
    "hardwareProfile": {
      "vmSize": "[variables('vmWAPVmSize')]"
    },
    "osProfile": {
      "computername": "[variables('vmWAPName')]",
      "adminUsername": "[parameters('adminUsername')]",
      "adminPassword": "[parameters('adminPassword')]"
    },
    "storageProfile": {
      "imageReference": {
        "publisher": "[variables('windowsImagePublisher')]",
        "offer": "[variables('windowsImageOffer')]",
        "sku": "[variables('windowsImageSKU')]",
        "version": "latest"
      },
      "osDisk": {
        "name": "[concat(variables('vmWAPName'), '-os-disk')]",
        "vhd": {
          "uri": "[concat('http://', variables('storageAccountName'), '.blob.core.windows.net/', variables('vmStorageAccountContainerName'), '/', variables('vmWAPName'), 'os.vhd')]"
        },
        "caching": "ReadWrite",
        "createOption": "FromImage"
      }
    },
    "networkProfile": {
      "networkInterfaces": [
        {
          "id": "[resourceId('Microsoft.Network/networkInterfaces', variables('vmWAPNicName'))]"
        }
      ]
    }
  },
  "resources": [
    {
      "type": "extensions",
      "name": "IaaSDiagnostics",
      "apiVersion": "2015-06-15",
      "location": "[parameters('resourceLocation')]",
      "dependsOn": [
        "[concat('Microsoft.Compute/virtualMachines/', variables('vmWAPName'))]"
      ],
      "tags": {
        "displayName": "[concat(variables('vmWAPName'),'/vmDiagnostics')]"
      },
      "properties": {
        "publisher": "Microsoft.Azure.Diagnostics",
        "type": "IaaSDiagnostics",
        "typeHandlerVersion": "1.4",
        "autoUpgradeMinorVersion": "true",
        "settings": {
          "xmlCfg": "[base64(variables('wadcfgx'))]",
          "StorageAccount": "[variables('storageAccountName')]"
        },
        "protectedSettings": {
          "storageAccountName": "[variables('storageAccountName')]",
          "storageAccountKey": "[listKeys(variables('storageAccountid'),'2015-05-01-preview').key1]",
          "storageAccountEndPoint": ""
        }
      }
    },
    {
      "type": "Microsoft.Compute/virtualMachines/extensions",
      "name": "[concat(variables('vmWAPName'),'/WAPserver')]",
      "apiVersion": "2015-05-01-preview",
      "location": "[parameters('resourceLocation')]",
      "dependsOn": [
        "[resourceId('Microsoft.Compute/virtualMachines', variables('vmWAPName'))]",
        "[concat('Microsoft.Compute/virtualMachines/', variables('vmWAPName'),'/extensions/IaaSDiagnostics')]"
      ],
      "properties": {
        "publisher": "Microsoft.Powershell",
        "type": "DSC",
        "typeHandlerVersion": "1.7",
        "settings": {
          "modulesURL": "[concat(variables('vmDSCmoduleUrl'), parameters('_artifactsLocationSasToken'))]",
          "configurationFunction": "[variables('vmWAPConfigurationFunction')]",
          "properties": {
            "domainName": "[variables('domainName')]",
            "adminCreds": {
              "userName": "[parameters('adminUsername')]",
              "password": "PrivateSettingsRef:adminPassword"
            }
          }
        },
        "protectedSettings": {
          "items": {
            "adminPassword": "[parameters('adminPassword')]"
          }
        }
      }
    },
    {
      "type": "Microsoft.Compute/virtualMachines/extensions",
      "name": "[concat(variables('vmWAPName'),'/wapScript')]",
      "apiVersion": "2015-05-01-preview",
      "location": "[parameters('resourceLocation')]",
      "dependsOn": [
        "[concat('Microsoft.Compute/virtualMachines/', variables('vmWAPName'))]",
        "[concat('Microsoft.Compute/virtualMachines/', variables('vmWAPName'),'/extensions/WAPserver')]"
      ],
      "properties": {
        "publisher": "Microsoft.Compute",
        "type": "CustomScriptExtension",
        "typeHandlerVersion": "1.4",
        "settings": {
          "fileUris": [
            "[concat(parameters('_artifactsLocation'),'/WapServer.ps1', parameters('_artifactsLocationSasToken'))]",
            "[concat(parameters('_artifactsLocation'),'/', parameters('_artifactsLocationSasToken'))]",
            "[concat(parameters('_artifactsLocation'),'/tuServDeployFunctions.ps1', parameters('_artifactsLocationSasToken'))]"
          ],
          "commandToExecute": "[concat('powershell.exe -file WAPServer.ps1',' -vmAdminUsername ',parameters('adminUsername'),' -vmAdminPassword ',parameters('adminPassword'),' -fsServiceName ',variables('vmWAPpublicipDnsName'),' -adfsServerName ',variables('vmADFSName'),' -vmDCname ',variables('vmDCName'), ' -resourceLocation \"', parameters('resourceLocation'),'\"')]"
        }
      }
    }
  ]
}

The DSC and custom script extensions are in the same vein as with ADFS: I can get the features installed with DSC, and then I need to configure things with my script.

The DSC Modules

As with the other two servers, the files copied into the VM by the DSC extension are common. I then call the appropriate configuration for the WAP server, held within my common configuration file. The WAP server configuration is shown below:

configuration WAPserver
{
    param
    (
        [Parameter(Mandatory)]
        [String]$DomainName,

        [Parameter(Mandatory)]
        [System.Management.Automation.PSCredential]$Admincreds,

        [Int]$RetryCount = 20,
        [Int]$RetryIntervalSec = 30
    )

    Import-DscResource -ModuleName xComputerManagement, xActiveDirectory

    Node localhost
    {
        WindowsFeature WAPInstall
        {
            Ensure = "Present"
            Name   = "Web-Application-Proxy"
        }
        WindowsFeature WAPMgmt
        {
            Ensure = "Present"
            Name   = "RSAT-RemoteAccess"
        }
        WindowsFeature ADPS
        {
            Name   = "RSAT-AD-PowerShell"
            Ensure = "Present"
        }
        xWaitForADDomain DscForestWait
        {
            DomainName           = $DomainName
            DomainUserCredential = $Admincreds
            RetryCount           = $RetryCount
            RetryIntervalSec     = $RetryIntervalSec
            DependsOn            = "[WindowsFeature]ADPS"
        }
        xComputer DomainJoin
        {
            Name       = $env:COMPUTERNAME
            DomainName = $DomainName
            Credential = New-Object System.Management.Automation.PSCredential ("${DomainName}\$($Admincreds.UserName)", $Admincreds.Password)
            DependsOn  = "[xWaitForADDomain]DscForestWait"
        }
        LocalConfigurationManager
        {
            DebugMode          = $true
            RebootNodeIfNeeded = $true
        }
    }
}

As with ADFS, the configuration joins the domain and adds the required features for WAP. Note that I install the RSAT tools for Remote Access. If you don’t do this, you can’t configure WAP because the PowerShell modules aren’t installed!

The Custom Scripts

The WAP script performs much of the same work as the ADFS script. I need to install the certificate for my service, so that’s copied onto the server by the script before it runs an invoke-command block. The main script runs as the local system account and can successfully connect to the DC as the computer account. I then run my invoke-command with domain admin credentials so I can configure WAP; once inside the invoke-command block network access gets tricky, so I don’t attempt it!

#
# WapServer.ps1
#
param
(
    $vmAdminUsername,
    $vmAdminPassword,
    $fsServiceName,
    $adfsServerName,
    $vmDCname,
    $resourceLocation
)

$password = ConvertTo-SecureString $vmAdminPassword -AsPlainText -Force
$credential = New-Object System.Management.Automation.PSCredential("$env:USERDOMAIN\$vmAdminUsername", $password)

Write-Verbose -Verbose "Entering Domain Controller Script"
Write-Verbose -Verbose "Script path: $PSScriptRoot"
Write-Verbose -Verbose "vmAdminUsername: $vmAdminUsername"
Write-Verbose -Verbose "vmAdminPassword: $vmAdminPassword"
Write-Verbose -Verbose "fsServiceName: $fsServiceName"
Write-Verbose -Verbose "adfsServerName: $adfsServerName"
Write-Verbose -Verbose "env:UserDomain: $env:USERDOMAIN"
Write-Verbose -Verbose "resourceLocation: $resourceLocation"
Write-Verbose -Verbose "==================================="

# Write an event to the event log to say that the script has executed.
$event = New-Object System.Diagnostics.EventLog("Application")
$event.Source = "tuServEnvironment"
$info_event = [System.Diagnostics.EventLogEntryType]::Information
$event.WriteEntry("WAPserver Script Executed", $info_event, 5001)

$srcPath = "\\" + $vmDCname + "\src"
$fsCertificateSubject = $fsServiceName + "." + ($resourceLocation.Replace(" ","")).ToLower() + ""
$fsCertFileName = $fsCertificateSubject + ".pfx"
$certPath = $srcPath + "\" + $fsCertFileName

# Copy cert from DC
Write-Verbose -Verbose "Copying $certpath to $PSScriptRoot"
#        $powershellCommand = "& {copy-item '" + $certPath + "' '" + $workingDir + "'}"
#        Write-Verbose -Verbose $powershellCommand
#        $bytes = [System.Text.Encoding]::Unicode.GetBytes($powershellCommand)
#        $encodedCommand = [Convert]::ToBase64String($bytes)
#        Start-Process -wait "powershell.exe" -ArgumentList "-encodedcommand $encodedCommand"
copy-item $certPath -Destination $PSScriptRoot -Verbose

Invoke-Command -Credential $credential -ComputerName $env:COMPUTERNAME -ScriptBlock {
    param
    (
        $workingDir,
        $vmAdminPassword,
        $domainCredential,
        $adfsServerName,
        $fsServiceName,
        $vmDCname,
        $resourceLocation
    )
    # Working variables

    # Write an event to the event log to say that the script has executed.
    $event = New-Object System.Diagnostics.EventLog("Application")
    $event.Source = "tuServEnvironment"
    $info_event = [System.Diagnostics.EventLogEntryType]::Information
    $event.WriteEntry("In WAPserver scriptblock", $info_event, 5001)

    #go to our packages scripts folder
    Set-Location $workingDir

    $zipfile = $workingDir + ""
    $destination = $workingDir
    [System.Reflection.Assembly]::LoadWithPartialName("System.IO.Compression.FileSystem") | Out-Null
    [System.IO.Compression.ZipFile]::ExtractToDirectory($zipfile, $destination)

    Import-Module .\tuServDeployFunctions.ps1

    $fsCertificateSubject = $fsServiceName + "." + ($resourceLocation.Replace(" ","")).ToLower() + ""
    $fsCertFileName = $workingDir + "\" + $fsCertificateSubject + ".pfx"

    Write-Verbose -Verbose "Importing sslcert $fsCertFileName"
    Import-SSLCertificate -certificateFileName $fsCertFileName -certificatePassword $vmAdminPassword

    $fsIpAddress = (Resolve-DnsName $adfsServerName -type a).ipaddress
    Add-HostsFileEntry -ip $fsIpAddress -domain $fsCertificateSubject

    Set-WapConfiguration -credential $domainCredential -fedServiceName $fsCertificateSubject -certificateSubject $fsCertificateSubject

} -ArgumentList $PSScriptRoot, $vmAdminPassword, $credential, $adfsServerName, $fsServiceName, $vmDCname, $resourceLocation

The script modifies the HOSTS file on the server so it can find the ADFS service, and then configures the Web Application Proxy for that ADFS service. It’s worth mentioning at this point the $fsCertificateSubject, which is also my service name. When we first worked on this environment using the old Azure PowerShell commands, the name of the public endpoint was always <something>. When I used the new Resource Manager model I discovered that it is now <something>.<Azure Location>. The <something> is in our control – we specify it. The <Azure Location> isn’t quite: it is the resource location for our deployment (converted to lowercase with no spaces). You’ll find that same line of code in the DC and ADFS scripts; it’s creating the hostname our service will use based on the resource location specified in the template, passed into the script as a parameter.

The functions called by that script are shown below:

function Import-SSLCertificate {
    [CmdletBinding()]
    param
    (
        $certificateFileName,
        $certificatePassword
    )

    Write-Verbose -Verbose "Importing cert $certificateFileName with password $certificatePassword"
    Write-Verbose -Verbose "---"

    Import-Module .\PSPKI\pspki.psm1

    Write-Verbose -Verbose "Attempting to import certificate $certificateFileName"
    # import it
    $password = ConvertTo-SecureString $certificatePassword -AsPlainText -Force
    Import-PfxCertificate -FilePath ($certificateFileName) cert:\localMachine\my -Password $password
}

function Add-HostsFileEntry {
    [CmdletBinding()]
    param
    (
        $ip,
        $domain
    )

    $hostsFile = "$env:windir\System32\drivers\etc\hosts"
    $newHostEntry = "`t$ip`t$domain"

    if ((gc $hostsFile) -contains $NewHostEntry)
    {
        Write-Verbose -Verbose "The hosts file already contains the entry: $newHostEntry.  File not updated."
    }
    else
    {
        Add-Content -Path $hostsFile -Value $NewHostEntry
    }
}

function Set-WapConfiguration {
    [CmdletBinding()]
    param
    (
        $credential,
        $fedServiceName,
        $certificateSubject
    )

    Write-Verbose -Verbose "Configuring WAP Role"
    Write-Verbose -Verbose "---"

    #$certificate = (dir Cert:\LocalMachine\My | where {$_.subject -match $certificateSubject}).thumbprint
    $certificateThumbprint = (Get-ChildItem Cert:\LocalMachine\My | where {$_.subject -match $certificateSubject} | Sort-Object -Descending NotBefore)[0].thumbprint

    # install WAP
    Install-WebApplicationProxy -CertificateThumbprint $certificateThumbprint -FederationServiceName $fedServiceName -FederationServiceTrustCredential $credential
}

What’s Left?

This sequence of posts has talked about Resource Templates and how I structure mine based on my experience of developing and repeatedly deploying a pretty complex environment. It’s also given you specific config advice for doing the same as me: Create a Domain Controller and Certificate Authority, create an ADFS server and publish that server via a Web Application Proxy. If you only copy the stuff so far you’ll have an isolated environment that you can access via the WAP server for remote management.

I’m still working on this, however. I have a SQL server to configure. It turns out that DSC modules for SQL are pretty rich and I’ll blog on those at some point. I am also adding a BizTalk server. I suspect that will involve more on the custom script side. I then need to deploy my application itself, which I haven’t even begun yet (although the guys have created a rich set of automation PowerShell scripts to deal with the deployment).

Overall, I hope you take away from this series of posts just how powerful Azure Resource Templates can be when pushing out IaaS solutions. I haven’t even touched on the PaaS components of Azure, but they can be dealt with in the same way. The need to learn this stuff is common across IT, Dev and DevOps, and it’s really interesting and fun to work on (if frustrating at times). I strongly encourage you to go play!


As with the previous posts, stuff I’ve talked about has been derived in part from existing resources:

  • The Azure Quick Start templates were invaluable in having something to look at in the early days of resource templates before the tooling was here.
  • PSPKI is a fantastic set of PowerShell modules for dealing with certs.
  • The individual VM scripts are derived from work that was done in Black Marble by Andrew Davidson and myself to build the exact same environment in the older Azure manner without resource templates.

Complex Azure Template Odyssey Part Two: Domain Controller

In part one of this series of posts I talked about the project driving my creation of these Azure Resource Templates, the structure of the template and what resources I was deploying. This post will go through the deployment and configuration of the first VM, which will become my domain controller and certificate server. In order to achieve my goals I need to deploy the VM, the DSC extension and finally the custom script extension to perform actions that current DSC modules can’t. I’ll show you the template code, the DSC code and the final scripts, and talk about the gotchas I encountered on the way.

Further posts will detail the ADFS and WAP server deployments.

The Template

I’ve already talked about how I’ve structured this project: A core template calls a collection of nested templates – one per VM. The DC template differs from the rest in that it too calls a nested deployment to make changes to my virtual network. Other than that, it follows the same convention.

dc template json view

The screenshot above is the JSON outline view of the template. Each of my nested VM templates follows the same pattern: The parameters block in each template is exactly the same. I’m using a standard convention for naming all my resources, so providing I pass the envPrefix parameter between each one I can calculate the name of any resource in the project. That’s important, as we’ll see in a moment. The variables block contains all the variables that the current template needs – things like the IP address that should be assigned or the image we use as our base for the VM. Finally, the resources section holds the items we are deploying to create the domain controller. This VM is isolated from the outside world so we need the VM itself and a NIC to connect it to our virtual network, nothing more. The network is created by the core template before it calls the DC template.

The nested deployment needs explaining. Once we’ve created our domain controller we need to make sure that all our other VMs receive the correct IP address for their DNS. In order to do that we have to reconfigure the virtual network that we have already deployed. The nested deployment here is an artefact of the original approach with a single template – it could actually be fully contained in the DC template.

To explain: We can only define a resource with a given type and name in a template once. Templates are declarative and describe how we want a resource to be configured. With our virtual network we want to reconfigure it after we have deployed subsequent resources. If we describe the network for a second time, the new configuration is applied to our existing resource. The problem is that we have already got a resource in our template for our network. We get around the problem by calling a nested deployment. That deployment is a copy of the network configuration, with the differences we need for our reconfiguration. In my original template which contained all the resources, that nested deployment depended on the DC being deployed and was then called. It had to be a nested deployment because the network was already in there once.

With my new model I could actually just include the contents of the network reconfiguration deployment directly in the DC template. I am still calling the nested resource simply because of the way I split my original template. The end result is the same. The VM gets created, then the DSC and script extensions run to turn it into a domain controller. The network template is then called to set the DNS IP configuration of the network to be the IP address of the newly-minted DC.

{
  "name": "tuServUpdateVnet",
  "type": "Microsoft.Resources/deployments",
  "apiVersion": "2015-01-01",
  "dependsOn": [
    "[concat('Microsoft.Compute/virtualMachines/', variables('vmDCName'))]",
    "[concat('Microsoft.Compute/virtualMachines/', variables('vmDCName'),'/extensions/dcScript')]"
  ],
  "properties": {
    "mode": "Incremental",
    "templateLink": {
      "uri": "[concat(variables('updateVNetDNSTemplateURL'), parameters('_artifactsLocationSasToken'))]",
      "contentVersion": ""
    },
    "parameters": {
      "resourceLocation": { "value": "[parameters('resourceLocation')]" },
      "virtualNetworkName": { "value": "[variables('virtualNetworkName')]" },
      "virtualNetworkPrefix": { "value": "[variables('virtualNetworkPrefix')]" },
      "virtualNetworkSubnet1Name": { "value": "[variables('virtualNetworkSubnet1Name')]" },
      "virtualNetworkSubnet1Prefix": { "value": "[variables('virtualNetworkSubnet1Prefix')]" },
      "virtualNetworkDNS": { "value": [ "[variables('vmDCIPAddress')]" ] }
    }
  }
}

The code above is contained in my DC template. It calls the nested deployment through a URI to the template. That points to an azure storage container with all the resources for my deployment held in it. The template is called with a set of parameters that are mostly variables created in the DC template in accordance with the rules and patterns I’ve set. Everything is the same as the original network deployment with the exception of the DNS address which is to be set to the DC address. Below is the network template. Note that the parameter block defines parameters that match those being passed in. All names are case sensitive.

{
    "$schema": "",
    "contentVersion": "",
    "parameters": {
        "resourceLocation": {
            "type": "string",
            "defaultValue": "West US",
            "allowedValues": [
                "East US",
                "West US",
                "West Europe",
                "North Europe",
                "East Asia",
                "South East Asia"
            ],
            "metadata": {
                "description": "The region to deploy the storage resources into"
            }
        },
        "virtualNetworkName": {
            "type": "string"
        },
        "virtualNetworkDNS": {
            "type": "array"
        },
        "virtualNetworkPrefix": {
            "type": "string"
        },
        "virtualNetworkSubnet1Name": {
            "type": "string"
        },
        "virtualNetworkSubnet1Prefix": {
            "type": "string"
        }
    },
    "variables": {
    },
    "resources": [
        {
            "name": "[parameters('virtualNetworkName')]",
            "type": "Microsoft.Network/virtualNetworks",
            "location": "[parameters('resourceLocation')]",
            "apiVersion": "2015-05-01-preview",
            "tags": {
                "displayName": "virtualNetworkUpdate"
            },
            "properties": {
                "addressSpace": {
                    "addressPrefixes": [
                        "[parameters('virtualNetworkPrefix')]"
                    ]
                },
                "dhcpOptions": {
                    "dnsServers": "[parameters('virtualNetworkDNS')]"
                },
                "subnets": [
                    {
                        "name": "[parameters('virtualNetworkSubnet1Name')]",
                        "properties": {
                            "addressPrefix": "[parameters('virtualNetworkSubnet1Prefix')]"
                        }
                    }
                ]
            }
        }
    ],
    "outputs": {
    }
}

The VM itself is pretty straightforward. The code below deploys a virtual NIC and then the VM. The NIC needs to be created first and is then bound to the VM when the latter is deployed. The snippet has the nested resources for the VM extensions removed. I’ll show you those in a bit.

{
  "apiVersion": "2015-05-01-preview",
  "dependsOn": [ ],
  "location": "[parameters('resourceLocation')]",
  "name": "[variables('vmDCNicName')]",
  "properties": {
    "ipConfigurations": [
      {
        "name": "ipconfig1",
        "properties": {
          "privateIPAllocationMethod": "Static",
          "privateIPAddress": "[variables('vmDCIPAddress')]",
          "subnet": {
            "id": "[variables('vmDCSubnetRef')]"
          }
        }
      }
    ]
  },
  "tags": {
    "displayName": "vmDCNic"
  },
  "type": "Microsoft.Network/networkInterfaces"
},
{
  "name": "[variables('vmDCName')]",
  "type": "Microsoft.Compute/virtualMachines",
  "location": "[parameters('resourceLocation')]",
  "apiVersion": "2015-05-01-preview",
  "dependsOn": [
    "[concat('Microsoft.Network/networkInterfaces/', variables('vmDCNicName'))]"
  ],
  "tags": {
    "displayName": "vmDC"
  },
  "properties": {
    "hardwareProfile": {
      "vmSize": "[variables('vmDCVmSize')]"
    },
    "osProfile": {
      "computername": "[variables('vmDCName')]",
      "adminUsername": "[parameters('adminUsername')]",
      "adminPassword": "[parameters('adminPassword')]"
    },
    "storageProfile": {
      "imageReference": {
        "publisher": "[variables('windowsImagePublisher')]",
        "offer": "[variables('windowsImageOffer')]",
        "sku": "[variables('windowsImageSKU')]",
        "version": "latest"
      },
      "osDisk": {
        "name": "[concat(variables('vmDCName'), '-os-disk')]",
        "vhd": {
          "uri": "[concat('http://', variables('storageAccountName'), '', variables('vmStorageAccountContainerName'), '/', variables('vmDCName'), 'os.vhd')]"
        },
        "caching": "ReadWrite",
        "createOption": "FromImage"
      },
      "dataDisks": [
        {
          "vhd": {
            "uri": "[concat('http://', variables('storageAccountName'), '', variables('vmStorageAccountContainerName'),'/', variables('vmDCName'),'data-1.vhd')]"
          },
          "name": "[concat(variables('vmDCName'),'datadisk1')]",
          "createOption": "empty",
          "caching": "None",
          "diskSizeGB": "[variables('windowsDiskSize')]",
          "lun": 0
        }
      ]
    },
    "networkProfile": {
      "networkInterfaces": [
        {
          "id": "[resourceId('Microsoft.Network/networkInterfaces', variables('vmDCNicName'))]"
        }
      ]
    }
  },
  "resources": [ ]
}

The NIC is pretty simple. I tell it the name of the subnet on my network I want it to connect to and I tell it that I want to use a static private IP address, and what that address is. The VM resource then references the NIC in the networkProfile section.

The VM itself is built using the Windows Server 2012 R2 Datacentre image provided by Microsoft. That is specified in the imageReference section. There are lots of VM images and each is referenced by publisher (in this case MicrosoftWindowsServer), offer (WindowsServer) and SKU (2012-R2-Datacenter). I’m specifying ‘latest’ as the version, but you can be specific if you have built your deployment around a particular version of an image; the images are updated regularly to include patches. There is a wide range of images available to save you time. My full deployment makes use of a SQL Server image and I’m also playing with a BizTalk image right now. It’s much easier than trying to sort out the install of products yourself, and the licence cost of the software gets rolled into the VM charge.
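If you do want repeatable builds against a known image, the change is just the version field. A minimal sketch (the version string here is purely illustrative, not a real current build number):

```json
"imageReference": {
  "publisher": "MicrosoftWindowsServer",
  "offer": "WindowsServer",
  "sku": "2012-R2-Datacenter",
  "version": "4.0.20150825"
}
```

The trade-off is that pinned versions are eventually removed from the gallery, so ‘latest’ is usually the right choice for test and demo environments like this one.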

We need to add a second disk to our VM to hold the domain databases. The primary disk on a VM has read and write caching enabled. Write caching exposes us to risk of corrupting our domain database in the event of a failure, so I’m adding a second disk and setting the caching on that to none. It’s all standard stuff at this point.

I’m not going to describe the IaaSDiagnostics extension. The markup for that is completely default as provided by the tooling when you add the resource. Let’s move on to the DSC extension.

{
  "type": "Microsoft.Compute/virtualMachines/extensions",
  "name": "[concat(variables('vmDCName'),'/InstallDomainController')]",
  "apiVersion": "2015-05-01-preview",
  "location": "[parameters('resourceLocation')]",
  "dependsOn": [
    "[resourceId('Microsoft.Compute/virtualMachines', variables('vmDCName'))]",
    "[concat('Microsoft.Compute/virtualMachines/', variables('vmDCName'),'/extensions/IaaSDiagnostics')]"
  ],
  "properties": {
    "publisher": "Microsoft.Powershell",
    "type": "DSC",
    "typeHandlerVersion": "1.7",
    "settings": {
      "modulesURL": "[concat(variables('vmDSCmoduleUrl'), parameters('_artifactsLocationSasToken'))]",
      "configurationFunction": "[variables('vmDCConfigurationFunction')]",
      "properties": {
        "domainName": "[variables('domainName')]",
        "adminCreds": {
          "userName": "[parameters('adminUsername')]",
          "password": "PrivateSettingsRef:adminPassword"
        }
      }
    },
    "protectedSettings": {
      "items": {
        "adminPassword": "[parameters('adminPassword')]"
      }
    }
  }
}

I should mention at this point that I am nesting the extensions within the VM resources section. You don’t need to do this – they can be resources at the same level as the VM. However, my experience from deploying this lot a gazillion times is that if I nest the extensions I get a more robust deployment. Pulling them out of the VM appears to increase the chance of the extension failing to deploy.

The DSC extension will do different things depending on the OS version of Windows you are using. For my 2012 R2 VM it will install the necessary required software to use Desired State Configuration and it will then reboot the VM before applying any config. On the current Server 2016 preview images that installation and reboot isn’t needed as the pre-reqs are already installed.

The DSC extension needs to copy your DSC modules and configuration onto the VM. That’s specified in the modulesURL setting and it expects a zip archive with your stuff in it. I’ll show you that when we look at the DSC config in detail later. The configurationFunction setting specifies the PowerShell file that contains the function and the name of the configuration in that file to use. I have all the DSC configs in one file so I pass in DSCvmConfigs.ps1\DomainController (note the escaped slash).

Finally, we specify the parameters that we want to pass into our PowerShell DSC function. We’re specifying the name of our Domain and the credentials for our admin account.

Once the DSC module has completed I need to do final configuration with standard PowerShell scripts. The CustomScript extension is our friend here. Documentation on this is somewhat sparse and I’ve already blogged on the subject to help you. The template code is below:

{
  "type": "Microsoft.Compute/virtualMachines/extensions",
  "name": "[concat(variables('vmDCName'),'/dcScript')]",
  "apiVersion": "2015-05-01-preview",
  "location": "[parameters('resourceLocation')]",
  "dependsOn": [
    "[concat('Microsoft.Compute/virtualMachines/', variables('vmDCName'))]",
    "[concat('Microsoft.Compute/virtualMachines/', variables('vmDCName'),'/extensions/InstallDomainController')]"
  ],
  "properties": {
    "publisher": "Microsoft.Compute",
    "type": "CustomScriptExtension",
    "typeHandlerVersion": "1.4",
    "settings": {
      "fileUris": [
        "[concat(parameters('_artifactsLocation'),'/DomainController.ps1', parameters('_artifactsLocationSasToken'))]",
        "[concat(parameters('_artifactsLocation'),'/', parameters('_artifactsLocationSasToken'))]",
        "[concat(parameters('_artifactsLocation'),'/tuServDeployFunctions.ps1', parameters('_artifactsLocationSasToken'))]"
      ],
      "commandToExecute": "[concat('powershell.exe -file DomainController.ps1',' -vmAdminUsername ',parameters('adminUsername'),' -vmAdminPassword ',parameters('adminPassword'),' -fsServiceName ',variables('vmWAPpublicipDnsName'),' -tsServiceName ',variables('vmTWAPpublicipDnsName'), ' -resourceLocation \"', parameters('resourceLocation'),'\"')]"
    }
  }
}

The extension downloads the files I need: in this case a zip containing the PSPKI PowerShell modules that I reference to perform a bunch of certificate functions, a module of my own functions, and the DomainController.ps1 script that is executed by the extension. You can’t specify parameters for your script in the extension (and in fact you can’t call the script directly – you have to execute the powershell.exe command yourself), so you can see that I build the commandToExecute using a bunch of variables and string concatenation.

The DSC Modules

I need to get the DSC modules I use onto the VM. To save my going mad, that means I include the module source in the Visual Studio solution. Over time I’ve evolved a folder structure within the solution to separate templates, DSC files and script files. You can see this structure in the screenshot below.

[Screenshot: the DSC folder structure within the Visual Studio solution]

I keep all the DSC files together like this because I can then simply zip everything in the DSC folder structure to give me the archive that is deployed by the DSC extension. In the picture you will see that there are a number of .ps1 files in the root. Originally I created separate files for the DSC configuration of each of my VMs. I then collapsed those into the DSCvmConfigs.ps1 file and simply haven’t removed the others from the project.
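The zip step itself can be automated as part of the build. A sketch using Compress-Archive, which needs PowerShell 5 (on earlier versions the System.IO.Compression.ZipFile .NET class does the same job); the folder and output paths are my assumptions, so adjust them to your own solution layout:

```powershell
# Zip the entire DSC folder so the archive can be uploaded to storage
# and pulled down by the DSC extension via the modulesURL setting.
# Paths are illustrative - point them at your solution's DSC folder.
Compress-Archive -Path .\DSC\* -DestinationPath .\Artifacts\DSC.zip -Force
```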

My DomainController configuration function began life as the example code from the three server SharePoint template on GitHub and I have since extended and modified it. The code is shown below:

configuration DomainController
{
    param
    (
        [Parameter(Mandatory)]
        [String]$DomainName,

        [Parameter(Mandatory)]
        [System.Management.Automation.PSCredential]$Admincreds,

        [String]$DomainNetbiosName=(Get-NetBIOSName -DomainName $DomainName),
        [Int]$RetryCount=20,
        [Int]$RetryIntervalSec=30
    )

    Import-DscResource -ModuleName xComputerManagement, cDisk, xDisk, xNetworking, xActiveDirectory, xSmbShare, xAdcsDeployment

    [System.Management.Automation.PSCredential]$DomainCreds = New-Object System.Management.Automation.PSCredential ("${DomainName}\$($Admincreds.UserName)", $Admincreds.Password)
    $Interface = Get-NetAdapter | Where Name -Like "Ethernet*" | Select-Object -First 1
    $InterfaceAlias = $($Interface.Name)

    Node localhost
    {
        WindowsFeature DNS
        {
            Ensure = "Present"
            Name = "DNS"
        }
        xDnsServerAddress DnsServerAddress
        {
            Address        = ''
            InterfaceAlias = $InterfaceAlias
            AddressFamily  = 'IPv4'
        }
        xWaitforDisk Disk2
        {
            DiskNumber = 2
            RetryIntervalSec = $RetryIntervalSec
            RetryCount = $RetryCount
        }
        cDiskNoRestart ADDataDisk
        {
            DiskNumber = 2
            DriveLetter = "F"
        }
        WindowsFeature ADDSInstall
        {
            Ensure = "Present"
            Name = "AD-Domain-Services"
        }
        xADDomain FirstDS
        {
            DomainName = $DomainName
            DomainAdministratorCredential = $DomainCreds
            SafemodeAdministratorPassword = $DomainCreds
            DatabasePath = "F:\NTDS"
            LogPath = "F:\NTDS"
            SysvolPath = "F:\SYSVOL"
        }
        WindowsFeature ADCS-Cert-Authority
        {
            Ensure = 'Present'
            Name = 'ADCS-Cert-Authority'
            DependsOn = '[xADDomain]FirstDS'
        }
        WindowsFeature RSAT-ADCS-Mgmt
        {
            Ensure = 'Present'
            Name = 'RSAT-ADCS-Mgmt'
            DependsOn = '[xADDomain]FirstDS'
        }
        File SrcFolder
        {
            DestinationPath = "C:\src"
            Type = "Directory"
            Ensure = "Present"
            DependsOn = "[xADDomain]FirstDS"
        }
        xSmbShare SrcShare
        {
            Ensure = "Present"
            Name = "src"
            Path = "C:\src"
            FullAccess = @("Domain Admins","Domain Computers")
            ReadAccess = "Authenticated Users"
            DependsOn = "[File]SrcFolder"
        }
        xADCSCertificationAuthority ADCS
        {
            Ensure = 'Present'
            Credential = $DomainCreds
            CAType = 'EnterpriseRootCA'
            DependsOn = '[WindowsFeature]ADCS-Cert-Authority'
        }
        WindowsFeature ADCS-Web-Enrollment
        {
            Ensure = 'Present'
            Name = 'ADCS-Web-Enrollment'
            DependsOn = '[WindowsFeature]ADCS-Cert-Authority'
        }
        xADCSWebEnrollment CertSrv
        {
            Ensure = 'Present'
            Name = 'CertSrv'
            Credential = $DomainCreds
            DependsOn = '[WindowsFeature]ADCS-Web-Enrollment','[xADCSCertificationAuthority]ADCS'
        }
        LocalConfigurationManager
        {
            DebugMode = $true
            RebootNodeIfNeeded = $true
        }
    }
}

The .ps1 file contains all the DSC configurations for my environment. The DomainController configuration starts with a list of parameters. These match the ones being passed in by the DSC extension, or have default or calculated values. The import-dscresource command specifies the DSC modules that the configuration needs. I have to ensure that any I am using are included in the zip files downloaded by the extension. I am using modules that configure disks, network shares, active directory domains and certificate services.

The node section then declares my configuration. You can set configurations for multiple hosts in a single DSC configuration block, but I’m only concerned with the host I’m on – localhost. Within the block I then declare what I want the configuration of the host to be. It’s the job of the DSC modules to apply whatever actions are necessary to set the configuration to that which I specify. Just like in our resource template, DSC settings can depend on one another if something needs to be done before something else.

This DSC configuration installs the Windows features needed for creating a domain controller. It looks for the additional drive on the VM and assigns it the drive letter F. It creates the new Active Directory domain and places the domain database files on drive F. Once the domain is up and running I create a folder on drive C called src and share that folder. I’m doing that because I create two certificates later and I need to make them available to other machines in the domain. More on that in a bit. Finally, we install the certificate services features and configure a certificate authority. The LocalConfigurationManager settings turn on as much debug output as I can and tell the system that if any of the actions in my config demand a reboot that’s OK – restart as and when required rather than waiting until the end.

I’d love to do all my configuration with DSC but sadly there just aren’t the modules yet. There are some things I just can’t do, like creating a new certificate template in my CA and then generating some specific templates for my ADFS services that are on other VMs. I also can’t set file rights on a folder, although I can set rights on a share. Notice that I grant access to my share to Domain Computers. Both the DSC modules and the custom script extension command are run as the local system account. When I try to read files over the network that means I am connecting to the share as the Computer account and I need to grant access. When I create the DC there are no other VMs in the domain, so I use the Domain Computers group to make sure all my servers will be able to access the files.

Once the DC module completes I have a working domain with a certificate authority.
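If the extension reports a failure, it can be worth logging on to the VM and asking the Local Configuration Manager what it thinks happened. A quick sketch using the standard DSC cmdlets (run in an elevated PowerShell session on the target VM):

```powershell
# Inspect the state DSC believes the node is in
Get-DscConfiguration                # the configuration currently applied
Test-DscConfiguration               # True if the node matches the desired state
Get-DscLocalConfigurationManager    # LCM settings, e.g. RebootNodeIfNeeded
```

The DSC extension also writes detailed logs under C:\WindowsAzure\Logs on the VM, which is usually the quickest route to the underlying error.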

The Custom Scripts

As with my DSC modules, I keep all the custom scripts for my VMs in one folder within the solution. All of these need to be uploaded to Azure storage so I can access them with the extension and copy them to my VMs. The screenshot below shows the files in the solution. I have a script for each VM that needs one, which is executed by the extension. I then have a file of shared functions and a zip with supporting modules that I need.

[Screenshot: the custom scripts folder within the Visual Studio solution]
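Getting those files into storage can be scripted as part of the deployment too. A sketch using the Azure PowerShell storage cmdlets; the account name, key variable, container and folder here are all my own illustrations:

```powershell
# Upload every file in the scripts folder to the artifacts container
# so the CustomScript extension can pull them down onto the VMs.
$ctx = New-AzureStorageContext -StorageAccountName "env1artifacts" -StorageAccountKey $storageKey
Get-ChildItem .\Scripts -File | ForEach-Object {
    Set-AzureStorageBlobContent -File $_.FullName -Container "artifacts" `
        -Blob $_.Name -Context $ctx -Force
}
```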

#
# DomainController.ps1
#
param (
    $vmAdminUsername,
    $vmAdminPassword,
    $fsServiceName,
    $tsServiceName,
    $resourceLocation
)

$password = ConvertTo-SecureString $vmAdminPassword -AsPlainText -Force
$credential = New-Object System.Management.Automation.PSCredential("$env:USERDOMAIN\$vmAdminUsername", $password)
Write-Verbose -Verbose "Entering Domain Controller Script"
Write-Verbose -Verbose "Script path: $PSScriptRoot"
Write-Verbose -Verbose "vmAdminUsername: $vmAdminUsername"
Write-Verbose -Verbose "vmAdminPassword: $vmAdminPassword"
Write-Verbose -Verbose "fsServiceName: $fsServiceName"
Write-Verbose -Verbose "tsServiceName: $tsServiceName"
Write-Verbose -Verbose "env:UserDomain: $env:USERDOMAIN"
Write-Verbose -Verbose "resourceLocation: $resourceLocation"
Write-Verbose -Verbose "==================================="

# Write an event to the event log to say that the script has executed.
$event = New-Object System.Diagnostics.EventLog("Application")
$event.Source = "tuServEnvironment"
$info_event = [System.Diagnostics.EventLogEntryType]::Information
$event.WriteEntry("DomainController Script Executed", $info_event, 5001)

Invoke-Command -Credential $credential -ComputerName $env:COMPUTERNAME -ScriptBlock {
    param (
        $workingDir,
        $vmAdminPassword,
        $fsServiceName,
        $tsServiceName,
        $resourceLocation
    )
    # Working variables
    $serviceAccountOU = "Service Accounts"
    Write-Verbose -Verbose "Entering Domain Controller Script"
    Write-Verbose -Verbose "workingDir: $workingDir"
    Write-Verbose -Verbose "vmAdminPassword: $vmAdminPassword"
    Write-Verbose -Verbose "fsServiceName: $fsServiceName"
    Write-Verbose -Verbose "tsServiceName: $tsServiceName"
    Write-Verbose -Verbose "env:UserDomain: $env:USERDOMAIN"
    Write-Verbose -Verbose "env:UserDNSDomain: $env:USERDNSDOMAIN"
    Write-Verbose -Verbose "env:ComputerName: $env:COMPUTERNAME"
    Write-Verbose -Verbose "resourceLocation: $resourceLocation"
    Write-Verbose -Verbose "==================================="

    # Write an event to the event log to say that the script has executed.
    $event = New-Object System.Diagnostics.EventLog("Application")
    $event.Source = "tuServEnvironment"
    $info_event = [System.Diagnostics.EventLogEntryType]::Information
    $event.WriteEntry("In DomainController scriptblock", $info_event, 5001)

    #go to our packages scripts folder
    Set-Location $workingDir

    $zipfile = $workingDir + ""
    $destination = $workingDir
    [System.Reflection.Assembly]::LoadWithPartialName("System.IO.Compression.FileSystem") | Out-Null
    [System.IO.Compression.ZipFile]::ExtractToDirectory($zipfile, $destination)

    Import-Module .\tuServDeployFunctions.ps1

    #Enable CredSSP in server role for delegated credentials
    Enable-WSManCredSSP -Role Server -Force

    #Create OU for service accounts, computer group; create service accounts
    Add-ADServiceAccounts -domain $env:USERDNSDOMAIN -serviceAccountOU $serviceAccountOU -password $vmAdminPassword
    Add-ADComputerGroup -domain $env:USERDNSDOMAIN -serviceAccountOU $serviceAccountOU
    Add-ADComputerGroupMember -group "tuServ Computers" -member ($env:COMPUTERNAME + '$')

    #Create new web server cert template
    $certificateTemplate = ($env:USERDOMAIN + "_WebServer")
    Generate-NewCertificateTemplate -certificateTemplateName $certificateTemplate -certificateSourceTemplateName "WebServer"
    Set-tsCertificateTemplateAcl -certificateTemplate $certificateTemplate -computers "tuServComputers"

    # Generate SSL Certificates
    $fsCertificateSubject = $fsServiceName + "." + ($resourceLocation.Replace(" ","")).ToLower() + ""
    Generate-SSLCertificate -certificateSubject $fsCertificateSubject -certificateTemplate $certificateTemplate
    $tsCertificateSubject = $tsServiceName + ""
    Generate-SSLCertificate -certificateSubject $tsCertificateSubject -certificateTemplate $certificateTemplate

    # Export Certificates
    $fsCertExportFileName = $fsCertificateSubject + ".pfx"
    $fsCertExportFile = $workingDir + "\" + $fsCertExportFileName
    Export-SSLCertificate -certificateSubject $fsCertificateSubject -certificateExportFile $fsCertExportFile -certificatePassword $vmAdminPassword
    $tsCertExportFileName = $tsCertificateSubject + ".pfx"
    $tsCertExportFile = $workingDir + "\" + $tsCertExportFileName
    Export-SSLCertificate -certificateSubject $tsCertificateSubject -certificateExportFile $tsCertExportFile -certificatePassword $vmAdminPassword

    #Set permissions on the src folder
    $acl = Get-Acl c:\src
    $acl.SetAccessRuleProtection($True, $True)
    $rule = New-Object System.Security.AccessControl.FileSystemAccessRule("Domain Computers","FullControl", "ContainerInherit, ObjectInherit", "None", "Allow")
    $acl.AddAccessRule($rule)
    $rule = New-Object System.Security.AccessControl.FileSystemAccessRule("Authenticated Users","FullControl", "ContainerInherit, ObjectInherit", "None", "Allow")
    $acl.AddAccessRule($rule)
    Set-Acl c:\src $acl

    #Copy certs into the src folder created by DSC so they can be shared
    Copy-Item -Path "$workingDir\*.pfx" c:\src
} -ArgumentList $PSScriptRoot, $vmAdminPassword, $fsServiceName, $tsServiceName, $resourceLocation

The domain controller script is shown above. There are a whole bunch of Write-Verbose commands that output debug information, which I can watch through the Azure Resource Explorer as the script runs.

Pretty much the first thing I do here is an invoke-command. The script is running as local system and there’s not much I can actually do as that account. My invoke-command block runs as the domain administrator so I can get stuff done. Worth noting is that the invoke-command approach makes accessing network resources tricky. It’s not an issue here but it bit me with the ADFS and WAP servers.

I unzip the PSPKI archive that has been copied onto the server and load the modules therein. The files are downloaded to a folder whose path includes the version number of the script extension, so I can’t be explicit. Fortunately I can use the $PSScriptRoot variable to work out that location and I pass it into the invoke-command as $workingDir. The PSPKI modules allow me to create a new certificate template on my CA so I can generate new certs with exportable private keys. I need the same certs on more than one of my servers so I need to be able to copy them around. I generate the certs and drop them into the src folder I created with DSC. I also set the rights on that src folder to grant Domain Computers and Authenticated Users access. The latter is probably overdoing it, since the former should do what I need, but I spent a good deal of time being stymied by this so I’m taking a belt and braces approach.

The key functions called by the script above are shown below. Held in my modules file, these are all focused on certificate functions and pretty much all depend on the PSPKI modules.

function Generate-NewCertificateTemplate
{
    [CmdletBinding()]
    # note can only be run on the server with PSPKI eg the ActiveDirectory domain controller
    param
    (
        $certificateTemplateName,
        $certificateSourceTemplateName
    )

    Write-Verbose -Verbose "Generating New Certificate Template"

    Import-Module .\PSPKI\pspki.psm1

    $certificateCnName = "CN=" + $certificateTemplateName

    $ConfigContext = ([ADSI]"LDAP://RootDSE").ConfigurationNamingContext
    $ADSI = [ADSI]"LDAP://CN=Certificate Templates,CN=Public Key Services,CN=Services,$ConfigContext"

    $NewTempl = $ADSI.Create("pKICertificateTemplate", $certificateCnName)
    $NewTempl.put("distinguishedName","$certificateCnName,CN=Certificate Templates,CN=Public Key Services,CN=Services,$ConfigContext")

    $NewTempl.put("flags","66113")
    $NewTempl.put("displayName",$certificateTemplateName)
    $NewTempl.put("revision","4")
    $NewTempl.put("pKIDefaultKeySpec","1")
    $NewTempl.SetInfo()

    $NewTempl.put("pKIMaxIssuingDepth","0")
    $NewTempl.put("pKICriticalExtensions","")
    $NewTempl.put("pKIExtendedKeyUsage","")
    $NewTempl.put("pKIDefaultCSPs","2,Microsoft DH SChannel Cryptographic Provider, 1,Microsoft RSA SChannel Cryptographic Provider")
    $NewTempl.put("msPKI-RA-Signature","0")
    $NewTempl.put("msPKI-Enrollment-Flag","0")
    $NewTempl.put("msPKI-Private-Key-Flag","16842768")
    $NewTempl.put("msPKI-Certificate-Name-Flag","1")
    $NewTempl.put("msPKI-Minimal-Key-Size","2048")
    $NewTempl.put("msPKI-Template-Schema-Version","2")
    $NewTempl.put("msPKI-Template-Minor-Revision","2")
    $NewTempl.put("msPKI-Cert-Template-OID","")
    $NewTempl.put("msPKI-Certificate-Application-Policy","")
    $NewTempl.SetInfo()

    $WATempl = $ADSI.psbase.children | where {$_.Name -eq $certificateSourceTemplateName}
    $NewTempl.pKIKeyUsage = $WATempl.pKIKeyUsage
    $NewTempl.pKIExpirationPeriod = $WATempl.pKIExpirationPeriod
    $NewTempl.pKIOverlapPeriod = $WATempl.pKIOverlapPeriod
    $NewTempl.SetInfo()

    $certTemplate = Get-CertificateTemplate -Name $certificateTemplateName
    Get-CertificationAuthority | Get-CATemplate | Add-CATemplate -Template $certTemplate | Set-CATemplate
}

function Set-tsCertificateTemplateAcl
{
    [CmdletBinding()]
    param
    (
        $certificateTemplate,
        $computers
    )

    Write-Verbose -Verbose "Setting ACL for cert $certificateTemplate to allow $computers"
    Write-Verbose -Verbose "---"

    Import-Module .\PSPKI\pspki.psm1

    Write-Verbose -Verbose "Adding group $computers to acl for cert $certificateTemplate"
    Get-CertificateTemplate -Name $certificateTemplate | Get-CertificateTemplateAcl | Add-CertificateTemplateAcl -User $computers -AccessType Allow -AccessMask Read, Enroll | Set-CertificateTemplateAcl
}

function Generate-SSLCertificate
{
    [CmdletBinding()]
    param
    (
        $certificateSubject,
        $certificateTemplate
    )

    Write-Verbose -Verbose "Creating SSL cert using $certificateTemplate for $certificateSubject"
    Write-Verbose -Verbose "---"

    Import-Module .\PSPKI\pspki.psm1

    Write-Verbose -Verbose "Generating Certificate (Single)"
    $certificateSubjectCN = "CN=" + $certificateSubject
    # Version #1
    $powershellCommand = "& {get-certificate -Template " + $certificateTemplate + " -CertStoreLocation Cert:\LocalMachine\My -DnsName " + $certificateSubject + " -SubjectName " + $certificateSubjectCN + " -Url ldap:}"
    Write-Verbose -Verbose $powershellCommand
    $bytes = [System.Text.Encoding]::Unicode.GetBytes($powershellCommand)
    $encodedCommand = [Convert]::ToBase64String($bytes)

    Start-Process -wait "powershell.exe" -ArgumentList "-encodedcommand $encodedCommand"
}

function Export-SSLCertificate
{
    [CmdletBinding()]
    param
    (
        $certificateSubject,
        $certificateExportFile,
        $certificatePassword
    )

    Write-Verbose -Verbose "Exporting cert $certificateSubject to $certificateExportFile with password $certificatePassword"
    Write-Verbose -Verbose "---"

    Import-Module .\PSPKI\pspki.psm1

    Write-Verbose -Verbose "Exporting Certificate (Single)"
    $password = ConvertTo-SecureString $certificatePassword -AsPlainText -Force
    Get-ChildItem Cert:\LocalMachine\My | where {$_.subject -match $certificateSubject -and $_.Subject -ne $_.Issuer} | Export-PfxCertificate -FilePath $certificateExportFile -Password $password
}

Making sure it’s reusable

One of the things I’m trying to do here is create a collection of reusable configurations. I can take my DC virtual machine config and make it the core of any number of deployments in future. Key stuff like domain names and machine names are always parameterised all the way through template, DSC and scripts. When Azure Stack arrives I should be able to use the same configuration on-prem and in Azure itself and we can use the same building blocks for any number of customer projects, even though it was originally built for an internal project.
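In practice that parameterisation means each environment boils down to a different parameters file against the same template. A sketch of what one might look like (the parameter names and values here are illustrative, not my actual file):

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "envPrefix": { "value": "env1" },
    "resourceLocation": { "value": "West Europe" },
    "adminUsername": { "value": "env1admin" }
  }
}
```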

There’s stuff I still need to do here: I need to pull the vNet template directly into the DC template – there’s no need for it to be separate; I could do with trimming back some of the unnecessary access rights I grant on the folders and shares; you’ll also notice that I am configuring CredSSP, which was part of my original attempt to sort out file access from within the invoke-command blocks and failed miserably.

A quick round of credits

Whilst most of this work has been my own, bashing my head against the desk for a while, it is built upon code created by other people who need to be credited:

  • The Azure Quick Start templates were invaluable in having something to look at in the early days of resource templates before the tooling was here.
  • PSPKI is a fantastic set of PowerShell modules for dealing with certs.
  • The individual VM scripts are derived from work that was done in Black Marble by Andrew Davidson and myself to build the exact same environment in the older Azure manner without resource templates.

Complex Azure Template Odyssey Part One: The Environment

Part Two | Part Three | Part Four

Over the past month or two I’ve been creating an Azure Resource Template to deploy an environment which, previously, we’d created old-style PowerShell scripts to deploy. In theory, the Resource Template approach would make the deployment quicker, easier to trigger from tooling like Release Manager, and make the code easier to read.

The aim is to deploy a number of servers that will host an application we are developing. This will allow us to easily provision test or demo environments into Azure making as much use of automation as possible. The application itself has a set of system requirements that means I have a good number of tasks to work through:

  1. We need our servers to be domain joined so we can manage security, service accounts etc.
  2. The application uses ADFS for authentication. You don’t just expose ADFS to the internet, so that means we need a Web Application Proxy (WAP) server too.
  3. ADFS, WAP and our application need to use secure connections. We want to be able to deploy lots of these, so things like hostnames and FQDNs for services need to be flexible. That means using our own Certificate Services which we need to deploy.
  4. We need a SQL server for our application’s data. We’ll need some additional drives on this to store data and we need to make sure our service accounts have appropriate access.
  5. Our application is hosted in IIS, so we need a web server as well.
  6. Only servers that host internet-accessible services will get public IP addresses.
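Once all the pieces in the list above exist, provisioning an environment should come down to a single template deployment call. A sketch using the AzureRM cmdlets; the resource group and file names are my own illustrations:

```powershell
# Create a resource group to hold the environment, then deploy the template into it
New-AzureRmResourceGroup -Name "env1-rg" -Location "West Europe"
New-AzureRmResourceGroupDeployment -ResourceGroupName "env1-rg" `
    -TemplateFile .\azuredeploy.json `
    -TemplateParameterFile .\azuredeploy.parameters.json -Verbose
```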

We already had scripts to do this the old way. I planned to reuse some of that code, and follow the decisions we made around the environment:

  • All VMs would use standard naming, with an environment-specific prefix. The same prefix would be used for other resources. For example, a prefix of env1 means the storage account is env1storage, the network is env1vnet, the Domain Controller VM is env1dc, etc. The AD domain we created would use the prefix in its name (so env1.local).
  • All public IPs would use the Azure-assigned DNS name for our services – no corporate DNS. The prefix would be used in conjunction with role when specifying the name for the cloud service.
  • DSC would be used wherever possible. After that, custom PowerShell scripts would be used. The aim was to configure each machine individually and not use remote PowerShell between servers unless absolutely necessary.
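That prefix-based naming convention maps naturally onto template variables built from a single prefix parameter. A minimal sketch of the idea — the parameter and variable names here are illustrative, not necessarily those in our actual template:

```json
{
  "parameters": {
    "envPrefix": {
      "type": "string"
    }
  },
  "variables": {
    "storageAccountName": "[concat(parameters('envPrefix'), 'storage')]",
    "vnetName": "[concat(parameters('envPrefix'), 'vnet')]",
    "dcVmName": "[concat(parameters('envPrefix'), 'dc')]",
    "domainName": "[concat(parameters('envPrefix'), '.local')]"
  }
}
```

With a prefix of env1, those variables evaluate to env1storage, env1vnet, env1dc and env1.local, matching the convention above.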

We’d also hit a few problems when creating the old approach, so I hoped to reuse the same solutions:

  • There is very little PowerShell to manage certificate services and certificates. There is an incredibly useful set of modules known as PSPKI which we utilise to create certificate templates and cert requests. This would need to be used in conjunction with our own custom scripts, so it had to be deployed to the VMs somehow.

Azure Resources In The Deployment

Things have actually moved on in terms of the servers I am now deploying (only to get more complex!) but it’s easier to detail the environment as originally planned and successfully deployed.

  • Storage Account. Needed for the hard drives of the multiple virtual machines.
  • Virtual Network. A single subnet for all VMs.
  • Domain Controller
    • NetworkInterface. Will attach to the virtual network.
    • Virtual Machine. Requires an additional virtual hard disk to store domain databases.
      • Diagnostics Extension. Will enable the diagnostics on the Azure VM so we can see things like CPU and disk access through the Azure Portal.
      • DSC Extension. Will do as much configuration as we can, declaratively. It will add the ADDS and ADCS roles and create the domain.
      • CustomScriptExtension. Will deploy PowerShell scripts to perform additional configuration. It will create certificate templates and generate certs for services.
  • ADFS Server
    • NetworkInterface. Will attach to the virtual network.
    • Virtual Machine.
      • Diagnostics Extension. Will enable the diagnostics on the Azure VM so we can see things like CPU and disk access through the Azure Portal.
      • DSC Extension. Add the ADFS role and domain-join the VM.
      • CustomScriptExtension. Will configure ADFS – copying the cert from the DC and creating the federation service.
  • WAP Server
    • NetworkInterface. Will attach to the virtual network.
    • Public IP Address. Will publish the WAP service to the internet.
    • Load Balancer. Will connect the server to the public IP address, allowing us to add other servers to the service later if needed.
    • Virtual Machine.
      • Diagnostics Extension. Will enable the diagnostics on the Azure VM so we can see things like CPU and disk access through the Azure Portal.
      • DSC Extension. Will do as much configuration as we can, declaratively.
      • CustomScriptExtension. Copy the cert from the DC and configure WAP to publish the federation service hosted on the ADFS server.
  • SQL Server
    • NetworkInterface. Will attach to the virtual network.
    • Virtual Machine. Requires two additional virtual hard disks to store DBs and logs.
      • Diagnostics Extension. Will enable the diagnostics on the Azure VM so we can see things like CPU and disk access through the Azure Portal.
      • DSC Extension. It turns out the DSC for SQL is pretty good. We can do lots of configuration with it, to the extent of not needing the custom script extension.
  • Web Server
    • NetworkInterface. Will attach to the virtual network.
    • Virtual Machine.
      • Diagnostics Extension. Will enable the diagnostics on the Azure VM so we can see things like CPU and disk access through the Azure Portal.
      • DSC Extension. Will do as much configuration as we can, declaratively.
      • CustomScriptExtension. Will deploy PowerShell scripts to perform additional configuration.
  • WAP Server 2
    • NetworkInterface. Will attach to the virtual network.
    • Public IP Address. Will publish the web server-hosted services to the internet.
    • Load Balancer. Will connect the server to the public IP address, allowing us to add other servers to the service later if needed.
    • Virtual Machine.
      • Diagnostics Extension. Will enable the diagnostics on the Azure VM so we can see things like CPU and disk access through the Azure Portal.
      • DSC Extension. Will do as much configuration as we can, declaratively.
      • CustomScriptExtension. Will deploy PowerShell scripts to perform additional configuration.

Environment-specific Settings

The nice thing about having a cookie-cutter environment is that there are very few things that will vary between deployments and that means very few parameters in our template. We will need to set the prefix for all our resource names, the location for our resources, and because we want to be flexible we will set the admin username and password.
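In template terms, the parameters section stays correspondingly small. A sketch of what that looks like — the names and the default value here are illustrative:

```json
"parameters": {
  "envPrefix": {
    "type": "string",
    "metadata": { "description": "Prefix applied to all resource names, e.g. env1" }
  },
  "location": {
    "type": "string",
    "defaultValue": "West Europe"
  },
  "adminUsername": {
    "type": "string"
  },
  "adminPassword": {
    "type": "securestring"
  }
}
```

Using securestring for the password keeps it out of deployment logs and the portal.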

An Immediate Issue: Network Configuration

Right out of the gate we have a problem to solve. When you create a virtual network in Azure, it provides IP addresses to the VMs attached to it. As part of that, the DNS server address is given to the VMs. By default that is an Azure DNS service that allows VMs to resolve external domain names. Our environment needs the servers to be told the IP address of the domain controller, as it will provide the local DNS services essential to the working of the Active Directory domain. In our old scripts we simply reconfigured the network to specify the DC’s address after we configured the DC.

In the Resource Template world we can reconfigure the vNet by applying new settings to the resource from our template. However, once we have created the vNet in our template we can’t have another resource with the same name in the same template. The solution is to create another template with our new settings and to call that from our main template as a nested deployment. We can pass the IP address of the DC into that template as a parameter and we can make the nested deployment depend on the DC being deployed, which means it will happen after the DC has been promoted to be the domain controller.
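A sketch of how that nested deployment can be called from the main template — the resource name, template URI variable and parameter names are all illustrative:

```json
{
  "type": "Microsoft.Resources/deployments",
  "apiVersion": "2015-01-01",
  "name": "updateVNetDns",
  "dependsOn": [
    "[concat('Microsoft.Compute/virtualMachines/', variables('dcVmName'))]"
  ],
  "properties": {
    "mode": "Incremental",
    "templateLink": {
      "uri": "[variables('vnetDnsTemplateUri')]",
      "contentVersion": "1.0.0.0"
    },
    "parameters": {
      "vnetName": { "value": "[variables('vnetName')]" },
      "dnsServerAddress": { "value": "[variables('dcIpAddress')]" }
    }
  }
}
```

Note that the dependsOn entry needs to cover whatever actually promotes the DC — in practice that means the DC’s DSC extension completing, not merely the VM resource itself.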

Deployment Order

One of the nicest things about Resource Templates is that when you trigger a deployment, the Azure Resource Manager parses your template and tries to deploy the resources as efficiently as possible. If you need things to deploy in a sequence you need to specify dependencies in your resources, otherwise they will all deploy in parallel.

In this environment, I need to deploy the storage account and virtual network before any of the VMs. They don’t depend on each other however, so can be pushed out first, in parallel.

The DC gets deployed next. I need to fully configure this before any other VMs are created because they need to join our domain, and our network has to be reconfigured to hand out the IP address of the DC.

Once the DC is done, the network gets reconfigured with a nested deployment.

In theory, we should be able to deploy all our other VMs in parallel, providing we can apply our configuration in sequence which should be possible if we set the dependencies correctly for our extension resources (DSC and customScriptExtension).

VM configuration can mostly happen in parallel, with one exception: the WAP server configuration depends on the ADFS server being fully configured.
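In template terms that single ordering constraint is just a dependsOn entry on the WAP server’s configuration extension, pointing at the ADFS server’s extension. A sketch, with illustrative variable and extension names:

```json
"dependsOn": [
  "[concat('Microsoft.Compute/virtualMachines/', variables('adfsVmName'), '/extensions/adfsConfig')]"
]
```

Everything without such an entry is free to deploy in parallel.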

Attempt One: Single Template

I spent a long time creating, testing and attempting to debug this as a single template (except for our nested deployment to reconfigure the vNet). Let me spare you the pain by listing the problems:

  • The template is huge: Many hundreds of lines. Apart from being hard to work with, that really slows down the Visual Studio tooling.
  • Right now a single template with lots and lots of resources seems unreliable. I could use an identical template for multiple deployments and get random failures deploying different VMs, or a successful deploy, with no rhyme or reason to it.
  • Creating VM extension resources with complex dependencies seems to cause deployment failures. At first I used dependencies in the extensions for the VMs outside of the DC to define my deployment order. I realised after some pain that this was much more prone to failure than if I treated the whole VM as a block. I also discovered that placing the markup for the extensions within the resources block of the VM itself improved reliability.
  • A single deployment takes over an hour. That makes debugging individual parts difficult and time-consuming.
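The nesting trick mentioned above looks broadly like this sketch: the extension markup lives inside the VM’s own resources array and depends only on its parent VM. The names, API versions and (empty, for brevity) VM properties here are illustrative:

```json
{
  "type": "Microsoft.Compute/virtualMachines",
  "apiVersion": "2015-06-15",
  "name": "[variables('webVmName')]",
  "location": "[parameters('location')]",
  "properties": { },
  "resources": [
    {
      "type": "extensions",
      "apiVersion": "2015-06-15",
      "name": "webDsc",
      "location": "[parameters('location')]",
      "dependsOn": [
        "[concat('Microsoft.Compute/virtualMachines/', variables('webVmName'))]"
      ],
      "properties": {
        "publisher": "Microsoft.Powershell",
        "type": "DSC",
        "typeHandlerVersion": "2.1"
      }
    }
  ]
}
```

Treating the VM and its extensions as one block like this keeps the dependency graph simple, which is exactly what improved reliability.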

Attempt Two: Multiple Nested Deployments

I now have a rock-solid, reliable deployment. I’ve achieved this by moving each VM and its linked resources (NIC, Load Balancer, Public IP) into separate templates. I have a master template that calls the ‘children’, with dependencies limited to one or more of the other nested deployments. The storage account and initial vNet deploy are part of the master template.
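A sketch of the master template pattern: each child is a Microsoft.Resources/deployments resource, and ordering is expressed only between the deployments themselves. The base URL variable and file names are illustrative:

```json
"resources": [
  {
    "type": "Microsoft.Resources/deployments",
    "apiVersion": "2015-01-01",
    "name": "adfsServer",
    "properties": {
      "mode": "Incremental",
      "templateLink": {
        "uri": "[concat(variables('templateBaseUrl'), 'adfs.json')]",
        "contentVersion": "1.0.0.0"
      }
    }
  },
  {
    "type": "Microsoft.Resources/deployments",
    "apiVersion": "2015-01-01",
    "name": "wapServer",
    "dependsOn": [
      "Microsoft.Resources/deployments/adfsServer"
    ],
    "properties": {
      "mode": "Incremental",
      "templateLink": {
        "uri": "[concat(variables('templateBaseUrl'), 'wap.json')]",
        "contentVersion": "1.0.0.0"
      }
    }
  }
]
```

Because each child template is independently deployable, re-running just one of them against an existing environment is straightforward.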

The upside of this has been manifold: each template is shorter and simpler, with far fewer variables, now that each only deploys a single VM. I can also choose to deploy a single ‘child’ if I want to, within an already deployed environment. This allows me to test and debug more quickly and easily.

Problem One: CustomScriptExtension

When I started this journey there was little documentation around and Resource Manager was in preview. I really struggled to get the CustomScriptExtension for VMs working. All I had to work with were examples using PowerShell to add VM extensions, and they were just plain wrong for the Resource Template approach. Leaning on the Linux equivalent, plus a lot of testing and poking, got things sorted, and I’ve written up how the extension currently works.
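For reference, a sketch of the shape of that markup as I understand current API versions — the script URL, file name and resource names are illustrative, not from our real deployment:

```json
{
  "type": "Microsoft.Compute/virtualMachines/extensions",
  "apiVersion": "2015-06-15",
  "name": "[concat(variables('dcVmName'), '/customScript')]",
  "location": "[parameters('location')]",
  "dependsOn": [
    "[concat('Microsoft.Compute/virtualMachines/', variables('dcVmName'))]"
  ],
  "properties": {
    "publisher": "Microsoft.Compute",
    "type": "CustomScriptExtension",
    "typeHandlerVersion": "1.4",
    "settings": {
      "fileUris": [
        "https://example.blob.core.windows.net/scripts/configure-dc.ps1"
      ],
      "commandToExecute": "powershell -ExecutionPolicy Unrestricted -File configure-dc.ps1"
    }
  }
}
```

The fileUris are downloaded onto the VM and commandToExecute is then run locally, which is also how the PSPKI modules and our custom scripts can be delivered to the machine.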

Problem Two: IaaSDiagnostics

Right now, this one isn’t fixed. I am correctly deploying the IaaSDiagnostics extension into the VMs, and it appears to be correctly configured and working properly. However, the VM blades in the Azure Portal are adamant that diagnostics are not configured. This looks like a bug in the Portal and I’m hoping it will be resolved by the team soon.

Configuring the Virtual Machines

That’s about it for talking about the environment as a whole. I’m going to write up some of the individual servers separately as there were multiple hurdles to jump in configuring them. Stay tuned.

Setting Custom Domain for Traffic Manager and Azure Websites

Recently I’ve been looking at using Traffic Manager to front up websites hosted in Azure Websites. I needed to set up a custom domain name instead of using the default trafficmanager.net address.

In order to use Traffic Manager with an Azure website, the website needs to be set up using a Standard hosting plan.

Each website you want to be included in the traffic manager routing will need to be added as an endpoint in the traffic manager portal.

Once you have this set up you will need to add a DNS CNAME record for your domain. This needs to be configured at your domain provider. You set the CNAME to point to your Traffic Manager address (the trafficmanager.net name).

In order for traffic to be routed to your Azure-hosted website(s), each website set up as an endpoint in Traffic Manager will need to have your mapped domain configured. This is done under Settings -> Custom Domains and SSL in the new portal, and under the Configure tab -> Manage Domains (or the Manage Domains button) in the old one.

If you don’t add this, you will see a 404 error page whenever you try to navigate to the site through the Traffic Manager custom domain name.


Azure Websites: Blocking access to the url

I’ve been setting up one of our services as the backend service for Azure API Management. As part of this process we mapped DNS to point to the service. As the service is hosted in Azure Websites, there are now two urls that can be used to access it. I wanted to stop users accessing the site via the azurewebsites.net url, allowing access only via the mapped domain. This is easy to achieve and can be configured in the web.config file of the service.

In the <system.webServer> section, add the following configuration (the rule sits inside the <rewrite><rules> element):

        <rewrite>
          <rules>
            <rule name="Block traffic to the raw azurewebsites url" patternSyntax="Wildcard" stopProcessing="true">
              <match url="*" />
              <conditions>
                <!-- Matches any *.azurewebsites.net host; narrow this to your site's own hostname if needed -->
                <add input="{HTTP_HOST}" pattern="*.azurewebsites.net" />
              </conditions>
              <action type="CustomResponse" statusCode="403" statusReason="Forbidden"
                      statusDescription="Site is not accessible" />
            </rule>
          </rules>
        </rewrite>
Now if I try to access my site through the azurewebsites.net url, I get a 403 error, but accessing through the mapped domain is fine.

Azure Media Services Live Media Streaming General Availability

Yesterday Scott Guthrie announced a number of enhancements to Microsoft Azure. One of the enhancements is the General Availability of Azure Media Services Live Media Streaming. This gives us the ability to stream live events on a service that has already been used to deliver big events such as the 2014 Sochi Winter Olympics and the 2014 FIFA World Cup.

I’ve looked at this for a couple of our projects and found it relatively fast and easy to set up a live media event, even from my laptop using its built-in camera. There’s a good blog post that walks you through the process of setting up the Live Streaming service. I used this post and was quickly streaming both audio and video from my laptop.

The main piece of software that you need to install is a video/audio encoder that supports Smooth Streaming or RTMP. I used the Wirecast encoder as specified in the post. You can try out the encoder for 2 weeks as long as you don’t mind seeing the Wirecast logo on your video (which is removed if you buy a license). Media Services pricing can be found here.

The Media Services team have provided a MPEG-DASH player to help you test your live streams.

It appears that once you have created a stream, it is still accessible on demand after the event has completed. Also, there is around a 20-second delay when you receive the stream on your player.