Complex Azure Template Odyssey Part Four: WAP Server

Part One of this series covered the project itself and the overall template structure. Part Two went through how I deploy the Domain Controller in depth. Part Three covered deploying my ADFS server, and in this final part I will show you how to configure the WAP server that faces the outside world.

The Template

The WAP server is the only one in my environment that faces the internet. Because of this the deployment is more complex. I’ve also added further complexity because I want to be able to have more than one WAP server in future, so there’s a load balancer deployed too. You can see the resource outline in the screenshot below:

wap template json

The internet-facing stuff means we need more things in our template. First up is our PublicIPAddress:

{
  "name": "[variables('vmWAPpublicipName')]",
  "type": "Microsoft.Network/publicIPAddresses",
  "location": "[parameters('resourceLocation')]",
  "apiVersion": "2015-05-01-preview",
  "dependsOn": [ ],
  "tags": {
    "displayName": "vmWAPpublicip"
  },
  "properties": {
    "publicIPAllocationMethod": "Dynamic",
    "dnsSettings": {
      "domainNameLabel": "[variables('vmWAPpublicipDnsName')]"
    }
  }
},

This is pretty straightforward stuff. The nature of my environment means that I am perfectly happy with a dynamic IP that changes if I stop and then start the environment. Access will be via the hostname assigned to that IP, and I use that hostname in my ADFS service configuration and certificates. Azure builds the hostname based on a pattern, and I can use that pattern in my templates, which is how I can create the certs when I deploy the DC and configure the ADFS service, all before I’ve deployed the WAP server.

That public IP address is then bound to our load balancer which provides the internet-endpoint for our services:

{
  "apiVersion": "2015-05-01-preview",
  "name": "[variables('vmWAPlbName')]",
  "type": "Microsoft.Network/loadBalancers",
  "location": "[parameters('resourceLocation')]",
  "dependsOn": [
    "[resourceId('Microsoft.Network/publicIPAddresses',variables('vmWAPpublicipName'))]"
  ],
  "properties": {
    "frontendIPConfigurations": [
      {
        "name": "[variables('LBFE')]",
        "properties": {
          "publicIPAddress": {
            "id": "[resourceId('Microsoft.Network/publicIPAddresses',variables('vmWAPpublicipName'))]"
          }
        }
      }
    ],
    "backendAddressPools": [
      {
        "name": "[variables('LBBE')]"
      }
    ],
    "inboundNatRules": [
      {
        "name": "[variables('RDPNAT')]",
        "properties": {
          "frontendIPConfiguration": {
            "id": "[variables('vmWAPLbfeConfigID')]"
          },
          "protocol": "tcp",
          "frontendPort": "[variables('rdpPort')]",
          "backendPort": 3389,
          "enableFloatingIP": false
        }
      },
      {
        "name": "[variables('httpsNAT')]",
        "properties": {
          "frontendIPConfiguration": {
            "id": "[variables('vmWAPLbfeConfigID')]"
          },
          "protocol": "tcp",
          "frontendPort": "[variables('httpsPort')]",
          "backendPort": 443,
          "enableFloatingIP": false
        }
      }
    ]
  }
}

There’s a lot going on in here, so let’s work through it. First of all we connect our public IP address to the load balancer. We then create a back end configuration which we will later connect our VM to. Finally we create a set of NAT rules. I need to be able to RDP into the WAP server, which is the first block. The variables define the names of my resources. You can see that I specify the ports: the external port through a variable that I can change, and the internal port directly, because it needs to be the same each time – it’s what my VMs listen on. You can see that each NAT rule is associated with the frontendIPConfiguration, opening the port to the outside world.

The next step is to create a NIC that will hook our VM up to the existing virtual network and the load balancer:

{
  "name": "[variables('vmWAPNicName')]",
  "type": "Microsoft.Network/networkInterfaces",
  "location": "[parameters('resourceLocation')]",
  "apiVersion": "2015-05-01-preview",
  "dependsOn": [
    "[concat('Microsoft.Network/publicIPAddresses/', variables('vmWAPpublicipName'))]",
    "[concat('Microsoft.Network/loadBalancers/', variables('vmWAPlbName'))]"
  ],
  "tags": {
    "displayName": "vmWAPNic"
  },
  "properties": {
    "ipConfigurations": [
      {
        "name": "ipconfig1",
        "properties": {
          "privateIPAllocationMethod": "Static",
          "privateIPAddress": "[variables('vmWAPIPAddress')]",
          "subnet": {
            "id": "[variables('vmWAPSubnetRef')]"
          },
          "loadBalancerBackendAddressPools": [
            {
              "id": "[variables('vmWAPBEAddressPoolID')]"
            }
          ],
          "loadBalancerInboundNatRules": [
            {
              "id": "[variables('vmWAPRDPNATRuleID')]"
            },
            {
              "id": "[variables('vmWAPhttpsNATRuleID')]"
            }
          ]
        }
      }
    ]
  }
}

Here you can see that the NIC is connected to a subnet on our virtual network with a static IP that I specify in a variable. It is then added to the load balancer back end address pool and finally I need to specify which of the NAT rules I created in the load balancer are hooked up to my VM. If I don’t include the binding here, traffic won’t be passed to my VM (as I discovered when developing this lot – I forgot to wire up https and as a result couldn’t access the website published by WAP!).

The VM itself is basically the same as my ADFS server. I use the same Windows Server 2012 R2 image, have a single disk, and I’ve nested the extensions within the VM because that seems to work better than declaring them as separate resources:

{
  "name": "[variables('vmWAPName')]",
  "type": "Microsoft.Compute/virtualMachines",
  "location": "[parameters('resourceLocation')]",
  "apiVersion": "2015-05-01-preview",
  "dependsOn": [
    "[concat('Microsoft.Network/networkInterfaces/', variables('vmWAPNicName'))]"
  ],
  "tags": {
    "displayName": "vmWAP"
  },
  "properties": {
    "hardwareProfile": {
      "vmSize": "[variables('vmWAPVmSize')]"
    },
    "osProfile": {
      "computername": "[variables('vmWAPName')]",
      "adminUsername": "[parameters('adminUsername')]",
      "adminPassword": "[parameters('adminPassword')]"
    },
    "storageProfile": {
      "imageReference": {
        "publisher": "[variables('windowsImagePublisher')]",
        "offer": "[variables('windowsImageOffer')]",
        "sku": "[variables('windowsImageSKU')]",
        "version": "latest"
      },
      "osDisk": {
        "name": "[concat(variables('vmWAPName'), '-os-disk')]",
        "vhd": {
          "uri": "[concat('http://', variables('storageAccountName'), '.blob.core.windows.net/', variables('vmStorageAccountContainerName'), '/', variables('vmWAPName'), 'os.vhd')]"
        },
        "caching": "ReadWrite",
        "createOption": "FromImage"
      }
    },
    "networkProfile": {
      "networkInterfaces": [
        {
          "id": "[resourceId('Microsoft.Network/networkInterfaces', variables('vmWAPNicName'))]"
        }
      ]
    }
  },
  "resources": [
    {
      "type": "extensions",
      "name": "IaaSDiagnostics",
      "apiVersion": "2015-06-15",
      "location": "[parameters('resourceLocation')]",
      "dependsOn": [
        "[concat('Microsoft.Compute/virtualMachines/', variables('vmWAPName'))]"
      ],
      "tags": {
        "displayName": "[concat(variables('vmWAPName'),'/vmDiagnostics')]"
      },
      "properties": {
        "publisher": "Microsoft.Azure.Diagnostics",
        "type": "IaaSDiagnostics",
        "typeHandlerVersion": "1.4",
        "autoUpgradeMinorVersion": "true",
        "settings": {
          "xmlCfg": "[base64(variables('wadcfgx'))]",
          "StorageAccount": "[variables('storageAccountName')]"
        },
        "protectedSettings": {
          "storageAccountName": "[variables('storageAccountName')]",
          "storageAccountKey": "[listKeys(variables('storageAccountid'),'2015-05-01-preview').key1]",
          "storageAccountEndPoint": ""
        }
      }
    },
    {
      "type": "Microsoft.Compute/virtualMachines/extensions",
      "name": "[concat(variables('vmWAPName'),'/WAPserver')]",
      "apiVersion": "2015-05-01-preview",
      "location": "[parameters('resourceLocation')]",
      "dependsOn": [
        "[resourceId('Microsoft.Compute/virtualMachines', variables('vmWAPName'))]",
        "[concat('Microsoft.Compute/virtualMachines/', variables('vmWAPName'),'/extensions/IaaSDiagnostics')]"
      ],
      "properties": {
        "publisher": "Microsoft.Powershell",
        "type": "DSC",
        "typeHandlerVersion": "1.7",
        "settings": {
          "modulesURL": "[concat(variables('vmDSCmoduleUrl'), parameters('_artifactsLocationSasToken'))]",
          "configurationFunction": "[variables('vmWAPConfigurationFunction')]",
          "properties": {
            "domainName": "[variables('domainName')]",
            "adminCreds": {
              "userName": "[parameters('adminUsername')]",
              "password": "PrivateSettingsRef:adminPassword"
            }
          }
        },
        "protectedSettings": {
          "items": {
            "adminPassword": "[parameters('adminPassword')]"
          }
        }
      }
    },
    {
      "type": "Microsoft.Compute/virtualMachines/extensions",
      "name": "[concat(variables('vmWAPName'),'/wapScript')]",
      "apiVersion": "2015-05-01-preview",
      "location": "[parameters('resourceLocation')]",
      "dependsOn": [
        "[concat('Microsoft.Compute/virtualMachines/', variables('vmWAPName'))]",
        "[concat('Microsoft.Compute/virtualMachines/', variables('vmWAPName'),'/extensions/WAPserver')]"
      ],
      "properties": {
        "publisher": "Microsoft.Compute",
        "type": "CustomScriptExtension",
        "typeHandlerVersion": "1.4",
        "settings": {
          "fileUris": [
            "[concat(parameters('_artifactsLocation'),'/WapServer.ps1', parameters('_artifactsLocationSasToken'))]",
            "[concat(parameters('_artifactsLocation'),'/', parameters('_artifactsLocationSasToken'))]",
            "[concat(parameters('_artifactsLocation'),'/tuServDeployFunctions.ps1', parameters('_artifactsLocationSasToken'))]"
          ],
          "commandToExecute": "[concat('powershell.exe -file WAPServer.ps1',' -vmAdminUsername ',parameters('adminUsername'),' -vmAdminPassword ',parameters('adminPassword'),' -fsServiceName ',variables('vmWAPpublicipDnsName'),' -adfsServerName ',variables('vmADFSName'),' -vmDCname ',variables('vmDCName'), ' -resourceLocation \"', parameters('resourceLocation'),'\"')]"
        }
      }
    }
  ]
}

The DSC and custom script extensions are in the same vein as with ADFS: I can get the features installed with DSC, and then I need to configure things with my script.

The DSC Modules

As with the other two servers, the files copied into the VM by the DSC extension are common. I then call the appropriate configuration for the WAP server, held within my common configuration file. The WAP server configuration is shown below:

configuration WAPserver
{
    param
    (
        [Parameter(Mandatory)]
        [String]$DomainName,

        [Parameter(Mandatory)]
        [System.Management.Automation.PSCredential]$Admincreds,

        [Int]$RetryCount=20,
        [Int]$RetryIntervalSec=30
    )

    Import-DscResource -ModuleName xComputerManagement,xActiveDirectory

    Node localhost
    {
        WindowsFeature WAPInstall
        {
            Ensure = "Present"
            Name = "Web-Application-Proxy"
        }
        WindowsFeature WAPMgmt
        {
            Ensure = "Present"
            Name = "RSAT-RemoteAccess"
        }
        WindowsFeature ADPS
        {
            Name = "RSAT-AD-PowerShell"
            Ensure = "Present"
        }
        xWaitForADDomain DscForestWait
        {
            DomainName = $DomainName
            DomainUserCredential = $Admincreds
            RetryCount = $RetryCount
            RetryIntervalSec = $RetryIntervalSec
            DependsOn = "[WindowsFeature]ADPS"
        }
        xComputer DomainJoin
        {
            Name = $env:COMPUTERNAME
            DomainName = $DomainName
            Credential = New-Object System.Management.Automation.PSCredential ("${DomainName}\$($Admincreds.UserName)", $Admincreds.Password)
            DependsOn = "[xWaitForADDomain]DscForestWait"
        }
        LocalConfigurationManager
        {
            DebugMode = $true
            RebootNodeIfNeeded = $true
        }
    }
}

As with ADFS, the configuration joins the domain and adds the required features for WAP. Note that I install the RSAT tools for Remote Access. If you don’t do this, you can’t configure WAP because the PowerShell modules aren’t installed!

The Custom Scripts

The WAP script performs much the same work as the ADFS script. I need to install the certificate for my service, so the script copies it onto the server before running an invoke-command block. The main script runs as the local system account and can successfully connect to the DC as the computer account to copy the cert. I then run my invoke-command with domain admin credentials so I can configure WAP; network access gets tricky inside the invoke-command block, so I don’t do it there!

#
# WapServer.ps1
#
param (
    $vmAdminUsername,
    $vmAdminPassword,
    $fsServiceName,
    $adfsServerName,
    $vmDCname,
    $resourceLocation
)

$password = ConvertTo-SecureString $vmAdminPassword -AsPlainText -Force
$credential = New-Object System.Management.Automation.PSCredential("$env:USERDOMAIN\$vmAdminUsername", $password)

Write-Verbose -Verbose "Entering WAP Server Script"
Write-Verbose -Verbose "Script path: $PSScriptRoot"
Write-Verbose -Verbose "vmAdminUsername: $vmAdminUsername"
Write-Verbose -Verbose "vmAdminPassword: $vmAdminPassword"
Write-Verbose -Verbose "fsServiceName: $fsServiceName"
Write-Verbose -Verbose "adfsServerName: $adfsServerName"
Write-Verbose -Verbose "env:UserDomain: $env:USERDOMAIN"
Write-Verbose -Verbose "resourceLocation: $resourceLocation"
Write-Verbose -Verbose "==================================="

# Write an event to the event log to say that the script has executed.
$event = New-Object System.Diagnostics.EventLog("Application")
$event.Source = "tuServEnvironment"
$info_event = [System.Diagnostics.EventLogEntryType]::Information
$event.WriteEntry("WAPserver Script Executed", $info_event, 5001)

$srcPath = "\\" + $vmDCname + "\src"
$fsCertificateSubject = $fsServiceName + "." + ($resourceLocation.Replace(" ","")).ToLower() + ".cloudapp.azure.com"
$fsCertFileName = $fsCertificateSubject + ".pfx"
$certPath = $srcPath + "\" + $fsCertFileName

# Copy cert from DC
Write-Verbose -Verbose "Copying $certPath to $PSScriptRoot"
#        $powershellCommand = "& {copy-item '" + $certPath + "' '" + $workingDir + "'}"
#        Write-Verbose -Verbose $powershellCommand
#        $bytes = [System.Text.Encoding]::Unicode.GetBytes($powershellCommand)
#        $encodedCommand = [Convert]::ToBase64String($bytes)
#        Start-Process -wait "powershell.exe" -ArgumentList "-encodedcommand $encodedCommand"
Copy-Item $certPath -Destination $PSScriptRoot -Verbose

Invoke-Command -Credential $credential -ComputerName $env:COMPUTERNAME -ScriptBlock {
    param (
        $workingDir,
        $vmAdminPassword,
        $domainCredential,
        $adfsServerName,
        $fsServiceName,
        $vmDCname,
        $resourceLocation
    )
    # Working variables

    # Write an event to the event log to say that the script has executed.
    $event = New-Object System.Diagnostics.EventLog("Application")
    $event.Source = "tuServEnvironment"
    $info_event = [System.Diagnostics.EventLogEntryType]::Information
    $event.WriteEntry("In WAPserver scriptblock", $info_event, 5001)

    # Go to our package's scripts folder
    Set-Location $workingDir

    $zipfile = $workingDir + ""
    $destination = $workingDir
    [System.Reflection.Assembly]::LoadWithPartialName("System.IO.Compression.FileSystem") | Out-Null
    [System.IO.Compression.ZipFile]::ExtractToDirectory($zipfile, $destination)

    Import-Module .\tuServDeployFunctions.ps1

    $fsCertificateSubject = $fsServiceName + "." + ($resourceLocation.Replace(" ","")).ToLower() + ".cloudapp.azure.com"
    $fsCertFileName = $workingDir + "\" + $fsCertificateSubject + ".pfx"

    Write-Verbose -Verbose "Importing sslcert $fsCertFileName"
    Import-SSLCertificate -certificateFileName $fsCertFileName -certificatePassword $vmAdminPassword

    $fsIpAddress = (Resolve-DnsName $adfsServerName -Type A).IPAddress
    Add-HostsFileEntry -ip $fsIpAddress -domain $fsCertificateSubject

    Set-WapConfiguration -credential $domainCredential -fedServiceName $fsCertificateSubject -certificateSubject $fsCertificateSubject

} -ArgumentList $PSScriptRoot, $vmAdminPassword, $credential, $adfsServerName, $fsServiceName, $vmDCname, $resourceLocation

The script modifies the HOSTS file on the server so it can find the ADFS service, and then configures the Web Application Proxy for that ADFS service. It’s worth mentioning at this point the $fsCertificateSubject, which is also my service name. When we first worked on this environment using the old Azure PowerShell commands, the name of the public endpoint was always <something>.cloudapp.net. When I use the new Resource Manager model I discovered that it is now <something>.<Azure Location>.cloudapp.azure.com. The <something> is in our control – we specify it. The <Azure Location> isn’t quite, and is the resource location for our deployment (converted to lowercase with no spaces). You’ll find that same line of code in the DC and ADFS scripts; it’s creating the hostname our service will use based on the resource location specified in the template, passed into the script as a parameter.
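That hostname construction is easy to sanity-check outside the environment. Here is the same expression ported to Python purely as an illustration – the service name below is a made-up example, and the `.cloudapp.azure.com` suffix is the DNS suffix Azure applies to Resource Manager public IP addresses:

```python
def public_hostname(service_name: str, resource_location: str) -> str:
    """Mirror of the PowerShell expression:
    $fsServiceName + "." + ($resourceLocation.Replace(" ","")).ToLower() + ".cloudapp.azure.com"
    """
    # Lowercase the region and strip spaces, e.g. "West Europe" -> "westeurope"
    region = resource_location.replace(" ", "").lower()
    return f"{service_name}.{region}.cloudapp.azure.com"

print(public_hostname("tuservwap", "West Europe"))
# tuservwap.westeurope.cloudapp.azure.com
```

Because the pattern is deterministic, the DC and ADFS scripts can compute the WAP hostname and mint matching certificates before the WAP server or its public IP even exist.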

The functions called by that script are shown below:

function Import-SSLCertificate {
    [CmdletBinding()]
    param
    (
        $certificateFileName,
        $certificatePassword
    )

    Write-Verbose -Verbose "Importing cert $certificateFileName with password $certificatePassword"
    Write-Verbose -Verbose "---"

    Import-Module .\PSPKI\pspki.psm1

    Write-Verbose -Verbose "Attempting to import certificate $certificateFileName"
    # import it
    $password = ConvertTo-SecureString $certificatePassword -AsPlainText -Force
    Import-PfxCertificate -FilePath ($certificateFileName) -CertStoreLocation cert:\localMachine\my -Password $password
}

function Add-HostsFileEntry {
    [CmdletBinding()]
    param
    (
        $ip,
        $domain
    )

    $hostsFile = "$env:windir\System32\drivers\etc\hosts"
    $newHostEntry = "`t$ip`t$domain"

    if ((Get-Content $hostsFile) -contains $newHostEntry)
    {
        Write-Verbose -Verbose "The hosts file already contains the entry: $newHostEntry.  File not updated."
    }
    else
    {
        Add-Content -Path $hostsFile -Value $newHostEntry
    }
}

function Set-WapConfiguration {
    [CmdletBinding()]
    param
    (
        $credential,
        $fedServiceName,
        $certificateSubject
    )

    Write-Verbose -Verbose "Configuring WAP Role"
    Write-Verbose -Verbose "---"

    #$certificate = (dir Cert:\LocalMachine\My | where {$_.Subject -match $certificateSubject}).Thumbprint
    $certificateThumbprint = (Get-ChildItem Cert:\LocalMachine\My | Where-Object {$_.Subject -match $certificateSubject} | Sort-Object -Descending NotBefore)[0].Thumbprint

    # install WAP
    Install-WebApplicationProxy -CertificateThumbprint $certificateThumbprint -FederationServiceName $fedServiceName -FederationServiceTrustCredential $credential
}
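The Add-HostsFileEntry function is a standard idempotent append: read the file, do nothing if the entry is already present, otherwise add it. The same pattern, sketched in Python against an arbitrary file path just to show the logic in isolation:

```python
from pathlib import Path

def add_hosts_entry(hosts_path: str, ip: str, domain: str) -> bool:
    """Append a hosts-file style entry unless it already exists.
    Returns True if the file was modified, False if the entry was present."""
    entry = f"\t{ip}\t{domain}"
    path = Path(hosts_path)
    # Read existing lines (an absent file is treated as empty)
    lines = path.read_text().splitlines() if path.exists() else []
    if entry in lines:
        return False  # already there, leave the file alone
    with path.open("a") as f:
        f.write(entry + "\n")
    return True
```

Idempotence matters here because the custom script extension can be re-run; without the existence check, every run would stack another duplicate line into the hosts file.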

What’s Left?

This sequence of posts has talked about Resource Templates and how I structure mine based on my experience of developing and repeatedly deploying a pretty complex environment. It’s also given you specific config advice for doing the same as me: Create a Domain Controller and Certificate Authority, create an ADFS server and publish that server via a Web Application Proxy. If you only copy the stuff so far you’ll have an isolated environment that you can access via the WAP server for remote management.

I’m still working on this, however. I have a SQL server to configure. It turns out that DSC modules for SQL are pretty rich and I’ll blog on those at some point. I am also adding a BizTalk server. I suspect that will involve more on the custom script side. I then need to deploy my application itself, which I haven’t even begun yet (although the guys have created a rich set of automation PowerShell scripts to deal with the deployment).

Overall, I hope you take away from this series of posts just how powerful Azure Resource Templates can be when pushing out IaaS solutions. I haven’t even touched on the PaaS components of Azure, but they can be dealt with in the same way. The need to learn this stuff is common across IT, Dev and DevOps, and it’s really interesting and fun to work on (if frustrating at times). I strongly encourage you to go play!


As with the previous posts, stuff I’ve talked about has been derived in part from existing resources:

  • The Azure Quick Start templates were invaluable in having something to look at in the early days of resource templates before the tooling was here.
  • PSPKI is a fantastic set of PowerShell modules for dealing with certs.
  • The individual VM scripts are derived from work that was done in Black Marble by Andrew Davidson and myself to build the exact same environment in the older Azure manner without resource templates.

Complex Azure Template Odyssey Part Two: Domain Controller

In part one of this series of posts I talked about the project driving my creation of these Azure Resource Templates, the structure of the template and what resources I was deploying. This post will go through the deployment and configuration of the first VM, which will become my domain controller and certificate server. In order to achieve my goals I need to deploy the VM, the DSC extension and finally the custom script extension to perform actions that current DSC modules can’t. I’ll show you the template code, the DSC code and the final scripts, and talk about the gotchas I encountered on the way.

Further posts will detail the ADFS and WAP server deployments.

The Template

I’ve already talked about how I’ve structured this project: A core template calls a collection of nested templates – one per VM. The DC template differs from the rest in that it too calls a nested deployment to make changes to my virtual network. Other than that, it follows the same convention.

dc template json

The screenshot above is the JSON outline view of the template. Each of my nested VM templates follows the same pattern: The parameters block in each template is exactly the same. I’m using a standard convention for naming all my resources, so providing I pass the envPrefix parameter between each one I can calculate the name of any resource in the project. That’s important, as we’ll see in a moment. The variables block contains all the variables that the current template needs – things like the IP address that should be assigned or the image we use as our base for the VM. Finally, the resources section holds the items we are deploying to create the domain controller. This VM is isolated from the outside world so we need the VM itself and a NIC to connect it to our virtual network, nothing more. The network is created by the core template before it calls the DC template.

The nested deployment needs explaining. Once we’ve created our domain controller we need to make sure that all our other VMs receive the correct IP address for their DNS. In order to do that we have to reconfigure the virtual network that we have already deployed. The nested deployment here is an artefact of the original approach with a single template – it could actually be fully contained in the DC template.

To explain: We can only define a resource with a given type and name in a template once. Templates are declarative and describe how we want a resource to be configured. With our virtual network we want to reconfigure it after we have deployed subsequent resources. If we describe the network for a second time, the new configuration is applied to our existing resource. The problem is that we have already got a resource in our template for our network. We get around the problem by calling a nested deployment. That deployment is a copy of the network configuration, with the differences we need for our reconfiguration. In my original template which contained all the resources, that nested deployment depended on the DC being deployed and was then called. It had to be a nested deployment because the network was already in there once.

With my new model I could actually just include the contents of the network reconfiguration deployment directly in the DC template. I am still calling the nested resource simply because of the way I split my original template. The end result is the same. The VM gets created, then the DSC and script extensions run to turn it into a domain controller. The network template is then called to set the DNS IP configuration of the network to be the IP address of the newly-minted DC.

{
  "name": "tuServUpdateVnet",
  "type": "Microsoft.Resources/deployments",
  "apiVersion": "2015-01-01",
  "dependsOn": [
    "[concat('Microsoft.Compute/virtualMachines/', variables('vmDCName'))]",
    "[concat('Microsoft.Compute/virtualMachines/', variables('vmDCName'),'/extensions/dcScript')]"
  ],
  "properties": {
    "mode": "Incremental",
    "templateLink": {
      "uri": "[concat(variables('updateVNetDNSTemplateURL'), parameters('_artifactsLocationSasToken'))]",
      "contentVersion": "1.0.0.0"
    },
    "parameters": {
      "resourceLocation": { "value": "[parameters('resourceLocation')]" },
      "virtualNetworkName": { "value": "[variables('virtualNetworkName')]" },
      "virtualNetworkPrefix": { "value": "[variables('virtualNetworkPrefix')]" },
      "virtualNetworkSubnet1Name": { "value": "[variables('virtualNetworkSubnet1Name')]" },
      "virtualNetworkSubnet1Prefix": { "value": "[variables('virtualNetworkSubnet1Prefix')]" },
      "virtualNetworkDNS": { "value": [ "[variables('vmDCIPAddress')]" ] }
    }
  }
}

The code above is contained in my DC template. It calls the nested deployment through a URI to the template. That points to an azure storage container with all the resources for my deployment held in it. The template is called with a set of parameters that are mostly variables created in the DC template in accordance with the rules and patterns I’ve set. Everything is the same as the original network deployment with the exception of the DNS address which is to be set to the DC address. Below is the network template. Note that the parameter block defines parameters that match those being passed in. All names are case sensitive.

{
  "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "resourceLocation": {
      "type": "string",
      "defaultValue": "West US",
      "allowedValues": [
        "East US",
        "West US",
        "West Europe",
        "North Europe",
        "East Asia",
        "South East Asia"
      ],
      "metadata": {
        "description": "The region to deploy the storage resources into"
      }
    },
    "virtualNetworkName": {
      "type": "string"
    },
    "virtualNetworkDNS": {
      "type": "array"
    },
    "virtualNetworkPrefix": {
      "type": "string"
    },
    "virtualNetworkSubnet1Name": {
      "type": "string"
    },
    "virtualNetworkSubnet1Prefix": {
      "type": "string"
    }
  },
  "variables": {
  },
  "resources": [
    {
      "name": "[parameters('virtualNetworkName')]",
      "type": "Microsoft.Network/virtualNetworks",
      "location": "[parameters('resourceLocation')]",
      "apiVersion": "2015-05-01-preview",
      "tags": {
        "displayName": "virtualNetworkUpdate"
      },
      "properties": {
        "addressSpace": {
          "addressPrefixes": [
            "[parameters('virtualNetworkPrefix')]"
          ]
        },
        "dhcpOptions": {
          "dnsServers": "[parameters('virtualNetworkDNS')]"
        },
        "subnets": [
          {
            "name": "[parameters('virtualNetworkSubnet1Name')]",
            "properties": {
              "addressPrefix": "[parameters('virtualNetworkSubnet1Prefix')]"
            }
          }
        ]
      }
    }
  ],
  "outputs": {
  }
}

The VM itself is pretty straightforward. The code below deploys a virtual NIC and then the VM. The NIC needs to be created first and is then bound to the VM when the latter is deployed. The snippet has the nested resources for the VM extensions removed. I’ll show you those in a bit.

{
  "apiVersion": "2015-05-01-preview",
  "dependsOn": [ ],
  "location": "[parameters('resourceLocation')]",
  "name": "[variables('vmDCNicName')]",
  "properties": {
    "ipConfigurations": [
      {
        "name": "ipconfig1",
        "properties": {
          "privateIPAllocationMethod": "Static",
          "privateIPAddress": "[variables('vmDCIPAddress')]",
          "subnet": {
            "id": "[variables('vmDCSubnetRef')]"
          }
        }
      }
    ]
  },
  "tags": {
    "displayName": "vmDCNic"
  },
  "type": "Microsoft.Network/networkInterfaces"
},
{
  "name": "[variables('vmDCName')]",
  "type": "Microsoft.Compute/virtualMachines",
  "location": "[parameters('resourceLocation')]",
  "apiVersion": "2015-05-01-preview",
  "dependsOn": [
    "[concat('Microsoft.Network/networkInterfaces/', variables('vmDCNicName'))]"
  ],
  "tags": {
    "displayName": "vmDC"
  },
  "properties": {
    "hardwareProfile": {
      "vmSize": "[variables('vmDCVmSize')]"
    },
    "osProfile": {
      "computername": "[variables('vmDCName')]",
      "adminUsername": "[parameters('adminUsername')]",
      "adminPassword": "[parameters('adminPassword')]"
    },
    "storageProfile": {
      "imageReference": {
        "publisher": "[variables('windowsImagePublisher')]",
        "offer": "[variables('windowsImageOffer')]",
        "sku": "[variables('windowsImageSKU')]",
        "version": "latest"
      },
      "osDisk": {
        "name": "[concat(variables('vmDCName'), '-os-disk')]",
        "vhd": {
          "uri": "[concat('http://', variables('storageAccountName'), '.blob.core.windows.net/', variables('vmStorageAccountContainerName'), '/', variables('vmDCName'), 'os.vhd')]"
        },
        "caching": "ReadWrite",
        "createOption": "FromImage"
      },
      "dataDisks": [
        {
          "vhd": {
            "uri": "[concat('http://', variables('storageAccountName'), '.blob.core.windows.net/', variables('vmStorageAccountContainerName'),'/', variables('vmDCName'),'data-1.vhd')]"
          },
          "name": "[concat(variables('vmDCName'),'datadisk1')]",
          "createOption": "empty",
          "caching": "None",
          "diskSizeGB": "[variables('windowsDiskSize')]",
          "lun": 0
        }
      ]
    },
    "networkProfile": {
      "networkInterfaces": [
        {
          "id": "[resourceId('Microsoft.Network/networkInterfaces', variables('vmDCNicName'))]"
        }
      ]
    }
  },
  "resources": [
  ]
}

The NIC is pretty simple. I tell it the name of the subnet on my network I want it to connect to and I tell it that I want to use a static private IP address, and what that address is. The VM resource then references the NIC in the networkProfile section.

The VM itself is built using the Windows Server 2012 R2 Datacentre image provided by Microsoft. That is specified in the imageReference section. There are lots of VM images and each is referenced by publisher (in this case MicrosoftWindowsServer), offer (WindowsServer) and SKU (2012-R2-Datacenter). I’m specifying ‘latest’ as the version, but you can be specific if you have built your deployment around a particular version of an image. Images are updated regularly to include patches. There is a wide range of images available to save you time. My full deployment makes use of a SQL Server image and I’m also playing with a BizTalk image right now. It’s much easier than trying to sort out the install of products yourself, and the licence cost of the software gets rolled into the VM charge.
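If you want to explore that catalogue of images yourself, you can walk it from PowerShell. This is a rough sketch using the current Az module (newer cmdlet names than the tooling of this article’s era; the region name is just an example):

```powershell
# Requires the Az.Compute module and an authenticated session (Connect-AzAccount).
# The catalogue is hierarchical: publisher -> offer -> SKU -> versions.
$location = 'North Europe'   # illustrative region

# List the SKUs for the publisher/offer used in the template above.
Get-AzVMImageSku -Location $location `
    -PublisherName 'MicrosoftWindowsServer' -Offer 'WindowsServer' |
    Select-Object Skus

# List the concrete versions that 'latest' resolves from for my SKU.
Get-AzVMImage -Location $location -PublisherName 'MicrosoftWindowsServer' `
    -Offer 'WindowsServer' -Skus '2012-R2-Datacenter' |
    Select-Object Version
```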

We need to add a second disk to our VM to hold the domain databases. The primary disk on a VM has read and write caching enabled. Write caching exposes us to risk of corrupting our domain database in the event of a failure, so I’m adding a second disk and setting the caching on that to none. It’s all standard stuff at this point.

I’m not going to describe the IaaSDiagnostics extension. The markup for that is completely default as provided by the tooling when you add the resource. Let’s move on to the DSC extension.

{
  "type": "Microsoft.Compute/virtualMachines/extensions",
  "name": "[concat(variables('vmDCName'),'/InstallDomainController')]",
  "apiVersion": "2015-05-01-preview",
  "location": "[parameters('resourceLocation')]",
  "dependsOn": [
    "[resourceId('Microsoft.Compute/virtualMachines', variables('vmDCName'))]",
    "[concat('Microsoft.Compute/virtualMachines/', variables('vmDCName'),'/extensions/IaaSDiagnostics')]"
  ],
  "properties": {
    "publisher": "Microsoft.Powershell",
    "type": "DSC",
    "typeHandlerVersion": "1.7",
    "settings": {
      "modulesURL": "[concat(variables('vmDSCmoduleUrl'), parameters('_artifactsLocationSasToken'))]",
      "configurationFunction": "[variables('vmDCConfigurationFunction')]",
      "properties": {
        "domainName": "[variables('domainName')]",
        "adminCreds": {
          "userName": "[parameters('adminUsername')]",
          "password": "PrivateSettingsRef:adminPassword"
        }
      }
    },
    "protectedSettings": {
      "items": {
        "adminPassword": "[parameters('adminPassword')]"
      }
    }
  }
}

I should mention at this point that I am nesting the extensions within the VM resources section. You don’t need to do this – they can be resources at the same level as the VM. However, my experience from deploying this lot a gazillion times is that if I nest the extensions I get a more robust deployment. Pulling them out of the VM appears to increase the chance of the extension failing to deploy.
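Stripped right back, the nesting looks like the skeleton below (properties elided, names as used elsewhere in my template). Note that even a nested extension still needs an explicit dependsOn — ARM does not infer an ordering from the nesting alone:

```json
{
  "name": "[variables('vmDCName')]",
  "type": "Microsoft.Compute/virtualMachines",
  "properties": { },
  "resources": [
    {
      "type": "extensions",
      "name": "InstallDomainController",
      "apiVersion": "2015-05-01-preview",
      "dependsOn": [
        "[resourceId('Microsoft.Compute/virtualMachines', variables('vmDCName'))]"
      ],
      "properties": { }
    }
  ]
}
```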

The DSC extension will do different things depending on the version of Windows you are using. For my 2012 R2 VM it will install the software required to use Desired State Configuration and will then reboot the VM before applying any config. On the current Server 2016 preview images that installation and reboot isn’t needed, as the pre-requisites are already installed.

The DSC extension needs to copy your DSC modules and configuration onto the VM. That’s specified in the modulesURL setting and it expects a zip archive with your stuff in it. I’ll show you that when we look at the DSC config in detail later. The configurationFunction setting specifies the PowerShell file that contains the function and the name of the configuration in that file to use. I have all the DSC configs in one file so I pass in DSCvmConfigs.ps1\DomainController (note the escaped slash).

Finally, we specify the parameters that we want to pass into our PowerShell DSC function. We’re specifying the name of our Domain and the credentials for our admin account.

Once the DSC module has completed I need to do final configuration with standard PowerShell scripts. The customScript Extension is our friend here. Documentation on this is somewhat sparse and I’ve already blogged on the subject to help you. The template code is below:

{
  "type": "Microsoft.Compute/virtualMachines/extensions",
  "name": "[concat(variables('vmDCName'),'/dcScript')]",
  "apiVersion": "2015-05-01-preview",
  "location": "[parameters('resourceLocation')]",
  "dependsOn": [
    "[concat('Microsoft.Compute/virtualMachines/', variables('vmDCName'))]",
    "[concat('Microsoft.Compute/virtualMachines/', variables('vmDCName'),'/extensions/InstallDomainController')]"
  ],
  "properties": {
    "publisher": "Microsoft.Compute",
    "type": "CustomScriptExtension",
    "typeHandlerVersion": "1.4",
    "settings": {
      "fileUris": [
        "[concat(parameters('_artifactsLocation'),'/DomainController.ps1', parameters('_artifactsLocationSasToken'))]",
        "[concat(parameters('_artifactsLocation'),'/', parameters('_artifactsLocationSasToken'))]",
        "[concat(parameters('_artifactsLocation'),'/tuServDeployFunctions.ps1', parameters('_artifactsLocationSasToken'))]"
      ],
      "commandToExecute": "[concat('powershell.exe -file DomainController.ps1',' -vmAdminUsername ',parameters('adminUsername'),' -vmAdminPassword ',parameters('adminPassword'),' -fsServiceName ',variables('vmWAPpublicipDnsName'),' -tsServiceName ',variables('vmTWAPpublicipDnsName'), ' -resourceLocation \"', parameters('resourceLocation'),'\"')]"
    }
  }
}

The extension downloads the files I need: in this case a zip containing the PSPKI PowerShell modules that I use to perform a bunch of certificate functions, a module of my own functions, and finally the DomainController.ps1 script that is executed by the extension. You can’t specify parameters for your script in the extension (and in fact you can’t call the script directly – you have to execute the powershell.exe command yourself), so you can see that I build the commandToExecute using a bunch of variables and string concatenation.
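To make that concatenation concrete: with hypothetical values substituted in (env1admin, env1sts and so on are invented for illustration, not from my deployment), the concat expression above evaluates to a single command line along these lines:

```powershell
powershell.exe -file DomainController.ps1 -vmAdminUsername env1admin -vmAdminPassword P@ssw0rd1 -fsServiceName env1sts -tsServiceName env1ts -resourceLocation "North Europe"
```

Note the quoting around the resource location – it can contain a space, so it has to be wrapped in escaped quotes inside the template expression.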

The DSC Modules

I need to get the DSC modules I use onto the VM. To save me going mad, that means I include the module source in the Visual Studio solution. Over time I’ve evolved a folder structure within the solution to separate templates, DSC files and script files. You can see this structure in the screenshot below.

[Screenshot: DSC folder structure within the Visual Studio solution]

I keep all the DSC together like this because I can then simply zip all the files in the DSC folder structure to give me the archive that is deployed by the DSC extension. In the picture you will see that there are a number of .ps1 files in the root. Originally I created separate files for the DSC configuration of each of my VMs. I then collapsed those into the single DSCvmConfigs.ps1 file and simply haven’t removed the others from the project.
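Producing that archive is a one-liner. A sketch using Compress-Archive (available from PowerShell 5; the paths and archive name here are illustrative – the name just has to match what the modulesURL setting in the template points at):

```powershell
# Zip everything under the solution's DSC folder into the archive that the
# DSC extension downloads and unpacks on the VM.
Compress-Archive -Path .\DSC\* -DestinationPath .\DSC.zip -Force
```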

My DomainController configuration function began life as the example code from the three server SharePoint template on Github and I have since extended and modified it. The code is shown below:

configuration DomainController
{
    param
    (
        [Parameter(Mandatory)]
        [String]$DomainName,

        [Parameter(Mandatory)]
        [System.Management.Automation.PSCredential]$Admincreds,

        [String]$DomainNetbiosName=(Get-NetBIOSName -DomainName $DomainName),
        [Int]$RetryCount=20,
        [Int]$RetryIntervalSec=30
    )

    Import-DscResource -ModuleName xComputerManagement, cDisk, xDisk, xNetworking, xActiveDirectory, xSmbShare, xAdcsDeployment

    [System.Management.Automation.PSCredential]$DomainCreds = New-Object System.Management.Automation.PSCredential ("${DomainName}\$($Admincreds.UserName)", $Admincreds.Password)
    $Interface = Get-NetAdapter | Where Name -Like "Ethernet*" | Select-Object -First 1
    $InterfaceAlias = $($Interface.Name)

    Node localhost
    {
        WindowsFeature DNS
        {
            Ensure = "Present"
            Name = "DNS"
        }
        xDnsServerAddress DnsServerAddress
        {
            Address        = ''
            InterfaceAlias = $InterfaceAlias
            AddressFamily  = 'IPv4'
        }
        xWaitforDisk Disk2
        {
            DiskNumber = 2
            RetryIntervalSec = $RetryIntervalSec
            RetryCount = $RetryCount
        }
        cDiskNoRestart ADDataDisk
        {
            DiskNumber = 2
            DriveLetter = "F"
        }
        WindowsFeature ADDSInstall
        {
            Ensure = "Present"
            Name = "AD-Domain-Services"
        }
        xADDomain FirstDS
        {
            DomainName = $DomainName
            DomainAdministratorCredential = $DomainCreds
            SafemodeAdministratorPassword = $DomainCreds
            DatabasePath = "F:\NTDS"
            LogPath = "F:\NTDS"
            SysvolPath = "F:\SYSVOL"
        }
        WindowsFeature ADCS-Cert-Authority
        {
            Ensure = 'Present'
            Name = 'ADCS-Cert-Authority'
            DependsOn = '[xADDomain]FirstDS'
        }
        WindowsFeature RSAT-ADCS-Mgmt
        {
            Ensure = 'Present'
            Name = 'RSAT-ADCS-Mgmt'
            DependsOn = '[xADDomain]FirstDS'
        }
        File SrcFolder
        {
            DestinationPath = "C:\src"
            Type = "Directory"
            Ensure = "Present"
            DependsOn = "[xADDomain]FirstDS"
        }
        xSmbShare SrcShare
        {
            Ensure = "Present"
            Name = "src"
            Path = "C:\src"
            FullAccess = @("Domain Admins","Domain Computers")
            ReadAccess = "Authenticated Users"
            DependsOn = "[File]SrcFolder"
        }
        xADCSCertificationAuthority ADCS
        {
            Ensure = 'Present'
            Credential = $DomainCreds
            CAType = 'EnterpriseRootCA'
            DependsOn = '[WindowsFeature]ADCS-Cert-Authority'
        }
        WindowsFeature ADCS-Web-Enrollment
        {
            Ensure = 'Present'
            Name = 'ADCS-Web-Enrollment'
            DependsOn = '[WindowsFeature]ADCS-Cert-Authority'
        }
        xADCSWebEnrollment CertSrv
        {
            Ensure = 'Present'
            Name = 'CertSrv'
            Credential = $DomainCreds
            DependsOn = '[WindowsFeature]ADCS-Web-Enrollment','[xADCSCertificationAuthority]ADCS'
        }
        LocalConfigurationManager
        {
            DebugMode = $true
            RebootNodeIfNeeded = $true
        }
    }
}

The .ps1 file contains all the DSC configurations for my environment. The DomainController configuration starts with a list of parameters. These match the ones being passed in by the DSC extension, or have default or calculated values. The Import-DscResource command specifies the DSC modules that the configuration needs. I have to ensure that any I am using are included in the zip file downloaded by the extension. I am using modules that configure disks, network shares, Active Directory domains and certificate services.

The node section then declares my configuration. You can set configurations for multiple hosts in a single DSC configuration block, but I’m only concerned with the host I’m on – localhost. Within the block I then declare what I want the configuration of the host to be. It’s the job of the DSC modules to apply whatever actions are necessary to set the configuration to that which I specify. Just like in our resource template, DSC settings can depend on one another if something needs to be done before something else.

This DSC configuration installs the windows features needed for creating a domain controller. It looks for the additional drive on the VM and assigns it the drive letter F. It creates the new Active Directory domain and places the domain database files on drive F. Once the domain is up and running I create a folder on drive C called src and share that folder. I’m doing that because I create two certificates later and I need to make them available to other machines in the domain. More on that in a bit. Finally, we install the certificate services features and configure a certificate authority. The LocalConfigurationManager settings turn on as much debug output as I can and tell the system that if any of the actions in my config demand a reboot that’s OK – restart as and when required rather than waiting until the end.

I’d love to do all my configuration with DSC but sadly there just aren’t the modules yet. There are some things I just can’t do, like creating a new certificate template in my CA and then generating some specific templates for my ADFS services that are on other VMs. I also can’t set file rights on a folder, although I can set rights on a share. Notice that I grant access to my share to Domain Computers. Both the DSC modules and the custom script extension command are run as the local system account. When I try to read files over the network that means I am connecting to the share as the Computer account and I need to grant access. When I create the DC there are no other VMs in the domain, so I use the Domain Computers group to make sure all my servers will be able to access the files.

Once the DC module completes I have a working domain with a certificate authority.

The Custom Scripts

As with my DSC modules, I keep all the custom scripts for my VMs in one folder within the solution. All of these need to be uploaded to Azure storage so I can access them with the extension and copy them to my VMs. The screenshot below shows the files in the solution. I have a script for each VM that needs one, which is executed by the extension. I then have a file of shared functions and a zip with supporting modules that I need.

[Screenshot: custom scripts folder within the Visual Studio solution]

#
# DomainController.ps1
#
param (
    $vmAdminUsername,
    $vmAdminPassword,
    $fsServiceName,
    $tsServiceName,
    $resourceLocation
)

$password = ConvertTo-SecureString $vmAdminPassword -AsPlainText -Force
$credential = New-Object System.Management.Automation.PSCredential("$env:USERDOMAIN\$vmAdminUsername", $password)
Write-Verbose -Verbose "Entering Domain Controller Script"
Write-Verbose -Verbose "Script path: $PSScriptRoot"
Write-Verbose -Verbose "vmAdminUsername: $vmAdminUsername"
Write-Verbose -Verbose "vmAdminPassword: $vmAdminPassword"
Write-Verbose -Verbose "fsServiceName: $fsServiceName"
Write-Verbose -Verbose "tsServiceName: $tsServiceName"
Write-Verbose -Verbose "env:UserDomain: $env:USERDOMAIN"
Write-Verbose -Verbose "resourceLocation: $resourceLocation"
Write-Verbose -Verbose "==================================="

# Write an event to the event log to say that the script has executed.
$event = New-Object System.Diagnostics.EventLog("Application")
$event.Source = "tuServEnvironment"
$info_event = [System.Diagnostics.EventLogEntryType]::Information
$event.WriteEntry("DomainController Script Executed", $info_event, 5001)

Invoke-Command -Credential $credential -ComputerName $env:COMPUTERNAME -ScriptBlock {
    param (
        $workingDir,
        $vmAdminPassword,
        $fsServiceName,
        $tsServiceName,
        $resourceLocation
    )
    # Working variables
    $serviceAccountOU = "Service Accounts"
    Write-Verbose -Verbose "Entering Domain Controller Script"
    Write-Verbose -Verbose "workingDir: $workingDir"
    Write-Verbose -Verbose "vmAdminPassword: $vmAdminPassword"
    Write-Verbose -Verbose "fsServiceName: $fsServiceName"
    Write-Verbose -Verbose "tsServiceName: $tsServiceName"
    Write-Verbose -Verbose "env:UserDomain: $env:USERDOMAIN"
    Write-Verbose -Verbose "env:UserDNSDomain: $env:USERDNSDOMAIN"
    Write-Verbose -Verbose "env:ComputerName: $env:COMPUTERNAME"
    Write-Verbose -Verbose "resourceLocation: $resourceLocation"
    Write-Verbose -Verbose "==================================="

    # Write an event to the event log to say that the script has executed.
    $event = New-Object System.Diagnostics.EventLog("Application")
    $event.Source = "tuServEnvironment"
    $info_event = [System.Diagnostics.EventLogEntryType]::Information
    $event.WriteEntry("In DomainController scriptblock", $info_event, 5001)

    #go to our packages scripts folder
    Set-Location $workingDir

    $zipfile = $workingDir + ""
    $destination = $workingDir
    [System.Reflection.Assembly]::LoadWithPartialName("System.IO.Compression.FileSystem") | Out-Null
    [System.IO.Compression.ZipFile]::ExtractToDirectory($zipfile, $destination)

    Import-Module .\tuServDeployFunctions.ps1

    #Enable CredSSP in server role for delegated credentials
    Enable-WSManCredSSP -Role Server -Force

    #Create OU for service accounts, computer group; create service accounts
    Add-ADServiceAccounts -domain $env:USERDNSDOMAIN -serviceAccountOU $serviceAccountOU -password $vmAdminPassword
    Add-ADComputerGroup -domain $env:USERDNSDOMAIN -serviceAccountOU $serviceAccountOU
    Add-ADComputerGroupMember -group "tuServ Computers" -member ($env:COMPUTERNAME + '$')

    #Create new web server cert template
    $certificateTemplate = ($env:USERDOMAIN + "_WebServer")
    Generate-NewCertificateTemplate -certificateTemplateName $certificateTemplate -certificateSourceTemplateName "WebServer"
    Set-tsCertificateTemplateAcl -certificateTemplate $certificateTemplate -computers "tuServComputers"

    # Generate SSL Certificates
    $fsCertificateSubject = $fsServiceName + "." + ($resourceLocation.Replace(" ","")).ToLower() + ""
    Generate-SSLCertificate -certificateSubject $fsCertificateSubject -certificateTemplate $certificateTemplate
    $tsCertificateSubject = $tsServiceName + ""
    Generate-SSLCertificate -certificateSubject $tsCertificateSubject -certificateTemplate $certificateTemplate

    # Export Certificates
    $fsCertExportFileName = $fsCertificateSubject + ".pfx"
    $fsCertExportFile = $workingDir + "\" + $fsCertExportFileName
    Export-SSLCertificate -certificateSubject $fsCertificateSubject -certificateExportFile $fsCertExportFile -certificatePassword $vmAdminPassword
    $tsCertExportFileName = $tsCertificateSubject + ".pfx"
    $tsCertExportFile = $workingDir + "\" + $tsCertExportFileName
    Export-SSLCertificate -certificateSubject $tsCertificateSubject -certificateExportFile $tsCertExportFile -certificatePassword $vmAdminPassword

    #Set permissions on the src folder
    $acl = Get-Acl c:\src
    $acl.SetAccessRuleProtection($True, $True)
    $rule = New-Object System.Security.AccessControl.FileSystemAccessRule("Domain Computers","FullControl", "ContainerInherit, ObjectInherit", "None", "Allow")
    $acl.AddAccessRule($rule)
    $rule = New-Object System.Security.AccessControl.FileSystemAccessRule("Authenticated Users","FullControl", "ContainerInherit, ObjectInherit", "None", "Allow")
    $acl.AddAccessRule($rule)
    Set-Acl c:\src $acl

    #Copy certs to the src folder created by DSC so other VMs can read them
    Copy-Item -Path "$workingDir\*.pfx" c:\src
} -ArgumentList $PSScriptRoot, $vmAdminPassword, $fsServiceName, $tsServiceName, $resourceLocation

The domain controller script is shown above. There are a whole bunch of write-verbose commands that output debug information which I can see through the Azure Resource Explorer as the script runs.

Pretty much the first thing I do here is an invoke-command. The script is running as local system and there’s not much I can actually do as that account. My invoke-command block runs as the domain administrator so I can get stuff done. Worth noting is that the invoke-command approach makes accessing network resources tricky (the classic PowerShell ‘double hop’ credential problem). It’s not an issue here but it bit me with the ADFS and WAP servers.

I unzip the PSPKI archive that has been copied onto the server and load the modules therein. The files are downloaded to a folder whose path includes the version number of the script extension, so I can’t hard-code the location. Fortunately I can use the $PSScriptRoot variable to work out that location, and I pass it into the invoke-command as $workingDir. The PSPKI modules allow me to create a new certificate template on my CA so I can generate new certs with exportable private keys. I need the same certs on more than one of my servers, so I need to be able to copy them around. I generate the certs and drop them into the src folder I created with DSC. I also set the rights on that src folder to grant Domain Computers and Authenticated Users access. The latter is probably overdoing it, since the former should do what I need, but I spent a good deal of time being stymied by this so I’m taking a belt and braces approach.

The key functions called by the script above are shown below. Held in my modules file, these are all focused on certificate functions and pretty much all depend on the PSPKI modules.

function Generate-NewCertificateTemplate
{
    [CmdletBinding()]
    # note: can only be run on the server with PSPKI, e.g. the Active Directory domain controller
    param
    (
        $certificateTemplateName,
        $certificateSourceTemplateName
    )

    Write-Verbose -Verbose "Generating New Certificate Template"

    Import-Module .\PSPKI\pspki.psm1

    $certificateCnName = "CN=" + $certificateTemplateName

    $ConfigContext = ([ADSI]"LDAP://RootDSE").ConfigurationNamingContext
    $ADSI = [ADSI]"LDAP://CN=Certificate Templates,CN=Public Key Services,CN=Services,$ConfigContext"

    $NewTempl = $ADSI.Create("pKICertificateTemplate", $certificateCnName)
    $NewTempl.put("distinguishedName","$certificateCnName,CN=Certificate Templates,CN=Public Key Services,CN=Services,$ConfigContext")

    $NewTempl.put("flags","66113")
    $NewTempl.put("displayName",$certificateTemplateName)
    $NewTempl.put("revision","4")
    $NewTempl.put("pKIDefaultKeySpec","1")
    $NewTempl.SetInfo()

    $NewTempl.put("pKIMaxIssuingDepth","0")
    $NewTempl.put("pKICriticalExtensions","")
    $NewTempl.put("pKIExtendedKeyUsage","")
    $NewTempl.put("pKIDefaultCSPs","2,Microsoft DH SChannel Cryptographic Provider, 1,Microsoft RSA SChannel Cryptographic Provider")
    $NewTempl.put("msPKI-RA-Signature","0")
    $NewTempl.put("msPKI-Enrollment-Flag","0")
    $NewTempl.put("msPKI-Private-Key-Flag","16842768")
    $NewTempl.put("msPKI-Certificate-Name-Flag","1")
    $NewTempl.put("msPKI-Minimal-Key-Size","2048")
    $NewTempl.put("msPKI-Template-Schema-Version","2")
    $NewTempl.put("msPKI-Template-Minor-Revision","2")
    $NewTempl.put("msPKI-Cert-Template-OID","")
    $NewTempl.put("msPKI-Certificate-Application-Policy","")
    $NewTempl.SetInfo()

    $WATempl = $ADSI.psbase.children | where {$_.Name -eq $certificateSourceTemplateName}
    $NewTempl.pKIKeyUsage = $WATempl.pKIKeyUsage
    $NewTempl.pKIExpirationPeriod = $WATempl.pKIExpirationPeriod
    $NewTempl.pKIOverlapPeriod = $WATempl.pKIOverlapPeriod
    $NewTempl.SetInfo()

    $certTemplate = Get-CertificateTemplate -Name $certificateTemplateName
    Get-CertificationAuthority | Get-CATemplate | Add-CATemplate -Template $certTemplate | Set-CATemplate
}

function Set-tsCertificateTemplateAcl
{
    [CmdletBinding()]
    param
    (
        $certificateTemplate,
        $computers
    )

    Write-Verbose -Verbose "Setting ACL for cert $certificateTemplate to allow $computers"
    Write-Verbose -Verbose "---"

    Import-Module .\PSPKI\pspki.psm1

    Write-Verbose -Verbose "Adding group $computers to acl for cert $certificateTemplate"
    Get-CertificateTemplate -Name $certificateTemplate | Get-CertificateTemplateAcl | Add-CertificateTemplateAcl -User $computers -AccessType Allow -AccessMask Read, Enroll | Set-CertificateTemplateAcl
}

function Generate-SSLCertificate
{
    [CmdletBinding()]
    param
    (
        $certificateSubject,
        $certificateTemplate
    )

    Write-Verbose -Verbose "Creating SSL cert using $certificateTemplate for $certificateSubject"
    Write-Verbose -Verbose "---"

    Import-Module .\PSPKI\pspki.psm1

    Write-Verbose -Verbose "Generating Certificate (Single)"
    $certificateSubjectCN = "CN=" + $certificateSubject
    # Version #1
    $powershellCommand = "& {get-certificate -Template " + $certificateTemplate + " -CertStoreLocation Cert:\LocalMachine\My -DnsName " + $certificateSubject + " -SubjectName " + $certificateSubjectCN + " -Url ldap:}"
    Write-Verbose -Verbose $powershellCommand
    $bytes = [System.Text.Encoding]::Unicode.GetBytes($powershellCommand)
    $encodedCommand = [Convert]::ToBase64String($bytes)

    Start-Process -Wait "powershell.exe" -ArgumentList "-encodedcommand $encodedCommand"
}

function Export-SSLCertificate
{
    [CmdletBinding()]
    param
    (
        $certificateSubject,
        $certificateExportFile,
        $certificatePassword
    )

    Write-Verbose -Verbose "Exporting cert $certificateSubject to $certificateExportFile with password $certificatePassword"
    Write-Verbose -Verbose "---"

    Import-Module .\PSPKI\pspki.psm1

    Write-Verbose -Verbose "Exporting Certificate (Single)"
    $password = ConvertTo-SecureString $certificatePassword -AsPlainText -Force
    Get-ChildItem Cert:\LocalMachine\My | where {$_.Subject -match $certificateSubject -and $_.Subject -ne $_.Issuer} | Export-PfxCertificate -FilePath $certificateExportFile -Password $password
}

Making sure it’s reusable

One of the things I’m trying to do here is create a collection of reusable configurations. I can take my DC virtual machine config and make it the core of any number of deployments in future. Key stuff like domain names and machine names are always parameterised all the way through template, DSC and scripts. When Azure Stack arrives I should be able to use the same configuration on-prem and in Azure itself and we can use the same building blocks for any number of customer projects, even though it was originally built for an internal project.

There’s still stuff I need to do here: I need to pull the vNet template directly into the DC template, as there’s no need for it to be separate; I could trim back some of the unnecessary access rights I grant on the folders and shares; and you’ll notice that I’m still configuring CredSSP, which was part of my original (failed) attempt to sort out file access from within the invoke-command blocks.

A quick round of credits

Whilst most of this work has been my own, bashing my head against the desk for a while, it is built upon code created by other people who deserve credit:

  • The Azure Quick Start templates were invaluable in having something to look at in the early days of resource templates before the tooling was here.
  • PSPKI is a fantastic set of PowerShell modules for dealing with certs.
  • The individual VM scripts are derived from work that was done in Black Marble by Andrew Davidson and myself to build the exact same environment in the older Azure manner without resource templates.

Complex Azure Template Odyssey Part One: The Environment

Part Two | Part Three | Part Four

Over the past month or two I’ve been creating an Azure Resource Template to deploy an environment which we’d previously deployed using old-style PowerShell scripts. In theory, the Resource Template approach would make the deployment quicker, easier to trigger from tooling like Release Manager, and make the code easier to read.

The aim is to deploy a number of servers that will host an application we are developing. This will allow us to easily provision test or demo environments into Azure making as much use of automation as possible. The application itself has a set of system requirements that means I have a good number of tasks to work through:

  1. We need our servers to be domain joined so we can manage security, service accounts etc.
  2. The application uses ADFS for authentication. You don’t just expose ADFS to the internet, so that means we need a Web Application Proxy (WAP) server too.
  3. ADFS, WAP and our application need to use secure connections. We want to be able to deploy lots of these, so things like hostnames and FQDNs for services need to be flexible. That means using our own Certificate Services which we need to deploy.
  4. We need a SQL server for our application’s data. We’ll need some additional drives on this to store data and we need to make sure our service accounts have appropriate access.
  5. Our application is hosted in IIS, so we need a web server as well.
  6. Only servers that host internet-accessible services will get public IP addresses.

We already had scripts to do this the old way. I planned to reuse some of that code, and follow the decisions we made around the environment:

  • All VMs would use standard naming, with an environment-specific prefix. The same prefix would be used for other resources. For example, a prefix of env1 means the storage account is env1storage, the network is env1vnet, the Domain Controller VM is env1dc, etc. The AD domain we created would use the prefix in its name (so env1.local).
  • All public IPs would use the Azure-assigned DNS name for our services – no corporate DNS. The prefix would be used in conjunction with the role when specifying the name for the cloud service.
  • DSC would be used wherever possible. After that, custom PowerShell scripts would be used. The aim was to configure each machine individually and not use remote PowerShell between servers unless absolutely necessary.
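In template terms, the naming convention in the first bullet boils down to deriving every resource name from a single prefix parameter. A sketch of the idea (the variable and parameter names here are illustrative, not lifted from my template):

```json
"parameters": {
  "envPrefix": { "type": "string", "defaultValue": "env1" }
},
"variables": {
  "storageAccountName": "[concat(parameters('envPrefix'), 'storage')]",
  "virtualNetworkName": "[concat(parameters('envPrefix'), 'vnet')]",
  "vmDCName": "[concat(parameters('envPrefix'), 'dc')]",
  "domainName": "[concat(parameters('envPrefix'), '.local')]"
}
```

Resources then reference the variables rather than literal names, so a new environment is just a new prefix value.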

We’d also hit a few problems when creating the old approach, so I hoped to reuse the same solutions:

  • There is very little PowerShell to manage certificate services and certificates. There is an incredibly useful set of modules known as PSPKI which we utilise to create certificate templates and cert requests. This would need to be used in conjunction with our own custom scripts, so it had to be deployed to the VMs somehow.

Azure Resources In The Deployment

Things have actually moved on in terms of the servers I am now deploying (only to get more complex!) but it’s easier to detail the environment as originally planned and successfully deployed.

  • Storage Account. Needed for the hard drives of the multiple virtual machines.
  • Virtual Network. A single subnet for all VMs.
  • Domain Controller
    • NetworkInterface. Will attach to the virtual network.
    • Virtual Machine. Requires an additional virtual hard disk to store domain databases.
      • Diagnostics Extension. Will enable the diagnostics on the Azure VM so we can see things like CPU and disk access through the Azure Portal.
      • DSC Extension. Will do as much configuration as we can, declaratively. It will add the ADDS and ADCS roles and create the domain.
      • CustomScriptExtension. Will deploy PowerShell scripts to perform additional configuration. It will create certificate templates and generate certs for services.
  • ADFS Server
    • NetworkInterface. Will attach to the virtual network.
    • Virtual Machine.
      • Diagnostics Extension. Will enable the diagnostics on the Azure VM so we can see things like CPU and disk access through the Azure Portal.
      • DSC Extension. Add the ADFS role and domain-join the VM.
      • CustomScriptExtension. Will configure ADFS – copying the cert from the DC and creating the federation service.
  • WAP Server
    • NetworkInterface. Will attach to the virtual network.
    • Public IP Address. Will publish the WAP service to the internet.
    • Load Balancer. Will connect the server to the public IP address, allowing us to add other servers to the service later if needed.
    • Virtual Machine.
      • Diagnostics Extension. Will enable the diagnostics on the Azure VM so we can see things like CPU and disk access through the Azure Portal.
      • DSC Extension. Will do as much configuration as we can, declaratively.
      • CustomScriptExtension. Copy the cert from the DC and configure WAP to publish the federation service hosted on the ADFS server.
  • SQL Server
    • NetworkInterface. Will attach to the virtual network.
    • Virtual Machine. Requires two additional virtual hard disks to store DBs and logs.
      • Diagnostics Extension. Will enable the diagnostics on the Azure VM so we can see things like CPU and disk access through the Azure Portal.
      • DSC Extension. It turns out the DSC for SQL is pretty good. We can do lots of configuration with it, to the extent of not needing the custom script extension.
  • Web Server
    • NetworkInterface. Will attach to the virtual network.
    • Virtual Machine.
      • Diagnostics Extension. Will enable the diagnostics on the Azure VM so we can see things like CPU and disk access through the Azure Portal.
      • DSC Extension. Will do as much configuration as we can, declaratively.
      • CustomScriptExtension. Will deploy PowerShell scripts to perform additional configuration.
  • WAP Server 2
    • NetworkInterface. Will attach to the virtual network.
    • Public IP Address. Will publish the web server-hosted services to the internet.
    • Load Balancer. Will connect the server to the public IP address, allowing us to add other servers to the service later if needed.
    • Virtual Machine. Requires an additional virtual hard disk to store domain databases.
      • Diagnostics Extension. Will enable the diagnostics on the Azure VM so we can see things like CPU and disk access through the Azure Portal.
      • DSC Extension. Will do as much configuration as we can, declaratively.
      • CustomScriptExtension. Will deploy PowerShell scripts to perform additional configuration.

Environment-specific Settings

The nice thing about having a cookie-cutter environment is that very few things vary between deployments, which means very few parameters in our template. We will need to set the prefix for all our resource names and the location for our resources, and, because we want to be flexible, we will set the admin username and password.

An Immediate Issue: Network Configuration

Right from the gate we have a problem to solve. When you create a virtual network in Azure, it provides IP addresses to the VMs attached to it. As part of that, the DNS server address is given to the VMs. By default that is an Azure DNS service that allows VMs to resolve external domain names. Our environment will need the servers to be told the IP address of the domain controller, as it will provide the local DNS services essential to the working of the Active Directory domain. In our old scripts we simply reconfigured the network to specify the DC’s address after we configured the DC.

In the Resource Template world we can reconfigure the vNet by applying new settings to the resource from our template. However, once we have created the vNet in our template we can’t have another resource with the same name in the same template. The solution is to create another template with our new settings and to call that from our main template as a nested deployment. We can pass the IP address of the DC into that template as a parameter and we can make the nested deployment depend on the DC being deployed, which means it will happen after the DC has been promoted to be the domain controller.
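As a sketch, the nested deployment call in the main template looks something like this (the resource and variable names here are illustrative, not lifted from my actual template):

```json
{
  "name": "UpdateVNetDNS",
  "type": "Microsoft.Resources/deployments",
  "apiVersion": "2015-01-01",
  "dependsOn": [
    "[resourceId('Microsoft.Compute/virtualMachines', variables('vmDCName'))]"
  ],
  "properties": {
    "mode": "Incremental",
    "templateLink": {
      "uri": "[concat(variables('templateBaseUrl'), 'vnet-with-dns-server.json')]",
      "contentVersion": "1.0.0.0"
    },
    "parameters": {
      "virtualNetworkName": { "value": "[variables('virtualNetworkName')]" },
      "dnsServerAddress": { "value": "[variables('vmDCIPAddress')]" }
    }
  }
}
```

The child template simply redeclares the vNet with the same name and address space, but with the dnsServers setting pointing at the DC.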

Deployment Order

One of the nicest things about Resource Templates is that when you trigger a deployment, the Azure Resource Manager parses your template and tries to deploy the resources as efficiently as possible. If you need things to deploy in a sequence you need to specify dependencies in your resources, otherwise they will all deploy in parallel.

In this environment, I need to deploy the storage account and virtual network before any of the VMs. They don’t depend on each other however, so can be pushed out first, in parallel.

The DC gets deployed next. I need to fully configure this before any other VMs are created because they need to join our domain, and our network has to be reconfigured to hand out the IP address of the DC.

Once the DC is done, the network gets reconfigured with a nested deployment.

In theory, we should be able to deploy all our other VMs in parallel, providing we can apply our configuration in sequence which should be possible if we set the dependencies correctly for our extension resources (DSC and customScriptExtension).

Configuration for the VMs can mostly run in parallel, with one exception: the WAP server configuration depends on the ADFS server being fully configured.

Attempt One: Single Template

I spent a long time creating, testing and attempting to debug this as a single template (except for our nested deployment to reconfigure the vNet). Let me spare you the pain by listing the problems:

  • The template is huge: Many hundreds of lines. Apart from being hard to work with, that really slows down the Visual Studio tooling.
  • Right now a single template with lots and lots of resources seems unreliable. Using an identical template for multiple deployments, I would get random failures deploying different VMs, or a successful deploy, with no rhyme or reason to it.
  • Creating VM extension resources with complex dependencies seems to cause deployment failures. At first I used dependencies in the extensions for the VMs outside of the DC to define my deployment order. I realised after some pain that this was much more prone to failure than if I treated the whole VM as a block. I also discovered that placing the markup for the extensions within the resources block of the VM itself improved reliability.
  • A single deployment takes over an hour. That makes debugging individual parts difficult and time-consuming.

Attempt Two: Multiple Nested Deployments

I now have a rock-solid, reliable deployment. I’ve achieved this by moving each VM and its linked resources (NIC, Load Balancer, Public IP) into separate templates. I have a master template that calls the ‘children’, with dependencies limited to one or more of the other nested deployments. The storage account and initial vNet deploy are part of the master template.
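To make the shape of the master template concrete, here is a hedged sketch of one child deployment (all names are placeholders): the WAP child depends on the ADFS child, so its configuration only runs once ADFS is fully set up.

```json
{
  "name": "DeployWAP",
  "type": "Microsoft.Resources/deployments",
  "apiVersion": "2015-01-01",
  "dependsOn": [
    "Microsoft.Resources/deployments/DeployADFS"
  ],
  "properties": {
    "mode": "Incremental",
    "templateLink": {
      "uri": "[concat(variables('templateBaseUrl'), 'wap.json')]",
      "contentVersion": "1.0.0.0"
    },
    "parameters": {
      "envPrefix": { "value": "[parameters('envPrefix')]" },
      "adminUsername": { "value": "[parameters('adminUsername')]" },
      "adminPassword": { "value": "[parameters('adminPassword')]" }
    }
  }
}
```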

The upside of this has been manifold: each template is shorter and simpler, with far fewer variables, now that each deploys only a single VM. I can also choose to deploy a single ‘child’ on its own into an already deployed environment. This allows me to test and debug more quickly and easily.

Problem One: CustomScriptExtension

When I started this journey there was little documentation around and Resource Manager was in preview. I really struggled to get the CustomScriptExtension for VMs working. All I had to work with were examples using PowerShell to add VM extensions, and they were just plain wrong for the Resource Template approach. Leaning on the Linux equivalent and a lot of testing and poking got things sorted, and I’ve written up how the extension currently works.

Problem Two: IaaSDiagnostics

Right now, this one isn’t fixed. I am deploying the IaaSDiagnostics extension into the VMs, and it appears to be correctly configured and working. However, the VM blades in the Azure Portal are adamant that diagnostics are not configured. This looks like a bug in the Portal and I’m hoping it will be resolved by the team soon.

Configuring the Virtual Machines

That’s about it for talking about the environment as a whole. I’m going to write up some of the individual servers separately as there were multiple hurdles to jump in configuring them. Stay tuned.

Windows Store App Notifications, the Notification Hub and Background tasks

This article aims to talk about Windows Store Notifications and the Windows Azure Notifications Hub and it will attempt to collate the various articles in a single place to help you build notifications into your app.

In order to get an understanding of Windows notifications, look at the following article:

Introduction to Push Notifications – this provides a good overview of how push notifications work. To summarise the important bits:

1. Your store app needs to register with the Windows Notification Service to retrieve a unique URI for your instance of the app. Ideally you do this each time the app starts.

2. If the URI has changed then you need to notify your service of the new URI. Note: channel URIs expire every 30 days, so your app needs to renew the channel and tell your service when it changes.

3. Your service sends notifications to this unique URI
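Steps 1 and 2 boil down to a simple client-side rule: fetch a channel on every app start, and tell your service only when the URI differs from the one you last uploaded. A minimal sketch of that decision (Python used for illustration; the names are mine, not from any SDK):

```python
def needs_upload(current_uri, last_sent_uri):
    # True when the service must be told about a (new) channel URI.
    # WNS channels expire after 30 days, so the URI can change on any
    # app start; re-register whenever it differs from the last one sent.
    return last_sent_uri is None or current_uri != last_sent_uri

print(needs_upload("https://wns.example/ch2", "https://wns.example/ch1"))  # True
```

The URIs above are hypothetical; in a real app the current URI comes from the WNS channel request and the last-sent URI from local storage.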

You may have noticed above that I mentioned “Your service”. This is a critical piece of the notification mechanism and there are a number of ways to build this service. If you are not comfortable building backend services or you want something up and running quickly then mobile services might be the way to go for you. Here’s a tutorial that gets you started with mobile services

If, like me, you already have a source of data and a service then you will probably want to wire notifications into your existing service. How many devices are using your app may dictate the method by which you get notifications onto the user’s device. There are a number of options:

  1. Local updates
  2. Push Notifications
  3. Periodic Notifications

Local updates require the creation of a background task that Windows runs periodically, which calls into your data service, retrieves the data to put on the tiles and sends out tile notifications using the Windows Store app SDK.

Updating live tiles from a background task – provides a tutorial on building a background task for your Windows Store app. This tutorial is for timer tasks but it can easily be adapted for push notification tasks. The bits that are likely to change are the details of the run method, the task registration and the package manifest.

Two more important links that you will require when you are dealing with notifications:

Tile template catalogue

Toast template catalogue

These two catalogues are important as they provide you with details of the XML you need for each type of notification.
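As a flavour of what those catalogues describe, the ToastText01 template used later in this post is a small XML document with a single text field. A sketch of building it (note that a real app should XML-escape the message; Python used just to show the string shape):

```python
def make_toast_text01(message):
    # ToastText01: one text element with id "1" (see the toast catalogue).
    return ('<toast><visual><binding template="ToastText01">'
            '<text id="1">{0}</text>'
            '</binding></visual></toast>').format(message)

print(make_toast_text01("Hello from a .NET App!"))
```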

Push notifications are sent through the Windows Notification Service to your device.

You can send notifications to your device from your service by creating a notification and sending it to each of the devices registered to your service via the Windows Notification Service.

If you have a large number of devices running your app then you will probably want to use the Windows Azure Notification Hub. This is the simplest way to manage notifications to your application as the notification hub handles scaling, managing of the device registration and also iterating around each device to send the notifications out. The Notification hub will also allow you to send notifications to Windows Phone, Apple and Android devices. To get started with the notification hubs follow this tutorial:

The nice feature of the notification hub is that it makes the code needed to send notifications simple.


NotificationHubClient hub = NotificationHubClient.CreateClientFromConnectionString("<your notification hub connection string>", "<your hub name>");

var toast = @"<toast><visual><binding template=""ToastText01""><text id=""1"">Hello from a .NET App!</text></binding></visual></toast>";

await hub.SendWindowsNativeNotificationAsync(toast);

Compare this to the code to send the notification without the hub:


byte[] contentInBytes = Encoding.UTF8.GetBytes(xml);

HttpWebRequest request = HttpWebRequest.Create(uri) as HttpWebRequest;
request.Method = "POST";
request.Headers.Add("X-WNS-Type", notificationType);
request.ContentType = contentType;
request.Headers.Add("Authorization", String.Format("Bearer {0}", accessToken.AccessToken));

using (Stream requestStream = request.GetRequestStream())
{
    requestStream.Write(contentInBytes, 0, contentInBytes.Length);
}



In addition you will need to retrieve the list of devices registered for push notifications and iterate around that list to send the notification to each device. You will also need a service that receives the registrations and stores them in a data store, and you have to manage the scalability of these services yourself. On the down side, the notification hub is charged per message, which means the more often you send notifications the greater the cost; hosting your own service is load based, and although notifications will go out more slowly as the number of devices increases, it would generally cost less. Bear in mind also that you will need to send out a notification for each tile size, which increases the activity count on the notification hub for each tile size (currently 3).
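To put that per-message charging into numbers, a rough sketch (hypothetical figures; real pricing varies):

```python
def hub_messages_per_update(devices, tile_sizes=3):
    # Each tile size needs its own notification, so one logical update
    # costs devices * tile_sizes chargeable messages on the hub.
    return devices * tile_sizes

# 10,000 devices with 3 tile sizes -> 30,000 messages per update.
print(hub_messages_per_update(10000))
```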

[Update: You can send out a single notification for all tile sizes rather than 3 separate notifications by adding a binding for each tile size in your xml see for more details]

It is possible to send custom notifications to your app which can be received directly in the app or by using a background task. These are called raw notifications. In order to receive raw notifications in a background task your app needs to be configured to display on the start screen; however, raw notifications can be received in your running app even when it is not configured to display on the start screen. A raw notification is a block of data up to 5KB in size and can be anything you want.

The following code will send a raw notification using the notification hub:


string rawNotification = prepareRAWPayload();

Notification notification = new Microsoft.ServiceBus.Notifications.WindowsNotification(rawNotification);
notification.Headers.Add("X-WNS-Cache-Policy", "cache");
notification.Headers.Add("X-WNS-Type", "wns/raw");
notification.ContentType = "application/octet-stream";

var outcome = await hub.SendNotificationAsync(notification);

In order to receive Raw Notifications in your app you need to add an event to the channel you retrieve from the Windows Notification Service:


var channel = await PushNotificationChannelManager.CreatePushNotificationChannelForApplicationAsync();
channel.PushNotificationReceived += channel_PushNotificationReceived;


And then handle the notification received:


private void channel_PushNotificationReceived(PushNotificationChannel sender, PushNotificationReceivedEventArgs args)
{
    switch (args.NotificationType)
    {
        case PushNotificationType.Raw:
            // The raw payload is in args.RawNotification.Content
            string content = args.RawNotification.Content;
            break;
    }
}
Note: the content of the notification is the block of data that you sent out.

Sample background task for Raw Notifications is here:

Guidelines for Raw Notifications can be found here:

Periodic notifications also require a service, but here the application periodically calls into a service to retrieve the tile notifications, without needing to process the source data and create the notifications locally. Details about how to use periodic notifications can be found here:

In summary, Windows Store application notifications can be sent to the app in a variety of ways, and the mechanism you choose will depend upon how quickly and how many notifications are required. Push notifications allow notifications to be sent whenever they are ready. Periodic and local updates are pull notifications and require a service to be available to pull the data from. All of these require some sort of service and all have associated costs. The notification hub is a useful tool to assist with notifications: it can manage the device connections as well as send notifications to multiple device types. It does however come at a cost, and you need to work out whether it is a cost-effective mechanism for your solution.

Gadgeteer, Signal R, WebAPI & Windows Azure

After a good night in Hereford at the Smart Devs User Group and my presentation at DDDNorth, here are the links from my presentation and some from questions asked:



Web API:

The Signal-R chat example can be found at:

Windows Azure Pricing Calculator:

Signal-R Scaleout using Service bus, SQL Server or Redis:

The Windows Azure Training Kit:

Gadgeteer Modules:

Fex Spider Starter Kit:


In addition to these links, I have more from my presentation at the DareDevs user group in Warrington.

It is possible to drive a larger display from Gadgeteer using a VGA adapter. You use this in the same way that the Display-T35 works, using the SimpleGraphics interface for example.

VB eBook – Learn to Program with Visual Basic and Gadgeteer

Fez Cerberus Tinker Kit: 

Enabling Modern Apps

I’ve just finished presenting my talk on “Successfully Adopting the Cloud: TfGM Case Study” and there were a couple of questions that I said I would clarify.

1. What is the limit on the number of subscriptions per Service Bus topic? The answer is 2000. Further details can be found at:

2. What are the differences between Windows Azure SQL Database and SQL Server 2012? The following pages provide the details:

Supported T-SQL:

Partially supported T-SQL:

Unsupported T-SQL:

Guidelines and Limitations:

3. Accessing the TfGM open data site requires you to register as a developer at:

Thanks to everyone who attended I hope you found it useful.

Handling A Topic Dead Letter Queue in Windows Azure Service Bus

Whilst working on a project in which we were using the Topics feature of Windows Azure Service Bus, we noticed that our subscription queues (when viewed from the Windows Azure Management portal) didn’t seem to be empty even though our subscription queue processing code was working correctly. On closer inspection we found that our subscription queue was empty, and the numbers in the management portal against the subscription were messages that had automatically faulted and been moved into the dead letter queue.

The dead letter queue is a separate queue that allows messages that fail to be processed to be stored and analysed. The address of the dead letter queue is slightly different from your subscription queue and is of the form:

YourTopic/Subscriptions/YourSubscription/$DeadLetterQueue

for a subscription and

YourQueue/$DeadLetterQueue for a queue

Luckily you don’t have to remember this as there are helpful methods to retrieve the address for you:

SubscriptionClient.FormatDeadLetterPath(subscriptionClient.TopicPath, messagesSubscription.Name);
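The pattern behind those addresses is simple; this little sketch mirrors the strings FormatDeadLetterPath produces (Python used just to illustrate the shape):

```python
DEAD_LETTER_SUFFIX = "$DeadLetterQueue"

def format_dead_letter_path(entity_path, subscription_name=None):
    # Queue:        YourQueue/$DeadLetterQueue
    # Subscription: YourTopic/Subscriptions/YourSubscription/$DeadLetterQueue
    if subscription_name is None:
        return "{0}/{1}".format(entity_path, DEAD_LETTER_SUFFIX)
    return "{0}/Subscriptions/{1}/{2}".format(entity_path, subscription_name, DEAD_LETTER_SUFFIX)

print(format_dead_letter_path("YourTopic", "YourSubscription"))
```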

To create a subscription client for the dead letter queue you append /$DeadLetterQueue to the subscription name when you create the subscription client.

Once you have this address you can connect to the dead letter queue in the same way you would connect to the subscription queue. Once a dead-lettered brokered message is received, its properties should contain error information highlighting why it failed; the message body from the original message is also retained. By default the subscription will move a faulty message to the dead letter queue after 10 failed delivery attempts. You can also move a message yourself, putting sensible data in the properties, by calling the DeadLetter method on the BrokeredMessage if it fails to be processed. The DeadLetter method allows you to pass in your own data to explain why the message has failed.

A dead letter message can be deleted in the same way as a normal message, by calling the Complete() method on the received message.

Here is an example of retrieving a dead lettered message from a subscription queue:

var baseAddress = Properties.Settings.Default.ServiceBusNamespace;
var issuerName = Properties.Settings.Default.ServiceBusUser;
var issuerKey = Properties.Settings.Default.ServiceBusKey;

Uri namespaceAddress = ServiceBusEnvironment.CreateServiceUri("sb", baseAddress, string.Empty);

this.namespaceManager = new NamespaceManager(namespaceAddress,
                            TokenProvider.CreateSharedSecretTokenProvider(issuerName, issuerKey));
this.messagingFactory = MessagingFactory.Create(namespaceAddress,
                            TokenProvider.CreateSharedSecretTokenProvider(issuerName, issuerKey));

var topic = this.namespaceManager.GetTopic(Properties.Settings.Default.TopicName);
if (topic != null)
{
    if (!namespaceManager.SubscriptionExists(topic.Path,
                                  Properties.Settings.Default.SubscriptionName))
    {
        messagesSubscription = this.namespaceManager.CreateSubscription(topic.Path,
                                            Properties.Settings.Default.SubscriptionName);
    }
    else
    {
        messagesSubscription = namespaceManager.GetSubscription(topic.Path,
                                            Properties.Settings.Default.SubscriptionName);
    }
}

if (messagesSubscription != null)
{
    SubscriptionClient subscriptionClient = this.messagingFactory.CreateSubscriptionClient(
                                            messagesSubscription.TopicPath,
                                            messagesSubscription.Name, ReceiveMode.PeekLock);

    // Get the Dead Letter queue path for this subscription
    var dlQueueName = SubscriptionClient.FormatDeadLetterPath(subscriptionClient.TopicPath,
                                            messagesSubscription.Name);

    // Create a subscription client to the deadletter queue
    SubscriptionClient deadletterSubscriptionClient = messagingFactory.CreateSubscriptionClient(
                                           subscriptionClient.TopicPath,
                                           messagesSubscription.Name + "/$DeadLetterQueue");

    // Get the dead letter message
    BrokeredMessage dl = deadletterSubscriptionClient.Receive(new TimeSpan(0, 0, 300));

    // Get the properties
    StringBuilder sb = new StringBuilder();
    sb.AppendLine(string.Format("Enqueue Time {0}", dl.EnqueuedTimeUtc));
    foreach (var props in dl.Properties)
    {
        sb.AppendLine(string.Format("{0}:{1}", props.Key, props.Value));
    }
    dl.Complete();
}

MVP Cloud OS Community Week

Black Marble are participating in the Microsoft MVP Cloud OS Community Week. During the week commencing 9th September there will be daily events held at Cardinal Place, Victoria, London. Richard and I, along with other MVPs, will be participating in the event on Friday 13th September, which is titled “Enabling Modern Business Applications”. The sessions will include developing services for modern apps, case studies, ALM and much more. Further details and registration can be found at:

Registration for this specific day can be found at:

Pricing Changes to Windows Azure

Scott Guthrie made an announcement on his blog about changes to the way Windows Azure is priced.

The main two changes that will affect most people are:

  1. Per minute Billing
  2. No charge for turned off VMs

Prior to this announcement you were charged for Windows Azure usage in one-hour blocks, which meant that if you used 5 minutes of compute time or 55 minutes of compute time you were charged for the full hour; similarly, if you used 1 hour 5 minutes you were charged for 2 hours. With this change you are now charged for 5 minutes, 55 minutes or 1 hour 5 minutes, matching your actual usage. This doesn’t affect you if you have a fixed usage, but for those who take advantage of the easy scalability of Windows Azure it could have a significant impact on costs.
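A quick sketch of the difference between the two models (hypothetical rate, just to show the arithmetic):

```python
import math

def hourly_billing(minutes_used, rate_per_hour):
    # Old model: usage rounded up to whole hours.
    return math.ceil(minutes_used / 60) * rate_per_hour

def per_minute_billing(minutes_used, rate_per_hour):
    # New model: pay only for the minutes actually used.
    return minutes_used / 60 * rate_per_hour

# 1 hour 5 minutes at a nominal 0.60/hour:
print(hourly_billing(65, 0.60))      # charged as 2 full hours
print(per_minute_billing(65, 0.60))  # charged for 65 minutes only
```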

By far the biggest cost saving is the ability to turn off a VM and not be charged for its usage. Prior to this change you would need to undeploy the solution to avoid charges; just turning it off would still incur them. The change allows systems in staging, dev/test scenarios, training systems and the like to be left deployed and configured but turned off without being charged, saving both time and money. I have a number of customers where this change alone should halve their monthly bills, as they have systems running that are not needed all the time but cannot be undeployed because of the time taken to deploy and configure them. The ability to turn off VMs will provide both customers and developers with flexibility whilst reducing costs. Always a good thing.

Windows Azure and SignalR with Gadgeteer

I’ve been playing with Gadgeteer for a while now and I am a big fan of the simple way we can build embedded hardware applications with high functionality. We have a proof of concept device that includes a colour touch screen, RFID reader and an Ethernet connection. This device is capable of connecting to a Web API REST service hosted in Windows Azure, and we can use this service to retrieve data depending upon the RFID code that is read. This works well, but there are times when we would like to notify the device that something has changed. SignalR seems to be the right technology for this as it removes the need to write polling code in your application.

Gadgeteer uses the .NET Micro Framework, which is a cut-down .NET Framework and doesn’t support the ASP.NET SignalR libraries. As we can already call Web API services from the Micro Framework using the WebRequest classes, I wondered what was involved in getting SignalR working on my Gadgeteer device.

The first problem was to work out the protocol used by SignalR. After a short while trawling the web for details of the protocol I gave up and got my old friend Fiddler out to see what was really happening.

After creating a SignalR service I connected my working example to the SignalR hub running on my local IIS.

The first thing that pleased me was that the protocol looked fairly simple. It starts with a negotiate call, which returns a token needed for the actual connection.

GET /signalr/negotiate?_=1369908593886

Which returns some JSON:


I used this JSON to pull out the connection id and connection token. This was the first tricky part with the .NET Micro Framework: there is not the same support for JSON serialisation you get with the full framework, and the string functions are limited as well. For this I used basic string functions, Substring and IndexOf, as follows:

int index = negJson.IndexOf("\"" + token + "\":\"");
if (index != -1)
{
    // Extracts the JSON value for the name represented by token
    int startindex = index + token.Length + 4;
    int endindex = negJson.IndexOf("\"", startindex);
    if (endindex != -1)
    {
        int length = endindex - startindex;
        stringToExtract = negJson.Substring(startindex, length);
    }
}

With the correct token received, Fiddler led me to the actual connection call of SignalR:

GET /signalr/connect?transport=webSockets&connectionToken=yourtoken&connectionData=%5B%7B%22name%22%3A%22chathub%22%7D%5D&tid=2 HTTP/1.1

Looking at this I could determine that I needed to pass in the token retrieved from negotiate, the transport type and the name of the hub I want to connect to. After a bit of investigating I settled on the longPolling transport.
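The connect request above is just a querystring; a sketch of building it (the connectionData value is the URL-encoded JSON [{"name":"chathub"}] visible in the Fiddler trace; Python used for illustration):

```python
from urllib.parse import urlencode

def build_connect_url(base, transport, connection_token, hub_name):
    # Mirrors the querystring captured in Fiddler.
    query = urlencode({
        "transport": transport,
        "connectionToken": connection_token,
        "connectionData": '[{{"name":"{0}"}}]'.format(hub_name),
    })
    return "{0}/signalr/connect?{1}".format(base, query)

url = build_connect_url("http://localhost", "longPolling", "yourtoken", "chathub")
print(url)
```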

Now that I thought I understood the protocol, I tried to implement it on the device. The first issue was what to send with the negotiate call. I figured this was some sort of id for the client that is trying to connect, so I decided to use the current tick count. This seemed to work, and I guess that as long as my devices don’t connect at exactly the same time SignalR will be happy. I’ve had no problems so far with this.

Upon connecting to the hub I needed to create a separate thread to handle SignalR so that the main device code wouldn’t stop running whilst the connection to the SignalR hub was waiting for a response. Once a response is received it carries a block of JSON data appropriate to the SignalR message; this needs to be decoded and passed on to the application, and you then reconnect back to the SignalR hub. The period between receiving data and reconnecting needs to be small: whilst a message is being processed the client cannot receive any more messages and may miss some data. I therefore retrieve the response stream and pass the processing of the stream to a separate thread, so that I can reconnect to the hub as fast as possible.
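The reconnect-fast pattern can be sketched like this (an illustration of the approach in Python, not the device code): each long-poll response is handed to a worker thread so the loop can reconnect immediately.

```python
import threading, queue

def long_poll_loop(fetch, handle, stop):
    # `fetch` blocks like the long-poll request; `handle` decodes the
    # response and raises events for the application.
    work = queue.Queue()

    def worker():
        while True:
            item = work.get()
            if item is None:
                break
            handle(item)  # process off the polling thread

    t = threading.Thread(target=worker)
    t.start()
    while not stop.is_set():
        response = fetch()          # blocks until the hub responds
        if response is not None:
            work.put(response)      # queue for processing...
        # ...and loop straight back to reconnect with minimal gap
    work.put(None)
    t.join()

# Simulate: three hub responses, then the connection is shut down.
received = []
stop = threading.Event()
messages = iter(["a", "b", "c"])

def fake_fetch():
    try:
        return next(messages)
    except StopIteration:
        stop.set()
        return None

long_poll_loop(fake_fetch, received.append, stop)
print(received)  # the three payloads, in order
```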

This is not a full implementation of SignalR on the .NET Micro Framework, but it is a simple client that can be used fairly successfully on the Gadgeteer device. I still need to do a little more work to speed up the connections, as it is currently possible to miss some data.

The SignalR hub is hosted on a Windows Azure website alongside the Web API service, which allows web, Windows 8 and Gadgeteer applications to work side by side.

Gadgeteer has opened up another avenue for development and helps us to provide a greater variety of devices in a solution.