BM-Bloggers

The blogs of Black Marble staff

DPM 2012 R2 UR7 Known Issue

There’s a known issue with Update Rollup 7 for System Center Data Protection Manager 2012 R2 that stops expired recovery points being removed, thus leading (eventually) to DPM consuming all available disk space attached to it. This leads to messages such as:

DPM does not have sufficient storage space available...

and similar alerts in the DPM console, which mean that new recovery points are not being created and therefore changes are not being backed up.

The fix, which involves replacing the ‘pruneshadowcopiesDpm2010.ps1’ file with a corrected version, can be downloaded from https://www.microsoft.com/en-in/download/details.aspx?id=48694

The procedure is:

  1. Ensure that you are running DPM 2012 R2 UR7 (version 4.2.1338.0) by checking the ‘About DPM’ menu item under the ‘Action’ menu.
  2. Download the revised pruneshadowcopiesDpm2010.ps1 file from the URL above.
  3. Copy the original file to another location (just in case!)
  4. Replace the original pruneshadowcopiesDpm2010.ps1 with the one downloaded from the URL above. On one of our servers (that was upgraded from 2012 to 2012 R2), this location was C:\Program Files\Microsoft System Center 2012\DPM\DPM\bin and on a new installation, this location was C:\Program Files\Microsoft System Center 2012 R2\DPM\DPM\bin.
  5. Allow the system to run the PowerShell script at midnight (the default time) and the old recovery points should be removed.
  6. You may need to shrink the disk space allocated to the recovery point if DPM has automatically grown the disk space allocated. To do this, for each protection group, right-click the protection group and click ‘Modify disk allocation’. Against each entry for the protection group, click ‘shrink’. DPM will calculate the new volume size. Click OK to complete the process.

Note: Repeated small shrink operations cause free space fragmentation, so use with care.
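Once pruning has run, you can confirm it from the DPM Management Shell by listing the recovery points for a data source and checking that the oldest one falls within your retention range. A quick sketch – the server and protection group names here are placeholders:

$pg = Get-ProtectionGroup -DPMServerName "DPMSERVER" | Where-Object { $_.FriendlyName -eq "My Protection Group" }
$ds = Get-Datasource -ProtectionGroup $pg
Get-RecoveryPoint -Datasource $ds | Sort-Object RepresentedPointInTime | Select-Object -First 5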

Additional notes: UR7 was re-released to fix this issue, so if you updated your DPM system after August 25th, you should be okay. The original script looks like this:

[screenshot: the original pruneshadowcopiesDpm2010.ps1 script]

The modified version looks like this:

[screenshot: the modified pruneshadowcopiesDpm2010.ps1 script]

Azure API Management - Securing a Web API hosted as an Azure Web App using client certificates

Azure API Management acts as a security proxy in front of one or more web services (hosted separately). The intention is that developers request resources via Azure API Management, which forwards the request on to the appropriate web API given appropriate permissions. It is important that the underlying web service cannot be accessed directly by an end user (thereby bypassing the API Management security). To achieve this we are using a client certificate to validate that the request has come from the API Management site.

[diagram: Azure API Management acting as a security proxy in front of the backend Web API]

This post describes how to:

  1. Create a self-signed certificate
  2. Configure certificates in Azure API Management
  3. Configure the Azure Web App to enable client certificates
  4. Add code to validate a certificate has been provided

 

1) Create a self-signed certificate

Run the following example commands to create a self-signed certificate. Tweak the values as required:

makecert.exe -n "CN=Your Issuer Name" -r -sv TempCA.pvk TempCA.cer

makecert.exe -pe -ss My -sr CurrentUser -a sha1 -sky exchange -n "CN=Your subject Name" -eku 1.3.6.1.5.5.7.3.2 -sk SignedByCA -ic TempCA.cer -iv TempCA.pvk
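The second command places the client certificate in the CurrentUser\My store. To produce the .pfx file you’ll upload to API Management in the next step, it can be exported with the PKI module cmdlets – a sketch, using the subject from above and a placeholder password:

$cert = Get-ChildItem Cert:\CurrentUser\My | Where-Object { $_.Subject -eq "CN=Your subject Name" }
$password = ConvertTo-SecureString "YourPfxPassword" -AsPlainText -Force
Export-PfxCertificate -Cert $cert -FilePath .\ClientCert.pfx -Password $password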

 

2) Configure certificates in Azure API Management

-> Open the Azure API Management portal
-> Click APIs -> choose the API that you want to secure -> Security -> Manage Certificates
-> Upload the certificate

 

[screenshot: uploading the client certificate in the API Management portal]

A policy should automatically have been added that intercepts requests and appends the appropriate certificate information before forwarding the request to that Web API. Check the policies section to confirm it has been added. The following screenshot shows the expected policy definition.

[screenshot: the expected policy definition]

3) Configure the Azure Web App to enable client certificates

Because the Web API is deployed as an Azure Web App, there is no direct access to IIS to enable client certificate security. Instead, configuration must be done either using the Azure REST API or using the Azure Resource Explorer (preview).

A description of using the REST API is here.

To update the setting via Resource Explorer, follow these steps (a scripted alternative is sketched after the list):

  • Go to https://resources.azure.com/ and log in as you would to the Azure portal
  • Find the relevant site, either using the search box or by navigating the tree
  • Switch the mode from ‘Read Only’ to ‘Read/Write’
  • Click the Edit button
  • Set "clientCertEnabled": true
  • Click the PUT button at the top
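If you’d rather script the change than click through Resource Explorer, the same property can be set with the Azure PowerShell cmdlets. A sketch assuming the AzureRM module is installed and you are logged in; the resource group and site names are placeholders:

Set-AzureRmResource -ResourceGroupName "MyResourceGroup" `
    -ResourceType "Microsoft.Web/sites" -ResourceName "mywebapp" `
    -ApiVersion "2015-08-01" `
    -PropertyObject @{ clientCertEnabled = $true } -Force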

 

4) Add code to the Web API to check the client certificate

This can be done in a number of ways; however, the following code will perform these checks:

  • Check time validity of the certificate
  • Check the subject name of the certificate
  • Check the issuer name of the certificate
  • Check the thumbprint of the certificate

 

// Requires: System, System.Security.Cryptography.X509Certificates, System.Security.Principal

// The IValidateCertificates interface isn't shown in the post; this definition
// is inferred from its use here and in the handler below.
public interface IValidateCertificates
{
    bool IsValid(X509Certificate2 certificate);
    IPrincipal GetPrincipal(X509Certificate2 certificate);
}

public class BasicCertificateValidator : IValidateCertificates
{
    public bool IsValid(X509Certificate2 certificate)
    {
        if (certificate == null)
            return false;

        string issuerToMatch = "CN=Your Issuer Name";
        string subjectToMatch = "CN=Your subject Name";
        string certificateThumbprint = "thumbprintToIdentifyYourCertificate";

        // 1. Check time validity of certificate: reject if we are before its
        // start date or after its expiry
        TimeZoneInfo myTimeZone = TimeZoneInfo.FindSystemTimeZoneById("GMT Standard Time");
        var now = TimeZoneInfo.ConvertTimeFromUtc(DateTime.UtcNow, myTimeZone);
        DateTime notBefore = certificate.NotBefore;
        DateTime notAfter = certificate.NotAfter;
        if (DateTime.Compare(now, notBefore) < 0 || DateTime.Compare(now, notAfter) > 0)
            return false;

        // 2. Check subject name of certificate
        if (!certificate.Subject.Contains(subjectToMatch))
            return false;

        // 3. Check issuer name of certificate
        if (!certificate.Issuer.Contains(issuerToMatch))
            return false;

        // 4. Check thumbprint of certificate
        if (!certificate.Thumbprint.Trim().Equals(certificateThumbprint, StringComparison.InvariantCultureIgnoreCase))
            return false;

        return true;
    }

    public IPrincipal GetPrincipal(X509Certificate2 certificate2)
    {
        return new GenericPrincipal(new GenericIdentity(certificate2.Subject), new[] { "User" });
    }
}

To check the certificate on each request to the Web API, add a custom DelegatingHandler. Derive a class from System.Net.Http.DelegatingHandler and override the SendAsync method. To access the certificate information you can query the HttpRequestMessage:

// Requires: System.Net, System.Net.Http, System.Threading, System.Threading.Tasks,
// System.Security.Cryptography.X509Certificates, System.Security.Principal
public class CertificateAuthHandler : DelegatingHandler
{
    public IValidateCertificates CertificateValidator { get; set; }

    public CertificateAuthHandler()
    {
        CertificateValidator = new BasicCertificateValidator();
    }

    protected override Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
    {
        // Grab the client certificate from the request and reject the call if it fails validation
        X509Certificate2 certificate = request.GetClientCertificate();
        if (certificate == null || !CertificateValidator.IsValid(certificate))
        {
            return Task<HttpResponseMessage>.Factory.StartNew(() => request.CreateResponse(HttpStatusCode.Unauthorized));
        }

        Thread.CurrentPrincipal = CertificateValidator.GetPrincipal(certificate);
        return base.SendAsync(request, cancellationToken);
    }
}

To add the custom message handler to all new requests, add the following code to App_Start/WebApiConfig.cs:

GlobalConfiguration.Configuration.MessageHandlers.Add(new CertificateAuthHandler());

 

Happy Coding!

Jon

Running Typemock Isolator based tests in TFS vNext build

Updated 22 Mar 2016: This task is available in the VSTS Marketplace.

Typemock Isolator provides a way to ‘mock the un-mockable’, such as sealed private classes in .NET, so it can be an invaluable tool in unit testing. To allow this mocking, Isolator interception has to be started before any unit tests are run and stopped when they complete. For a developer this is done automatically within the Visual Studio IDE, but on build systems you have to run something to do this as part of your build process. Typemock provide documentation and tools for common build systems such as MSBuild, Jenkins, TeamCity and TFS XAML builds. However, they don’t provide tools or documentation on getting it working with TFS vNext build, so I had to write my own vNext build task to do the job, wrapping Tmockrunner.exe provided by Typemock, which handles the starting and stopping of mocking whilst calling any EXE of your choice.

tmockrunner <name of the test tool to run> <and parameters for the test tool>
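For example, wrapping the Visual Studio 2013 test runner would look something like this (the paths and test assembly name are illustrative):

tmockrunner "C:\Program Files (x86)\Microsoft Visual Studio 12.0\Common7\IDE\CommonExtensions\Microsoft\TestWindow\vstest.console.exe" MyApp.Tests.dll /logger:trx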

Microsoft provide a vNext build task to run vstest.console.exe. This task generates all the command line parameters needed, depending on the arguments provided for the build task. The source for this can be found on any build VM (in the [build agent folder]\tasks folder after a build has run) or in Microsoft’s vso-agent GitHub repo. I decided to use this as my starting point, swapping the logic to generate the tmockrunner.exe command line as opposed to the one for vstest.console.exe. You can find my task on my GitHub. It has been developed in the same manner as the Microsoft-provided tasks, which means the process to build and use the task is:

  1. Clone the repo https://github.com/rfennell/vNextBuild.git
  2. In the root of the repo use gulp to build the task
  3. Use tfx to upload the task to your TFS or VSO instance
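For reference, steps 2 and 3 look something like the following from a command prompt in the repo root. This is a sketch assuming Node.js/npm are installed; the collection URL and task folder name are placeholders:

npm install
gulp
npm install -g tfx-cli
tfx login --service-url http://your-tfs-server:8080/tfs/DefaultCollection
tfx build tasks upload --task-path ./YourTaskFolder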

See http://realalm.com/2015/07/31/uploading-a-custom-build-vnext-task/ and http://blog.devmatter.com/custom-build-tasks-in-vso/ for good walkthroughs of building tasks; the process is the same for mine and Microsoft’s tasks.

IMPORTANT NOTE: This task is only for on-premises TFS vNext build agents connected to either an on-premises TFS or VSO. At the time of writing, Typemock does not support VSO’s hosted build agents. This is because registering Typemock requires admin rights on the build agent, which you only get if you ‘own’ the build agent VM.

Once the task is installed on your TFS/VSO server you can use it in vNext builds. You will note that it takes all the same parameters as the standard VSTest task (it will usually be used as a replacement when there are Typemock Isolator based tests in a solution). The only additions are the three parameters for Typemock licensing and deployment location.

[screenshot: the task’s parameters, including the three Typemock licensing and deployment settings]

Using the task allows tests that require Typemock Isolator to pass. So tests that, if run with the standard VSTest task, give

[screenshot: Typemock-based tests failing under the standard VSTest task]

with the new task give

[screenshot: the same tests passing with the new task]

WebDeploy, parameters.xml transforms and nLog settings

I have been trying to parameterise the SQL DB connection string used by nLog when it is defined in the web.config file of a web site being deployed via Release Management and WebDeploy, i.e. I wanted to select and edit the highlighted part of my web.config file:

<configuration>
    <nlog xmlns="http://www.nlog-project.org/schemas/NLog.xsd" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">

    <targets async="true">
      <target xsi:type="Database" name="SQL" dbProvider="System.Data.SqlClient" connectionString="Data Source=myserver;Database=mydb;Persist Security Info=True;Pooling=False" keepConnection="true" commandText="INSERT INTO [Logs](ID, TimeStamp, Message, Level, Logger, Details, Application, MachineName, Username) VALUES(newid(), getdate(), @message, @level, @logger, @exception, @application, @machineName, @username)">

        <parameter layout="${message}" name="@message"></parameter>
        ...
 

The problem I had was that the XPath query I was using was not returning the nLog node, because the nLog node has a namespace defined. This means we can’t just use a query in the form:

<parameter name="NLogConnectionString" description="Description for NLogConnectionString" defaultvalue="__NLogConnectionString__" tags="">
  <parameterentry kind="XmlFile" scope="\\web.config$" match="/configuration/nlog/targets/target[@name='SQL']/@connectionString" />
</parameter>

I needed to use

<parameter name="NLogConnectionString" description="Description for NLogConnectionString" defaultvalue="__NLogConnectionString__" tags="">
  <parameterentry kind="XmlFile" scope="\\web.config$" match="/configuration/*[local-name() = 'nlog']/*[local-name() = 'targets']/*[local-name() = 'target' and @name='SQL']/@connectionString" />
</parameter>

So more complex, but it does work. Hopefully this will save others the time I wasted working it out today.
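A quick way to sanity-check a query like this before wiring it into parameters.xml is to run it against a local copy of the web.config using PowerShell’s Select-Xml:

Select-Xml -Path .\web.config -XPath "/configuration/*[local-name() = 'nlog']/*[local-name() = 'targets']/*[local-name() = 'target' and @name='SQL']/@connectionString" |
    ForEach-Object { $_.Node.Value }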

Complex Azure Odyssey Part Four: WAP Server

Part One of this series covered the project itself and the overall template structure. Part Two went through how I deploy the Domain Controller in depth. Part Three talks about deploying my ADFS server and in this final part I will show you how to configure the WAP server that faces the outside world.

The Template

The WAP server is the only one in my environment that faces the internet. Because of this the deployment is more complex. I’ve also added further complexity because I want to be able to have more than one WAP server in future, so there’s a load balancer deployed too. You can see the resource outline in the screenshot below:

[screenshot: JSON outline of the WAP server template]

The internet-facing stuff means we need more things in our template. First up is our PublicIPAddress:

{
  "name": "[variables('vmWAPpublicipName')]",
  "type": "Microsoft.Network/publicIPAddresses",
  "location": "[parameters('resourceLocation')]",
  "apiVersion": "2015-05-01-preview",
  "dependsOn": [ ],
  "tags": {
    "displayName": "vmWAPpublicip"
  },
  "properties": {
    "publicIPAllocationMethod": "Dynamic",
    "dnsSettings": {
      "domainNameLabel": "[variables('vmWAPpublicipDnsName')]"
    }
  }
},

This is pretty straightforward stuff. The nature of my environment means that I am perfectly happy with a dynamic IP that changes if I stop and then start the environment. Access will be via the hostname assigned to that IP and I use that hostname in my ADFS service configuration and certificates. Azure builds the hostname based on a pattern and I can use that pattern in my templates, which is how I’ve created the certs when I deploy the DC and configure the ADFS service all before I’ve deployed the WAP server.

That public IP address is then bound to our load balancer which provides the internet-endpoint for our services:

{
  "apiVersion": "2015-05-01-preview",
  "name": "[variables('vmWAPlbName')]",
  "type": "Microsoft.Network/loadBalancers",
  "location": "[parameters('resourceLocation')]",
  "dependsOn": [
    "[resourceId('Microsoft.Network/publicIPAddresses',variables('vmWAPpublicipName'))]"
  ],
  "properties": {
    "frontendIPConfigurations": [
      {
        "name": "[variables('LBFE')]",
        "properties": {
          "publicIPAddress": {
            "id": "[resourceId('Microsoft.Network/publicIPAddresses',variables('vmWAPpublicipName'))]"
          }
        }
      }
    ],
    "backendAddressPools": [
      {
        "name": "[variables('LBBE')]"
      }
    ],
    "inboundNatRules": [
      {
        "name": "[variables('RDPNAT')]",
        "properties": {
          "frontendIPConfiguration": {
            "id": "[variables('vmWAPLbfeConfigID')]"
          },
          "protocol": "tcp",
          "frontendPort": "[variables('rdpPort')]",
          "backendPort": 3389,
          "enableFloatingIP": false
        }
      },
      {
        "name": "[variables('httpsNAT')]",
        "properties": {
          "frontendIPConfiguration": {
            "id": "[variables('vmWAPLbfeConfigID')]"
          },
          "protocol": "tcp",
          "frontendPort": "[variables('httpsPort')]",
          "backendPort": 443,
          "enableFloatingIP": false
        }
      }
    ]
  }
}

There’s a lot going on in here so let’s work through it. First of all we connect our public IP address to the load balancer. We then create a back end address pool configuration, which we will later connect our VM to. Finally we create a set of NAT rules. I need to be able to RDP into the WAP server, which is the first block. The variables define the names of my resources. You can see that I specify the ports – external through a variable that I can change, and internal directly, because that needs to be the same each time as it’s what my VMs listen on. You can see that each NAT rule is associated with the frontendIPConfiguration – opening the port to the outside world.

The next step is to create a NIC that will hook our VM up to the existing virtual network and the load balancer:

{
  "name": "[variables('vmWAPNicName')]",
  "type": "Microsoft.Network/networkInterfaces",
  "location": "[parameters('resourceLocation')]",
  "apiVersion": "2015-05-01-preview",
  "dependsOn": [
    "[concat('Microsoft.Network/publicIPAddresses/', variables('vmWAPpublicipName'))]",
    "[concat('Microsoft.Network/loadBalancers/',variables('vmWAPlbName'))]"
  ],
  "tags": {
    "displayName": "vmWAPNic"
  },
  "properties": {
    "ipConfigurations": [
      {
        "name": "ipconfig1",
        "properties": {
          "privateIPAllocationMethod": "Static",
          "privateIPAddress": "[variables('vmWAPIPAddress')]",
          "subnet": {
            "id": "[variables('vmWAPSubnetRef')]"
          },
          "loadBalancerBackendAddressPools": [
            {
              "id": "[variables('vmWAPBEAddressPoolID')]"
            }
          ],
          "loadBalancerInboundNatRules": [
            {
              "id": "[variables('vmWAPRDPNATRuleID')]"
            },
            {
              "id": "[variables('vmWAPhttpsNATRuleID')]"
            }
          ]

        }
      }
    ]
  }
}

Here you can see that the NIC is connected to a subnet on our virtual network with a static IP that I specify in a variable. It is then added to the load balancer back end address pool and finally I need to specify which of the NAT rules I created in the load balancer are hooked up to my VM. If I don’t include the binding here, traffic won’t be passed to my VM (as I discovered when developing this lot – I forgot to wire up https and as a result couldn’t access the website published by WAP!).

The VM itself is basically the same as my ADFS server. I use the same Windows Server 2012 R2 image, have a single disk, and I’ve nested the extensions within the VM because that seems to work better than not doing so:

{
  "name": "[variables('vmWAPName')]",
  "type": "Microsoft.Compute/virtualMachines",
  "location": "[parameters('resourceLocation')]",
  "apiVersion": "2015-05-01-preview",
  "dependsOn": [
    "[concat('Microsoft.Network/networkInterfaces/', variables('vmWAPNicName'))]",
  ],
  "tags": {
    "displayName": "vmWAP"
  },
  "properties": {
    "hardwareProfile": {
      "vmSize": "[variables('vmWAPVmSize')]"
    },
    "osProfile": {
      "computername": "[variables('vmWAPName')]",
      "adminUsername": "[parameters('adminUsername')]",
      "adminPassword": "[parameters('adminPassword')]"
    },
    "storageProfile": {
      "imageReference": {
        "publisher": "[variables('windowsImagePublisher')]",
        "offer": "[variables('windowsImageOffer')]",
        "sku": "[variables('windowsImageSKU')]",
        "version": "latest"
      },
      "osDisk": {
        "name": "[concat(variables('vmWAPName'), '-os-disk')]",
        "vhd": {
          "uri": "[concat('http://', variables('storageAccountName'), '.blob.core.windows.net/', variables('vmStorageAccountContainerName'), '/', variables('vmWAPName'), 'os.vhd')]"
        },
        "caching": "ReadWrite",
        "createOption": "FromImage"
      }
    },
    "networkProfile": {
      "networkInterfaces": [
        {
          "id": "[resourceId('Microsoft.Network/networkInterfaces', variables('vmWAPNicName'))]"
        }
      ]
    }
  },
  "resources": [
    {
      "type": "extensions",
      "name": "IaaSDiagnostics",
      "apiVersion": "2015-06-15",
      "location": "[parameters('resourceLocation')]",
      "dependsOn": [
        "[concat('Microsoft.Compute/virtualMachines/', variables('vmWAPName'))]"
      ],
      "tags": {
        "displayName": "[concat(variables('vmWAPName'),'/vmDiagnostics')]"
      },
      "properties": {
        "publisher": "Microsoft.Azure.Diagnostics",
        "type": "IaaSDiagnostics",
        "typeHandlerVersion": "1.4",
        "autoUpgradeMinorVersion": "true",
        "settings": {
          "xmlCfg": "[base64(variables('wadcfgx'))]",
          "StorageAccount": "[variables('storageAccountName')]"
        },
        "protectedSettings": {
          "storageAccountName": "[variables('storageAccountName')]",
          "storageAccountKey": "[listKeys(variables('storageAccountid'),'2015-05-01-preview').key1]",
          "storageAccountEndPoint": "https://core.windows.net/"
        }
      }
    },
    {
      "type": "Microsoft.Compute/virtualMachines/extensions",
      "name": "[concat(variables('vmWAPName'),'/WAPserver')]",
      "apiVersion": "2015-05-01-preview",
      "location": "[parameters('resourceLocation')]",
      "dependsOn": [
        "[resourceId('Microsoft.Compute/virtualMachines', variables('vmWAPName'))]",
        "[concat('Microsoft.Compute/virtualMachines/', variables('vmWAPName'),'/extensions/IaaSDiagnostics')]"
      ],
      "properties": {
        "publisher": "Microsoft.Powershell",
        "type": "DSC",
        "typeHandlerVersion": "1.7",
        "settings": {
          "modulesURL": "[concat(variables('vmDSCmoduleUrl'), parameters('_artifactsLocationSasToken'))]",
          "configurationFunction": "[variables('vmWAPConfigurationFunction')]",
          "properties": {
            "domainName": "[variables('domainName')]",
            "adminCreds": {
              "userName": "[parameters('adminUsername')]",
              "password": "PrivateSettingsRef:adminPassword"
            }
          }
        },
        "protectedSettings": {
          "items": {
            "adminPassword": "[parameters('adminPassword')]"
          }
        }
      }
    },
    {
      "type": "Microsoft.Compute/virtualMachines/extensions",
      "name": "[concat(variables('vmWAPName'),'/wapScript')]",
      "apiVersion": "2015-05-01-preview",
      "location": "[parameters('resourceLocation')]",
      "dependsOn": [
        "[concat('Microsoft.Compute/virtualMachines/', variables('vmWAPName'))]",
        "[concat('Microsoft.Compute/virtualMachines/', variables('vmWAPName'),'/extensions/WAPserver')]"

      ],
      "properties": {
        "publisher": "Microsoft.Compute",
        "type": "CustomScriptExtension",
        "typeHandlerVersion": "1.4",
        "settings": {
          "fileUris": [
            "[concat(parameters('_artifactsLocation'),'/WapServer.ps1', parameters('_artifactsLocationSasToken'))]",
            "[concat(parameters('_artifactsLocation'),'/PSPKI.zip', parameters('_artifactsLocationSasToken'))]",
            "[concat(parameters('_artifactsLocation'),'/tuServDeployFunctions.ps1', parameters('_artifactsLocationSasToken'))]"
          ],
          "commandToExecute": "[concat('powershell.exe -file WAPServer.ps1',' -vmAdminUsername ',parameters('adminUsername'),' -vmAdminPassword ',parameters('adminPassword'),' -fsServiceName ',variables('vmWAPpublicipDnsName'),' -adfsServerName ',variables('vmADFSName'),' -vmDCname ',variables('vmDCName'), ' -resourceLocation \"', parameters('resourceLocation'),'\"')]"
        }
      }
    }
  ]
}

The DSC and custom script extensions are in the same vein as with ADFS. I can install the required features with DSC and then I need to configure things with my script.

The DSC Modules

As with the other two servers, the files copied into the VM by the DSC extension are common. I then call the appropriate configuration for the WAP server, held within my common configuration file. The WAP server configuration is shown below:

configuration WAPserver
{
    param
    (
        [Parameter(Mandatory)]
        [String]$DomainName,

        [Parameter(Mandatory)]
        [System.Management.Automation.PSCredential]$Admincreds,

        [Int]$RetryCount=20,
        [Int]$RetryIntervalSec=30
    )

    Import-DscResource -ModuleName xComputerManagement,xActiveDirectory
    
    Node localhost
    {
        WindowsFeature WAPInstall 
        { 
            Ensure = "Present" 
            Name = "Web-Application-Proxy"
        }  
        WindowsFeature WAPMgmt 
        { 
            Ensure = "Present" 
            Name = "RSAT-RemoteAccess"
        }  
        WindowsFeature ADPS
        {
            Name = "RSAT-AD-PowerShell"
            Ensure = "Present"
        } 
        xWaitForADDomain DscForestWait 
        { 
            DomainName = $DomainName 
            DomainUserCredential= $Admincreds
            RetryCount = $RetryCount 
            RetryIntervalSec = $RetryIntervalSec 
            DependsOn = "[WindowsFeature]ADPS"      
        }
        xComputer DomainJoin
        {
            Name = $env:COMPUTERNAME
            DomainName = $DomainName
            Credential = New-Object System.Management.Automation.PSCredential ("${DomainName}\$($Admincreds.UserName)", $Admincreds.Password)
            DependsOn = "[xWaitForADDomain]DscForestWait"
        }

        LocalConfigurationManager 
        {
            DebugMode = $true    
            RebootNodeIfNeeded = $true
        }
    }     
}

As with ADFS, the configuration joins the domain and adds the required features for WAP. Note that I install the RSAT tools for Remote Access. If you don’t do this, you can’t configure WAP because the PowerShell modules aren’t installed!

The Custom Scripts

The WAP script performs much of the same work as the ADFS script. I need to install the certificate for my service, so that’s copied onto the server by the script before it runs an invoke-command block. The main script is run as the local system account and can successfully connect to the DC as the computer account. I then run my invoke-command with domain admin credentials so I can configure WAP; once inside the invoke-command block network access gets tricky, so I do any copying over the network before entering it!

#
# WapServer.ps1
#
param (
    $vmAdminUsername,
    $vmAdminPassword,
    $fsServiceName,
    $adfsServerName,
    $vmDCname,
    $resourceLocation
)

$password =  ConvertTo-SecureString $vmAdminPassword -AsPlainText -Force
$credential = New-Object System.Management.Automation.PSCredential("$env:USERDOMAIN\$vmAdminUsername", $password)

Write-Verbose -Verbose "Entering Domain Controller Script"
Write-Verbose -verbose "Script path: $PSScriptRoot"
Write-Verbose -Verbose "vmAdminUsername: $vmAdminUsername"
Write-Verbose -Verbose "vmAdminPassword: $vmAdminPassword"
Write-Verbose -Verbose "fsServiceName: $fsServiceName"
Write-Verbose -Verbose "adfsServerName: $adfsServerName"
Write-Verbose -Verbose "env:UserDomain: $env:USERDOMAIN"
Write-Verbose -Verbose "resourceLocation: $resourceLocation"
Write-Verbose -Verbose "==================================="


    # Write an event to the event log to say that the script has executed.
    $event = New-Object System.Diagnostics.EventLog("Application")
    $event.Source = "tuServEnvironment"
    $info_event = [System.Diagnostics.EventLogEntryType]::Information
    $event.WriteEntry("WAPserver Script Executed", $info_event, 5001)


    $srcPath = "\\"+ $vmDCname + "\src"
    $fsCertificateSubject = $fsServiceName + "."+($resourceLocation.Replace(" ","")).ToLower()+".cloudapp.azure.com"
    $fsCertFileName = $fsCertificateSubject+".pfx"
    $certPath = $srcPath + "\" + $fsCertFileName

    #Copy cert from DC
    write-verbose -Verbose "Copying $certpath to $PSScriptRoot"
#        $powershellCommand = "& {copy-item '" + $certPath + "' '" + $workingDir + "'}"
#        Write-Verbose -Verbose $powershellCommand
#        $bytes = [System.Text.Encoding]::Unicode.GetBytes($powershellCommand)
#        $encodedCommand = [Convert]::ToBase64String($bytes)

#        Start-Process -wait "powershell.exe" -ArgumentList "-encodedcommand $encodedCommand"
        copy-item $certPath -Destination $PSScriptRoot -Verbose

Invoke-Command  -Credential $credential -ComputerName $env:COMPUTERNAME -ScriptBlock {

    param (
        $workingDir,
        $vmAdminPassword,
        $domainCredential,
        $adfsServerName,
        $fsServiceName,
        $vmDCname,
        $resourceLocation
    )
    # Working variables

    # Write an event to the event log to say that the script has executed.
    $event = New-Object System.Diagnostics.EventLog("Application")
    $event.Source = "tuServEnvironment"
    $info_event = [System.Diagnostics.EventLogEntryType]::Information
    $event.WriteEntry("In WAPserver scriptblock", $info_event, 5001)

    #go to our packages scripts folder
    Set-Location $workingDir
    
    $zipfile = $workingDir + "\PSPKI.zip"
    $destination = $workingDir
    [System.Reflection.Assembly]::LoadWithPartialName("System.IO.Compression.FileSystem") | Out-Null
    [System.IO.Compression.ZipFile]::ExtractToDirectory($zipfile, $destination)
    
    Import-Module .\tuServDeployFunctions.ps1

    $fsCertificateSubject = $fsServiceName + "."+($resourceLocation.Replace(" ","")).ToLower()+".cloudapp.azure.com"
    $fsCertFileName = $workingDir + "\" + $fsCertificateSubject+".pfx"

    Write-Verbose -Verbose "Importing sslcert $fsCertFileName"
    Import-SSLCertificate -certificateFileName $fsCertFileName -certificatePassword $vmAdminPassword

    $fsIpAddress = (Resolve-DnsName $adfsServerName -type a).ipaddress
    Add-HostsFileEntry -ip $fsIpAddress -domain $fsCertificateSubject


    Set-WapConfiguration -credential $domainCredential -fedServiceName $fsCertificateSubject -certificateSubject $fsCertificateSubject


} -ArgumentList $PSScriptRoot, $vmAdminPassword, $credential, $adfsServerName, $fsServiceName, $vmDCname, $resourceLocation

The script modifies the HOSTS file on the server so it can find the ADFS service and then configures the Web Application Proxy for that ADFS service. It’s worth mentioning at this point the $fsCertificateSubject, which is also my service name. When we first worked on this environment using the old Azure PowerShell commands, the name of the public endpoint was always <something>.cloudapp.net. When I use the new Resource Manager model I discovered that it is now <something>.<Azure Location>.cloudapp.azure.com. The <something> is in our control – we specify it. The <Azure Location> isn’t quite – it is the resource location for our deployment, converted to lowercase with no spaces. You’ll find that same line of code in the DC and ADFS scripts; it’s creating the hostname our service will use based on the resource location specified in the template, passed into the script as a parameter.
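To make the pattern concrete, here’s what that line of code produces for a hypothetical service name deployed to ‘North Europe’:

$fsServiceName = "tuservwap"        # the domainNameLabel from the template (placeholder)
$resourceLocation = "North Europe"  # the resourceLocation parameter
$fsServiceName + "." + ($resourceLocation.Replace(" ","")).ToLower() + ".cloudapp.azure.com"
# -> tuservwap.northeurope.cloudapp.azure.com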

The functions called by that script are shown below:

function Import-SSLCertificate {
    [CmdletBinding()]
    param
    (
        $certificateFileName,
        $certificatePassword
    )    

        Write-Verbose -Verbose "Importing cert $certificateFileName with password $certificatePassword"
        Write-Verbose -Verbose "---"

        Import-Module .\PSPKI\pspki.psm1

        Write-Verbose -Verbose "Attempting to import certificate" $certificateFileName
        # import it
        $password = ConvertTo-SecureString $certificatePassword -AsPlainText -Force
        Import-PfxCertificate –FilePath ($certificateFileName) cert:\localMachine\my -Password $password

}

function Add-HostsFileEntry
{
[CmdletBinding()]
    param
    (
        $ip,
        $domain
    )

    $hostsFile = "$env:windir\System32\drivers\etc\hosts"
    $newHostEntry = "`t$ip`t$domain";

        if((gc $hostsFile) -contains $NewHostEntry)
        {
            Write-Verbose -Verbose "The hosts file already contains the entry: $newHostEntry.  File not updated.";
        }
        else
        {
            Add-Content -Path $hostsFile -Value $NewHostEntry;
        }
}

function Set-WapConfiguration
{
[CmdletBinding()]
Param(
$credential,
$fedServiceName,
$certificateSubject
)

Write-Verbose -Verbose "Configuring WAP Role"
Write-Verbose -Verbose "---"

    #$certificate = (dir Cert:\LocalMachine\My | where {$_.subject -match $certificateSubject}).thumbprint
    $certificateThumbprint = (get-childitem Cert:\LocalMachine\My | where {$_.subject -match $certificateSubject} | Sort-Object -Descending NotBefore)[0].thumbprint

    # install WAP
    Install-WebApplicationProxy -CertificateThumbprint $certificateThumbprint -FederationServiceName $fedServiceName -FederationServiceTrustCredential $credential

}
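Once Install-WebApplicationProxy has completed, the role can be sanity-checked from the same session using the cmdlets that come with the RemoteAccess tools installed by the DSC configuration:

# Confirm the proxy configuration was written and the ADFS trust established
Get-WebApplicationProxyConfiguration
# List anything published through the proxy (empty until applications are added)
Get-WebApplicationProxyApplication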

What’s Left?

This sequence of posts has talked about Resource Templates and how I structure mine based on my experience of developing and repeatedly deploying a pretty complex environment. It’s also given you specific config advice for doing the same as me: Create a Domain Controller and Certificate Authority, create an ADFS server and publish that server via a Web Application Proxy. If you only copy the stuff so far you’ll have an isolated environment that you can access via the WAP server for remote management.

I’m still working on this, however. I have a SQL server to configure. It turns out that DSC modules for SQL are pretty rich and I’ll blog on those at some point. I am also adding a BizTalk server. I suspect that will involve more on the custom script side. I then need to deploy my application itself, which I haven’t even begun yet (although the guys have created a rich set of automation PowerShell scripts to deal with the deployment).

Overall, I hope you take away from this series of posts just how powerful Azure Resource Templates can be when pushing out IaaS solutions. I haven’t even touched on the PaaS components of Azure, but they can be dealt with in the same way. The need to learn this stuff is common across IT, Dev and DevOps, and it’s really interesting and fun to work on (if frustrating at times). I strongly encourage you to go play!

Credits

As with the previous posts, stuff I’ve talked about has been derived in part from existing resources:

  • The Azure Quick Start templates were invaluable in having something to look at in the early days of resource templates before the tooling was here.
  • PSPKI is a fantastic set of PowerShell modules for dealing with certs.
  • The individual VM scripts are derived from work that was done in Black Marble by Andrew Davidson and myself to build the exact same environment in the older Azure manner without resource templates.

Complex Azure Template Odyssey Part Three: ADFS Server

Part One of this series covered the project itself and the overall template structure. Part Two went through how I deploy the Domain Controller in depth. This post will focus on the next server in the chain: The ADFS server that is required to enable authentication in the application which will eventually be installed on this environment.

The Template

The nested deployment template for the ADFS server differs little from my DC template. If anything, it’s even simpler because we don’t have to reconfigure the virtual network after deploying the VM. The screenshot below shows the JSON outline for the template.

[screenshot: JSON outline of the ADFS server template]

You can see that it follows the same pattern as the DC template in part two. I have a VM, a NIC that it depends on and which is attached to our virtual network, and I have VM extensions within the VM itself to enable diagnostics, push a DSC configuration to the VM and execute a custom PowerShell script.

I went through the template construction in detail with the DC, so here I’ll simply show the resources code for you. The VM uses the same Windows Server base image as the DC but doesn’t need the extra disk that we attached to the DC.

"resources": [
  {
    "apiVersion": "2015-05-01-preview",
    "dependsOn": [
    ],
    "location": "[parameters('resourceLocation')]",
    "name": "[variables('vmADFSNicName')]",
    "properties": {
      "ipConfigurations": [
        {
          "name": "ipconfig1",
          "properties": {
            "privateIPAllocationMethod": "Static",
            "privateIPAddress": "[variables('vmADFSIPAddress')]",
            "subnet": {
              "id": "[variables('vmADFSSubnetRef')]"
            }
          }
        }
      ]
    },
    "tags": {
      "displayName": "vmADFSNic"
    },
    "type": "Microsoft.Network/networkInterfaces"
  },
  {
    "name": "[variables('vmADFSName')]",
    "type": "Microsoft.Compute/virtualMachines",
    "location": "[parameters('resourceLocation')]",
    "apiVersion": "2015-05-01-preview",
    "dependsOn": [
      "[concat('Microsoft.Network/networkInterfaces/', variables('vmADFSNicName'))]",
    ],
    "tags": {
      "displayName": "vmADFS"
    },
    "properties": {
      "hardwareProfile": {
        "vmSize": "[variables('vmADFSVmSize')]"
      },
      "osProfile": {
        "computername": "[variables('vmADFSName')]",
        "adminUsername": "[parameters('adminUsername')]",
        "adminPassword": "[parameters('adminPassword')]"
      },
      "storageProfile": {
        "imageReference": {
          "publisher": "[variables('windowsImagePublisher')]",
          "offer": "[variables('windowsImageOffer')]",
          "sku": "[variables('windowsImageSKU')]",
          "version": "latest"
        },
        "osDisk": {
          "name": "[concat(variables('vmADFSName'), '-os-disk')]",
          "vhd": {
            "uri": "[concat('http://', variables('storageAccountName'), '.blob.core.windows.net/', variables('vmStorageAccountContainerName'), '/', variables('vmADFSName'), 'os.vhd')]"
          },
          "caching": "ReadWrite",
          "createOption": "FromImage"
        }
      },
      "networkProfile": {
        "networkInterfaces": [
          {
            "id": "[resourceId('Microsoft.Network/networkInterfaces', variables('vmADFSNicName'))]"
          }
        ]
      }
    },
    "resources": [
      {
        "type": "extensions",
        "name": "IaaSDiagnostics",
        "apiVersion": "2015-06-15",
        "location": "[parameters('resourceLocation')]",
        "dependsOn": [
          "[concat('Microsoft.Compute/virtualMachines/', variables('vmADFSName'))]"
        ],
        "tags": {
          "displayName": "[concat(variables('vmADFSName'),'/vmDiagnostics')]"
        },
        "properties": {
          "publisher": "Microsoft.Azure.Diagnostics",
          "type": "IaaSDiagnostics",
          "typeHandlerVersion": "1.4",
          "autoUpgradeMinorVersion": "true",
          "settings": {
            "xmlCfg": "[base64(variables('wadcfgx'))]",
            "StorageAccount": "[variables('storageAccountName')]"
          },
          "protectedSettings": {
            "storageAccountName": "[variables('storageAccountName')]",
            "storageAccountKey": "[listKeys(variables('storageAccountid'),'2015-05-01-preview').key1]",
            "storageAccountEndPoint": "https://core.windows.net/"
          }
        }
      },
      {
        "type": "Microsoft.Compute/virtualMachines/extensions",
        "name": "[concat(variables('vmADFSName'),'/ADFSserver')]",
        "apiVersion": "2015-05-01-preview",
        "location": "[parameters('resourceLocation')]",
        "dependsOn": [
          "[resourceId('Microsoft.Compute/virtualMachines', variables('vmADFSName'))]",
          "[concat('Microsoft.Compute/virtualMachines/', variables('vmADFSName'),'/extensions/IaaSDiagnostics')]"
        ],
        "properties": {
          "publisher": "Microsoft.Powershell",
          "type": "DSC",
          "typeHandlerVersion": "1.7",
          "settings": {
            "modulesURL": "[concat(variables('vmDSCmoduleUrl'), parameters('_artifactsLocationSasToken'))]",
            "configurationFunction": "[variables('vmADFSConfigurationFunction')]",
            "properties": {
              "domainName": "[variables('domainName')]",
              "vmDCName": "[variables('vmDCName')]",
              "adminCreds": {
                "userName": "[parameters('adminUsername')]",
                "password": "PrivateSettingsRef:adminPassword"
              }
            }
          },
          "protectedSettings": {
            "items": {
              "adminPassword": "[parameters('adminPassword')]"
            }
          }
        }
      },
      {
        "type": "Microsoft.Compute/virtualMachines/extensions",
        "name": "[concat(variables('vmADFSName'),'/adfsScript')]",
        "apiVersion": "2015-05-01-preview",
        "location": "[parameters('resourceLocation')]",
        "dependsOn": [
          "[concat('Microsoft.Compute/virtualMachines/', variables('vmADFSName'))]",
          "[concat('Microsoft.Compute/virtualMachines/', variables('vmADFSName'),'/extensions/ADFSserver')]"
        ],
        "properties": {
          "publisher": "Microsoft.Compute",
          "type": "CustomScriptExtension",
          "typeHandlerVersion": "1.4",
          "settings": {
            "fileUris": [
              "[concat(parameters('_artifactsLocation'),'/AdfsServer.ps1', parameters('_artifactsLocationSasToken'))]",
              "[concat(parameters('_artifactsLocation'),'/PSPKI.zip', parameters('_artifactsLocationSasToken'))]",
              "[concat(parameters('_artifactsLocation'),'/tuServDeployFunctions.ps1', parameters('_artifactsLocationSasToken'))]"
            ],
            "commandToExecute": "[concat('powershell.exe -file AdfsServer.ps1',' -vmAdminUsername ',parameters('adminUsername'),' -vmAdminPassword ',parameters('adminPassword'),' -fsServiceName ',variables('vmWAPpublicipDnsName'),' -vmDCname ',variables('vmDCName'), ' -resourceLocation \"', parameters('resourceLocation'),'\"')]"
          }
        }
      }
    ]
  }
  ]

The DSC Modules

All the DSC modules I need get zipped into the same archive file which is deployed by each DSC extension to the VMs. I showed you that in part one. For the ADFS server, the extension calls the configuration module DSCvmConfigs.ps1\\ADFSserver (note the escaped slash) – the ADFSserver configuration within my single DSCvmConfigs.ps1 file that holds all my configurations. As with the DC configuration, this is based on stuff held in the SharePoint farm template on GitHub.

configuration ADFSserver
{
    param
    (
        [Parameter(Mandatory)]
        [String]$DomainName,

        [Parameter(Mandatory)]
        [String]$vmDCName,

        [Parameter(Mandatory)]
        [System.Management.Automation.PSCredential]$Admincreds,

        [Int]$RetryCount=20,
        [Int]$RetryIntervalSec=30
    )

    Import-DscResource -ModuleName xComputerManagement,xActiveDirectory

    Node localhost
    {
        WindowsFeature ADFSInstall 
        { 
            Ensure = "Present" 
            Name = "ADFS-Federation"
        }  
        WindowsFeature ADPS
        {
            Name = "RSAT-AD-PowerShell"
            Ensure = "Present"

        } 
        xWaitForADDomain DscForestWait 
        { 
            DomainName = $DomainName 
            DomainUserCredential= $Admincreds
            RetryCount = $RetryCount 
            RetryIntervalSec = $RetryIntervalSec 
            DependsOn = "[WindowsFeature]ADPS"      
        }
        xComputer DomainJoin
        {
            Name = $env:COMPUTERNAME
            DomainName = $DomainName
            Credential = New-Object System.Management.Automation.PSCredential ("${DomainName}\$($Admincreds.UserName)", $Admincreds.Password)
            DependsOn = "[xWaitForADDomain]DscForestWait"
        }


        LocalConfigurationManager 
        {
            DebugMode = $true    
            RebootNodeIfNeeded = $true
        }
    }     
}

The DSC for my ADFS server does much less than that of the DC. It installs the Windows features I need (the RSAT-AD-PowerShell tools are needed by the xWaitForADDomain config), makes sure our domain is contactable and joins the server to it. Unfortunately there are no DSC resources around to configure our ADFS server at the moment, and whilst I’m happy writing scripts to do that work, I’m less comfortable writing DSC modules right now!

The Custom Scripts

Once our DSC extension has joined the domain and added our features, it’s over to the custom script extension to configure the ADFS service. As with the DC, I copy down the script itself, a file with my own functions in it, and the PSPKI module.

#
# AdfsServer.ps1
#
param (
    $vmAdminUsername,
    $vmAdminPassword,
    $fsServiceName,
    $vmDCname,
    $resourceLocation
)

$password =  ConvertTo-SecureString $vmAdminPassword -AsPlainText -Force
$credential = New-Object System.Management.Automation.PSCredential("$env:USERDOMAIN\$vmAdminUsername", $password)

Write-Verbose -Verbose "Entering Domain Controller Script"
Write-Verbose -verbose "Script path: $PSScriptRoot"
Write-Verbose -Verbose "vmAdminUsername: $vmAdminUsername"
Write-Verbose -Verbose "vmAdminPassword: $vmAdminPassword"
Write-Verbose -Verbose "fsServiceName: $fsServiceName"
Write-Verbose -Verbose "env:UserDomain: $env:USERDOMAIN"
Write-Verbose -Verbose "resourceLocation: $resourceLocation"
Write-Verbose -Verbose "==================================="

    # Write an event to the event log to say that the script has executed.
    $event = New-Object System.Diagnostics.EventLog("Application")
    $event.Source = "tuServEnvironment"
    $info_event = [System.Diagnostics.EventLogEntryType]::Information
    $event.WriteEntry("ADFSserver Script Executed", $info_event, 5001)


    $srcPath = "\\"+ $vmDCname + "\src"
    $fsCertificateSubject = $fsServiceName + "."+($resourceLocation.Replace(" ",[System.String]::Empty)).ToLower()+".cloudapp.azure.com"
    $fsCertFileName = $fsCertificateSubject+".pfx"
    $certPath = $srcPath + "\" + $fsCertFileName

    #Copy cert from DC
    write-verbose -Verbose "Copying $certpath to $PSScriptRoot"
#        $powershellCommand = "& {copy-item '" + $certPath + "' '" + $workingDir + "'}"
#        Write-Verbose -Verbose $powershellCommand
#        $bytes = [System.Text.Encoding]::Unicode.GetBytes($powershellCommand)
#        $encodedCommand = [Convert]::ToBase64String($bytes)

#        Start-Process -wait "powershell.exe" -ArgumentList "-encodedcommand $encodedCommand"
        copy-item $certPath -Destination $PSScriptRoot -Verbose

Invoke-Command  -Credential $credential -ComputerName $env:COMPUTERNAME -ScriptBlock {

    param (
        $workingDir,
        $vmAdminPassword,
        $domainCredential,
        $fsServiceName,
        $vmDCname,
        $resourceLocation
    )
    # Working variables

    Write-Verbose -Verbose "Entering ADFS Script"
    Write-Verbose -verbose "workingDir: $workingDir"
    Write-Verbose -Verbose "vmAdminPassword: $vmAdminPassword"
    Write-Verbose -Verbose "fsServiceName: $fsServiceName"
    Write-Verbose -Verbose "env:UserDomain: $env:USERDOMAIN"
    Write-Verbose -Verbose "env:UserDNSDomain: $env:USERDNSDOMAIN"
    Write-Verbose -Verbose "env:ComputerName: $env:COMPUTERNAME"
    Write-Verbose -Verbose "resourceLocation: $resourceLocation"
    Write-Verbose -Verbose "==================================="

    # Write an event to the event log to say that the script has executed.
    $event = New-Object System.Diagnostics.EventLog("Application")
    $event.Source = "tuServEnvironment"
    $info_event = [System.Diagnostics.EventLogEntryType]::Information
    $event.WriteEntry("In ADFSserver scriptblock", $info_event, 5001)

    #go to our packages scripts folder
    Set-Location $workingDir
    
    $zipfile = $workingDir + "\PSPKI.zip"
    $destination = $workingDir
    [System.Reflection.Assembly]::LoadWithPartialName("System.IO.Compression.FileSystem") | Out-Null
    [System.IO.Compression.ZipFile]::ExtractToDirectory($zipfile, $destination)
    
    Write-Verbose -Verbose "Importing PSPKI"
    Import-Module .\tuServDeployFunctions.ps1


    $fsCertificateSubject = $fsServiceName + "."+($resourceLocation.Replace(" ","")).ToLower()+".cloudapp.azure.com"
    $fsCertFileName = $workingDir + "\" + $fsCertificateSubject+".pfx"

    Write-Verbose -Verbose "Importing sslcert $fsCertFileName"
    Import-SSLCertificate -certificateFileName $fsCertFileName -certificatePassword $vmAdminPassword


    $adfsServiceAccount = $env:USERDOMAIN+"\"+"svc_adfs"
    $adfsPassword = ConvertTo-SecureString $vmAdminPassword -AsPlainText -Force 
    $adfsCredentials = New-Object System.Management.Automation.PSCredential ($adfsServiceAccount, $adfsPassword) 
    $adfsDisplayName = "ADFS Service"

    Write-Verbose -Verbose "Creating ADFS Farm"
    Create-ADFSFarm -domainCredential $domainCredential -adfsName $fsCertificateSubject -adfsDisplayName $adfsDisplayName -adfsCredentials $adfsCredentials -certificateSubject $fsCertificateSubject


} -ArgumentList $PSScriptRoot, $vmAdminPassword, $credential, $fsServiceName, $vmDCname, $resourceLocation

The script starts by copying the certificate files from the DC. The script extension runs the script as the local system account, so it connects to the share on the DC as the computer account. I copy the files before I execute an invoke-command block that runs as the domain admin. I do this because once I’m in that invoke-command block, network access becomes a real pain!

As you can see, this script doesn’t do a huge amount. Once in the invoke-command it unzips the PSPKI modules, imports the certificate it needs into the computer cert store and then calls a function to configure the ADFS service. The functions called by the script are below:

function Import-SSLCertificate {
    [CmdletBinding()]
    param
    (
        $certificateFileName,
        $certificatePassword
    )    

        Write-Verbose -Verbose "Importing cert $certificateFileName with password $certificatePassword"
        Write-Verbose -Verbose "---"

        Import-Module .\PSPKI\pspki.psm1

        Write-Verbose -Verbose "Attempting to import certificate" $certificateFileName
        # import it
        $password = ConvertTo-SecureString $certificatePassword -AsPlainText -Force
        Import-PfxCertificate –FilePath ($certificateFileName) cert:\localMachine\my -Password $password

}

function Create-ADFSFarm
{
[CmdletBinding()]
param
(
$domainCredential,
$adfsName, 
$adfsDisplayName, 
$adfsCredentials,
$certificateSubject
)

    Write-Verbose -Verbose "In Function Create-ADFS Farm"
    Write-Verbose -Verbose "Parameters:"
    Write-Verbose -Verbose "adfsName: $adfsName"
    Write-Verbose -Verbose "certificateSubject: $certificateSubject"
    Write-Verbose -Verbose "adfsDisplayName: $adfsDisplayName"
    Write-Verbose -Verbose "adfsCredentials: $adfsCredentials"
    Write-Verbose -Verbose "============================================"

    Write-Verbose -Verbose "Importing Module"
    Import-Module ADFS
    Write-Verbose -Verbose "Getting Thumbprint"
    $certificateThumbprint = (get-childitem Cert:\LocalMachine\My | where {$_.subject -match $certificateSubject} | Sort-Object -Descending NotBefore)[0].thumbprint
    Write-Verbose -Verbose "Thumprint is $certificateThumbprint"
    Write-Verbose -Verbose "Install ADFS Farm"

    Write-Verbose -Verbose "Echo command:"
    Write-Verbose -Verbose "Install-AdfsFarm -credential $domainCredential -CertificateThumbprint $certificateThumbprint -FederationServiceDisplayName '$adfsDisplayName' -FederationServiceName $adfsName -ServiceAccountCredential $adfsCredentials"
    Install-AdfsFarm -credential $domainCredential -CertificateThumbprint $certificateThumbprint -FederationServiceDisplayName "$adfsDisplayName" -FederationServiceName $adfsName -ServiceAccountCredential $adfsCredentials -OverwriteConfiguration

}

There’s still stuff to do on the ADFS server once I get to deploying my application: I need to define relying party trusts and custom claims, for example. However, this deployment creates a working ADFS server that will authenticate users against my domain. It’s then published to the outside world safely by the Web Application Proxy role on my WAP server.
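For completeness, when I do get to the application deployment, the relying party trust will be created with the AdfsRelyingPartyTrust cmdlets, along the lines of the sketch below. The name and metadata URL are placeholders, not part of this deployment:

Add-AdfsRelyingPartyTrust -Name "tuServ Application" `
    -MetadataUrl "https://myapp.example.com/FederationMetadata/2007-06/FederationMetadata.xml" `
    -MonitoringEnabled $true -AutoUpdateEnabled $true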

Credit Where It’s Due

Same as before – I stand on the shoulders of others to bring you this stuff:

  • The Azure Quick Start templates were invaluable in having something to look at in the early days of resource templates before the tooling was here.
  • PSPKI is a fantastic set of PowerShell modules for dealing with certs.
  • The individual VM scripts are derived from work that was done in Black Marble by Andrew Davidson and myself to build the exact same environment in the older Azure manner without resource templates.

Complex Azure Template Odyssey Part Two: Domain Controller

In part one of this series of posts I talked about the project driving my creation of these Azure Resource Templates, the structure of the template and what resources I was deploying. This post will go through the deployment and configuration of the first VM, which will become my domain controller and certificate server. In order to achieve my goals I need to deploy the VM, the DSC extension and finally the custom script extension to perform actions that current DSC modules can’t. I’ll show you the template code, the DSC code and the final scripts, and talk about the gotchas I encountered on the way.

Further posts will detail the ADFS and WAP server deployments.

The Template

I’ve already talked about how I’ve structured this project: A core template calls a collection of nested templates – one per VM. The DC template differs from the rest in that it too calls a nested deployment to make changes to my virtual network. Other than that, it follows the same convention.
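For context, the whole tree of templates is pushed out by deploying the core template with a single Azure PowerShell cmdlet. A sketch using the AzureRM module – the resource group and file names here are placeholders, not the project’s actual values:

New-AzureRmResourceGroup -Name "tuServRG" -Location "North Europe"
New-AzureRmResourceGroupDeployment -ResourceGroupName "tuServRG" `
    -TemplateFile .\azuredeploy.json `
    -TemplateParameterFile .\azuredeploy.parameters.json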

[screenshot: JSON outline view of the DC template]

The screenshot above is the JSON outline view of the template. Each of my nested VM templates follows the same pattern: The parameters block in each template is exactly the same. I’m using a standard convention for naming all my resources, so providing I pass the envPrefix parameter between each one I can calculate the name of any resource in the project. That’s important, as we’ll see in a moment. The variables block contains all the variables that the current template needs – things like the IP address that should be assigned or the image we use as our base for the VM. Finally, the resources section holds the items we are deploying to create the domain controller. This VM is isolated from the outside world so we need the VM itself and a NIC to connect it to our virtual network, nothing more. The network is created by the core template before it calls the DC template.

The nested deployment needs explaining. Once we’ve created our domain controller we need to make sure that all our other VMs receive the correct IP address for their DNS. In order to do that we have to reconfigure the virtual network that we have already deployed. The nested deployment here is an artefact of the original approach with a single template – it could actually be fully contained in the DC template.

To explain: We can only define a resource with a given type and name in a template once. Templates are declarative and describe how we want a resource to be configured. With our virtual network we want to reconfigure it after we have deployed subsequent resources. If we describe the network for a second time, the new configuration is applied to our existing resource. The problem is that we have already got a resource in our template for our network. We get around the problem by calling a nested deployment. That deployment is a copy of the network configuration, with the differences we need for our reconfiguration. In my original template which contained all the resources, that nested deployment depended on the DC being deployed and was then called. It had to be a nested deployment because the network was already in there once.

With my new model I could actually just include the contents of the network reconfiguration deployment directly in the DC template. I am still calling the nested resource simply because of the way I split my original template. The end result is the same. The VM gets created, then the DSC and script extensions run to turn it into a domain controller. The network template is then called to set the DNS IP configuration of the network to be the IP address of the newly-minted DC.

{
  "name": "tuServUpdateVnet",
  "type": "Microsoft.Resources/deployments",
  "apiVersion": "2015-01-01",
  "dependsOn": [
    "[concat('Microsoft.Compute/virtualMachines/', variables('vmDCName'))]",
    "[concat('Microsoft.Compute/virtualMachines/', variables('vmDCName'),'/extensions/dcScript')]"
  ],
  "properties": {
    "mode": "Incremental",
    "templateLink": {
      "uri": "[concat(variables('updateVNetDNSTemplateURL'), parameters('_artifactsLocationSasToken'))]",
      "contentVersion": "1.0.0.0"
    },
    "parameters": {
      "resourceLocation": { "value": "[parameters('resourceLocation')]" },
      "virtualNetworkName": { "value": "[variables('virtualNetworkName')]" },
      "virtualNetworkPrefix": { "value": "[variables('virtualNetworkPrefix')]" },
      "virtualNetworkSubnet1Name": { "value": "[variables('virtualNetworkSubnet1Name')]" },
      "virtualNetworkSubnet1Prefix": { "value": "[variables('virtualNetworkSubnet1Prefix')]" },
      "virtualNetworkDNS": { "value": [ "[variables('vmDCIPAddress')]" ] }
    }
  }
}

The code above is contained in my DC template. It calls the nested deployment through a URI to the template, which points to an Azure storage container holding all the resources for my deployment. The template is called with a set of parameters that are mostly variables created in the DC template in accordance with the rules and patterns I’ve set. Everything is the same as the original network deployment, with the exception of the DNS address, which is set to the DC’s address. Below is the network template. Note that the parameters block defines parameters that match those being passed in. All names are case sensitive.

{
    "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {
        "resourceLocation": {
            "type": "string",
            "defaultValue": "West US",
            "allowedValues": [
                "East US",
                "West US",
                "West Europe",
                "North Europe",
                "East Asia",
                "South East Asia"
            ],
            "metadata": {
                "description": "The region to deploy the storage resources into"
            }
        },
        "virtualNetworkName": {
            "type": "string"
        },
        "virtualNetworkDNS": {
            "type": "array"
        },
        "virtualNetworkPrefix": {
            "type": "string"
        },
        "virtualNetworkSubnet1Name": {
            "type": "string"
        },
        "virtualNetworkSubnet1Prefix": {
            "type": "string"
        }
    },
        "variables": {
        },
        "resources": [
            {
                "name": "[parameters('virtualNetworkName')]",
                "type": "Microsoft.Network/virtualNetworks",
                "location": "[parameters('resourceLocation')]",
                "apiVersion": "2015-05-01-preview",
                "tags": {
                    "displayName": "virtualNetworkUpdate"
                },
                "properties": {
                    "addressSpace": {
                        "addressPrefixes": [
                            "[parameters('virtualNetworkPrefix')]"
                        ]
                    },
                    "dhcpOptions": {
                        "dnsServers": "[parameters('virtualNetworkDNS')]"
                    },

                    "subnets": [
                        {
                            "name": "[parameters('virtualNetworkSubnet1Name')]",
                            "properties": {
                                "addressPrefix": "[parameters('virtualNetworkSubnet1Prefix')]"
                            }
                        }
                    ]
                }
            }
        ],
        "outputs": {
        }
    }

The VM itself is pretty straightforward. The code below deploys a virtual NIC and then the VM. The NIC needs to be created first and is then bound to the VM when the latter is deployed. The snippet has the nested resources for the VM extensions removed. I’ll show you those in a bit.

{
  "apiVersion": "2015-05-01-preview",
  "dependsOn": [
  ],
  "location": "[parameters('resourceLocation')]",
  "name": "[variables('vmDCNicName')]",
  "properties": {
    "ipConfigurations": [
      {
        "name": "ipconfig1",
        "properties": {
          "privateIPAllocationMethod": "Static",
          "privateIPAddress": "[variables('vmDCIPAddress')]",
          "subnet": {
            "id": "[variables('vmDCSubnetRef')]"
          }
        }
      }
    ]
  },
  "tags": {
    "displayName": "vmDCNic"
  },
  "type": "Microsoft.Network/networkInterfaces"
},
{
  "name": "[variables('vmDCName')]",
  "type": "Microsoft.Compute/virtualMachines",
  "location": "[parameters('resourceLocation')]",
  "apiVersion": "2015-05-01-preview",
  "dependsOn": [
    "[concat('Microsoft.Network/networkInterfaces/', variables('vmDCNicName'))]"
  ],
  "tags": {
    "displayName": "vmDC"
  },
  "properties": {
    "hardwareProfile": {
      "vmSize": "[variables('vmDCVmSize')]"
    },
    "osProfile": {
      "computername": "[variables('vmDCName')]",
      "adminUsername": "[parameters('adminUsername')]",
      "adminPassword": "[parameters('adminPassword')]"
    },
    "storageProfile": {
      "imageReference": {
        "publisher": "[variables('windowsImagePublisher')]",
        "offer": "[variables('windowsImageOffer')]",
        "sku": "[variables('windowsImageSKU')]",
        "version": "latest"
      },
      "osDisk": {
        "name": "[concat(variables('vmDCName'), '-os-disk')]",
        "vhd": {
          "uri": "[concat('http://', variables('storageAccountName'), '.blob.core.windows.net/', variables('vmStorageAccountContainerName'), '/', variables('vmDCName'), 'os.vhd')]"
        },
        "caching": "ReadWrite",
        "createOption": "FromImage"
      },
      "dataDisks": [
        {
          "vhd": {
            "uri": "[concat('http://', variables('storageAccountName'), '.blob.core.windows.net/', variables('vmStorageAccountContainerName'),'/', variables('vmDCName'),'data-1.vhd')]"
          },
          "name": "[concat(variables('vmDCName'),'datadisk1')]",
          "createOption": "empty",
          "caching": "None",
          "diskSizeGB": "[variables('windowsDiskSize')]",
          "lun": 0
        }
      ]
    },
    "networkProfile": {
      "networkInterfaces": [
        {
          "id": "[resourceId('Microsoft.Network/networkInterfaces', variables('vmDCNicName'))]"
        }
      ]
    }
  },
  "resources": [
  ]
}

The NIC is pretty simple: I tell it which subnet on my network to connect to, that I want a static private IP address, and what that address is. The VM resource then references the NIC in the networkProfile section.

The VM itself is built using the Windows Server 2012 R2 Datacentre image provided by Microsoft. That is specified in the imageReference section. There are lots of VM images, and each is referenced by publisher (in this case MicrosoftWindowsServer), offer (WindowsServer) and SKU (2012-R2-Datacenter). I’m specifying ‘latest’ as the version, but you can be specific if you have built your deployment around a particular version of an image; they are updated regularly to include patches. There is a wide range of images available to save you time. My full deployment makes use of a SQL Server image and I’m also playing with a BizTalk image right now. It’s much easier than trying to sort out the install of products yourself, and the licence cost of the software gets rolled into the VM charge.
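
If you want to see what images are on offer, the Azure PowerShell cmdlets can list the publishers, offers and SKUs. A minimal sketch, assuming the AzureRM module is installed and you are already logged in:

# List the Windows Server offers and SKUs available in a region
$location = 'North Europe'   # image lists can vary slightly by region

Get-AzureRmVMImageOffer -Location $location -PublisherName 'MicrosoftWindowsServer'

Get-AzureRmVMImageSku -Location $location -PublisherName 'MicrosoftWindowsServer' -Offer 'WindowsServer' |
    Select-Object Skus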

We need to add a second disk to our VM to hold the domain databases. The primary disk on a VM has read and write caching enabled. Write caching exposes us to risk of corrupting our domain database in the event of a failure, so I’m adding a second disk and setting the caching on that to none. It’s all standard stuff at this point.

I’m not going to describe the IaaSDiagnostics extension. The markup for that is completely default as provided by the tooling when you add the resource. Let’s move on to the DSC extension.

{
  "type": "Microsoft.Compute/virtualMachines/extensions",
  "name": "[concat(variables('vmDCName'),'/InstallDomainController')]",
  "apiVersion": "2015-05-01-preview",
  "location": "[parameters('resourceLocation')]",
  "dependsOn": [
    "[resourceId('Microsoft.Compute/virtualMachines', variables('vmDCName'))]",
    "[concat('Microsoft.Compute/virtualMachines/', variables('vmDCName'),'/extensions/IaaSDiagnostics')]"
  ],
  "properties": {
    "publisher": "Microsoft.Powershell",
    "type": "DSC",
    "typeHandlerVersion": "1.7",
    "settings": {
      "modulesURL": "[concat(variables('vmDSCmoduleUrl'), parameters('_artifactsLocationSasToken'))]",
      "configurationFunction": "[variables('vmDCConfigurationFunction')]",
      "properties": {
        "domainName": "[variables('domainName')]",
        "adminCreds": {
          "userName": "[parameters('adminUsername')]",
          "password": "PrivateSettingsRef:adminPassword"
        }
      }
    },
    "protectedSettings": {
      "items": {
        "adminPassword": "[parameters('adminPassword')]"
      }
    }
  }
}

I should mention at this point that I am nesting the extensions within the VM resources section. You don’t need to do this – they can be resources at the same level as the VM. However, my experience from deploying this lot a gazillion times is that if I nest the extensions I get a more robust deployment. Pulling them out of the VM appears to increase the chance of the extension failing to deploy.

The DSC extension will do different things depending on the OS version of Windows you are using. For my 2012 R2 VM it will install the necessary required software to use Desired State Configuration and it will then reboot the VM before applying any config. On the current Server 2016 preview images that installation and reboot isn’t needed as the pre-reqs are already installed.

The DSC extension needs to copy your DSC modules and configuration onto the VM. That’s specified in the modulesURL setting and it expects a zip archive with your stuff in it. I’ll show you that when we look at the DSC config in detail later. The configurationFunction setting specifies the PowerShell file that contains the function and the name of the configuration in that file to use. I have all the DSC configs in one file so I pass in DSCvmConfigs.ps1\\DomainController (note the escaped slash).

Finally, we specify the parameters that we want to pass into our PowerShell DSC function. We’re specifying the name of our Domain and the credentials for our admin account.

Once the DSC module has completed, I need to do final configuration with standard PowerShell scripts. The custom script extension is our friend here. Documentation on it is somewhat sparse, and I’ve already blogged on the subject to help you. The template code is below:

{
  "type": "Microsoft.Compute/virtualMachines/extensions",
  "name": "[concat(variables('vmDCName'),'/dcScript')]",
  "apiVersion": "2015-05-01-preview",
  "location": "[parameters('resourceLocation')]",
  "dependsOn": [
    "[concat('Microsoft.Compute/virtualMachines/', variables('vmDCName'))]",
    "[concat('Microsoft.Compute/virtualMachines/', variables('vmDCName'),'/extensions/InstallDomainController')]"
  ],
  "properties": {
    "publisher": "Microsoft.Compute",
    "type": "CustomScriptExtension",
    "typeHandlerVersion": "1.4",
    "settings": {
      "fileUris": [
        "[concat(parameters('_artifactsLocation'),'/DomainController.ps1', parameters('_artifactsLocationSasToken'))]",
        "[concat(parameters('_artifactsLocation'),'/PSPKI.zip', parameters('_artifactsLocationSasToken'))]",
        "[concat(parameters('_artifactsLocation'),'/tuServDeployFunctions.ps1', parameters('_artifactsLocationSasToken'))]"
      ],
      "commandToExecute": "[concat('powershell.exe -file DomainController.ps1',' -vmAdminUsername ',parameters('adminUsername'),' -vmAdminPassword ',parameters('adminPassword'),' -fsServiceName ',variables('vmWAPpublicipDnsName'),' -tsServiceName ',variables('vmTWAPpublicipDnsName'), ' -resourceLocation \"', parameters('resourceLocation'),'\"')]"
    }
  }
}

The extension downloads the files I need: in this case, a zip containing the PSPKI PowerShell modules that I use to perform a bunch of certificate functions, a module of my own functions, and the DomainController.ps1 script that is executed by the extension. You can’t specify parameters for your script in the extension (in fact, you can’t call the script directly – you have to execute the powershell.exe command yourself), so you can see that I build the commandToExecute using a bunch of variables and string concatenation.

The DSC Modules

I need to get the DSC modules I use onto the VM. To save myself going mad, I include the module source in the Visual Studio solution. Over time I’ve evolved a folder structure within the solution to separate templates, DSC files and script files. You can see this structure in the screenshot below.

dsc modules_thumb[2]

I keep all the DSC files together like this because I can then simply zip everything in the DSC folder structure to give me the archive that is deployed by the DSC extension (a sketch of that is below). In the picture you will see that there are a number of .ps1 files in the root. Originally I created separate files for the DSC configuration of each of my VMs; I have since collapsed those into the DSCvmConfigs.ps1 file and simply haven’t removed the others from the project.
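
Producing that archive is a one-liner. A minimal sketch, assuming PowerShell 5’s Compress-Archive cmdlet; the folder and archive names are placeholders:

# Zip everything in the solution's DSC folder; once uploaded, the resulting
# archive is what the vmDSCmoduleUrl variable in the template points at
Compress-Archive -Path .\DSC\* -DestinationPath .\DSC.zip -Force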

My DomainController configuration function began life as the example code from the three-server SharePoint template on GitHub, and I have since extended and modified it. The code is shown below:

configuration DomainController 
{ 
   param 
   ( 
        [Parameter(Mandatory)]
        [String]$DomainName,

        [Parameter(Mandatory)]
        [System.Management.Automation.PSCredential]$Admincreds,

        [String]$DomainNetbiosName=(Get-NetBIOSName -DomainName $DomainName),
        [Int]$RetryCount=20,
        [Int]$RetryIntervalSec=30
    ) 
    
    Import-DscResource -ModuleName xComputerManagement, cDisk, xDisk, xNetworking, xActiveDirectory, xSmbShare, xAdcsDeployment
    [System.Management.Automation.PSCredential]$DomainCreds = New-Object System.Management.Automation.PSCredential ("${DomainName}\$($Admincreds.UserName)", $Admincreds.Password)
    $Interface=Get-NetAdapter|Where Name -Like "Ethernet*"|Select-Object -First 1
    $InterfaceAlias=$($Interface.Name)

    Node localhost
    {
        WindowsFeature DNS 
        { 
            Ensure = "Present" 
            Name = "DNS"
        }
        xDnsServerAddress DnsServerAddress 
        { 
            Address        = '127.0.0.1' 
            InterfaceAlias = $InterfaceAlias
            AddressFamily  = 'IPv4'
        }
        xWaitforDisk Disk2
        {
             DiskNumber = 2
             RetryIntervalSec =$RetryIntervalSec
             RetryCount = $RetryCount
        }
        cDiskNoRestart ADDataDisk
        {
            DiskNumber = 2
            DriveLetter = "F"
        }
        WindowsFeature ADDSInstall 
        { 
            Ensure = "Present" 
            Name = "AD-Domain-Services"
        }  
        xADDomain FirstDS 
        {
            DomainName = $DomainName
            DomainAdministratorCredential = $DomainCreds
            SafemodeAdministratorPassword = $DomainCreds
            DatabasePath = "F:\NTDS"
            LogPath = "F:\NTDS"
            SysvolPath = "F:\SYSVOL"
        }
        WindowsFeature ADCS-Cert-Authority
        {
               Ensure = 'Present'
               Name = 'ADCS-Cert-Authority'
               DependsOn = '[xADDomain]FirstDS'
        }
        WindowsFeature RSAT-ADCS-Mgmt
        {
               Ensure = 'Present'
               Name = 'RSAT-ADCS-Mgmt'
               DependsOn = '[xADDomain]FirstDS'
        }
        File SrcFolder
        {
            DestinationPath = "C:\src"
            Type = "Directory"
            Ensure = "Present"
            DependsOn = "[xADDomain]FirstDS"
        }
        xSmbShare SrcShare
        {
            Ensure = "Present"
            Name = "src"
            Path = "C:\src"
            FullAccess = @("Domain Admins","Domain Computers")
            ReadAccess = "Authenticated Users"
            DependsOn = "[File]SrcFolder"
        }
        xADCSCertificationAuthority ADCS
        {
            Ensure = 'Present'
            Credential = $DomainCreds
            CAType = 'EnterpriseRootCA'
            DependsOn = '[WindowsFeature]ADCS-Cert-Authority'              
        }
        WindowsFeature ADCS-Web-Enrollment
        {
            Ensure = 'Present'
            Name = 'ADCS-Web-Enrollment'
            DependsOn = '[WindowsFeature]ADCS-Cert-Authority'
        }
        xADCSWebEnrollment CertSrv
        {
            Ensure = 'Present'
            Name = 'CertSrv'
            Credential = $DomainCreds
            DependsOn = '[WindowsFeature]ADCS-Web-Enrollment','[xADCSCertificationAuthority]ADCS'
        } 
         
        LocalConfigurationManager 
        {
            DebugMode = $true
            RebootNodeIfNeeded = $true
        }
   }
} 

The .ps1 file contains all the DSC configurations for my environment. The DomainController configuration starts with a list of parameters; these match the ones being passed in by the DSC extension, or have default or calculated values. The Import-DscResource command specifies the DSC modules that the configuration needs – I have to ensure that any I am using are included in the zip file downloaded by the extension. I am using modules that configure disks, network shares, Active Directory domains and certificate services.

The node section then declares my configuration. You can set configurations for multiple hosts in a single DSC configuration block, but I’m only concerned with the host I’m on – localhost. Within the block I then declare what I want the configuration of the host to be. It’s the job of the DSC modules to apply whatever actions are necessary to set the configuration to that which I specify. Just like in our resource template, DSC settings can depend on one another if something needs to be done before something else.

This DSC configuration installs the Windows features needed for creating a domain controller. It looks for the additional drive on the VM and assigns it the drive letter F. It creates the new Active Directory domain and places the domain database files on drive F. Once the domain is up and running, I create a folder on drive C called src and share that folder. I’m doing that because I create two certificates later and I need to make them available to other machines in the domain – more on that in a bit. Finally, we install the certificate services features and configure a certificate authority. The LocalConfigurationManager settings turn on as much debug output as I can and tell the system that if any of the actions in my config demand a reboot, that’s OK – restart as and when required rather than waiting until the end.
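
Incidentally, you can compile the configuration locally to catch errors before a long deployment fails. A minimal sketch, assuming the x* DSC resource modules are installed on your workstation; the domain name and output path are placeholders:

# Allow a plain-text credential purely for local MOF compilation (testing only)
$configData = @{
    AllNodes = @(
        @{ NodeName = 'localhost'; PSDscAllowPlainTextPassword = $true }
    )
}

# Dot-source the configurations, then compile DomainController to a MOF
. .\DSCvmConfigs.ps1
DomainController -DomainName 'contoso.local' -Admincreds (Get-Credential) `
    -ConfigurationData $configData -OutputPath .\TestMof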

I’d love to do all my configuration with DSC but sadly there just aren’t the modules yet. There are some things I just can’t do, like creating a new certificate template in my CA and then generating some specific templates for my ADFS services that are on other VMs. I also can’t set file rights on a folder, although I can set rights on a share. Notice that I grant access to my share to Domain Computers. Both the DSC modules and the custom script extension command are run as the local system account. When I try to read files over the network that means I am connecting to the share as the Computer account and I need to grant access. When I create the DC there are no other VMs in the domain, so I use the Domain Computers group to make sure all my servers will be able to access the files.

Once the DC module completes I have a working domain with a certificate authority.

The Custom Scripts

As with my DSC modules, I keep all the custom scripts for my VMs in one folder within the solution. All of these need to be uploaded to Azure storage so I can access them with the extension and copy them to my VMs; a sketch of that upload follows the screenshot. The screenshot below shows the files in the solution. I have a script for each VM that needs one, which is executed by the extension. I then have a file of shared functions and a zip with supporting modules that I need.

custom scripts_thumb[2]
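
Pushing those files up to the storage container referenced by _artifactsLocation can itself be scripted. A minimal sketch, assuming the Azure.Storage module; the account name, key, folder and container are placeholders:

# Build a storage context, then upload every script in the folder as a blob
$ctx = New-AzureStorageContext -StorageAccountName 'mydeploystorage' `
    -StorageAccountKey '<storage key>'

Get-ChildItem .\Scripts -File | ForEach-Object {
    Set-AzureStorageBlobContent -File $_.FullName -Container 'artifacts' `
        -Blob $_.Name -Context $ctx -Force
}

The domain controller’s script is listed below.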

#
# DomainController.ps1
#
param (
    $vmAdminUsername,
    $vmAdminPassword,
    $fsServiceName,
    $tsServiceName,
    $resourceLocation
)

$password =  ConvertTo-SecureString $vmAdminPassword -AsPlainText -Force
$credential = New-Object System.Management.Automation.PSCredential("$env:USERDOMAIN\$vmAdminUsername", $password)
Write-Verbose -Verbose "Entering Domain Controller Script"
Write-Verbose -verbose "Script path: $PSScriptRoot"
Write-Verbose -Verbose "vmAdminUsername: $vmAdminUsername"
Write-Verbose -Verbose "vmAdminPassword: $vmAdminPassword"
Write-Verbose -Verbose "fsServiceName: $fsServiceName"
Write-Verbose -Verbose "tsServiceName: $tsServiceName"
Write-Verbose -Verbose "env:UserDomain: $env:USERDOMAIN"
Write-Verbose -Verbose "resourceLocation: $resourceLocation"
Write-Verbose -Verbose "==================================="

# Write an event to the event log to say that the script has executed.
$event = New-Object System.Diagnostics.EventLog("Application")
$event.Source = "tuServEnvironment"
$info_event = [System.Diagnostics.EventLogEntryType]::Information
$event.WriteEntry("DomainController Script Executed", $info_event, 5001)


Invoke-Command  -Credential $credential -ComputerName $env:COMPUTERNAME -ScriptBlock {

    param (
        $workingDir,
        $vmAdminPassword,
        $fsServiceName,
        $tsServiceName,
        $resourceLocation
    )
    # Working variables
    $serviceAccountOU = "Service Accounts"
    Write-Verbose -Verbose "Entering Domain Controller Script"
    Write-Verbose -verbose "workingDir: $workingDir"
    Write-Verbose -Verbose "vmAdminPassword: $vmAdminPassword"
    Write-Verbose -Verbose "fsServiceName: $fsServiceName"
    Write-Verbose -Verbose "tsServiceName: $tsServiceName"
    Write-Verbose -Verbose "env:UserDomain: $env:USERDOMAIN"
    Write-Verbose -Verbose "env:UserDNSDomain: $env:USERDNSDOMAIN"
    Write-Verbose -Verbose "env:ComputerName: $env:COMPUTERNAME"
    Write-Verbose -Verbose "resourceLocation: $resourceLocation"
    Write-Verbose -Verbose "==================================="


    # Write an event to the event log to say that the script has executed.
    $event = New-Object System.Diagnostics.EventLog("Application")
    $event.Source = "tuServEnvironment"
    $info_event = [System.Diagnostics.EventLogEntryType]::Information
    $event.WriteEntry("In DomainController scriptblock", $info_event, 5001)

    #go to our packages scripts folder
    Set-Location $workingDir
    
    $zipfile = $workingDir + "\PSPKI.zip"
    $destination = $workingDir
    [System.Reflection.Assembly]::LoadWithPartialName("System.IO.Compression.FileSystem") | Out-Null
    [System.IO.Compression.ZipFile]::ExtractToDirectory($zipfile, $destination)

    Import-Module .\tuServDeployFunctions.ps1

    #Enable CredSSP in server role for delegated credentials
    Enable-WSManCredSSP -Role Server -Force

    #Create OU for service accounts, computer group; create service accounts
    Add-ADServiceAccounts -domain $env:USERDNSDOMAIN -serviceAccountOU $serviceAccountOU -password $vmAdminPassword
    Add-ADComputerGroup -domain $env:USERDNSDOMAIN -serviceAccountOU $serviceAccountOU
    Add-ADComputerGroupMember -group "tuServ Computers" -member ($env:COMPUTERNAME + '$')

    #Create new web server cert template
    $certificateTemplate = ($env:USERDOMAIN + "_WebServer")
    Generate-NewCertificateTemplate -certificateTemplateName $certificateTemplate -certificateSourceTemplateName "WebServer"
    Set-tsCertificateTemplateAcl -certificateTemplate $certificateTemplate -computers "tuServComputers"

    # Generate SSL Certificates

    $fsCertificateSubject = $fsServiceName + "."+($resourceLocation.Replace(" ","")).ToLower()+".cloudapp.azure.com"
    Generate-SSLCertificate -certificateSubject $fsCertificateSubject -certificateTemplate $certificateTemplate
    $tsCertificateSubject = $tsServiceName + ".northeurope.cloudapp.azure.com"
    Generate-SSLCertificate -certificateSubject $tsCertificateSubject -certificateTemplate $certificateTemplate

    # Export Certificates
    $fsCertExportFileName = $fsCertificateSubject+".pfx"
    $fsCertExportFile = $workingDir+"\"+$fsCertExportFileName
    Export-SSLCertificate -certificateSubject $fsCertificateSubject -certificateExportFile $fsCertExportFile -certificatePassword $vmAdminPassword
    $tsCertExportFileName = $tsCertificateSubject+".pfx"
    $tsCertExportFile = $workingDir+"\"+$tsCertExportFileName
    Export-SSLCertificate -certificateSubject $tsCertificateSubject -certificateExportFile $tsCertExportFile -certificatePassword $vmAdminPassword

    #Set permissions on the src folder
    $acl = Get-Acl c:\src
    $acl.SetAccessRuleProtection($True, $True)
    $rule = New-Object System.Security.AccessControl.FileSystemAccessRule("Domain Computers","FullControl", "ContainerInherit, ObjectInherit", "None", "Allow")
    $acl.AddAccessRule($rule)
    $rule = New-Object System.Security.AccessControl.FileSystemAccessRule("Authenticated Users","FullControl", "ContainerInherit, ObjectInherit", "None", "Allow")
    $acl.AddAccessRule($rule)
    Set-Acl c:\src $acl


    #Copy the generated certs to the src folder (created earlier by DSC) so other VMs can fetch them
    Copy-Item -Path "$workingDir\*.pfx" c:\src

} -ArgumentList $PSScriptRoot, $vmAdminPassword, $fsServiceName, $tsServiceName, $resourceLocation

The domain controller script is shown above. There are a whole bunch of Write-Verbose commands that output debug information, which I can see through the Azure Resource Explorer as the script runs.

Pretty much the first thing I do here is an invoke-command. The script is running as local system and there’s not much I can actually do as that account. My invoke-command block runs as the domain administrator so I can get stuff done. Worth noting is that the invoke-command approach makes accessing network resources tricky. It’s not an issue here but it bit me with the ADFS and WAP servers.

I unzip the PSPKI archive that has been copied onto the server and load the modules it contains. The files are downloaded to a folder whose path includes the version number of the script extension, so I can’t be explicit about the location (it’s typically something like C:\Packages\Plugins\Microsoft.Compute.CustomScriptExtension\&lt;version&gt;\Downloads\&lt;n&gt;). Fortunately, I can use the $PSScriptRoot variable to work out that location, and I pass it into the invoke-command as $workingDir. The PSPKI modules allow me to create a new certificate template on my CA so I can generate new certs with exportable private keys. I need the same certs on more than one of my servers, so I need to be able to copy them around. I generate the certs and drop them into the src folder I created with DSC. I also set the rights on that src folder to grant Domain Computers and Authenticated Users access. The latter is probably overdoing it, since the former should do what I need, but I spent a good deal of time being stymied by this so I’m taking a belt and braces approach.

The key functions called by the script above are shown below. Held in my modules file, these are all focused on certificate functions and pretty much all depend on the PSPKI modules.

function Generate-NewCertificateTemplate
{
    [CmdletBinding()]
    # note can only be run on the server with PSPKI eg the ActiveDirectory domain controller
    param
    (
        $certificateTemplateName,
        $certificateSourceTemplateName        
    )

    Write-Verbose -Verbose "Generating New Certificate Template" 

        Import-Module .\PSPKI\pspki.psm1
        
        $certificateCnName = "CN="+$certificateTemplateName

        $ConfigContext = ([ADSI]"LDAP://RootDSE").ConfigurationNamingContext 
        $ADSI = [ADSI]"LDAP://CN=Certificate Templates,CN=Public Key Services,CN=Services,$ConfigContext" 

        $NewTempl = $ADSI.Create("pKICertificateTemplate", $certificateCnName) 
        $NewTempl.put("distinguishedName","$certificateCnName,CN=Certificate Templates,CN=Public Key Services,CN=Services,$ConfigContext") 

        $NewTempl.put("flags","66113")
        $NewTempl.put("displayName",$certificateTemplateName)
        $NewTempl.put("revision","4")
        $NewTempl.put("pKIDefaultKeySpec","1")
        $NewTempl.SetInfo()

        $NewTempl.put("pKIMaxIssuingDepth","0")
        $NewTempl.put("pKICriticalExtensions","2.5.29.15")
        $NewTempl.put("pKIExtendedKeyUsage","1.3.6.1.5.5.7.3.1")
        $NewTempl.put("pKIDefaultCSPs","2,Microsoft DH SChannel Cryptographic Provider, 1,Microsoft RSA SChannel Cryptographic Provider")
        $NewTempl.put("msPKI-RA-Signature","0")
        $NewTempl.put("msPKI-Enrollment-Flag","0")
        $NewTempl.put("msPKI-Private-Key-Flag","16842768")
        $NewTempl.put("msPKI-Certificate-Name-Flag","1")
        $NewTempl.put("msPKI-Minimal-Key-Size","2048")
        $NewTempl.put("msPKI-Template-Schema-Version","2")
        $NewTempl.put("msPKI-Template-Minor-Revision","2")
        $NewTempl.put("msPKI-Cert-Template-OID","1.3.6.1.4.1.311.21.8.287972.12774745.2574475.3035268.16494477.77.11347877.1740361")
        $NewTempl.put("msPKI-Certificate-Application-Policy","1.3.6.1.5.5.7.3.1")
        $NewTempl.SetInfo()

        $WATempl = $ADSI.psbase.children | where {$_.Name -eq $certificateSourceTemplateName}
        $NewTempl.pKIKeyUsage = $WATempl.pKIKeyUsage
        $NewTempl.pKIExpirationPeriod = $WATempl.pKIExpirationPeriod
        $NewTempl.pKIOverlapPeriod = $WATempl.pKIOverlapPeriod
        $NewTempl.SetInfo()
        
        $certTemplate = Get-CertificateTemplate -Name $certificateTemplateName
        Get-CertificationAuthority | Get-CATemplate | Add-CATemplate -Template $certTemplate | Set-CATemplate
}

function Set-tsCertificateTemplateAcl
{
    [CmdletBinding()]
    param
    (
    $certificateTemplate,
    $computers
    )

    Write-Verbose -Verbose "Setting ACL for cert $certificateTemplate to allow $computers"
    Write-Verbose -Verbose "---"

        Import-Module .\PSPKI\pspki.psm1
        
        Write-Verbose -Verbose "Adding group $computers to acl for cert $certificateTemplate"
        Get-CertificateTemplate -Name $certificateTemplate | Get-CertificateTemplateAcl | Add-CertificateTemplateAcl -User $computers -AccessType Allow -AccessMask Read, Enroll | Set-CertificateTemplateAcl

}

function Generate-SSLCertificate
{
    [CmdletBinding()]
    param
    (
    $certificateSubject,
    $certificateTemplate
    )

    Write-Verbose -Verbose "Creating SSL cert using $certificateTemplate for $certificateSubject"
    Write-Verbose -Verbose "---"
    
    Import-Module .\PSPKI\pspki.psm1

    Write-Verbose -Verbose "Generating Certificate (Single)"
        $certificateSubjectCN = "CN=" + $certificateSubject
        # Version #1
        $powershellCommand = "& {get-certificate -Template " + $certificateTemplate + " -CertStoreLocation Cert:\LocalMachine\My -DnsName " + $certificateSubject + " -SubjectName " + $certificateSubjectCN + " -Url ldap:}"
        Write-Verbose -Verbose $powershellCommand
        $bytes = [System.Text.Encoding]::Unicode.GetBytes($powershellCommand)
        $encodedCommand = [Convert]::ToBase64String($bytes)

        Start-Process -wait "powershell.exe" -ArgumentList "-encodedcommand $encodedCommand"
}

function Export-SSLCertificate
{
    [CmdletBinding()]
    param
    (
    $certificateSubject,
    $certificateExportFile,
    $certificatePassword
    )

    Write-Verbose -Verbose "Exporting cert $certificateSubject to $certificateExportFile with password $certificatePassword"
    Write-Verbose -Verbose "---"

    Import-Module .\PSPKI\pspki.psm1

    Write-Verbose -Verbose "Exporting Certificate (Single)"
    
        $password = ConvertTo-SecureString $certificatePassword -AsPlainText -Force
        Get-ChildItem Cert:\LocalMachine\My | where {$_.subject -match $certificateSubject -and $_.Subject -ne $_.Issuer} | Export-PfxCertificate -FilePath $certificateExportFile -Password $password

}

Making sure it’s reusable

One of the things I’m trying to do here is create a collection of reusable configurations. I can take my DC virtual machine config and make it the core of any number of deployments in future. Key items like domain names and machine names are parameterised all the way through the templates, DSC and scripts; a sketch of kicking off such a deployment is below. When Azure Stack arrives I should be able to use the same configuration on-premises and in Azure itself, and we can use the same building blocks for any number of customer projects, even though this was originally built for an internal project.
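
A minimal sketch, assuming the AzureRM module; the resource group name, template URI and parameter values are placeholders, and the real template requires further parameters such as the admin credentials:

# Create (or update) a resource group and deploy the core template into it,
# passing the envPrefix that drives all the resource naming
New-AzureRmResourceGroup -Name 'tuServ-rg' -Location 'North Europe' -Force

New-AzureRmResourceGroupDeployment -ResourceGroupName 'tuServ-rg' `
    -TemplateUri 'https://mystore.blob.core.windows.net/templates/azuredeploy.json' `
    -TemplateParameterObject @{ envPrefix = 'ts'; resourceLocation = 'North Europe' }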

There’s still stuff I need to do here: I need to pull the vNet template directly into the DC template – there’s no need for it to be separate; I could trim back some of the unnecessary access rights I grant on the folders and shares; and you’ll notice that I am still configuring CredSSP, which was part of my original (and ultimately failed) attempt to sort out file access from within the invoke-command blocks.

A quick round of credits

Whilst most of this work has been my own, bashing my head against the desk for a while, it is built upon code created by other people who need to be referenced:

  • The Azure Quick Start templates were invaluable in having something to look at in the early days of resource templates before the tooling was here.
  • PSPKI is a fantastic set of PowerShell modules for dealing with certs.
  • The individual VM scripts are derived from work that was done in Black Marble by Andrew Davidson and myself to build the exact same environment in the older Azure manner without resource templates.

An alternative to setting a build quality on a TFS vNext build

Unlike the old XAML-based builds, TFS vNext builds do not have a concept of build quality. This is an issue for us, as we used the changing of the build quality as a signal to test a build, or to mark it as released to a client (this was all managed with my TFS Alerts DSL to make sure suitable emails and build retention were used).

So how to get around this problem with vNext?

I have used tags on builds, set using the same REST API style calls as detailed in my post on Release Management vNext templates. I also use the REST API to set the retention on the build, so I no longer need to manage this via the alerts DSL.

The following script, if used to wrap the calling of integration tests via TCM, should set the tags and retention on a build:


function Get-BuildDetailsByNumber
{
    param
    (
        $tfsUri ,
        $buildNumber,
        $username,
        $password

    )

    $uri = "$($tfsUri)/_apis/build/builds?api-version=2.0&buildnumber=$buildNumber"

    $wc = New-Object System.Net.WebClient
    if ($username -eq $null)
    {
        $wc.UseDefaultCredentials = $true
    } else
    {
        $wc.Credentials = new-object System.Net.NetworkCredential($username, $password)
    }
    write-verbose "Getting ID of $buildNumber from $tfsUri "

    $jsondata = $wc.DownloadString($uri) | ConvertFrom-Json
    $jsondata.value[0]
 
}

function Set-BuildTag
{
    param
    (
        $tfsUri ,
        $buildID,
        $tag,
        $username,
        $password

    )

 
    $wc = New-Object System.Net.WebClient
    $wc.Headers["Content-Type"] = "application/json"
    if ($username -eq $null)
    {
        $wc.UseDefaultCredentials = $true
    } else
    {
        $wc.Credentials = new-object System.Net.NetworkCredential($username, $password)
    }
   
    write-verbose "Setting BuildID $buildID with Tag $tag via $tfsUri "

    $uri = "$($tfsUri)/_apis/build/builds/$($buildID)/tags/$($tag)?api-version=2.0"

    $data = @{value = $tag } | ConvertTo-Json

    $wc.UploadString($uri,"PUT", $data)
   
}

function Set-BuildRetention
{
    param
    (
        $tfsUri ,
        $buildID,
        $keepForever,
        $username,
        $password

    )

 
    $wc = New-Object System.Net.WebClient
    $wc.Headers["Content-Type"] = "application/json"
    if ($username -eq $null)
    {
        $wc.UseDefaultCredentials = $true
    } else
    {
        $wc.Credentials = new-object System.Net.NetworkCredential($username, $password)
    }
   
    write-verbose "Setting BuildID $buildID with retension set to $keepForever via $tfsUri "

    $uri = "$($tfsUri)/_apis/build/builds/$($buildID)?api-version=2.0"
    $data = @{keepForever = $keepForever} | ConvertTo-Json
    $response = $wc.UploadString($uri,"PATCH", $data)
   
}


# Set execution preferences.
$VerbosePreference ='Continue' # equivalent to -verbose

$ErrorActionPreference = 'Continue' # this controls whether a test failure stops the script

 

$folder = Split-Path -Parent $MyInvocation.MyCommand.Definition

write-verbose "Running $folder\TcmExec.ps1"

 

& "$folder\TcmExec.ps1" -Collection $Collection -Teamproject $Teamproject -PlanId $PlanId  -SuiteId $SuiteId -ConfigId $ConfigId -BuildDirectory $PackageLocation -TestEnvironment $TestEnvironment -SettingsName $SettingsName write-verbose "TCM exited with code '$LASTEXITCODE'"
$newquality = "Test Passed"
$tag = "Deployed to Lab"
$keep = $true
if ($LASTEXITCODE -gt 0 )
{
    $newquality = "Test Failed"
    $tag = "Lab Deployed failed"
    $keep = $false
}
write-verbose "Setting build tag to '$tag' for build $BuildNumber"


$url = "$Collection/$Teamproject"
$jsondata = Get-BuildDetailsByNumber -tfsUri $url -buildNumber $BuildNumber #-username $TestUserUid -password $TestUserPwd
$buildId = $jsondata.id
write-verbose "The build $BuildNumber has ID of $buildId"
 
write-verbose "The build tag set to '$tag' and retention set to '$key'"
Set-BuildTag -tfsUri $url  -buildID $buildId -tag $tag #-username $TestUserUid -password $TestUserPwd
Set-BuildRetention -tfsUri $url  -buildID $buildId  -keepForever $keep #-username $TestUserUid -password $TestUserPwd

# now fail the stage after we have sorted the logging
if ($LASTEXITCODE -gt 0 )
{
    Write-error "Test have failed"
}

If all the tests pass, we see the tag being added and the retention being set; if they fail, just a tag should be set.

image
