BM-Bloggers

The blogs of Black Marble staff

Exchange 2013 Cert Change - Unable to Support the STARTTLS SMTP Verb

I saw an issue recently on an Exchange server after the certificate used to secure SMTP and IIS services was changed because the old certificate was about to expire.

The original self-generated default certificate had been replaced with one from a certificate authority, and this had been used for several years without issue. The Exchange server is configured in hybrid mode, with incoming e-mail routed through Microsoft’s Exchange Online Protection (EOP), and TLS is configured to be required.

The actions taken were:

  1. Add the new certificate to the certificate store on the Exchange server. This made the new certificate available within the Exchange Admin Center on the server.
  2. Modify the services assigned to the new certificate to bind SMTP and IIS to the new certificate.
  3. Remove the original certificate from the server.
  4. Restart the Microsoft Exchange Transport Service on the server.

At this point, an error was thrown in the Application Event Log on the server and incoming mail from Exchange Online Protection stopped flowing. The error thrown was:

Log Name:      Application
Source:        MSExchangeFrontEndTransport
Date:          04/02/2016 12:17:20
Event ID:      12014
Task Category: TransportService
Level:         Error
Keywords:      Classic
User:          N/A
Computer:      <Exchange Server FQDN>
Description:
Microsoft Exchange could not find a certificate that contains the domain name <I><Cert Issuer Details><S><Cert Subject Details> in the personal store on the local computer. Therefore, it is unable to support the STARTTLS SMTP verb for the connector <Receive Connector Name> with a FQDN parameter of <I><Cert Issuer Details><S><Cert Subject Details>. If the connector's FQDN is not specified, the computer's FQDN is used. Verify the connector configuration and the installed certificates to make sure that there is a certificate with a domain name for that FQDN. If this certificate exists, run Enable-ExchangeCertificate -Services SMTP to make sure that the Microsoft Exchange Transport service has access to the certificate key.

I ran the suggested command to ensure that the Exchange Transport Service had access to the certificate key, but this didn’t help.

Restoring the soon-to-expire certificate to the certificate store on the server and restarting the Microsoft Exchange Transport Service cleared the error; however, that certificate was about to expire, and the use of expired certificates for TLS to EOP is no longer allowed, so this didn’t really help much.

While digging into the configuration for the receive connector specified in the error thrown, I noticed something interesting. Despite the new certificate being supplied by the same certificate authority as the old one, the issuer specified on the certificate had changed slightly; the subject information was still the same. Sure enough, the properties of the receive connector in question still showed the old certificate details even though Exchange had been configured with the new certificate for SMTP and IIS. The information on the receive connector can be found by issuing the following command:

Get-ReceiveConnector "<Receive Connector Name>" | fl

The property we’re interested in is TlsCertificateName.
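
To show just the properties of interest, a slight variation on the command above can be used:

Get-ReceiveConnector "<Receive Connector Name>" | fl Fqdn,TlsCertificateName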

To correct the error, the following steps were taken:

  1. Locate the issuer and subject information from the new certificate. This can be done by examining the certificate directly via the certificate store, or using PowerShell, e.g.
    $certs = Get-ExchangeCertificate
    Locate the certificate you want to use. The one we wanted was the first on the list.
  2. Assemble the new issuer and subject information in a suitable format for the Receive Connector configuration. Again this can be done by copying the required text from the certificate information, or using PowerShell, e.g.:
    $certinfo = "<I>" + $certs[0].issuer + "<S>" + $certs[0].subject
  3. Modify the Receive Connector configuration to include the new certificate information assembled above, e.g.:
    Set-ReceiveConnector "<Receive Connector Name>" -TlsCertificateName $certinfo
  4. Restart the Microsoft Exchange Transport Service.
  5. Remove the old certificate from the server.
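
Pulling those steps together, a minimal PowerShell sketch of the fix looks like the following (it assumes the new certificate is the first one returned by Get-ExchangeCertificate and uses a placeholder connector name):

# build the <I>...<S>... string the receive connector expects from the new certificate
$certs = Get-ExchangeCertificate
$certinfo = "<I>" + $certs[0].Issuer + "<S>" + $certs[0].Subject

# point the receive connector at the new certificate and restart the transport service
Set-ReceiveConnector "<Receive Connector Name>" -TlsCertificateName $certinfo
Restart-Service MSExchangeTransport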

A quick test of incoming and outgoing mail indicated that everything was flowing as expected.

A vNext build task and PowerShell script to generate release notes as part of TFS vNext build.

Updated 22 Mar 2016: This task is now available as an extension in the VSTS marketplace

A common request I get from clients is: how can I create a custom set of release notes for a build? The standard TFS build report often includes the information required (work items and changesets/commits associated with the build) but not in a format that is easy to redistribute. So I decided to create a set of tools to try to help.

The tools are available on my GitHub account in two forms:

  • a PowerShell script
  • a vNext build task

Both generate a markdown release notes file based on a template passed into the tool. The output report looks something like the following:

Release notes for build SampleSolution.Master

Build Number: 20160229.3
Build started: 29/02/16 15:47:58
Source Branch: refs/heads/master

Associated work items
  • Task 60 [Assigned by: Bill <TYPHOONTFS\Bill>] Design WP8 client
Associated change sets/commits
  • ID bf9be94e61f71f87cb068353f58e860b982a2b4b Added a template
  • ID 8c3f8f9817606e48f37f8e6d25b5a212230d7a86 Start of the project

The Template

The use of a template allows the user to define the layout and fields shown in the release notes document. It is basically a markdown file with tags to denote the fields (the properties on the JSON response objects returned from the VSTS REST API) to be replaced when the tool generates the report file.

The only real change from standard markdown is the use of the @@TAG@@ blocks to denote areas that should be looped over, i.e. the points where we get the details of all the work items and commits associated with the build.

#Release notes for build $defname  
**Build Number**  : $($build.buildnumber)   
**Build started** : $("{0:dd/MM/yy HH:mm:ss}" -f [datetime]$build.startTime)    
**Source Branch** : $($build.sourceBranch) 
###Associated work items 
@@WILOOP@@ 
* **$($widetail.fields.'System.WorkItemType') $($widetail.id)** [Assigned by: $($widetail.fields.'System.AssignedTo')] $($widetail.fields.'System.Title') 
@@WILOOP@@ 
###Associated change sets/commits 
@@CSLOOP@@ 
* **ID $($csdetail.changesetid)$($csdetail.commitid)** $($csdetail.comment)   
@@CSLOOP@@  

Note 1: We can return the build’s startTime and/or finishTime. Remember that if you are running the template within an automated build, the build by definition has not finished, so the finishTime property is empty and can’t be parsed. This does not stop the generation of the release notes, but an error is logged in the build logs.

Note 2: There is some special handling in the @@CSLOOP@@ section: we include both the changesetid and the commitid values, but only one of these will contain a value and the other is blank. This allows the template to work for both Git and TFVC builds.

Behind the scenes, each line of the template is evaluated as a line of PowerShell in turn, with the in-memory versions of the objects providing the runtime values. The objects available to get data from at runtime are:

  • $build – the build details returned by the REST call Get Build Details
  • $workItems – the list of work items associated with the build returned by the REST call Build Work Items
  • $widetail – the details of a given work item inside the loop returned by the REST call Get Work Item
  • $changesets – the list of changesets/commits associated with the build, returned by the REST call Build Changes
  • $csdetail – the details of a given changeset/commit inside the loop, returned by the REST call to Changes or Commit depending on whether it is a Git or TFVC based build
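
To make the mechanism a little more concrete, here is a rough sketch of the evaluation approach (not the actual implementation, and with hypothetical file names):

$templateFile = "template.md"
$outputLines  = @()
foreach ($line in Get-Content $templateFile)
{
    # the @@WILOOP@@ / @@CSLOOP@@ markers are handled separately; every other line
    # is expanded as a PowerShell string so that $(...) expressions pick up the
    # in-memory $build, $widetail and $csdetail objects
    $outputLines += $ExecutionContext.InvokeCommand.ExpandString($line)
}
Set-Content -Path "releasenotes.md" -Value $outputLines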

There is a templatedump.md file in the repo that just dumps out all the available fields, to help you find all the available options.

Differences between the script and the task

The main difference between the PowerShell script and the build task is the way the connection is made to the REST API. Within the build task we pick up the access token from the build agent’s context. For the PowerShell script we need to pass credentials in some form or other, either via parameters or using the default Windows credentials.
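
As a rough illustration of that difference (a sketch, not the exact code in either tool; the SYSTEM_ACCESSTOKEN variable name and URL are assumptions), the REST calls end up being authenticated something like this:

# build task: use the OAuth access token exposed by the build agent as a bearer token
$headers = @{ Authorization = "Bearer $env:SYSTEM_ACCESSTOKEN" }

# PowerShell script: turn a personal access token into a basic auth header instead
$pair    = ":{0}" -f $password
$headers = @{ Authorization = "Basic " + [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes($pair)) }

# either way the call itself is the same
$build = Invoke-RestMethod -Uri "$collectionUrl/$teamproject/_apis/build/builds?api-version=2.0" -Headers $headers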

Usage

PowerShell

The script can be used in a number of ways

To generate a report for a specific build on VSTS

 .\Create-ReleaseNotes.ps1 -collectionUrl https://yoursite.visualstudio.com/defaultcollection -teamproject "Scrum Project" -defname "BuildTest" -outputfile "releasenotes.md" -templatefile "template.md" -buildnumber "yourbuildnum" -password yourpersonalaccesstoken

Or for the last successful build just leave out the buildnumber

 .\Create-ReleaseNotes.ps1 -collectionUrl https://yoursite.visualstudio.com/defaultcollection -teamproject "Scrum Project" -defname "BuildTest" -outputfile "releasenotes.md" -templatefile "template.md" -password yourpersonalaccesstoken

Authentication options

  1. VSTS with a personal access token – just provide the token using the password parameter
  2. If you are using VSTS and want to use alternate credentials just pass a username and password
  3. If you are using the script with an on-premises TFS just leave off both the username and password and the Windows default credentials will be used.

In all cases the debug output is something like the following


VERBOSE: Getting details of build [BuildTest] from server [https://yoursite.visualstudio.com/defaultcollection/Scrum Project]
VERBOSE: Getting build number [20160228.2]
VERBOSE:    Get details of workitem 504
VERBOSE:    Get details of changeset/commit ba7e613388c06b8440c9e7601a8d6fa29d588051
VERBOSE:    Get details of changeset/commit 52570b2abb80b61a4a629dfd31c0ce071c487709
VERBOSE: Writing output file  for build [BuildTest] [20160228.2].

You should expect to get a report like the example shown at the start of this post.

Build Task

The build task needs to be built and uploaded as per the standard process detailed on my vNext Build’s Wiki (I am considering creating a build extensions package to make this easier, so keep an eye on this blog).

Once the task is uploaded to your TFS or VSTS server it can be added to a build process.


The task takes two parameters

  • The output file name which defaults to $(Build.ArtifactStagingDirectory)\releasenotes.md
  • The template file name, which should point to a file in source control.

There is no need to pass credentials; this is done automatically.

When run, you should expect to see the release notes generation logged in the build output and a release notes file in your drop location.


Summary

So I hope some people find these tools useful in generating release notes; let me know if they help and how they could be improved.

My Resource Templates from demos are now on GitHub

I’ve had a number of people ask me if I can share the templates I use in my Resource Template sessions at conferences. It’s taken me a while to find the time, but I have created a repo on GitHub and there is a new Visual Studio solution and deployment project with my code.

One very nice feature that this has enabled me to provide is the same ‘Deploy to Azure’ button as you’ll find in the Azure Quickstart Templates. This meant a few changes to the templates – it turns out that GitHub is case-sensitive for file requests, for example, whilst Azure Storage isn’t. The end result is that you can try out my templates in your own subscription directly from GitHub!

Build Invites

Today I started sending out emails inviting people to join in with Build Bites 2016. I'm hoping to make this year even bigger and better than last!!!

Using MSDeploy to deploy to nested virtual applications in Azure Web Apps

Azure provides many ways to scale and structure web sites and virtual applications. I recently needed to deploy the following structure, where each service endpoint was its own Visual Studio web application project built as an MSDeploy package:

  • http://demo.azurewebsites.net/api/service1
  • http://demo.azurewebsites.net/api/service2
  • http://demo.azurewebsites.net/api/service3

To do this in the Azure Portal I:

  1. Created a Web App for the site http://demo.azurewebsites.net. This pointed to the disk location site\wwwroot; I disabled the folder as an application as there is no application running at this level.
  2. Created a virtual directory api pointing to \site\wwwroot\api, again disabling this folder as an application.
  3. Created a virtual application for each of my services, each with their own folder


I knew from past experience that I could use MSDeploy to deploy to the root site or the api virtual directory. However, I found that when I tried to deploy to any of the service virtual applications I got an error that the web site could not be created. Now, I would not expect MSDeploy to create a directory, so I knew something was wrong at the Azure end.

The fix in the end was simple: it seems the service folders, e.g. \site\wwwroot\api\service1, had not been created by the Azure Portal when I created the virtual directory. I FTP’d onto the web application and created the folder \site\wwwroot\api\service1; once this was done MSDeploy worked perfectly, and I could build the structure I wanted.
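
For reference, the deployment to one of the nested virtual applications ends up looking something like the command below (the site name, package and credentials are placeholders; the real values come from the Web App’s publish profile):

msdeploy.exe -verb:sync -source:package="Service1.zip" -dest:auto,ComputerName="https://demo.scm.azurewebsites.net:443/msdeploy.axd?site=demo",UserName='$demo',Password='<publish password>',AuthType='Basic' -setParam:name="IIS Web Application Name",value="demo/api/service1"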

Running Pester PowerShell tests in the VSTS hosted build service

Updated 22 Mar 2016: This task is available in the VSTS Marketplace

If you are using Pester to unit test your PowerShell code then there is a good chance you will want to include it in your automated build process. To do this, you need to get Pester installed on your build machine. The usual options would be

If you own the build agent VM then any of these options are good; you can even write the NuGet restore into your build process itself. However, there is a problem: the first two options need administrative access as they put the Pester module in the $PSModules folder (under ‘Program Files’), so they can’t be used on VSTS’s hosted build system, where you are not an administrator.

So this means you are left with copying the module (and associated functions folder) to some local working folder and running it manually; but do you really want to have to store the Pester module in your source repo?

My solution was to write a vNext build task to deploy the Pester files and run the Pester tests.


The task takes two parameters

  • The root folder to look for test scripts with the naming convention  *.tests.ps1. Defaults to $(Build.SourcesDirectory)\*
  • The results file name, defaults to $(Build.SourcesDirectory)\Test-Pester.XML
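
Under the hood the task is essentially a thin wrapper around Invoke-Pester; a minimal sketch of the equivalent PowerShell (assuming Pester 3.x and with illustrative parameter names and paths) is:

param($scriptFolder, $resultsFile)

# import the copy of Pester deployed alongside the task (path is illustrative)
Import-Module "$PSScriptRoot\Pester\Pester.psd1"

# run every *.tests.ps1 under the folder and write NUnit-format results
$result = Invoke-Pester -Path $scriptFolder -OutputFile $resultsFile -OutputFormat NUnitXml -PassThru
if ($result.FailedCount -gt 0) { throw "$($result.FailedCount) Pester test(s) failed" }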

The Pester task does not in itself upload the test results; it just throws an error if tests fail. It relies on the standard test results upload task. Add this task and:

  • set it to look for nUnit format files (it already defaults to the correct file name pattern)
  • IMPORTANT: as the Pester task will stop the build on a test failure, set the ‘Always run’ option to make sure the results are still published.


Once all this is added to your build, you can see your Pester test results in the build summary.



You can find the task in my vNextBuild repo

A vNext build task to get artifacts from a different TFS server

With the advent of TFS 2015.2 RC (and the associated VSTS release) we have seen the short-term removal of the ‘External TFS Build’ option for the Release Management artifacts source. This causes me a bit of a problem as I wanted to try out the new on-premises vNext based Release Management features on 2015.2, but don’t want to place the RC on my production server (though there is go-live support). Also, the ability to get artifacts from an on-premises TFS instance when using VSTS opens up a number of scenarios, something I know some of my clients have been investigating.

To get around this blocker I have written a vNext build task that gets a build’s artifacts from its UNC drop location. It supports both XAML and vNext builds, thus replacing the built-in artifact linking features.

Usage

To use the new task

  • Get the task from my vNextBuild repo (build using the instructions on the repo’s wiki) and install it on your TFS 2015.2 instance (also use the notes on the repo’s wiki).
  • In your build, disable the auto getting of the artifacts for the environment (though in some scenarios you might choose to use both the built in linking and my custom task)


  • Add the new task to your environment’s release process; the parameters are:
    • TFS Uri – the URI of the TFS server, including the team project collection (TPC) name
    • Team Project – the project containing the source build
    • Build Definition name – name of the build (can be XAML or vNext)
    • Artifact name – the name of the build artifact (seems to be ‘drop’ for a XAML build)
    • Build Number – the default is to get the latest successful completed build, but you can pass a specific build number
    • Username/Password – if you don’t want to use the default credentials (the user the build agent is running as), these are the credentials used instead. They are passed as ‘basic auth’ so can be used against an on-premises TFS (if basic auth is enabled in IIS) or VSTS (with alternate credentials enabled).


 

When the task runs it should drop artifacts in the same location as the standard mechanism, so can be picked up by any other tasks on the release pipeline using a path similar to $(System.DefaultWorkingDirectory)\SABS.Master.CI\drop
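
For anyone curious what the task is doing internally, a rough sketch of the logic for a vNext build is below (not the actual implementation; the REST calls are from the 2.0 Build API and the variable names stand in for the task parameters):

# find the most recent successful build for the named definition
$definition = Invoke-RestMethod -UseDefaultCredentials -Uri "$tfsUri/$teamProject/_apis/build/definitions?name=$buildDefinition&api-version=2.0"
$build      = Invoke-RestMethod -UseDefaultCredentials -Uri "$tfsUri/$teamProject/_apis/build/builds?definitions=$($definition.value[0].id)&resultFilter=succeeded&`$top=1&api-version=2.0"

# get the UNC drop location of the named artifact and copy it into the working folder
$artifacts  = Invoke-RestMethod -UseDefaultCredentials -Uri "$tfsUri/$teamProject/_apis/build/builds/$($build.value[0].id)/artifacts?api-version=2.0"
$drop       = ($artifacts.value | Where-Object { $_.name -eq $artifactName }).resource.data
Copy-Item -Path $drop -Destination "$env:SYSTEM_DEFAULTWORKINGDIRECTORY\$buildDefinition\$artifactName" -Recurse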

Limitations

The task in its current form does not provide any linking of artifacts to the build reports, or allow the selection of build versions when the release is created, which removes some audit trail features.

However, it does provide a means to get a pair of TFS servers working together, so can certainly enable some R&D scenarios while we await the 2015.2 RTM and/or the ‘official’ linking of external TFS builds as artifacts.

Azure - A really Useful Seminar

 

Last week I had the delight of helping to present a really useful seminar at the University of Hull alongside Peter Roberts. Here is a similar blog post he has written, where he also walks through how you can use Azure and a MySQL database to store the high scores of a video game.

Overall the presentation was a success, with a fair few people turning up, seemingly enjoying it and looking forward to pursuing Azure in their own ventures.

We were also asked by Rob Miles to run a cloud workshop later in the semester which we are now looking into :D

 

Running a SaaS service at scale

Brian Harry has done a couple of very interesting posts (post 1 and post 2) on the recent outages of the VSTS service. Whether you use VSTS or not they make interesting reading for anyone who is involved in running SaaS based systems, or anything at scale.

From the posts the obvious reading is that you cannot underestimate the importance of:

  • in-production monitoring
  • having a response plan
  • doing a proper root cause analysis
  • and putting steps in place to stop the problem happening again

Well worth a read

Repost: What I learnt extending my VSTS Release Process to on-premises Lab Management Network Isolated Environments

This is a repost of a guest article first posted on the Microsoft UK Developers Blog: How to extend a VSTS release process to on-premises

Note that since I wrote the original post there have been some changes on VSTS and the release of TFS 2015.2 RC1. These mean there is no longer an option to pull build artifacts from an external TFS server as part of a release, invalidating some of the options this post discusses. I have struck out the outdated sections. The rest of the post is still valid, especially the section on where to update configuration settings. The release of TFS 2015.2 RC1 actually makes many of the options easier, as you don’t have to bridge between on-premises TFS and VSTS; both build and release features are on the same server.


 

Background

Visual Studio Team Services (VSTS) provides a completely new version of Release Management, replacing the version shipped with TFS 2013/2015. This new system is based on the same cross-platform agent model as the new vNext build system shipped with TFS 2015 (and also available on VSTS). At present this new Release Management system is only available on VSTS, but the features timeline suggests we should see it on-premises in the upcoming 2015.2 update.

You might immediately think that, as this feature is only available in VSTS at present, you cannot use this new release management system with on-premises services, but this would not be true. The Release Management team have provided an excellent blog post on running an agent connected to your VSTS instance inside your on-premises network to enable hybrid scenarios.

This works well for deploying to domain connected targets, especially if you are using Azure Active Directory Sync to sync your corporate domain and AAD to provide a directory backed VSTS instance. In this case you can use a single corporate domain account to connect to VSTS and to the domain services you wish to deploy to from the on-premises agent.

However, I make extensive use of TFS Lab Management to provide isolated dev/test environments (linked to an on-premises TFS 2015.1 instance). If I want to deploy to these VMs it adds complexity in how I need to manage authentication, as I don’t want to have to place a VSTS build agent in each transiently created dev/test lab: one, because it is complex, and two, because there is a cost to having more than one self-provisioned vNext build agent.

It is fair to say that deploying to an on-premises Lab Management environment from a VSTS instance is an edge case, but the same basic process will be needed when the new Release Management features become available on-premises.

Now, I would be the first to say that there is a good case to look at a move away from Lab Management to using Azure Dev Labs which are currently in preview, but Dev Labs needs fuller Azure Resource Manager support before we can replicate the network isolated Lab Management environments I need.

The Example

So at this time, I still need to be able to use the new Release Management with my current Lab Management network isolated labs, but this raises some issues of authentication and just what is running where. So let us work through an example; say I want to deploy a SQL DB via a DACPAC and a web site via MSDeploy on the infrastructure shown below.

 

[Infrastructure diagram]

Both the target SQL and Web servers live inside the Lab Management isolated network on the proj.local domain, but have DHCP assigned addresses on the corporate LAN in the form vslm-[guid].corp.com (managed by Lab Management), so I can access them from the build agent with appropriate credentials (a login for the proj.local domain within the network isolated lab).

The first step is to install a VSTS build agent linked to my VSTS instance; once this is done we can start to create our release pipeline. The first stage is to get the artifacts we need to deploy, i.e. the output of builds. These could be XAML or vNext builds on the VSTS instance, or from the on-premises TFS instance, or a Jenkins build. Remember a single release can deploy any number of artifacts (builds), e.g. the output of a number of builds. It is this fact that makes this setup not as strange as it initially appears. We are just using VSTS Release Management to orchestrate a deployment to on-premises systems.

The problem we have is that though our release now has artifacts, we now need to run some commands on the VM running the vNext Build Agent to do the actual deployment. VSTS provides a number of deployment tasks to help in this area. Unfortunately, at the time of writing, the list of deployment tasks in VSTS is somewhat Azure-focused, so not that much use to me.


This will change over time as more tasks get released; you can see what is being developed on the VSO Agent Task GitHub Repo (and of course you could install versions from this repo if you wish).

So for now I need to use my own scripts; as we are on a Windows-based system (not Linux or Mac), this means some PowerShell scripts.

The next choice becomes ‘do I run the script on the Build Agent VM or remotely on the target VM’ (within the network isolated environment). The answer is the age-old consultant’s answer: ‘it depends’. In the case of both DACPAC and MSDeploy deployments, there is the option to do a remote deployment, i.e. run the deployment command on the Build Agent VM and have it remotely connect to the target VMs in the network isolated environment. The problem with this way of working is that I would need to open more ports on the SQL and Web VMs to allow the remote connections; I did not want to do this.

The alternative is to use PowerShell remoting: in this model I trigger the script on the Build Agent VM, but it uses PowerShell remoting to run the command on the target VM. For this I only need to enable remote PowerShell on the target VMs; this is done by running the following command and following the prompts on each target VM to set up the required services and open the correct ports on the target VM’s firewall.

winrm quickconfig

This is something we are starting to do as standard to allow remote management via PowerShell on all our VMs.
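
Once that is in place, running a command on a target VM from the Build Agent VM is a one-liner; a hypothetical example (the machine name and credential variable are placeholders) being:

Invoke-Command -ComputerName vslm-1234.corp.com -Credential $projLocalCredential -ScriptBlock { hostname }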

So at this point it all seems fairly straightforward: run a couple of remote PowerShell scripts and all is good. But no, there is a problem.

A key feature of Release Management is that you can provide different configurations for different environments e.g. the DB connection string is different for the QA lab as opposed to production. These values are stored securely in Release Management and applied as needed.


The way these variables are presented is as environment variables on the Build Agent VM, hence they can be accessed from PowerShell in the form $env:__DOMAIN__. IT IS IMPORTANT TO REMEMBER that they are not presented on any target VMs in the isolated lab network environment, or to these VMs via PowerShell remoting.

So if we are intending to use remote PowerShell execution for our deployments we can’t just access settings via environment variables in the scripts being run remotely; we have to pass the environment variables in as PowerShell command-line arguments.

This works OK for the DACPAC deployment as we only need to pass in a few fixed arguments. For example, the PowerShell script arguments for the package name, target server and DB name, using the Release Management variables in their $(variable) form, become:

-DBPackage $(DBPACKAGE) -TargetDBName $(TARGETDBNAME) -TargetServer $(TARGETSERVERNAME)
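
The wrapper script those arguments feed into is then little more than a thin shell around sqlpackage.exe, something like the sketch below (the sqlpackage.exe path depends on the SQL Server/DAC framework version installed, and the remote execution itself is handled as described later):

param($DBPackage, $TargetDBName, $TargetServer)

# publish the DACPAC to the target database
& "C:\Program Files (x86)\Microsoft SQL Server\120\DAC\bin\sqlpackage.exe" /Action:Publish /SourceFile:"$DBPackage" /TargetServerName:"$TargetServer" /TargetDatabaseName:"$TargetDBName"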

However, for the MSDeploy deploy there is no simple fixed list of parameters. This is because as well as parameters like package names, we need to modify the setparameters.xml file at deployment time to inject values for our web.config from the release management system.

The solution I have adopted is not to try to pass this potentially long list of arguments into a script to be run remotely; the command-line argument just becomes hard to edit without making errors, and needs to be updated each time we add an extra variable.

The alternative is to update the setparameters.xml file on the Build Agent VM before we attempt to run it remotely. To this end I have written a custom build task to handle the process, which can be found on my GitHub repo. This updates a named setparameters.xml file using token replacement based on environment variables set by Release Management. If you would rather automatically find a number of setparameters.xml files using wildcards (because you are deploying many sites/services) and update them all with a single set of tokens, have a look at Colin Dembovsky’s build task which does just that.
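
As an illustration of the idea (a hedged sketch rather than the code of either task), the token replacement amounts to something like:

param($setParametersFile)

# read the setparameters.xml file produced alongside the MSDeploy package
$content = Get-Content $setParametersFile -Raw

# swap each __TOKEN__ placeholder for the matching environment variable
# that Release Management has set for the current environment
foreach ($match in ([regex]'__\w+__').Matches($content))
{
    $value   = (Get-Item "env:$($match.Value)").Value
    $content = $content.Replace($match.Value, $value)
}

Set-Content -Path $setParametersFile -Value $content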

So given this technique my release steps become:

1. Get the artifacts from the builds to the Build Agent VM.

2. Update the setparameters.xml file using environment variables on the Build Agent VM.

3. Copy the downloaded (and modified) artifacts to all the target machines in the environment.

4. On the SQL VM run the sqlpackage.exe command to deploy the DACPAC using remote PowerShell execution.

5. On the Web VM run the MSDeploy command using remote PowerShell execution.


The PowerShell scripts I run in the final two tasks are just simple wrappers around the underlying commands. The key fact is that because they are scripts they allow remote execution. The targeting of the execution is done by associating each task with a target machine group, and filtering either by name or, in my case, role, to target specific VMs.


In my machine group I have defined both my SQL and Web VMs using their names on the corporate LAN, assigning a role to each to make targeting easier. Note that it is here, in the machine group definition, that I provide the credentials required to access the VMs in my network isolated environment, i.e. a proj.local set of credentials.


Once I get all these settings in place I am able to build a product on my VSTS build system (or my on-premises TFS instance) and, using this VSTS-connected but on-premises-located Build Agent, deploy my DB and web site to a Lab Management network isolated test environment.

There is no reason why I cannot add more tasks to this release pipeline to perform more actions such as run tests (remember the network isolated environment already has TFS Test Agents installed, but they are pointing to the on-premises TFS instance) or to deploy to other environments.

Summary

As I said before, this is an edge case, but I hope it shows how flexible the new build and release systems can be for both TFS and VSTS.