BM-Bloggers

The blogs of Black Marble staff

Listing all the PBIs that have no acceptance criteria

Update 24 Aug 2014: Changed the PowerShell to use a pipe-based filter as opposed to nested foreach loops.

The TFS Scrum process template’s Product Backlog Item work item type has an acceptance criteria field. It is good practice to make sure any PBI has this field completed; however, it is not always possible to enter this content when the work item is initially created, i.e. before it is approved. We often find we add a PBI that is basically just a title, then add the summary and acceptance criteria as the product is planned.

It would be really nice to have a TFS work item query that listed all the PBIs that did not have the acceptance criteria field completed. Unfortunately there is no way to check that a rich text or HTML field is empty in TFS queries. It has been requested via UserVoice, but there is no sign of it appearing in the near future.

So we are left with the TFS API to save the day; the following PowerShell function does the job, returning a list of non-completed PBI work items that have an empty Acceptance Criteria field.

 

# Load the assemblies we need; this might be more than is strictly required for this single function,
# but I usually keep all these functions in a single module so they share the references
$ReferenceDllLocation = "C:\Program Files (x86)\Microsoft Visual Studio 12.0\Common7\IDE\ReferenceAssemblies\v2.0\"
Add-Type -Path $ReferenceDllLocation"Microsoft.TeamFoundation.Client.dll" -ErrorAction Stop -Verbose
Add-Type -Path $ReferenceDllLocation"Microsoft.TeamFoundation.Common.dll" -ErrorAction Stop -Verbose
Add-Type -Path $ReferenceDllLocation"Microsoft.TeamFoundation.WorkItemTracking.Client.dll" -ErrorAction Stop -Verbose


 

function Get-TfsPBIWIthNoAcceptanceCriteria {
    <#
    .SYNOPSIS
    This function gets the list of PBI work items that have no acceptance criteria
    .DESCRIPTION
    This function allows a check to be made that all PBIs have a set of acceptance criteria
    .PARAMETER CollectionUri
    TFS Collection URI
    .PARAMETER TeamProject
    Team Project Name
    .EXAMPLE
    Get-TfsPBIWIthNoAcceptanceCriteria -CollectionUri "http://server1:8080/tfs/defaultcollection" -TeamProject "My Project"
    #>
    Param
    (
        [Parameter(Mandatory=$true)]
        [uri] $CollectionUri,

        [Parameter(Mandatory=$true)]
        [string] $TeamProject
    )

    # get the source TPC
    $teamProjectCollection = New-Object Microsoft.TeamFoundation.Client.TfsTeamProjectCollection($CollectionUri)
    try
    {
        $teamProjectCollection.EnsureAuthenticated()
    }
    catch
    {
        Write-Error "Error occurred trying to connect to project collection: $_ "
        exit 1
    }

    # get the work item store
    $wiService = $teamProjectCollection.GetService([Microsoft.TeamFoundation.WorkItemTracking.Client.WorkItemStore])

    # find each candidate work item; we can't check the acceptance criteria state in the query itself
    $pbi = $wiService.Query("SELECT [System.Id] FROM WorkItems WHERE [System.TeamProject] = '{0}' AND [System.WorkItemType] = 'Product Backlog Item' AND [System.State] <> 'Done' ORDER BY [System.Id]" -f $TeamProject)

    # use a single piped line to filter the work items
    $pbi | Where-Object { $_.Fields | Where-Object { $_.ReferenceName -eq 'Microsoft.VSTS.Common.AcceptanceCriteria' -and $_.Value -eq "" } }

    # this is equivalent to the following nested loops, for those who prefer a more long-winded structure
    # $results = @()
    # foreach ($wi in $pbi)
    # {
    #     foreach ($field in $wi.Fields)
    #     {
    #         if ($field.ReferenceName -eq 'Microsoft.VSTS.Common.AcceptanceCriteria' -and $field.Value -eq "")
    #         {
    #             $results += $wi
    #         }
    #     }
    # }
    # $results
}
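
You might then call the function and list the offending work items along these lines (the collection URL and project name here are illustrative):

# List the ID and title of each non-done PBI that has no acceptance criteria
Get-TfsPBIWIthNoAcceptanceCriteria -CollectionUri "http://tfsserver:8080/tfs/DefaultCollection" -TeamProject "Scrum Project" |
    Select-Object Id, Title |
    Format-Table -AutoSize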

Why is my TFS report not failing when I really think it should?

Whilst creating some custom reports for a client we hit a problem: though the reports worked on my development system and their old TFS server, they failed on their new one. The error was that Microsoft_VSTS_Scheduling_CompletedWork was an invalid column name.


Initially I suspected the problem was a warehouse reprocessing issue, but other reports worked so it could not have been that.

It must really be that the column is missing, and that sort of makes sense. On the new server the team was using the Scrum process template; the Microsoft_VSTS_Scheduling_CompletedWork and Microsoft_VSTS_Scheduling_OriginalEstimate fields are not included in this template, and the plan had been to add them to allow some analysis of estimate accuracy. This had been done on my development system, but not on the client’s new server. Once these fields were added to the Task work item the report leapt into life.
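
For reference, adding the missing fields is a case of exporting the Task work item type definition with witadmin, adding the field definitions and importing it again. A rough sketch (the collection URL and project name are examples; run from a Visual Studio command prompt or anywhere witadmin.exe is on the path):

# Export the current Task definition
witadmin exportwitd /collection:http://tfsserver:8080/tfs/DefaultCollection /p:"My Project" /n:Task /f:Task.xml

# Edit Task.xml and add entries similar to the following inside the <FIELDS> element:
#   <FIELD name="Completed Work" refname="Microsoft.VSTS.Scheduling.CompletedWork" type="Double" reportable="measure" />
#   <FIELD name="Original Estimate" refname="Microsoft.VSTS.Scheduling.OriginalEstimate" type="Double" reportable="measure" />

# Import the updated definition
witadmin importwitd /collection:http://tfsserver:8080/tfs/DefaultCollection /p:"My Project" /f:Task.xml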

The question then is, why did this work on the old TFS server? The team project on the old server being used to test the reports did not have the customisation either. However, remember that the OLAP cube for the TFS warehouse is shared between ALL team projects on a server; as one of the other team projects was using the MSF Agile template, the fields were present, hence the report worked.

Remember that shared OLAP cube; it can trip you up over and over again.

Configuring SharePoint 2013 Apps and Multiple Web Applications on SSL with a Single IP Address

Background

Traditionally the approach to hosting multiple SSL IIS websites on a server involved multiple sites, each with its own certificate bound to a single IP address/port combination. If you didn’t mind using non-standard SSL ports, then you could use a single IP address on the server, but the experience was not necessarily pleasant for the end user. Assuming you wanted to use the standard SSL port (443), the servers in the farm could potentially consume large numbers of IP addresses, especially if using large numbers of websites and large numbers of web front end servers. This approach also carried over to SharePoint where multiple SSL web applications were to be provisioned.

Using a wildcard SSL certificate allows multiple SSL web applications (IIS websites) to be bound to the same IP address/port combination as long as host headers are used to allow IIS to separate the traffic as it arrives. This works because IIS uses the same certificate to decrypt traffic no matter which URL is being requested (assuming they all conform to the domain named in the wildcard certificate), and the host header then allows IIS to route the traffic appropriately.

With the introduction of SharePoint 2013 apps, however, there is a requirement for at least two different SSL certificates on the same server: one (in the case of a wildcard, or more if using individual certificates) for the content web applications and a second for the SharePoint app domain that is to be used (the certificate for the apps domain must be a wildcard certificate). The current recommended configuration is to use a separate apps domain (for example, if the main wildcard certificate references *.domain.com, the apps domain should be something along the lines of *.domain-apps.com rather than a subdomain of the main domain, as a subdomain could lead to cookies being attached to other non-SharePoint web-based applications in the same domain space).

For some organisations, the proliferation of IP addresses with the traditional approach to SSL is not an issue. For others, however, either the number of IP addresses they have available is limited, or they wish to reduce the administration overhead involved in using multiple IP addresses on the servers hosting SharePoint. Other scenarios also encourage the use of a single IP address on a server, for example the use of Hyper-V replication, where the system can handle the reassignment of the primary IP address of the server on failover, but additional IP addresses require that some automation be put in place to configure them upon failover.

Note: The following is not a step-by-step set of instructions for configuring apps; there are a number of good blog posts (e.g. http://sharepointchick.com/archive/2012/07/29/setting-up-your-app-domain-for-sharepoint-2013.aspx) and of course the TechNet documentation at http://technet.microsoft.com/en-us/library/fp161236(v=office.15).aspx to lead you through the required steps. This post also borrows heavily from ‘How To Configure SharePoint 2013 On-Premises Deployments for Apps’ by Chris Whitehead – read that article for more in-depth discussion of the configuration required for SharePoint Apps.

SharePoint Apps Requirements

To configure Apps for SharePoint 2013 using a separate domain (rather than a subdomain) for apps, the following requirements must be met:

  • An App domain needs to be determined. If our main domain is ‘contoso.com’, our apps domain could be ‘contosoapps.com’ for example. If SharePoint is available outside the corporate network and apps will be used, the external domain will need to be purchased.
  • An Apps domain DNS zone and wildcard CNAME entry.
  • An Apps domain wildcard certificate.
  • An App Management Service Application and a Subscription Settings Service Application created, configured and running. Note that both of these Service Applications should be in the same proxy group.
  • App settings should be configured in SharePoint.
  • A ‘Listener’ web application with no host header to receive apps traffic.

It is also assumed that the following are already in place:

  • A functional SharePoint 2013 farm.
  • At least one functional content web application configured to use SSL and host header(s).

Infrastructure Configuration

Each App instance is self-contained with a unique URL in order to enforce isolation and prevent cross-domain JavaScript calls through the same-origin policy in web browsers. The format of the App URL is:

[App URL format diagram]

The App domain to be used should be determined based on domains already in use.

Instructions for creating a new DNS zone, the wildcard DNS CNAME entry and a wildcard certificate can be found at http://technet.microsoft.com/en-us/library/fp161236(v=office.15).aspx. As we’re planning to use a single IP address for all web applications and Apps, point the CNAME wildcard entry at either the load balanced IP address (VIP) in use for the content web applications, or the IP address of the SharePoint server (if you’ve only got one).
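
If your DNS is hosted on Windows Server 2012 or later, the zone and the wildcard entry can also be created with the DnsServer PowerShell module. A rough sketch, with example zone and target names (run on, or remotely against, a DNS server):

# Create the apps DNS zone and a wildcard CNAME pointing at the load balanced name for the farm
Add-DnsServerPrimaryZone -Name "contosoapps.com" -ReplicationScope "Forest"
Add-DnsServerResourceRecordCName -ZoneName "contosoapps.com" -Name "*" -HostNameAlias "sharepoint.contoso.com"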

Farm Configuration

To be able to use Apps within SharePoint 2013, both the App Management Service Application and the Subscription Settings Service Application need to be created, configured and running and the App prefix and URL need to be configured. Instructions for getting these two Service Applications configured and running are again available at http://technet.microsoft.com/en-us/library/fp161236(v=office.15).aspx.
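
Once the two Service Applications are running, the App domain and prefix themselves can be set from the SharePoint Management Shell; a minimal sketch, assuming the example apps domain used earlier and an example prefix of ‘app’:

# Set the apps domain and the prefix used when App URLs are generated (values are examples)
Set-SPAppDomain "contosoapps.com"
Set-SPAppSiteSubscriptionName -Name "app" -Confirm:$false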

In addition to the Service Application configuration, a ‘Listener’ web application with no host header is required to allow traffic for SharePoint Apps to be routed correctly. Without the ‘Listener’ web application, and assuming all other web applications in the farm are configured to use host headers, we have the following scenario:

SP 2013 Farm with Apps - No Listener Web App

[diagram from ‘How To Configure SharePoint 2013 On-Premises Deployments for Apps’]

In the above diagram, a DNS lookup for the SharePoint App URL is performed, which points to the NLB address for the content web applications, and traffic is therefore directed to the farm. The host header in the request does not, however, match any of the web applications configured on the farm, so SharePoint doesn’t know how to deal with the request.

We could try configuring SharePoint and IIS so that SharePoint App requests are sent to one of the existing web applications; however, when using SSL we cannot bind more than one certificate to the same IIS web site, and we cannot have an SSL certificate containing multiple domain wildcards. With non-SSL web applications, SharePoint could, in theory, do some clever routing by using the App Management Service Application to work out which web application hosts the SharePoint App web if one of the existing web applications were configured with no host header (I see another set of experiments on the horizon…).

To get around this issue with SSL traffic, a ‘Listener’ web application needs to be created. This web application should have no host header and therefore acts as a catch-all for traffic not matching any of the other host headers configured. Note that if you already have a web application without a host header in SharePoint, you’ll have to recreate it with a host header before SharePoint will allow you to create another one. This results in the following scenario:

SP 2013 Farm with Apps - Listener App Config

[diagram from ‘How To Configure SharePoint 2013 On-Premises Deployments for Apps’]

Again, a DNS lookup for the SharePoint App URL is performed, which points to the NLB address for the content web applications, and traffic is therefore directed to the farm. This time, however, there is a ‘Listener’ web application that receives all traffic not bound for the main content web applications, and internally the SharePoint HTTP module knows where to direct this traffic by using the App Management Service Application to work out where the SharePoint App web is located.

Note: The account used for the application pool of the ‘Listener’ web application must have rights to all the content databases for all of the web applications to which SharePoint Apps traffic will be routed. You could use the same account/application pool for all content web applications, but I’d recommend granting the rights per web application as required using the ‘SPWebApplication.GrantAccessToProcessIdentity’ method instead.
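
As a rough sketch of the latter approach, from the SharePoint Management Shell (the web application URL and account name below are placeholders):

# Grant the 'Listener' application pool account access to the content databases of a content web application
$wa = Get-SPWebApplication "https://portal.contoso.com"
$wa.GrantAccessToProcessIdentity("CONTOSO\svc-listenerpool")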

As we need to use a different certificate on this ‘Listener’ web application, it used to be the case that it would have to be on its own IP address; however, with Windows Server 2012 and 2012 R2 a new feature, Server Name Indication (SNI), was introduced that allows us to get around this limitation. To configure the above scenario using a single server IP address, the following steps need to be completed (note that in my scenario, I’ve deleted the default IIS web site; if it is only bound to port 80, then it should not need to be deleted):

  1. Open IIS Manager on the first of the WFE servers.
  2. Select the first of the content web applications and click ‘Bindings…’ in the actions panel at the right of the screen.
  3. Select the HTTPS binding and click ‘Edit…’
  4. Ensure that the ‘Host name’ field is filled in with the host header and that the ‘Require Server Name Indication’ checkbox is selected.
  5. Ensure that the correct SSL certificate for the URL in the ‘Host name’ field is selected.
  6. Ensure that ‘All Unassigned’ is selected in the ‘IP address’ field.
  7. Click OK to close the binding dialog and close the site bindings dialog.
  8. Repeat the above steps for all of the other content web applications with the exception of the ‘Listener’ web app.
  9. Ensure that the bindings for the ‘Listener’ web application do not have a host header. You will not be able to save the binding details for this web application if ‘Require Server Name Indication’ is selected, so leave it unselected for this web application. Select the Apps domain certificate for this web application.
  10. Start any required IIS SharePoint web applications that are stopped.
  11. Repeat the above steps for all of the other WFE servers.

The result of the steps above is that all of the content web applications with the exception of the ‘Listener’ web application should have a host header, be listening on port 443 on the ‘all unassigned’ IP address, have ‘Require SNI’ selected and have an appropriate certificate bound to the web application. The ‘Listener’ web application should have neither a host header, nor have ‘Require SNI’ selected, be listening on port 443 on the ‘all unassigned’ IP address and have the Apps domain certificate bound to it. This configuration allows:

  • Two wildcard certificates to be used, one for all of the content web applications, the other for the Apps domain bound to the ‘Listener’ web application with all applications listening for traffic on the same IP address/port combination, or
  • Multiple certificates to be bound, one per content web application, plus the Apps domain wildcard to be bound to the ‘Listener’ web application with all applications listening for traffic on the same IP address/port combination, or
  • Some combination of the above.
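
If you would rather script the bindings than click through IIS Manager on every WFE, something along the following lines should produce the same result (the site names, host header and certificate thumbprints are illustrative; the WebAdministration module is required):

Import-Module WebAdministration

# Thumbprints of the certificates already installed in the machine 'My' store (placeholders)
$contentCertThumbprint = "1111111111111111111111111111111111111111"
$appsCertThumbprint    = "2222222222222222222222222222222222222222"

# Content web application: host header plus SNI (SslFlags = 1), listening on all unassigned addresses
New-WebBinding -Name "SharePoint - portal.contoso.com" -Protocol https -Port 443 -HostHeader "portal.contoso.com" -SslFlags 1
(Get-WebBinding -Name "SharePoint - portal.contoso.com" -Protocol https).AddSslCertificate($contentCertThumbprint, "my")

# 'Listener' web application: no host header and no SNI, bound to the apps domain wildcard certificate
New-WebBinding -Name "SharePoint - Listener" -Protocol https -Port 443
(Get-WebBinding -Name "SharePoint - Listener" -Protocol https).AddSslCertificate($appsCertThumbprint, "my")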

There are some limitations to using SNI, namely that a few browsers don’t support the feature. At the time of writing, IE on Windows XP (but then, you’re not using Windows XP, are you?) and the Android 2.x default browser don’t seem to support it, nor do a few more esoteric browsers and libraries. All of the up-to-date browsers seem to work happily with it.

Where have my freeview tuners gone?

I have been a long time happy user of Windows Media Center since its XP days. My current system is an Atom-based Acer Revo running Windows 8.1, with a pair of USB PCTV Nanostick T2 Freeview HD tuners. For media storage I used a USB-attached StarTech RAID disk subsystem. This has been working well for a good couple of years, sitting in a cupboard under the stairs. However, I am about to move house and all the kit is going to have to go under the TV. The Revo is virtually silent, but the RAID crate was going to be an issue; it sounds like an aircraft taking off as the disks spin up.

A change of kit was needed….

I decided the best option was to move to a NAS, thus allowing the potentially noisy disks to be anywhere in the house, so I purchased a Netgear ReadyNAS 104. It shows how prices have dropped over the past few years: this was about half the price of my StarTech RAID, holds well over twice as much and provides much more functionality. I wait to see if it is reliable; only time will tell!

So I popped the NAS on the LAN and started to copy over content from the RAID crate, at the same time (and this, it seems, was the mistake) reconfiguring MCE to point at the NAS. All seemed OK, MCE reconfigured and background copies running, until I tried to watch live TV. MCE said it was trying to find a tuner; I waited. In the end I gave up and went to bed, assuming all would be OK in the morning when the media copy was finished and I could reboot the PC.

Unfortunately it was not; after a reboot it still said it could find no tuner. If I tried to rescan for TV channels it just hung (for well over 48 hours, I left it while I went away). All the other functions of MCE seemed fine. I tried removing the USB tuners, both physically and by uninstalling the drivers, but it had no effect. It seemed I had corrupted the MCE DB, something I had done before looking back at older posts.

In the end I had to reset MCE as detailed on Ben Drawbaugh’s blog. Basically I deleted the contents of c:\programdata\microsoft\ehome and reran the MCE Live TV setup wizard. I was not bothered about my channel list order or series recording settings, so I did not bother with mcbackup for the backup and restore steps.
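
The reset itself can be done from an elevated PowerShell prompt; a sketch of the idea (I’m assuming the Media Center receiver and scheduler services are still named ehRecvr and ehSched as on earlier versions, and you may want to take a copy of the folder first):

# Stop the Media Center services, clear out the eHome data, then re-run the Live TV setup wizard from within MCE
Stop-Service -Name ehRecvr, ehSched -ErrorAction SilentlyContinue
Remove-Item -Path "$env:ProgramData\Microsoft\eHome\*" -Recurse -Force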

Once this was done the tuners both worked again, though the channel scan took a good hour.

Interestingly, I had assumed clearing out the ehome folder would mean I lost all my MCE settings, including the media library settings, but I didn’t; MCE was still pointing at the new NAS shares, so a small win.

One point I had not considered in the move to a NAS is that MCE cannot record TV to a network share. Previously I had written all media to the locally attached RAID crate. The solution was to let MCE save TV to the local C: drive, but use a scheduled job to run ROBOCOPY to move the files to the NAS overnight. I can’t see why it shouldn’t work; again, only time will tell.
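
The scheduled job itself is straightforward to set up with the ScheduledTasks cmdlets in Windows 8.1; a sketch, with example paths and share names (the account the task runs under needs rights to the NAS share):

# Nightly job to move completed recordings from the local disk to the NAS
$action  = New-ScheduledTaskAction -Execute "robocopy.exe" -Argument '"C:\Users\Public\Recorded TV" "\\readynas\media\Recorded TV" *.wtv /MOV /R:2 /W:30'
$trigger = New-ScheduledTaskTrigger -Daily -At 3am
Register-ScheduledTask -TaskName "Move Recorded TV to NAS" -Action $action -Trigger $trigger -User "SYSTEM" -RunLevel Highest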

Update:

Forgot to mention another advantage of moving to the NAS. Previously I had to use the Logitech media server to serve music to my old Roku 1000 unit connected to my even older HiFi; now the Roku can use the NAS directly, making the system setup far easier.

Azure Service Bus Event Hub Firewall Port

I’m investigating the Azure Service Bus Event Hub using the getting started tutorial, and I didn’t seem to be able to receive any data. It turns out that our firewall was blocking an outbound port. After some investigation I found a post which hinted at a port for the on-premises Service Bus. Our IT guys kindly enabled outbound port 5671 and I can now receive data from the event hub.
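
If you want to check the port yourself before involving the firewall team, PowerShell 4.0 (Windows 8.1/Server 2012 R2) can test it directly; the namespace name below is a placeholder:

# Check that outbound AMQP over SSL (port 5671) can reach the Service Bus namespace
Test-NetConnection -ComputerName "yournamespace.servicebus.windows.net" -Port 5671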

For completeness, the following site has details of the other firewall ports required for Service Bus: http://msdn.microsoft.com/en-us/library/ee732535.aspx

Internet of Things (IoT): Gadgeteer and Service Bus

The Internet of Things seems to bring together two of my favourite topics: Gadgeteer and Service Bus. Whilst researching IoT I came across an article in MSDN Magazine written by Clemens Vasters (http://msdn.microsoft.com/en-us/magazine/jj190807.aspx). This article is from July 2012 and things have moved on a little since then, but the fact that he has Gadgeteer talking to Service Bus meant that I had to give it a go myself. The first port of call was the previous article (http://msdn.microsoft.com/en-us/magazine/jj133819.aspx - note the link is wrong in the current article). This explains the architecture that the sample is based upon, using Service Bus topics to send commands to the device and a different topic to allow the device to send data. There is also a provisioning service that allows the devices to be initialised with the correct configuration. This provisioning service also configures the Service Bus Access Control Service (ACS) to allow each device to have its own security key, which means you can turn off devices using ACS.

Before you start take a look at the Service Bus Explorer as this is a useful tool when you are trying to diagnose why things aren’t working.

As I’m using a GHI Electronics Fez Spider main board I am using the .NET Micro Framework 4.2. Upgrading the project to 4.2 produced a couple of errors which needed resolving. Firstly, you will need to change GetJoystickPosition to GetPosition; secondly, change ConvertBase.ToBase64String to Convert.ToBase64String. This allowed me to run the project on my Gadgeteer board. However, I kept getting a Bad Request error whenever I tried to call the provisioning service. I immediately assumed that my configuration was wrong, but after a bit of searching and then turning WCF tracing on I found that the service could not load the Service Bus assembly, so I removed and then re-added the reference to solve the problem. As I mention configuration, it’s probably a good idea to say what each of the settings in the provisioning service is used for:

sharedSignature: Go to https://manage.windowsazure.com/ and log in. Click on Service Bus and then select the service bus namespace you are using. On the bottom menu click the Connection Information button. This will pop up a configuration window containing two keys. The first is part of the connection string and is under the SAS section. Copy the connection string and find the key; this is the sharedSignature for this configuration setting.

servicebusNamespace: This is the name of the service bus namespace as it appears in the management portal, i.e. sb://&lt;servicebusNamespace&gt;.servicebus.windows.net

managementKey: In the same connection information popup where you found the shared signature there is a section at the bottom labelled ACS. The managementKey is the Default Key.

Microsoft.ServiceBus.ConnectionString: I used the connection string that appears in the SAS section of the connection information popup.

The other configuration change you need to make is the URL for the provisioning service. This is hard coded on the Gadgeteer board and is located in the serverAddress variable in Program.cs in the ServiceBusApp project.

The provisioning service should then be ready to go. However, I had problems connecting to the service from the Gadgeteer board as I kept receiving a NotSupportedException each time I called GetRequestStream. This was due to an issue with the Ethernet configuration when trying to connect over HTTPS, which can be solved by updating the SSL seed using the Fez Config tool (https://www.ghielectronics.com/community/forum/topic?id=13927). This is done by clicking the Deployment (Advanced) button and then clicking Update SSL Seed.


Once complete, I could then connect to the provisioning service. The provisioning service should only be called once per device and it is up to the device to store its configuration in a persistent store. This did not appear to be working on my device; some of the settings were being persisted but the topic URLs were not. I changed their type from a Uri to a string and the persistence then seemed to work, and I only went through provisioning once. Each time the provisioning service is called a new subscription is created and a new access control identity and rule are also created.

With all this fixed I could now send messages, but I could not see them. This was because I didn’t have a subscriber to the topic where the data was published. This is easily resolved by creating one, but it will only receive new messages; any messages sent before the subscription is created will be lost.

The provisioning service also has a web page that allows you to send commands to each device. It will broadcast a message to all devices by putting a single message into the devices topic with the Broadcast property set to true. During provisioning, the subscription that is created has a SQL filter applied which allows the subscription to only receive messages that are targeted specifically at the device or that are broadcast. The web page puts a message into the topic to tell the device to set its temperature to a specific value. The device should be listening for messages on its subscription and will act on the command once it is received.

The device never seemed to receive the message, even though the Service Bus Explorer showed that the message was waiting in the queue. Whenever we tried to connect to the subscription, “Bad Request” was returned. After investigation it turns out that the sample only ever sets the event topic URI and not the devices topic URI, so when we try to retrieve the device commands we are trying to connect to the events topic, which is not a subscription. The sample needs modifying in the Microsoft.ServiceBus.Micro project in the MessagingClient class: I added an extra Uri to the constructor and modified the CreateReceiveRequest and CreateLockRequest methods to use this Uri.

The final thing I changed was the command that is sent from the web page and how it was received:

The sender code in Default.aspx.cs in the BackEndWebrole project:

deviceSender.Broadcast(new Dictionary<string, object> { { "Temperature", this.TextBox1.Text } }, "SetTemperature");

And the receiver code in Program.cs in the ServiceBusApp project:

switch (commandType)
{
    case "SetTemperature":
        if (cmd.Properties.Contains("Temperature"))
        {
            this.settings.TargetTemperature = double.Parse((string)cmd.Properties["Temperature"]);
            StoreSettings(this.settings);
        }
        break;
}

I now have a Gadgeteer device talking to the service bus with the ability to send data and receive commands. My next steps are to create a webjob to process the event data (see my previous post) and also look into event hubs.

Windows Home Server 2011 Backup of UEFI/GPT Windows 8.1

Since upgrading to Windows 8.1 at home, I’ve had issues with backing up the computer using my Home Server (not that I helped by introducing a GPT disk and a UEFI rig at the same time…). The symptoms were that the client backup process appeared stuck at 1% progress for a long time before eventually failing.

I finally got a bit of time to look at the machines in question over the weekend and here are the issues that appeared to be causing problems for which I needed to find solutions:

  • The PC is a UEFI machine.
  • The PC uses a GPT hard disk.
  • A VSS error was appearing in the event log on the PC being backed up.
  • A CAPI2 error was appearing in the event log on the PC being backed up.

The first two issues were dealt with quickly by a hotfix for Home Server 2011: http://support.microsoft.com/kb/2781272. Note that the same issue also affects Windows Storage Server 2008 R2 Essentials and Windows Small Business Server 2011 Essentials. More information for these platforms can be found at http://support.microsoft.com/kb/2781278

The VSS error manifests as the event 8194 appearing in the event log of the PC that the backup attempt is run on:

VSS Error 8194

Volume Shadow Copy Service error: Unexpected error querying for the IVssWriterCallback interface. hr = 0x80070005, Access is denied.
. This is often caused by incorrect security settings in either the writer or requestor process.

Examination of the binary data for event 8194 indicates that ‘NT AUTHORITY\NETWORK SERVICE’ is the account receiving the access denied error:

VSS Error Binary Data

Event 8194 is caused by the inability of one or more VSS system writers to communicate with the backup application’s VSS requesting process via the COM calls exposed in the IVssWriterCallback interface. The issue is not caused by a functional error in the backup application, but is rather a security issue caused by the affected VSS writers running as a service under the ‘Network Service’ (or ‘Local Service’) account rather than the Local System or Administrator account. By default, in order for a Windows service to perform a COM activation it must be running as Local System or as a member of the Administrators group.

There are two ways to fix this issue: either change the account under which the erroring VSS writers are running from Network Service to Local System (at which point the service will be running with higher privileges than originally designed), or add the Network Service account to the list of default COM activation permissions, allowing this account to activate the IVssWriterCallback interface. The latter option is the preferred one and can be performed by completing the following steps:

  1. Run dcomcnfg to open the Component Services dialog.
  2. Expand Component Services, then Computers and then right-click on My Computer and select Properties:
    Component Services
  3. Select the COM Security tab and click the Edit Default… button in the Access Permissions area at the top of the dialog.
  4. Click Add and enter Network Service as the account to be added.
  5. Click OK and ensure that only the Local Access checkbox is selected.
  6. Click OK to close the Access Permission dialog, then click OK to close the My Computer Properties dialog.
  7. Close the Component Services dialog and restart the computer to apply the changes. Event 8194 should no longer appear in the event log for the Home Server backup.

The CAPI2 error manifests as the event 513 appearing in the event log of the PC that the backup attempt is run on:

CAPI2 Error 513

Cryptographic Services failed while processing the OnIdentity() call in the System Writer Object.
Details: AddLegacyDriverFiles: Unable to back up image of binary Microsoft Link-Layer Discovery Protocol.
System Error:
Access is denied.
.

The Microsoft Link-Layer Discovery Protocol binary is located at C:\Windows\System32\drivers\mslldp.sys. During the backup process, the VSS process running under the Network Service account calls cryptcatsvc!CSystemWriter::AddLegacyDriverFiles(), which enumerates all the driver records in the Service Control Manager database and tries opening each one of them. The function fails on the MSLLDP record with an ‘Access Denied’ error.

The mslldp.sys configuration registry key is HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\MsLldp and the binary security descriptor for the record is located at HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\MsLldp\Security.

Examining the security descriptor for mslldp using AccessChk (part of the SysInternals suite, available at http://technet.microsoft.com/en-us/sysinternals/bb664922) gives the following result (note: your security descriptor may differ from the permissions below):

C:\>accesschk.exe -c mslldp

Accesschk v5.2 - Reports effective permissions for securable objects
Copyright (C) 2006-2014 Mark Russinovich
Sysinternals - www.sysinternals.com

mslldp
  RW NT AUTHORITY\SYSTEM
  RW BUILTIN\Administrators
  RW S-1-5-32-549
  R  NT SERVICE\NlaSvc

Checking the access rights of another driver in the same location gives the following result:

C:\>accesschk.exe -c mspclock

Accesschk v5.2 - Reports effective permissions for securable objects
Copyright (C) 2006-2014 Mark Russinovich
Sysinternals - www.sysinternals.com

mspclock
  RW NT AUTHORITY\SYSTEM
  RW BUILTIN\Administrators
  R  NT AUTHORITY\INTERACTIVE
  R  NT AUTHORITY\SERVICE

In the case of mslldp.sys, there is no entry for ‘NT AUTHORITY\SERVICE’, therefore no service account will have access to the mslldp driver, hence the error.

To correct this issue, complete the following steps:

  1. From an elevated command prompt, run
    sc sdshow mslldp
    You should receive the following output, or something similar:
    D:(D;;CCDCLCSWRPWPDTLOCRSDRCWDWO;;;BG)(A;;CCDCLCSWRPWPDTLOCRSDRCWDWO;;;SY)(A;;CCDCLCSWRPDTLOCRSDRCWDWO;;;BA)(A;;CCDCLCSWRPWPDTLOCRSDRCWDWO;;;SO)(A;;LCRPWP;;;S-1-5-80-3141615172-2057878085-1754447212-2405740020-3916490453)S:(AU;FA;CCDCLCSWRPWPDTLOCRSDRCWDWO;;;WD)
    Note: Details on Security Descriptor Definition Language can be found at http://msdn.microsoft.com/en-us/library/windows/desktop/aa379567(v=vs.85).aspx
  2. Add the ‘NT AUTHORITY\SERVICE’ entry immediately before the S:(AU;FA;CCDCLCSWRPWPDTLOCRSDRCWDWO;;;WD) entry and use this with the sdset option. For example, using the output from the sdshow option above, this would be:
    sc sdset MSLLDP D:(D;;CCDCLCSWRPWPDTLOCRSDRCWDWO;;;BG)(A;;CCDCLCSWRPWPDTLOCRSDRCWDWO;;;SY)(A;;CCDCLCSWRPDTLOCRSDRCWDWO;;;BA)(A;;CCDCLCSWRPWPDTLOCRSDRCWDWO;;;SO)(A;;LCRPWP;;;S-1-5-80-3141615172-2057878085-1754447212-2405740020-3916490453)(A;;CCLCSWLOCRRC;;;SU)S:(AU;FA;CCDCLCSWRPWPDTLOCRSDRCWDWO;;;WD)
    Note: The above should all be on a single line when entering/pasting it; do not include line breaks in the command. It’s also important to use the output you receive from the command rather than that which I got as yours may be different.
  3. Check the access permissions again with:
    accesschk.exe -c mslldp
    You should now see a list of permissions that includes ‘NT AUTHORITY\SERVICE’:
    C:\>accesschk.exe -c mslldp
    Accesschk v5.2 - Reports effective permissions for securable objects
    Copyright (C) 2006-2014 Mark Russinovich
    Sysinternals - www.sysinternals.com

    mslldp
      RW NT AUTHORITY\SYSTEM
      RW BUILTIN\Administrators
      RW S-1-5-32-549
      R  NT SERVICE\NlaSvc
      R  NT AUTHORITY\SERVICE

  4. Now that the ‘NT AUTHORITY\SERVICE’ permission has been added, Network Service should be able to access the mslldp.sys driver file.

Following the above fixes, my computer is now being successfully backed up using Home Server 2011.

Getting the Typemock TFS build activities to work on a TFS build agent running in interactive mode

Windows 8 store applications need to be built on a TFS build agent running in interactive mode if you wish to run any tests. So whilst rebuilding all our build systems I decided to try to have all the agents running interactively. As we tend to run one agent per VM this was not going to be a major issue, I thought.

However, whilst testing we found that any of our builds that use the Typemock build activities failed when the build agent was running interactively, but worked perfectly when it was running as a service. The error was:

 

Exception Message: Access to the registry key 'HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\TypeMock' is denied. (type UnauthorizedAccessException)
Exception Stack Trace:    at Microsoft.Win32.RegistryKey.Win32Error(Int32 errorCode, String str)
   at Microsoft.Win32.RegistryKey.CreateSubKeyInternal(String subkey, RegistryKeyPermissionCheck permissionCheck, Object registrySecurityObj, RegistryOptions registryOptions)
   at Microsoft.Win32.RegistryKey.CreateSubKey(String subkey, RegistryKeyPermissionCheck permissionCheck)
   at Configuration.RegistryAccess.CreateSubKey(RegistryKey reg, String subkey)
   at TypeMock.Configuration.IsolatorRegistryManager.CreateTypemockKey()
   at TypeMock.Deploy.AutoDeployTypeMock.Deploy(String rootDirectory)
   at TypeMock.CLI.Common.TypeMockRegisterInfo.Execute()
   at TypeMock.CLI.Common.TypeMockRegisterInfo..ctor()   at System.Activities.Statements.Throw.Execute(CodeActivityContext context)
   at System.Activities.CodeActivity.InternalExecute(ActivityInstance instance, ActivityExecutor executor, BookmarkManager bookmarkManager)
   at System.Activities.Runtime.ActivityExecutor.ExecuteActivityWorkItem.ExecuteBody(ActivityExecutor executor, BookmarkManager bookmarkManager, Location resultLocation)

 

So the issue was registry access. Irrespective of whether it was running interactively or as a service I used the same domain service account, which was a local admin on the build agent. The only thing that changed was the mode of running.

After some thought I focused on UAC being the problem, but disabling this did not seem to fix the issue. I was stuck, or so I thought.

However, Robert Hancock, unknown to me, was suffering a similar problem with a TFS build that included a post-build event that was failing to xcopy a BizTalk custom functoid DLL to ‘Program Files’. He kept getting an ‘exit code 4 access denied’ error when the build agent was running interactively. It turns out the solution he found on Daniel Petri’s blog also fixed my issue, as they were both UAC/desktop interaction related.

The solution was to create a group policy for the build agent VMs that set the following (a registry-based equivalent for testing on a single VM is sketched after the list):

  • User Account Control: Behavior of the elevation prompt for administrators in Admin Approval Mode - Set its value to Elevate without prompting.
  • User Account Control: Detect application installations and prompt for elevation - Set its value to Disabled.
  • User Account Control: Only elevate UIAccess applications that are installed in secure locations - Set its value to Disabled.
  • User Account Control: Run all administrators in Admin Approval Mode - Set its value to Disabled.
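
If you want to try the effect on a single build agent before creating the GPO, the same four settings map to registry values under the local security policy key; a sketch of my understanding of the policy-to-value mapping, so verify before rolling it out (run elevated, then reboot):

# Local registry equivalent of the four UAC group policy settings above
$uac = 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System'
Set-ItemProperty -Path $uac -Name ConsentPromptBehaviorAdmin -Value 0  # Elevate without prompting
Set-ItemProperty -Path $uac -Name EnableInstallerDetection -Value 0    # Detect application installations: Disabled
Set-ItemProperty -Path $uac -Name EnableSecureUIAPaths -Value 0        # Only elevate UIAccess apps in secure locations: Disabled
Set-ItemProperty -Path $uac -Name EnableLUA -Value 0                   # Run all administrators in Admin Approval Mode: Disabled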

Once this GPO was pushed out to the build agent VMs and they were rebooted, my Typemock-based builds and Robert’s BizTalk builds all worked as expected.