The blogs of Black Marble staff

Test-SPContentDatabase False Positive

I was recently performing a SharePoint 2013 to 2016 farm upgrade and noticed an interesting issue when performing tests on content databases to be migrated to the new system.

As part of the migration of a content database, it’s usual to perform a ‘Test-SPContentDatabase’ operation against each database before attaching it to the web application. On the farm that I was migrating, I got mixed responses to the operation, with some databases passing the check successfully and others giving the following error:

PS C:\> Test-SPContentDatabase SharePoint_Content_Share_Site1

Category        : Configuration
Error           : False
UpgradeBlocking : False
Message         : The [Share WebSite] web application is configured with
                  claims authentication mode however the content database you
                  are trying to attach is intended to be used against a
                  windows classic authentication mode.
Remedy          : There is an inconsistency between the authentication mode of
                  target web application and the source web application.
                  Ensure that the authentication mode setting in upgraded web
                  application is the same as what you had in previous
                  SharePoint 2010 web application. Refer to the link
                  "" for more
Locations       :

This was interesting, as all of the databases were attached to the same content web application and had been created on the current system (i.e. not migrated to it from an earlier version of SharePoint), and therefore should all have been in claims authentication mode. Also of note is the reference to SharePoint 2010 in the error message; I guess the cmdlet hasn’t been updated in a while…

After a bit of digging, it turned out that the databases that threw the error when tested had all been created and had some initial configuration applied, but nothing more. Looking into the configuration, no individual users had been granted permissions to the site (other than the default admin accounts added as the primary and secondary site collection administrators when the site collection was created), but an Active Directory group had been given site collection administrator permissions.

A quick peek at the UserInfo table for the database concerned revealed the following (the screen shot below is from a test system used to replicate the issue):

UserInfo Table

The tp_Login entry highlighted corresponds to the Active Directory group that had been added as a site collection administrator.

Trevor Seward’s blog post ‘Test-SPContentDatabase Classic to Claims Conversion’ showed what was happening. When the Test-SPContentDatabase cmdlet runs, it looks for the first entry in the UserInfo table that matches the following rule:

  • tp_IsActive = 1 AND
  • tp_SiteAdmin = 1 AND
  • tp_Deleted = 0 AND
  • tp_Login not LIKE ‘I:%’
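The check can be approximated with a T-SQL query along these lines (a sketch only – run it against a restored copy of the content database, as querying live SharePoint databases directly is unsupported; the database name is taken from the example above):

```sql
-- Approximation of the rule Test-SPContentDatabase evaluates;
-- any row returned here triggers the claims/classic warning.
SELECT TOP 1 tp_Login, tp_Title
FROM [SharePoint_Content_Share_Site1].[dbo].[UserInfo]
WHERE tp_IsActive = 1
  AND tp_SiteAdmin = 1
  AND tp_Deleted = 0
  AND tp_Login NOT LIKE 'i:%';
```

An Active Directory group added as a site collection administrator is stored with a login of the form `c:0+.w|<SID>`, which does not match the `i:%` pattern, so it satisfies the query even in a claims-mode database.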

In our case, an Active Directory group assigned as a site collection administrator matched this set of rules exactly, so the query returned a result and the message was displayed, even though the database was indeed configured for claims authentication rather than classic-mode authentication.

For the organisation concerned, having an Active Directory group configured as the site collection administrator for some of their site collections makes sense, so they’ll likely see the same message next time they upgrade. In this case it was a false positive and could safely be ignored; indeed, attaching the databases that threw the error to a 2016 web application didn’t generate any issues.

Steps to reproduce:

  1. Create a new content database (to keep everything we’re going to test out of the way).
  2. Create a new site collection in the new database adding site collection administrators as normal.
  3. Add a domain group to the list of site collection administrators.
  4. Run the Test-SPContentDatabase cmdlet against the new database.
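Those steps can be sketched in PowerShell (the URLs, database name and account/group names below are placeholders for your own values):

```powershell
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

# 1. New content database, kept separate from existing content
New-SPContentDatabase -Name "SP_Content_Repro" -WebApplication "https://share.proj.local"

# 2. New site collection in that database with normal admin accounts
$site = New-SPSite -Url "https://share.proj.local/sites/repro" `
    -ContentDatabase "SP_Content_Repro" -Template "STS#0" `
    -OwnerAlias "PROJ\spadmin1" -SecondaryOwnerAlias "PROJ\spadmin2"

# 3. Add a domain group as a site collection administrator
$group = $site.RootWeb.EnsureUser("PROJ\SP Site Admins")
$group.IsSiteAdmin = $true
$group.Update()

# 4. Test the database - expect the claims/classic warning
Test-SPContentDatabase -Name "SP_Content_Repro" -WebApplication "https://share.proj.local"
```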

SharePoint Crawl Rules Appear to Ignore Some URL Protocols

I recently came across an issue relating to crawling people information in SharePoint and the use of crawl rules to exclude certain content.

The issue revolved around a requirement to exclude content contained within peoples’ MySites, but include user profile information so that people searches could still be conducted. The following crawl rule had been configured and was successfully excluding MySite content, but was also excluding the user profile data (crawled using the sps3s:// protocol):

URL Exclude or Include* Exclude

Using the crawl rule test facility indicated that while SharePoint treats http:// and https:// differently, https:// and sps3s:// appear to be treated the same for crawling purposes. With the above crawl rule in place, items in the MySite root site collection with either an https:// or an sps3s:// prefix will not be crawled, so user profile data and people search will not be available:

Crawl rule test

[Screen shot from a lab SharePoint 2010 system; however, the same tests have been performed against SharePoint 2013 and 2016 with the same results]

What is actually happening is that the sps3s:// prefix tells SharePoint which connector to use; in the case of people search, this is translated into a call to a web service at the host specified, so the final call is in fact made to an https:// URL, which is why the people data is not crawled.

Replacing the above crawl rule with the following rule corrects the issue allowing people data stored in the MySite root site collection to be indexed and therefore be available for users to search:

URL Exclude or Include* Exclude
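A crawl rule of that shape can also be created in PowerShell with New-SPEnterpriseSearchCrawlRule (a sketch – the MySite host URL is a hypothetical example, as the original rule's URL is not shown above):

```powershell
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

$ssa = Get-SPEnterpriseSearchServiceApplication

# Exclude MySite content below /personal/ while leaving the root site
# collection (and therefore the sps3s:// profile crawl) untouched
New-SPEnterpriseSearchCrawlRule -SearchApplication $ssa `
    -Path "https://mysite.proj.local/personal/*" -Type ExclusionRule
```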

Configuring SharePoint 2013 Apps and Multiple Web Applications on SSL with a Single IP Address


Traditionally, the approach to multiple SSL IIS websites hosted on a server involved multiple sites, each with its own certificate bound to a single IP address/port combination. If you didn’t mind using non-standard SSL ports, you could use a single IP address on the server, but the experience was not necessarily pleasant for the end user. Assuming you wanted to use the standard SSL port (443), the servers in the farm could potentially consume large numbers of IP addresses, especially if using large numbers of websites and large numbers of web front end servers. This approach also carried over to SharePoint where multiple SSL web applications were to be provisioned.

Using a wildcard SSL certificate allows multiple SSL web applications (IIS websites) to be bound to the same IP address/port combination as long as host headers are used to allow IIS to separate the traffic as it arrives. This could be achieved because IIS uses the same certificate to decrypt traffic no matter what the URL that is being requested (assuming they all conform to the domain named in the wildcard certificate) and the host header then allows IIS to route the traffic appropriately.

With the introduction of SharePoint 2013 apps, however, there is a requirement for at least two different SSL certificates on the same server: one (in the case of a wildcard; more if using individual certificates) for the content web applications, and a second for the SharePoint app domain that is to be used (the certificate for the apps domain must be a wildcard certificate). The current recommended configuration is to use a separate apps domain (for example, if the main wildcard certificate references *, the apps domain should be something along the lines of * rather than a subdomain of the main domain, as a subdomain could lead to cookie attacks on other non-SharePoint web-based applications in the same domain space).

For some organisations, the proliferation of IP addresses under the traditional approach to SSL is not an issue. For others, however, either the number of IP addresses available is limited, or they wish to reduce the administration overhead involved in using multiple IP addresses on servers hosting SharePoint. Other scenarios also encourage the use of a single IP address on a server, for example Hyper-V replication, where the system can handle the reassignment of the primary IP address of the server on failover, but additional IP addresses require some automation to configure them upon failover.

Note: The following is not a step-by-step set of instructions for configuring apps; there are a number of good blog posts and of course the TechNet documentation to lead you through the required steps. This post also borrows heavily from ‘How To Configure SharePoint 2013 On-Premises Deployments for Apps’ by Chris Whitehead – read that article for a more in-depth discussion of the configuration required for SharePoint apps.

SharePoint Apps Requirements

To configure Apps for SharePoint 2013 using a separate domain (rather than a subdomain) for apps, the following requirements must be met:

  • An App domain needs to be determined. If our main domain is ‘’, our apps domain could be ‘’ for example. If SharePoint is available outside the corporate network and apps will be used, the external domain will need to be purchased.
  • An Apps domain DNS zone and wildcard CNAME entry.
  • An Apps domain wildcard certificate.
  • An App Management Service Application and a Subscription Settings Service Application created, configured and running. Note that both of these Service Applications should be in the same proxy group.
  • App settings should be configured in SharePoint.
  • A ‘Listener’ web application with no host header to receive apps traffic.

It is also assumed that the following are already in place:

  • A functional SharePoint 2013 farm.
  • At least one functional content web application configured to use SSL and host header(s).

Infrastructure Configuration

Each App instance is self-contained, with a unique URL, in order to enforce isolation and prevent cross-domain JavaScript calls through the same-origin policy in web browsers. The format of the App URL is:


The App domain to be used should be determined based on domains already in use.

Instructions for creating a new DNS zone, the wildcard DNS CNAME entry and a wildcard certificate can be found in the TechNet documentation. As we’re planning to use a single IP address for all web applications and apps, point the CNAME wildcard entry at either the load-balanced IP address (VIP) in use for the content web applications, or the IP address of the SharePoint server (if you’ve only got one).
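On a Windows Server DNS server, the zone and wildcard CNAME can be created with the DnsServer module (a sketch; ‘contosoapps.com’ and the alias target are hypothetical stand-ins for your apps domain and farm VIP):

```powershell
Import-Module DnsServer

# New primary zone for the apps domain
Add-DnsServerPrimaryZone -Name "contosoapps.com" -ReplicationScope "Forest"

# Wildcard CNAME pointing at the load-balanced name for the farm
Add-DnsServerResourceRecordCName -ZoneName "contosoapps.com" `
    -Name "*" -HostNameAlias "spfarm-vip.proj.local"
```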

Farm Configuration

To use apps within SharePoint 2013, both the App Management Service Application and the Subscription Settings Service Application need to be created, configured and running, and the app prefix and URL need to be configured. Instructions for getting these two Service Applications configured and running are again available in the TechNet documentation.
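The two Service Applications and the app URL settings can be provisioned in PowerShell roughly as follows (a sketch; the managed account, database names and app domain are assumptions, and the underlying service instances must already be started):

```powershell
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

$pool = New-SPServiceApplicationPool -Name "App Services" `
    -Account (Get-SPManagedAccount "PROJ\svc-spservices")

# Subscription Settings Service Application and proxy
$sub = New-SPSubscriptionSettingsServiceApplication -ApplicationPool $pool `
    -Name "Subscription Settings" -DatabaseName "SP_SubscriptionSettings"
New-SPSubscriptionSettingsServiceApplicationProxy -ServiceApplication $sub

# App Management Service Application and proxy (same proxy group as above)
$appMgmt = New-SPAppManagementServiceApplication -ApplicationPool $pool `
    -Name "App Management" -DatabaseName "SP_AppManagement"
New-SPAppManagementServiceApplicationProxy -ServiceApplication $appMgmt

# App domain and prefix used to build app URLs
Set-SPAppDomain "contosoapps.com"
Set-SPAppSiteSubscriptionName -Name "app" -Confirm:$false
```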

In addition to the Service Application configuration, a ‘Listener’ web application with no host header is required to allow traffic for SharePoint apps to be routed correctly. Without the ‘Listener’ web application, and assuming all other web applications in the farm are configured to use host headers, we have the following scenario:

SP 2013 Farm with Apps - No Listener Web App

[diagram from ‘How To Configure SharePoint 2013 On-Premises Deployments for Apps’]

In the above diagram, the client requests a SharePoint app URL; a DNS lookup is performed, which resolves to the NLB address for the content web applications, and traffic is therefore directed to the farm. The host header in the request does not, however, match any of the web applications configured on the farm, so SharePoint doesn’t know how to deal with the request.

We could try configuring SharePoint and IIS so that SharePoint app requests are sent to one of the existing web applications; however, when using SSL we cannot bind more than one certificate to the same IIS website, and we cannot have an SSL certificate containing multiple domain wildcards. With non-SSL web applications, SharePoint could, in theory, do some clever routing by using the App Management Service Application to work out which web application hosts the SharePoint app web, if one of the existing web applications were configured with no host header (I see another set of experiments on the horizon…).

To get around this issue with SSL traffic, a ‘Listener’ web application needs to be created. This web application should have no host header and therefore acts as a catchall for traffic not matching any of the other host headers configured. Note that if you already have a web application without a host header in SharePoint, you’ll have to recreate it with a host header before SharePoint will allow you to create another one. This results in the following scenario:

SP 2013 Farm with Apps - Listener App Config

[diagram from ‘How To Configure SharePoint 2013 On-Premises Deployments for Apps’]

Again, the client request for the SharePoint app URL resolves to the NLB address for the content web applications and traffic is directed to the farm. This time, however, the ‘Listener’ web application receives all traffic not bound for the main content web applications, and internally the SharePoint HTTP module knows where to direct this traffic by using the App Management Service Application to work out where the SharePoint app web is located.

Note: The account used for the application pool of the ‘Listener’ web application must have rights to all the content databases for all of the web applications to which SharePoint app traffic will be routed. You could use the same account/application pool for all content web applications, but I’d recommend granting the rights per web application as required, using the ‘SPWebApplication.GrantAccessToProcessIdentity’ method instead.

As we need to use a different certificate on this ‘Listener’ web application, it used to be the case that it would have to be on its own IP address. However, with Windows Server 2012 and 2012 R2, a new feature, Server Name Indication (SNI), was introduced that allows us to get around this limitation. To configure the above scenario using a single server IP address, the following steps need to be completed (note that in my scenario, I’ve deleted the default IIS website; if it is only bound to port 80, it should not need to be deleted):

  1. Open IIS Manager on the first of the WFE servers.
  2. Select the first of the content web applications and click ‘Bindings…’ in the actions panel at the right of the screen.
  3. Select the HTTPS binding and click ‘Edit…’
  4. Ensure that the ‘Host name’ field is filled in with the host header and that the ‘Require Server Name Indication’ checkbox is selected.
  5. Ensure that the correct SSL certificate for the URL in the ‘Host name’ field is selected.
  6. Ensure that ‘All Unassigned’ is selected in the ‘IP address’ field.
  7. Click OK to close the binding dialog and close the site bindings dialog.
  8. Repeat the above steps for all of the other content web applications with the exception of the ‘Listener’ web app.
  9. Ensure that the bindings for the ‘Listener’ web application do not have a host header. You will not be able to save the binding details for this web application if ‘Require Server Name Indication’ is selected, so leave it unselected for this web application. Select the Apps domain certificate for this web application.
  10. Start any required IIS SharePoint web applications that are stopped.
  11. Repeat the above steps for all of the other WFE servers.
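The binding steps above can also be scripted with the WebAdministration module (a sketch; the site names, host headers and certificate subjects are placeholders, and -SslFlags 1 corresponds to the ‘Require Server Name Indication’ checkbox):

```powershell
Import-Module WebAdministration

# SNI binding for a content web application (repeat per site/host header)
New-WebBinding -Name "SharePoint - portal" -Protocol https -Port 443 `
    -HostHeader "portal.proj.local" -SslFlags 1

# Attach the content certificate to that host-name binding
$contentCert = Get-ChildItem Cert:\LocalMachine\My |
    Where-Object { $_.Subject -like "*proj.local*" } | Select-Object -First 1
New-Item "IIS:\SslBindings\!443!portal.proj.local" -Value $contentCert -SSLFlags 1

# The 'Listener' web application gets a binding with no host header and
# no SNI, carrying the apps-domain wildcard certificate
New-WebBinding -Name "SharePoint - Listener" -Protocol https -Port 443
$appsCert = Get-ChildItem Cert:\LocalMachine\My |
    Where-Object { $_.Subject -like "*projapps.local*" } | Select-Object -First 1
New-Item "IIS:\SslBindings\0.0.0.0!443" -Value $appsCert
```

Run the script on each WFE, as IIS bindings are per-server, not farm-wide.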

The result of the steps above is that all of the content web applications, with the exception of the ‘Listener’ web application, should have a host header, be listening on port 443 on the ‘All Unassigned’ IP address, have ‘Require SNI’ selected and have an appropriate certificate bound. The ‘Listener’ web application should have neither a host header nor ‘Require SNI’ selected, should be listening on port 443 on the ‘All Unassigned’ IP address, and should have the apps domain certificate bound to it. This configuration allows:

  • Two wildcard certificates to be used, one for all of the content web applications, the other for the Apps domain bound to the ‘Listener’ web application with all applications listening for traffic on the same IP address/port combination, or
  • Multiple certificates to be bound, one per content web application, plus the Apps domain wildcard to be bound to the ‘Listener’ web application with all applications listening for traffic on the same IP address/port combination, or
  • Some combination of the above.

There are some limitations to using SNI, namely that a few browsers don’t support the feature. At the time of writing, IE on Windows XP (but then, you’re not using Windows XP, are you?) and the Android 2.x default browser don’t seem to support it, nor do a few more esoteric browsers and libraries. All of the up-to-date browsers seem to work happily with it.

SharePoint 2013: Creating Managed Metadata Columns that allow Fill-In Choices

This is a relatively quick post. There’s a fair amount written about creating columns in SharePoint 2013 that use Managed Metadata termsets; however, some of it is a pain to find. I have had to deal with two frustrating issues lately, both of which boil down to poor SharePoint documentation.

Wictor Wilén wrote the post I point people at for most things on managed metadata columns, but this time the internet couldn’t help.

First of all, I wanted to create a custom column which used a termset to hold data. This is well documented. However, I wanted to allow fill-in choices, and could I make that work? My termset was open and my XML column definition looked right, but no fill-in choice appeared. Updating the column via the web UI, I could turn fill-in on and off with no problem. In the end, I examined the column with PowerShell before and after the change. It turns out (and this is not the only place SharePoint does this) that the UI stays the same, but the settings changed in the background are different. For metadata columns the FillInChoice property is ignored – you must add a custom property called Open:

<?xml version="1.0" encoding="utf-8"?>
<Elements xmlns="http://schemas.microsoft.com/sharepoint/">
  <!-- ID/Name attributes omitted; property names other than Open reconstructed -->
  <Field Type="TaxonomyFieldType" DisplayName="Feature Group">
    <Customization>
      <ArrayOfProperty>
        <Property>
          <Name>TermSetId</Name>
          <Value xmlns:q6="http://www.w3.org/2001/XMLSchema" p4:type="q6:string" xmlns:p4="http://www.w3.org/2001/XMLSchema-instance">{b7406e8e-47aa-40ac-a061-5188422a58d6}</Value>
        </Property>
        <!-- the custom property that actually enables fill-in choices -->
        <Property>
          <Name>Open</Name>
          <Value xmlns:q5="http://www.w3.org/2001/XMLSchema" p4:type="q5:boolean" xmlns:p4="http://www.w3.org/2001/XMLSchema-instance">true</Value>
        </Property>
        <Property>
          <Name>IsKeyword</Name>
          <Value xmlns:q7="http://www.w3.org/2001/XMLSchema" p4:type="q7:boolean" xmlns:p4="http://www.w3.org/2001/XMLSchema-instance">false</Value>
        </Property>
      </ArrayOfProperty>
    </Customization>
  </Field>
</Elements>


Whilst we’re on the subject, if you want metadata fields to be correctly indexed by search, the hidden field MUST follow the strict naming convention of <fieldname>TaxHTField0.

When using the content by search web part, what gets indexed by search is the text of the term in the column within the content type. However, if you enter a search query wanting to match the value of the current page or item what gets pushed into the search column is the ID of the term (a GUID), not the text. It is possible to match the GUID against a different search managed property, but that only gets created if you name your hidden field correctly. Hat-tip to Martin Hatch for that one, in an obscure forum post I have not been able to find since.
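To illustrate the naming convention mentioned above, the hidden Note field backing a hypothetical ‘FeatureGroup’ taxonomy column would look something like this (the field ID and names are invented for the example):

```xml
<Field ID="{c3a92d97-2b77-4a25-9698-3ab54874bc6f}"
       Type="Note"
       Name="FeatureGroupTaxHTField0"
       StaticName="FeatureGroupTaxHTField0"
       DisplayName="FeatureGroup_0"
       ShowInViewForms="FALSE"
       Required="FALSE"
       Hidden="TRUE"
       CanToggleHidden="TRUE" />
```

It is the `<fieldname>TaxHTField0` suffix that lets search generate the additional managed property that can be matched against the term GUID.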

Safely modify SharePoint 2013 Web.Config files using PowerShell

One of the things we learn early in our SharePoint careers is not to manually edit the web.config files of a web application. SharePoint involves multiple servers and has its own mechanisms for managing web.config updates.

Previously, I’ve created xml files with web.config modifications and copied those to each WFE. Those changes are merged into the initial web.config by SharePoint.

I’ve always been vaguely aware of there being a better way, but had never needed to track it down from an IT point of view. Last week, however, we wanted to change a setting to enable the BLOB cache on the servers hosting a particular web application, so I decided to use the opportunity to figure out a ‘best way’ to do this.

Enter, stage left, the SPWebConfigModification class (note: that link is to the SharePoint 2010 documentation, but SP2013 works the same way).

We can create a collection of configuration changes that get applied by SharePoint via a timer job. That collection is generated through code and is persistent – we can add and remove changes over time, but they are stored in the farm config and will get applied each time we add a new server or update the web application IIS files.

A search of the web turned up an article by Ingo Karstein that had the right approach but brute forced everything by referencing SharePoint DLLs directly. A bit of experimentation showed that we didn’t need to do this – SharePoint PowerShell has everything we need.

Sample code is below. The code first enumerates the collection of SPWebConfigModification entries for our web application and removes any that have the same Owner value as our script uses. It then adds a new modification setting the value of an attribute (the blobcache ‘enabled’ setting) to true. More modifications can be added to the script and these will be added to the collection; the mods are applied by SharePoint in sequence.

It needs some tidying but it works well. Read the documentation on how the modifications work carefully – it’s possible to work with an element or an attribute and you can add and remove stuff. Remember that other solutions may be adding modifications as well – make sure you don’t remove those.

As always, this stuff is provided ‘as is’ and I am not responsible for the damage you can wreak on your SharePoint farm. Test first and make sure you understand what this code does before using on production.

# Load SharePoint PowerShell PSSnapIn and the main SharePoint .net library
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

#set a few variables for the script
$owner = "NimbusPublishingModifications"
$webappurl = "https://share2013.proj.local/"

#Get the web application we want to work with
$webapp = get-spwebapplication $webappurl

#get the Foundation Web Application Service (the one that puts the content web apps on servers)
$farmservices = $webapp.Farm.Services | where { $_.TypeName -eq "Microsoft SharePoint Foundation Web Application" }

#get the list of existing web config modifications for our web app
$existingModifications = @();
$webapp.WebConfigModifications | where-object { $_.Owner -eq $owner } | foreach-object { $existingModifications = $existingModifications + $_}
#remove any modifications that match our owner value (i.e. strip out our old mods before we re-add them)
$existingModifications | foreach-object{ $webapp.WebConfigModifications.Remove($_) }

#create a new web config modification
$newModification = new-object "Microsoft.SharePoint.Administration.SPWebConfigModification"
$newModification.Path = "configuration/SharePoint/BlobCache"
$newModification.Name = "enabled"
$newModification.Sequence = 0
$newModification.Owner = $owner
$newModification.Type = 1           #the enum value "SPWebConfigModification.SPWebConfigModificationType.EnsureAttribute"
$newModification.Value = "true"

#add our web config modification to the stack of mods that are applied
$webapp.WebConfigModifications.Add($newModification)
$webapp.Update()

#trigger the process of rebuilding the web config files on content web applications
$farmservices.ApplyWebConfigModifications()

Workflow Manager 1.0 doesn't successfully install on Windows Server 2012 R2 unless VC Redist 11.0 or 12.0 already present

There seems to be an issue installing Workflow Manager 1.0 refresh on Windows Server 2012 R2. Upon completion, when clicking through to configure Workflow Manager, you are informed that neither Service Bus 1.0, nor (obviously) CU1 for Service Bus 1.0 has been installed.

Digging into the event log on the machine in question shows that VC Redist 11.0 or greater is required, and that this is not installed automatically by the WebPI.

On Windows Server 2012, VC redist 12.0 is installed automatically by WebPI and the installation of Workflow Manager 1.0 Refresh completes successfully.

Obviously the solution is to install VC redist 11.0 or 12.0 before attempting to install Workflow Manager 1.0 refresh on Windows Server 2012 R2.

Declaratively create Composed Looks in SharePoint 2013 with elements.xml

This is really a follow-up to my earlier post about tips with SharePoint publishing customisations. Composed looks have been a part of a couple of projects recently. In the first, a solution for on-premises deployment, we used code in a feature receiver to add a number of items to the Composed Looks list. In the second, for Office 365, a bit of research offered an alternative approach with no code.

What are Composed Looks?

A composed look is a collection of master page, colour scheme file, font scheme file and background image. There is a site list called Composed Looks that holds them, and they are shown in the Change the Look page as the thumbnail options you can choose to apply branding in one hit.

In order to get your new composed look working there are a few gotchas you need to know:

  1. When you specify a master page in your composed look, there must be a valid .preview file with the same name. This file defines the thumbnail image – if you look at an existing file (such as seattle.preview or oslo.preview) you will find HTML and styling rules, along with some clever token replacement that references colours in the colour scheme file.
  2. A composed look must have a master page and colour scheme (.spcolor) file, but font scheme and background image are optional.
  3. When using sites and site collections, files are split between local and root gallery locations:
    1. The Composed Looks list is local to the site – it doesn’t inherit from the parent site.
    2. Master pages go in the site Master Page Gallery.
    3. .spcolor, .spfont and image files go in the site collection master page gallery.

If any of the files you specify in your composed look don’t exist (or you get the url wrong), the thumbnail won’t display. If any of the files in your composed look are invalid, the thumbnail won’t display. If your master page exists but has no .preview file, the thumbnail won’t display. Diligence is important!

Adding Composed Looks using Elements.xml

In researching whether this was indeed possible, I came across an article by Tom Daly. All credit should go to him – I’ve simply tidied up a bit around his work. I already knew that it was possible to create lists as part of a feature using only the elements.xml, and to place items in that new list. I hadn’t realised that adding items to an existing list also works.

In Visual Studio 2013 the process is easy – simply add a new item to your project, and in the Add New Item dialog select Office/SharePoint in the left column and Empty Element in the right. Visual Studio will create the new element with an Elements.xml ready and waiting for you.

To create our composed looks we simply edit that elements.xml file.

First we need to reference our list. As per Tom’s post, we need to add a ListInstance element to our file:

<ListInstance FeatureId="{00000000-0000-0000-0000-000000000000}" TemplateType="124" Title="Composed Looks" Url="_catalogs/design" RootWebOnly="FALSE" />

That xml points to our existing list, and the url is a relative path so will reference the list in the current site for our feature, which is what we want.

Now we need to add at least one item. To do that, we nest Data and Rows elements inside the ListInstance to hold however many Row elements we have items:

<ListInstance FeatureId="{00000000-0000-0000-0000-000000000000}" TemplateType="124" Title="Composed Looks" Url="_catalogs/design" RootWebOnly="FALSE">
  <Data>
    <Rows>
      <!-- one Row element per composed look -->
    </Rows>
  </Data>
</ListInstance>

Then we add the following code for a single composed look:

          <Field Name="ContentTypeId">0x0060A82B9F5D2F6A44A6A5723277B06731</Field>
          <Field Name="Title">My Composed Look</Field>
          <Field Name="_ModerationStatus">0</Field>
          <Field Name="FSObjType">0</Field>
          <Field Name="Name">My Composed Look</Field>
          <Field Name="MasterPageUrl">~site/_catalogs/masterpage/MyMasterPage.master, ~site/_catalogs/masterpage/MymasterPage.master</Field>
          <Field Name="ThemeUrl">~sitecollection/_catalogs/theme/15/MyColorTheme.spcolor, ~sitecollection/_catalogs/theme/15/MyColorTheme.spcolor</Field>
          <Field Name="ImageUrl"></Field>
          <Field Name="FontSchemeUrl"></Field>
          <Field Name="DisplayOrder">1</Field>

There are two parts to the url fields – before the comma is the path to the file and after the comma is the description shown in the list dialog. I set both to the same, but the description could be something more meaningful if you like.

Note that the master page url uses ~site in the path, whilst the theme url uses ~sitecollection. Both of these will be replaced by SharePoint with the correct paths for the current site or site collection.

Note also that I have only specified master page and colour theme. The other two are optional, and SharePoint will use the default font scheme and no background image, respectively. The colour theme would appear to be mandatory because it is used in generating the thumbnail image in conjunction with the .preview file.

The DisplayOrder field affects where in the list of thumbnails our composed look appears. The out-of-the-box SharePoint themes start at 10 and the current theme is always 0. If more than one item has the same DisplayOrder they are displayed in the same order as in the composed looks list. Since I want my customisations to appear first I usually stick a value of 1 in there.

I have removed a couple of fields from the list that Tom specified, most notably the ID field, which SharePoint will generate a value for and (I believe) should be unique, so better to let it deal with that than potentially muck things up ourselves.

Deploying the Composed Look

Once we’ve created our elements.xml, getting the items deployed to our list is easy – simply create a feature and add that module to it. There are a few things I want to mention here:

  1. Tom suggests that the declarative approach does not create items more than once if a feature is reactivated. I have not found this to be the case – deactivate and reactivate the feature and you will end up with duplicate items. Not terrible, but worth knowing.
  2. You need a site level feature to add items to the composed looks list. As some of the things that list item references are at a site collection level, I suggest the following feature and module structure:
    1. Site Collection Feature
      1. Module: Theme files, containing .spcolor, .spfont and background image files. Deploys to _catalogs/Theme/15 folder.
      2. Module: Stylesheets. Deploys to Style Library/Themable folder or a subfolder thereof.
      3. Module: CSS Images. Deploys to Style Library/Themable folder or a subfolder thereof. Separating images referenced by my CSS is a personal preference as I like tidy VS projects!
      4. If you have web parts or search display templates I would put those in the site collection feature as well.
    2. Site Feature
      1. Module: Master pages. Contains .master and associated .preview files. Deploys to _catalogs/masterpage folder.
      2. Module: Page layouts. Contains .aspx layout files. Deploys to _catalogs/masterpage folder.
      3. Module: Composed Looks: Contains the list items in our elements.xml file. Deploys to Composed Looks list.

Speaking at NEBytes on February 19th

I’m pleased to have been asked to speak at NEBytes again – a great user group that meets in Newcastle. I’ll be speaking about customising SharePoint 2013 using master pages, themes and search templates, along the same lines as my recent blog post.

It will be an unusual one for me, as I will spend most of the session inside Visual Studio showing how to create and deploy the customisations that can deliver really powerful solutions without needing to resort to writing code (other than for deployment).

The event on the 19th is in partnership with SUGUK and the other session of the night sounds really interesting too: Building social SharePoint apps using Yammer.

I’ve said before that I always enjoy visiting NEBytes. If you’re in the Newcastle area and are a developer or IT Pro I strongly recommend you find out more about them and consider attending.

See you there.

Six tips when deploying SharePoint 2013 masterpages, page layouts and display templates

I’ve been hat-swapping again since just before Christmas (which explains the lack of Azure IaaS posts I’m afraid). I’ve been working on a large SharePoint 2013 project, most recently on customising a number of elements around publishing. Getting those custom elements into SharePoint from my solution raised a number of little snags, most of which were solved by the great internet hive mind. It took me a long time to find some of those fixes, however, so I thought I’d collect them here and reference the original posts where appropriate.

1. Overwrite existing files reliably

This has been an old chestnut for as long as I have been working in SharePoint. Your solution deploys a file to the masterpage gallery or style library. You deploy an updated version and none of your changes are visible because SharePoint hasn’t replaced the file with your new version. In previous versions, careful use of attributes like ‘GhostableInLibrary’ in the elements.xml when you deployed the file helped – ghostable files generally seem to be updated, unless you manually edit the file, thus ‘unghosting’ it.

In SharePoint 2013, however, we appear to have a new property that we can specify in our elements.xml for deployable files, ReplaceContent:

<File Path="myfile.aspx" Url="myfile.aspx" Type="GhostableInLibrary" ReplaceContent="TRUE" />

As far as I can tell, this does what it says on the tin and overwrites the existing file every time the feature deploys it.

2. Provision Web Parts into page layouts safely as part of a feature

This is one I’d never personally tried before. I’ve seen many people struggling, pasting web part code into a masterpage or page layout and having problems during deployment. The way to do it (the ‘right’ way, as far as I know) is to use the feature. When you list your masterpage or page layout in the elements.xml you can add a property that deploys a web part, AllUsersWebPart:

<File Path="myfile.aspx" Url="myfile.aspx" Type="GhostableInLibrary" ReplaceContent="TRUE">
  <AllUsersWebPart WebPartZoneID="TopZone" WebPartOrder="0">
    <![CDATA[ exported web part XML goes here ]]>
  </AllUsersWebPart>
</File>

Simply specify the name of the web part zone in your page and the web part will be added during deployment. The WebPartOrder setting should allow you to define where in the zone it appears, although when adding multiple web parts I have had more success setting it to zero for each one and just getting the order of the elements right. As you might have guessed, for multiple web parts, add multiple AllUsersWebPart sections.

But where’s the web part, I hear you cry! In that CDATA block, paste the XML for your web part. Getting that is easy – simply export the web part from SharePoint and paste the resulting XML straight in there. There are a couple of tweaks you may need to apply that I’ll list next.
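Put together, a File entry with an exported web part pasted into its CDATA block looks something like the sketch below. The zone name, property values and template path are illustrative placeholders rather than markup from this project, and the type name should always come from your own exported web part:

```xml
<File Path="myfile.aspx" Url="myfile.aspx" Type="GhostableInLibrary" ReplaceContent="TRUE">
  <AllUsersWebPart WebPartZoneID="TopZone" WebPartOrder="0">
    <![CDATA[
    <webParts>
      <webPart xmlns="http://schemas.microsoft.com/WebPart/v3">
        <metaData>
          <!-- take the full type name from the exported .webpart file -->
          <type name="Microsoft.Office.Server.Search.WebControls.ContentBySearchWebPart, Microsoft.Office.Server.Search, Version=15.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c" />
          <importErrorMessage>Cannot import this Web Part.</importErrorMessage>
        </metaData>
        <data>
          <properties>
            <property name="Title" type="string">My Search Results</property>
          </properties>
        </data>
      </webPart>
    </webParts>
    ]]>
  </AllUsersWebPart>
</File>
```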

3. Substitute &#126; for ~ in paths within web parts in CDATA blocks

This one stumped me for a while and I was fortunate to come across a post by Chris O’Brien that solved it for me. I was trying to add a custom Content By Search web part to a page. That web part had custom control and display templates specified, which reference the current site collection in their path (~sitecollection/_catalogs). The problem is that the tilde gets stripped out by SharePoint when the page is deployed, breaking the setting.

The solution turns out to be one of those typical off-the-wall ‘I would never have thought of that!’ fixes that crop up all the time with SharePoint: swap the ~ character for its XML entity reference, &#126;.

<property name="GroupTemplateId" type="string">&#126;sitecollection/_catalogs/masterpage/Display Templates/Content Web Parts/MyTemplate.js</property>

4. Use <value> to include content in the Content Editor web part in CDATA blocks

Export a Content Editor web part and you will see that the HTML content that is displayed within it is in the Content element, wrapped in a CDATA block. The problem is that when deploying this web part into the page using the technique above you can’t nest a CDATA block within a CDATA block.

The solution? Change the CDATA wrapper to be the <value> element. The snag? I have found that I need to swap the < and > symbols for their HTML entity counterparts: &lt; and &gt;.

<Content xmlns=""><value>&lt;h2&gt;My Content&lt;/h2&gt;</value></Content>

5. Provision Search Display Templates as draft and publish them with a feature receiver

This one is a bit contentious, as far as I can tell. I derived my (simple) approach from an article by Waldek Mastykarz. The crux of the matter is this: you can edit either the HTML part of a search display template or the javascript. Which is the ‘correct’ one is another matter, though. If you have publishing features enabled, then when you save and publish the HTML file an event receiver is triggered and SharePoint generates the javascript file. If you don’t have publishing enabled, as far as I can tell only the javascript files are there and the event receiver doesn’t appear to be enabled.

So… which way to jump? Well, in my case I am creating customisations that depend on publishing features, so I decided to deploy just the HTML file and let SharePoint generate the javascript. If I needed to use these things without publishing, I might instead have extracted the javascript from my development SharePoint and deployed that.

The first part to my simple approach is to deploy the files as draft using the options available to me in elements.xml:

<File Path="MyTemplate.html" Url="MyTemplate.html" Type="GhostableInLibrary" Level="Draft" ReplaceContent="TRUE" />

I then use a fairly simple function that is called by the feature receiver on activation, once per file:

public static void CheckInFile(SPWeb web, string fileUrl)
{
    // get the file
    SPFile file = web.GetFile(fileUrl);

    // depending on the settings of the parent document library we may need
    // to check in and/or (publish or approve) the file
    if (file.Level == SPFileLevel.Checkout) file.CheckIn("", SPCheckinType.MajorCheckIn);

    if (file.Level == SPFileLevel.Draft)
    {
        if (file.DocumentLibrary.EnableModeration) file.Approve("");
        else file.Publish("");
    }
}

If you look at the original article, the solution suggested by Waldek is jolly clever, but much cleverer than I needed for a couple of display templates.
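For completeness, here is a minimal sketch of a feature receiver that calls the function above once per provisioned file. The class name and template path are hypothetical examples, and it assumes CheckInFile is available in the same class:

```csharp
using Microsoft.SharePoint;

public class DisplayTemplateFeatureReceiver : SPFeatureReceiver
{
    public override void FeatureActivated(SPFeatureReceiverProperties properties)
    {
        // site-collection-scoped feature, so the parent is an SPSite
        SPSite site = properties.Feature.Parent as SPSite;
        if (site == null) return;

        // publish each display template that was deployed as Draft
        // (the path below is an example only)
        CheckInFile(site.RootWeb,
            "_catalogs/masterpage/Display Templates/Content Web Parts/MyTemplate.html");
    }
}
```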

6. Make your masterpages appear in ‘Change the look’ with a preview file

In the new SharePoint 2013 world, site admins have a great deal of flexibility over how their site looks. I wanted to enable users of my custom masterpages to continue to use the theming engine – selecting their own colours and fonts – but to keep the custom masterpage I had built. Again, it’s actually really easy. Simply deploy a .preview file with the same name as your masterpage (e.g. mymaster.master and mymaster.preview). The .preview file is actually a clever combination of settings, HTML and CSS that allows you to specify the default colour palette file (.spcolor) and font file (.spfont) as well as draw a little preview of your page. I was lucky on that last one, as my look was the same as the default, so I simply copied seattle.preview.
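In elements.xml terms, the masterpage and its .preview file are deployed side by side in the same module – something like this sketch, where the module and file names are examples:

```xml
<Elements xmlns="http://schemas.microsoft.com/sharepoint/">
  <Module Name="MasterPages" Url="_catalogs/masterpage">
    <File Path="MasterPages\mymaster.master" Url="mymaster.master" Type="GhostableInLibrary" ReplaceContent="TRUE" />
    <!-- same name as the masterpage, .preview extension -->
    <File Path="MasterPages\mymaster.preview" Url="mymaster.preview" Type="GhostableInLibrary" ReplaceContent="TRUE" />
  </Module>
</Elements>
```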

I could go a step further and create a Composed Look that would show my layout as a tile in the ‘Change the look’ UI, but that involves adding items to a SharePoint list and was more than I needed for this particular project. I will need to do that for my next one, however…

SharePoint 2013 on Windows Server 2012 R2 Preview and SQL Server 2014 CTP1

Following the recent release of Windows Server 2012 R2 Preview and SQL Server 2014 CTP1, I thought it would be an interesting experiment to see if I could get SharePoint 2013 running on the combination of these two previews. Most of the issues I encountered were around the product installers for SharePoint and the SharePoint pre-requisites:

  1. The SharePoint 2013 prerequisiteinstaller.exe would not run and gave the error “This tool does not support the current operating system”.
  2. The SharePoint binary installer would insist that not all of the server features required by SharePoint had been installed.
  3. The SharePoint binary installer failed when installing the oserver.msi file.
    Examination of the setup logs (located at C:\Users\<username>\AppData\Local\Temp\SharePoint Server Setup (<date-time>).log) showed the following error:
    "Error: Failed to install product: C:\MediaLocation\SharePoint2013InstallationMedia\global\oserver.MSI ErrorCode: 1603(0x643)."

SQL Server 2014 CTP1 seemed to install and work fine, although I did experience a couple of crashes during the installation procedure.

The following are workarounds for the issues seen above:

Preparing the server manually instead of using prerequisiteinstaller.exe involves adding the required server features and then manually installing the SharePoint pre-req files.

To add the required server features, use the following PowerShell commands in an elevated PowerShell prompt:

Import-Module ServerManager

Add-WindowsFeature Net-Framework-Features,Web-Server,Web-WebServer,Web-Common-Http,Web-Static-Content,Web-Default-Doc,Web-Dir-Browsing,Web-Http-Errors,Web-App-Dev,Web-Asp-Net,Web-Net-Ext,Web-ISAPI-Ext,Web-ISAPI-Filter,Web-Health,Web-Http-Logging,Web-Log-Libraries,Web-Request-Monitor,Web-Http-Tracing,Web-Security,Web-Basic-Auth,Web-Windows-Auth,Web-Filtering,Web-Digest-Auth,Web-Performance,Web-Stat-Compression,Web-Dyn-Compression,Web-Mgmt-Tools,Web-Mgmt-Console,Web-Mgmt-Compat,Web-Metabase,Application-Server,AS-Web-Support,AS-TCP-Port-Sharing,AS-WAS-Support, AS-HTTP-Activation,AS-TCP-Activation,AS-Named-Pipes,AS-Net-Framework,WAS,WAS-Process-Model,WAS-NET-Environment,WAS-Config-APIs,Web-Lgcy-Scripting,Windows-Identity-Foundation,Server-Media-Foundation,Xps-Viewer

If the server is not connected to the internet, the following PowerShell commands can be used (assuming that the installation media is available on D:\):

Import-Module ServerManager

Add-WindowsFeature Net-Framework-Features,Web-Server,Web-WebServer,Web-Common-Http,Web-Static-Content,Web-Default-Doc,Web-Dir-Browsing,Web-Http-Errors,Web-App-Dev,Web-Asp-Net,Web-Net-Ext,Web-ISAPI-Ext,Web-ISAPI-Filter,Web-Health,Web-Http-Logging,Web-Log-Libraries,Web-Request-Monitor,Web-Http-Tracing,Web-Security,Web-Basic-Auth,Web-Windows-Auth,Web-Filtering,Web-Digest-Auth,Web-Performance,Web-Stat-Compression,Web-Dyn-Compression,Web-Mgmt-Tools,Web-Mgmt-Console,Web-Mgmt-Compat,Web-Metabase,Application-Server,AS-Web-Support,AS-TCP-Port-Sharing,AS-WAS-Support, AS-HTTP-Activation,AS-TCP-Activation,AS-Named-Pipes,AS-Net-Framework,WAS,WAS-Process-Model,WAS-NET-Environment,WAS-Config-APIs,Web-Lgcy-Scripting,Windows-Identity-Foundation,Server-Media-Foundation,Xps-Viewer –Source D:\sources\sxs

Scripts are available to download the SharePoint pre-reqs; the one I used is located at

I chose to install each of the pre-reqs manually; note, however, that Windows Server AppFabric should be installed from the command line rather than the GUI, as I couldn’t get it installed with the required options using the GUI. To install Windows Server AppFabric, open an admin PowerShell console and use the following commands (assuming the installation file is located at c:\downloads):

$file = "c:\downloads\WindowsServerAppFabricSetup_x64.exe"

& $file /i CacheClient","CachingService","CacheAdmin /gac

Note the locations of the " marks in the second command line; they should be around the commas.

Once this is installed, the Windows Server AppFabric update (AppFabric1.1-RTM-KB2671763-x64-ENU.exe) can also be installed. For reference, the other pre-reqs that I manually installed were:

  • MicrosoftIdentityExtensions-64.msi
  • setup_msipc_x64.msi
  • sqlncli.msi
  • Synchronization.msi
  • WcfDataServices.msi

In each of the above cases, I accepted the default installation options.
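If you would rather script those installs than click through them, the standard Windows Installer silent switches can be used – a sketch, assuming the files are in c:\downloads. I haven’t verified every switch against each of these specific MSIs, so treat this as a starting point (the licence-acceptance property on sqlncli.msi is required for a silent install):

```powershell
# silent, no-reboot installs of the SharePoint 2013 pre-req MSIs
msiexec /i c:\downloads\MicrosoftIdentityExtensions-64.msi /qn /norestart
msiexec /i c:\downloads\setup_msipc_x64.msi /qn /norestart
msiexec /i c:\downloads\sqlncli.msi /qn /norestart IACCEPTSQLNCLILICENSETERMS=YES
msiexec /i c:\downloads\Synchronization.msi /qn /norestart
msiexec /i c:\downloads\WcfDataServices.msi /qn /norestart
```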

Following the installation of the SharePoint 2013 pre-reqs, the SharePoint 2013 binary installer insisted that not all of the required server features were installed. Shutting the server down and restarting it (sometimes twice) seemed to solve this issue.

To solve the issue experienced during the binary installation of SharePoint 2013, a modification of the oserver.msi file is required. This can be achieved using ‘Orca’. Orca is available as part of the Windows Software Development Kit (SDK) for Windows 8, which can be downloaded from

Once the SDK is installed, start Orca, then use it to open the oserver.msi file located in the ‘global’ folder of the SharePoint 2013 installation media (taking a backup of the original file first, of course…), then navigate to the ‘InstallExecuteSequence’ table and drop the ‘ArpWrite’ line:


Save the file over the original, then start the binary installation in the usual way.

Here’s a shot of SharePoint 2013 working on Windows Server 2012 R2 Preview with SQL Server 2014 CTP1:


Please note, all of the above is done entirely at your own risk and is for testing purposes only. Don’t even think of using it in production…