Unblocking a stuck Lab Manager Environment (the hard way)

This is a post so I don’t forget how I fixed access to one of our environments yesterday, and hopefully it will be useful to some of you.

We have a good many pretty complex environments deployed to our lab Hyper-V servers, controlled by Lab Manager. Operations such as starting, stopping or repairing those environments can take a long, long time, but this time we had one that was quite definitely stuck. The lab view showed the many servers in the lab with green progress bars about halfway across, but after many hours we saw no progress. The trouble is, at this point you can’t issue any other commands to the environment from within the Lab Manager console – it’s impossible to cancel the operation and regain access to the environment.

Normally in these situations, stepping from Lab Manager to the SCVMM console can help. Stopping and restarting the VMs through SCVMM can often give Lab Manager the kick it needs to wake up. However, this time that had no effect. We then tried restarting the TFS servers to see if they’d got stuck, but that didn’t help either.
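
In case it’s useful, that ‘kick’ is usually nothing more sophisticated than bouncing the environment’s VMs with the virtualmachinemanager PowerShell module – something along these lines (a sketch only; the name filter is a placeholder for however your environment’s machines are named):

Import-Module virtualmachinemanager
# Bounce every VM belonging to the stuck environment (placeholder name filter)
Get-SCVirtualMachine | Where-Object { $_.Name -like "LabEnv01*" } | ForEach-Object {
    Stop-SCVirtualMachine -VM $_
    Start-SCVirtualMachine -VM $_
}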

At this point we had no choice but to roll up our sleeves and look in the TFS database. You’d be surprised (or perhaps not) at how often we need to do that…

First of all we looked in the LabEnvironment table. That showed us our environment, and the State column contained a value of Repairing.

Next up, we looked in the LabOperation table. Searching for rows where the DataspaceId column value matched that of our environment in the LabEnvironment table showed a RepairVirtualEnvironment operation.

In the tbl_JobSchedule table we found an entry where the JobId column matched the JobGuid column from the LabOperation table. The interval on that was set to 15, from which we inferred that the repair job was being retried every fifteen minutes by the system. We found another entry for the same JobId in the tbl_JobDefinition table.

Starting to join up the dots, we finally looked in the LabObject table. Searching for all the rows with the same DataspaceId as earlier returned all the lab hosts, environments and machines associated with the Team Project containing the lab. In this table, our environment row had a PendingOperationId which matched that of the row in the LabOperation table we found earlier.
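
For the record, the queries we used to join those dots looked something like the following. This is a sketch rather than a supported procedure: the server name, collection database name and DataspaceId value are placeholders, and the schema can vary between TFS versions, so treat it as read-only exploration.

# Placeholders: adjust the server, collection database and DataspaceId for your own TFS instance
$db = @{ ServerInstance = "TFSSQL01"; Database = "Tfs_DefaultCollection" }

# The environment itself - note the State and DataspaceId columns
Invoke-Sqlcmd @db -Query "SELECT * FROM dbo.LabEnvironment"

# The pending operation for that dataspace (a RepairVirtualEnvironment in our case)
Invoke-Sqlcmd @db -Query "SELECT * FROM dbo.LabOperation WHERE DataspaceId = 42"

# The retrying job - match JobId against the JobGuid from LabOperation
Invoke-Sqlcmd @db -Query "SELECT * FROM dbo.tbl_JobSchedule WHERE JobId = '<JobGuid from LabOperation>'"
Invoke-Sqlcmd @db -Query "SELECT * FROM dbo.tbl_JobDefinition WHERE JobId = '<JobGuid from LabOperation>'"

# The lab hosts, environments and machines for the Team Project, including PendingOperationId
Invoke-Sqlcmd @db -Query "SELECT * FROM dbo.LabObject WHERE DataspaceId = 42"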

We took the decision to attempt to revive our stuck environment by removing the stuck job. That would mean carefully working through all the tables we’d explored and deleting the rows, hopefully in the correct order. As the first part of that, we decided to change the value of the State column in the LabEnvironment table to Started, hoping to avoid crashing TFS should it try to parse all the information about the repair job we were about to slowly remove.

Imagine our surprise, then, when having made that one change, TFS itself cleaned up the database, removed all the table entries referring to the repair environment job and we were immediately able to issue commands to the environment again!
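
For completeness, the one change that did the trick amounted to nothing more than this (same placeholder server, database and DataspaceId as above – and, obviously, entirely unsupported, so take a backup first):

# Flip the stuck environment's State back to Started and let TFS tidy up the rest
Invoke-Sqlcmd -ServerInstance "TFSSQL01" -Database "Tfs_DefaultCollection" -Query "UPDATE dbo.LabEnvironment SET State = 'Started' WHERE DataspaceId = 42"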

Installing Windows 10 RSAT Tools on EN-GB Media-Installed Systems

This post is an aide-mémoire so I don’t have to suffer the same annoyance and frustration at what should be an easy task.

I’ve now switched to my Surface Pro 3 as my only system, thanks to the lovely new Pro 4 Type Cover and Surface Dock. That meant that I needed the Remote Server Administration Tools installing. Doing that turned out to be much more of an odyssey than it should have been, and I’m writing this in the hope that it will allow others to quickly find the information I struggled to.

The RSAT tools download is, as before, a Windows Update that adds the necessary Windows Features to your installation. The trouble is, that download is EN-US only (really, Microsoft?!). If, like me, you used the EN-GB media to install, you’re in a pickle.

Running the installer appears to work – it proceeds with no errors, albeit rather quickly – but the RSAT features remain unavailable. I already had a US keyboard in my config (my Pro keyboard is US), but that was obviously not enough. I added the US language, but still couldn’t get the installer to work.

I got more information on the problem by following the steps described in a TechNet article on using DISM to install Windows Updates. That led me to a pair of articles on the SysadminTips site about the installation problem, and how to fully add the US language pack to solve it.
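
If you want to see what the GUI installer hides, the DISM route looks roughly like this, run from an elevated prompt (the .msu path and file name here are illustrative rather than exact):

# Install the RSAT update directly with DISM so any errors are actually reported
dism.exe /Online /Add-Package /PackagePath:"C:\Temp\WindowsTH-KB2693643-x64.msu"

# Check which languages the OS thinks it has, and whether the RSAT package actually landed
dism.exe /Online /Get-Intl
dism.exe /Online /Get-Packages /Format:Table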

It turns out that the EN-GB media doesn’t install the full EN-US language pack files, so when you add the US language it doesn’t add enough into the OS for the RSAT features to install. Frankly, that’s a mess and I hope Microsoft deal with the issue by releasing multi-language RSAT tools.

Optimising IaaS deployments in Azure Resource Templates

Unlike most of my recent posts this one won’t have code in it. Instead I want to talk about concepts and how you should look long and hard at your templates to optimise deployment.

In my previous articles I’ve talked about how nested deployments can help apply sensible structure to your deployments. I’ve also talked about things I’ve learned around what will successfully deploy and what will give errors. Nested deployments are still key, but the continuous cycle of improvements in Azure means I can change my information somewhat around what works well and what is likely to fail. Importantly, that change allows us to drastically improve our deployment time if we have lots of virtual machines.

I’d previously found that unless I nested the extensions for a VM within the JSON of the virtual machine itself, I got lots of random deployment errors. I am happy to report that situation has now improved. The result of that improvement is that we can now separate out the extensions deployed to a virtual machine from the machine itself. That separates the configuration of the VM, which for complex environments almost certainly has a prescribed sequence, from the deployment of the VM, which almost certainly doesn’t.

To give you a concrete example, in the latest work at Black Marble we are deploying a multi-server environment (DC, ADFS, WAP, SQL, BizTalk, Service Bus and two IIS servers) where we deploy the VMs and configure them. With my original approach, hard-fought to achieve a reliable deploy, each VM was pushed and fully configured in the necessary sequence, domain controller first.

With our new approach we can deploy all eight VMs in that environment simultaneously. We have moved our DSC and Custom Script extensions into separate resource templates and that has allowed some clever sequencing to drastically shorten the time to deploy the environment (currently around fifty minutes!).

We did this by carefully looking at what each step was doing and really focusing on the dependencies:

  • The domain controller VM created a new virtual machine. The DSC extension then installed domain services and certificate services and created the domain. The custom script then created some certificates.
  • The ADFS VM created a new virtual machine. The DSC extension then joined that server to the domain. The custom script then copied the certificate from the DC and configured ADFS.
  • The WAP VM created a new virtual machine. The DSC extension then joined that server to the domain. The custom script then copied the certificate from the DC and configured the proxy for the configured ADFS service.

Hopefully you can see what we saw: Each machine had three phases of configuration and the dependencies were different, giving us three separate sequences:

  1. The VM creations are completely independent. We could do those in parallel to save time.
  2. The DSC configuration for the DC has to be done first, to create the domain. However, the ADFS and WAP servers have DSC configurations that are independent of each other, so we could do those in parallel too.
  3. The custom script configurations have a definite sequence (DC – ADFS – WAP) and the DC script depends on the DC having run its DSC configuration first so we have our certificate services.

Once we’ve identified our work streams it’s a simple matter of declaring the dependencies in our JSON.

Top tip: It’s a good idea to list all the dependencies for each resource. Even though the Azure Resource Manager will infer the dependency chain when it parses the template, it’s much easier for humans to look at a full list in each resource to figure out what’s going on.

The end result of this tinkering? We cut our deployment time in half. The really cool bit is that adding more VMs doesn’t add much time to our deployment, because it’s the creation of the virtual machines that tends to take longest and that now happens in parallel.

SharePoint 2013: Creating Managed Metadata Columns that allow Fill-In Choices

This is a relatively quick post. There’s a fair bunch of stuff written about creating columns in SharePoint 2013 that use Managed Metadata termsets. However, some of it is a pain to find, and then some. I have had to deal with two frustrating issues lately, both of which boil down to poor SharePoint documentation.

 Wictor Wilén wrote the post I point people at for most stuff on managed metadata columns, but this time the internet couldn’t help.

First of all, I wanted to create a custom column which used a termset to hold data. This is well documented. However, I wanted to allow fill-in choices, and could I make that work? My termset was open, my XML column definition looked right, but no fill-in choice. Update the column via the web UI and I could turn fill-in on and off with no problem. In the end, I examined the column with PowerShell before and after the change. It turns out (and this is not the only place they do it) that the UI stays the same, but the settings changed in the background are different. For metadata columns the FillInChoice property is ignored – you must add a custom property called Open:

<?xml version="1.0" encoding="utf-8"?> <Elements xmlns="http://schemas.microsoft.com/sharepoint/">   <Field        ID="{b7406e8e-47aa-40ac-a061-5188422a58d6}"        Name="FeatureGroup"        DisplayName="Feature Group"        Type="TaxonomyFieldType"        Required="FALSE"        Group="Nimbus"        ShowField="Term1033"        ShowInEditForm="TRUE"        ShowInNewForm="TRUE"        FillInChoice="TRUE">     <Customization>       <ArrayOfProperty>         <Property>           <Name>TextField</Name>;           <Value xmlns:q6="http://www.w3.org/2001/XMLSchema" p4:type="q6:string" xmlns:p4="http://www.w3.org/2001/XMLSchema-instance">{b7406e8e-47aa-40ac-a061-5188422a58d6}</Value>         </Property>         <Property>           <Name>Open</Name>           <Value xmlns:q5="http://www.w3.org/2001/XMLSchema" p4:type="q5:boolean" xmlns:p4="http://www.w3.org/2001/XMLSchema-instance">true</Value>         </Property>         <Property>           <Name>IsPathRendered</Name>           <Value xmlns:q7="http://www.w3.org/2001/XMLSchema" p4:type="q7:boolean" xmlns:p4="http://www.w3.org/2001/XMLSchema-instance">false</Value>         </Property>       </ArrayOfProperty>     </Customization>   </Field>    <Field     Type="Note"     DisplayName="FeatureGroupTaxHTField0"     StaticName="FeatureGroupTaxHTField0"     Name="FeatureGroupTaxHTField0"     Group="Nimbus"     ID="{164B63F0-3424-4A9B-B6E4-5EC675EF5C75}"     Hidden="TRUE"     DisplaceOnUpgrade="TRUE">   </Field> </Elements>

Whilst we’re on the subject, if you want metadata fields to be correctly indexed by search, the hidden field MUST follow the strict naming convention of <fieldname>TaxHTField0.

When using the Content by Search web part, what gets indexed by search is the text of the term in the column within the content type. However, if you enter a search query wanting to match the value of the current page or item, what gets pushed into the search column is the ID of the term (a GUID), not the text. It is possible to match the GUID against a different search managed property, but that property only gets created if you name your hidden field correctly. Hat-tip to Martin Hatch for that one, in an obscure forum post I have not been able to find since.

Automating TFS Build Server deployment with SCVMM and PowerShell

Richard and I have been busy this week. It started with a conversation about automating the installation of new build servers. Richard was looking at writing PowerShell to install and configure the TFS build agent, along with all the various SDKs that we use across all our projects. Our current array of build servers have all been built by hand and each has a different set of SDKs to build specific project types. Richard’s aim is to make a single, homogeneous build server configuration so we can then scale out for capacity much more quickly than before.

Enter, stage left, SCVMM. For my part, I’ve been looking at what can be done with VM templates and, more importantly, service templates. It seemed to me that creating a Build Server service in SCVMM with a standard template would allow us to quickly and easily add servers to, and remove them from, the group.

There isn’t much written about the application/script side of SCVMM server templates, so I thought I’d write up my part.

Note: I’m not a System Center specialist. We use Config Manager, Virtual Machine Manager and Data Protection Manager at Black Marble for our own services rather than being a System Center partner.

Dividing up the problem space

Our final template uses a single PowerShell script to perform the configuration and installation work. Yes, I could have created steps in the service template to install each of the items Richard’s script deployed, but we decided against that. The reasoning is relatively simple: It’s much easier to modify the PowerShell script to add, remove or change the stuff that gets deployed. It’s hard to do that with SCVMM, as far as I can tell.

However, during testing I discovered that if I added Windows roles and features through the template it was faster than when the various installers Richard called in his script triggered the feature addition.

The division of labour, then, became the following:

SCVMM Tasks:

  • VM Template is created for the target OS. The VM template is configured to automatically join the new machine to our domain and place the machine in the correct OU. It also sets the language correctly. More on that later.
  • Service Template is created for a Build Servers service. It’s a single-tier service that has a minimum of one machine and a maximum of twenty (that maximum is a bit of an arbitrary value on my part). The service template adds the roles and features to the machine definition and runs two script application blocks:
    • The first simply runs xcopy to pull the contents of a folder on a share to the local PC. I do this because running a PowerShell script from a network share doesn’t work – in an interactive session you are prompted before the script executes because it’s from an untrusted location, and I haven’t worked out how to suppress the prompt yet.
    • The second application executes powershell.exe and feeds in the full path to Richard’s PowerShell script, newly copied onto the local disk. Both commands are sketched just after this list.
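
The two script application command lines end up looking something like this (the share, folder and script names below are made up for illustration):

# Script application 1 (runs under cmd.exe): copy the deployment folder to the local disk
cmd.exe /c xcopy \\fileserver\BuildDeploy C:\BuildDeploy /E /I /Y

# Script application 2: run Richard's configuration script from the local copy
powershell.exe -File C:\BuildDeploy\Install-BuildServer.ps1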

Step 1: VM Template

There’s a wealth of information about creating VM templates on the internet, so I’m not going to cover this in depth. I did, however, want to pull out a couple of things I discovered along the way.

I installed my base VM with the UK English regional settings. When SCVMM converted that into a template via sysprep, the resulting machine came up in US English, which is really annoying. Had I been paying attention, I would have noticed that we already had an unattend.xml file to correct this, which I could have referenced in the VM template settings. However, I found a much more interesting way to address the issue (which of course led me down another rabbit hole).

A bit of research led me to a very interesting post by Gunter Danzeisen. In it he shows how to use PowerShell to modify the UnattendSettings property of the VM template within System Center. This is at the same time both irritating and enlightening.

It’s irritating, because I am truly fed up with ‘hidden’ functionality in products that causes me pain. The VM template clearly allows me to specify an unattend.xml file, so why have an internal one as an object property? Moreover, why not simply document its existence and let me modify that property – why do I need two different methods, which then leaves me constantly wondering which gets priority?

It’s enlightening, because I can modify that property really easily – it’s simply a collection of name/value pairs that marry against the unattend.xml settings.

There is a bit of a snag with this approach, however, which I’ll come onto in a little while.

Anyway, back to the plot. I followed Gunter’s advice and used PowerShell to set the language values of the internal unattend. I then decided to use the same approach to see if I could add other settings – specifically the destination OU for the server when added to AD.

The PowerShell for the region settings is below:

$template = Get-SCVMTemplate | where {$_.Name -eq "My VM Template"}
$settings = $template.UnattendSettings
$settings.add("oobeSystem/Microsoft-Windows-International-Core/UserLocale","en-GB")
$settings.add("oobeSystem/Microsoft-Windows-International-Core/SystemLocale","en-GB")
$settings.add("oobeSystem/Microsoft-Windows-International-Core/UILanguage","en-GB")
$settings.add("oobeSystem/Microsoft-Windows-International-Core/InputLocale","0809:00000809")
Set-SCVMTemplate -VMTemplate $template -UnattendSettings $settings

For reference, removing a setting is easy – simply reference the name of the setting when calling the remove method:

$settings.remove("oobeSystem/Microsoft-Windows-International-Core/UserLocale"); 

I then set the destination OU with the following setting:

$settings.add("specialize/Microsoft-Windows-UnattendedJoin/Identification/MachineObjectOU","OU=FileServers,DC=mydomain,DC=local"); 

The Snag

There is a problem with this approach. If you use an unattend.xml file, you can override that setting when you add the VM template to your service template. However, whilst I could find the UnattendSettings property of the VM template when referenced by the service template, I couldn’t modify it.

If we access the Service Template object with:

$svctemplate = Get-SCServiceTemplate | where {$_.name -eq "TFS Build Service"}

We get an object that contains one or more ComputerTierTemplates (depending on how many tiers you gave your service). Each of those has a VMTemplate object that holds the information from our original VM template, and therefore has our UnattendSettings.

$svctemplate.ComputerTierTemplates[0].VMTemplate.UnattendSettings

So, we can grab those settings and modify them. Great. The trouble is, I haven’t found a way to update the stored configuration. Set-SCServiceTemplate doesn’t let me stuff the settings back in the same way as Set-SCVMTemplate does, and you can’t use the latter with a reference to the VMTemplate child of our service template.

Now, I decided that I would create a copy of my original VM template just for Build Servers, so I could set a different target OU for the servers. In hindsight, I’m not sure whether this is better than overlaying unattend.xml files, and I haven’t experimented with how an unattend.xml might interact with the UnattendSettings yet either. If you try, please let me know how you get on.

Step 2: Service Template

Once I’d got my VM Template sorted, the next step was to create a service. There’s a pretty nice design surface for these that allows you to pick a ‘starter’ template with the right number of tiers, although it’s dead easy to add a new tier.

I started with the Single Machine template, which gave me a single tier. You then need to drag a VM template from a list of available templates onto the tier. The screenshot below shows my single tier Service. The VM template has a single NIC and is configured to connect to my Black Marble network.

[Screenshot: the single-tier Build Servers service on the Service Template design surface]

The light blue border on the large box (the service tier) indicates it’s selected. That will show the tier properties at the bottom of the design window. In here I have set a minimum and maximum number of servers that can be deployed in the tier.

[Screenshot: tier properties showing the minimum and maximum number of servers]

Notice also the availability set option – if I needed to ensure that VMs in this service were spread across multiple hosts for resilience I could tick this option. I don’t care where build servers get deployed (they go onto our Lab VM hosts and are effectively ‘disposable’) so I have left this alone.

Open the properties of the tier (right-click or choose View All Properties in the property pane) and a dialog opens with machine properties. In here I have configured the roles and features for the build server (I deliberately haven’t set these in the VM template so I can have fewer, more general VM templates).

[Screenshot: machine properties dialog with the roles and features selected for the build server]

Also in here are the Application Configuration settings that cause the VM to run Richard’s PowerShell. The first is a simple one that references cmd.exe to run xcopy. All the settings on this are default.

[Screenshot: application configuration settings for the cmd.exe/xcopy script]

The second app runs powershell.exe and passes in a file parameter. This was a source of much frustration – I wanted to use the -ExecutionPolicy parameter to ensure the script ran successfully, but if I added it (as the first parameter; -File has to be the last one) the whole command failed. As it happens I set the execution policy in Group Policy for all the build servers, but I like the belt-and-braces approach.

The biggest point here is the timeout setting. Richard’s script can take an hour or so to run, so the timeout is a BIG number. The first few times I deployed, the script task failed because of this, although in reality the script itself was still running happily and completed fine.

[Screenshot: application configuration settings for the powershell.exe script, with the extended timeout]

I have changed the advanced settings for this script, though. To make debugging a little easier (the VM is a black box whilst deploying, so it’s tricky to see what’s going on) I have directed standard output and standard error to a file. I’ve also turned off the options to automatically ‘detect’ failures through watching output, error and exit codes. Richard’s script can be run repeatedly with no ill effect, so I’ve left the restart option set to restart the script. That means that if the deployment job fails and I restart it from SCVMM, the script will be run again.

[Screenshot: advanced script settings with output and error redirected to a file]

Once the service template is created, I deployed the service with a single server. We then added a second server by scaling out the tier.

[Screenshot: scaling out the tier to add a second build server]

We can do this via the SCVMM console, or using the virtualmachinemanager PowerShell module:

$serviceInstance = Get-SCService -Name "TFS Build Servers"
$computerTier = Get-SCComputerTier -Service $serviceInstance | where { $_.Name -eq "TFS Build Server" }
New-SCVirtualMachine -ComputerTier $computerTier -Name "Build03" -Description "" -ReturnImmediately -ComputerName "Build03" -StartAction "NeverAutoTurnOnVM" -StopAction "SaveVM"

Lessons Learned

It’s been an interesting process overall. I think we’ve made the right choice in using a single, easily modifiable PowerShell script to do the heavy lifting here. Yes, I could have created Application Profiles in SCVMM for each of the items Richard installed, but it would have been harder to make changes (and things like the Azure SDK are updated faster than I can blink!).

I’m still considering whether adding the domain location to the UnattendSettings in the VM template object was a good choice or not. I’m happy that the language settings should go in there – we never change those. I need to experiment with how adding an unattend.xml file affects the settings in the template object.

Service templates are a great way to go. When you think about it, most of our IT systems tend to be service-focused. Using service templates for things like SharePoint or CRM is a no-brainer, but so is using them for things like web servers, where we have a number of relatively heterogeneous VMs that host internal web services or sites. Services in SCVMM collect those VMs into manageable groups that can be easily spread across multiple hosts for resilience if required. It’s also much easier to find VMs in services than in a very long list of hosts!

In terms of futures, I’m interested in where Desired State Configuration will take us. Crafting the necessary elements for this project would be extremely complicated with DSC right now, but when all the ducks are in order it should make life much, much easier, and DSC is certainly on my learning list.

Safely modify SharePoint 2013 Web.Config files using PowerShell

One of the things we learn early in our SharePoint careers is not to manually edit the web.config files of a web application. SharePoint involves multiple servers and has its own mechanisms for managing web.config updates.

Previously, I’ve created xml files with web.config modifications and copied those to each WFE. Those changes are merged into the initial web.config by SharePoint.

I’ve always been vaguely aware of there being a better way, but never needed to track it down from an IT point of view. Last week, however, we wanted to change a setting to enable the BLOB cache on the servers hosting a particular web application, so we decided to use the opportunity to figure out a ‘best way’ to do this.

Enter, stage left, the SPWebConfigModification class (note, that link is to SharePoint 2010, but SP2013 works the same way).

We can create a collection of configuration changes that get applied by SharePoint via a timer job. That collection is generated through code and is persistent – we can add and remove changes over time, but they are stored in the farm config and will get applied each time we add a new server or update the web application IIS files.

A search of the web turned up an article by Ingo Karstein that had the right approach but brute-forced everything by referencing SharePoint DLLs directly. A bit of experimentation showed that we didn’t need to do this – SharePoint PowerShell has everything we need.

Sample code is below. The code will first enumerate the collection of SPWebConfigModification objects for our web application and remove any that have the same owner value as our script uses. It then adds a new modification to set the value of an attribute (the BlobCache ‘enabled’ setting) to true. More modifications can be added to the script and these will be added to the collection. The mods are applied by SharePoint in sequence.

It needs some tidying but it works well. Read the documentation on how the modifications work carefully – it’s possible to work with an element or an attribute and you can add and remove stuff. Remember that other solutions may be adding modifications as well – make sure you don’t remove those.

As always, this stuff is provided ‘as is’ and I am not responsible for the damage you can wreak on your SharePoint farm. Test first and make sure you understand what this code does before using on production.

# Load the SharePoint PowerShell PSSnapIn
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

# Set a few variables for the script
$owner = "NimbusPublishingModifications"
$webappurl = "https://share2013.proj.local/"

# Get the web application we want to work with
$webapp = Get-SPWebApplication $webappurl

# Get the Foundation Web Application service (the one that puts the content web apps on servers)
$farmServices = $webapp.Farm.Services | where { $_.TypeName -eq "Microsoft SharePoint Foundation Web Application" }

# Get the list of existing web config modifications for our web app that carry our owner value
$existingModifications = @()
$webapp.WebConfigModifications | where-object { $_.Owner -eq $owner } | foreach-object { $existingModifications = $existingModifications + $_ }

# Remove any modifications that match our owner value (i.e. strip out our old mods before we re-add them)
$existingModifications | foreach-object { $webapp.WebConfigModifications.Remove($_) }

# Create a new web config modification
$newModification = new-object "Microsoft.SharePoint.Administration.SPWebConfigModification"
$newModification.Path = "configuration/SharePoint/BlobCache"
$newModification.Name = "enabled"
$newModification.Sequence = 0
$newModification.Owner = $owner
$newModification.Type = 1           # the enum value for SPWebConfigModification.SPWebConfigModificationType.EnsureAttribute
$newModification.Value = "true"

# Add our web config modification to the stack of mods that are applied
$webapp.WebConfigModifications.Add($newModification)
$webapp.Update()

# Trigger the process of rebuilding the web config files on content web applications
$farmServices.ApplyWebConfigModifications()

Adding USB 3 to my Lenovo X220 Tablet

My X220 is a stalwart machine. It’s built like a tank and can be upgraded in a number of ways. Mine now has 16GB of RAM and two SSDs, which allow me to run multi-VM environments for development and demo. Unfortunately, however, there is no USB 3 on the laptop. That’s a pain if I need to copy stuff on and off via USB, or run VMs from a USB 3 pod.

I’ve tried adding USB 3 before – I bought a StarTech ExpressCard 54 card with two ports. That singularly failed to work – the card is detected but the system either fails to recognise connected devices, or sees them yet can’t access them.

Whilst at Build I was involved in a conversation with the Kinect product team. They said that when using the Kinect with Windows they had issues with USB 3, and it boiled down to chipset issues, where the manufacturer hadn’t implemented something quite in accordance with spec. This spurred me to look for another ExpressCard, carefully looking for a different chipset.

Enter, stage left, Targus. Not a name I’d associate with this kind of peripheral, but careful reading of their specs showed their card to use a totally different chipset from the StarTech device.

I ordered mine from Amazon. It arrived the next day, plugged in and simply worked. It’s an ExpressCard 34, so only one port. However, it puts out enough power to run my Western Digital USB 3 pod without needing to use the included cable to draw additional power from another USB port. I get the same transfer speed from the disk as my colleagues’ USB 3-equipped W540 laptops, so I really can’t argue.

The one thing I did add was a 34-to-54 adapter (from StarTech, ironically) to plug the hole left in my ExpressCard 54 slot.

With the current move to ultrabooks I can see no real replacement device for my X220 – a sealed unit with no more than 8GB of RAM and a single drive doesn’t come close to my needs, and a 15” luggable workstation is just too heavy. Hopefully I can keep tweaking the X220 for a while yet.

Enabling Data Deduplication on my Windows 8.1 Laptop

Let’s get the disclaimer out of the way first: What I’ve done is absolutely unsupported by Microsoft. Just because it works for me does not guarantee it will work for you, and I am not in any way recommending that you follow my lead!

I use a great many virtual machines for customer work, internal projects and just tinkering. My ThinkPad X220T is tricked out with extra RAM and two SSDs. Space is still an issue, though, and I can’t squeeze any more storage into my little workhorse.

Windows Server 2012 introduced Data Deduplication – a fantastic feature that is saving us huge amounts of disk space on our SCVMM library. I’d love to be able to use that on Windows 8.1. Sadly, Microsoft didn’t see fit to enable the feature.

There are a good many people out there who think like I do, and some of them decided to figure out how to get Data Deduplication working on Windows 8.1. I’m not going to repeat those instructions here – I will instead post a link to the best of the articles I read before taking the leap, that of Mike Bijl.

Having installed and enabled the Data Deduplication feature, I enabled dedupe on my D drive – a 500GB Crucial M500 SSD. Note that you cannot dedupe your OS partition – you need separate OS and data volumes to get anywhere with this process. I started with about 12GB of free space, the rest gobbled up by ISO files and the VHDs of installed VMs. Those dedupe beautifully, and I now have 245GB of free space.
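
For completeness, enabling and optimising the volume is just the standard Server 2012 R2 dedupe cmdlets, assuming the transplanted packages bring the usual PowerShell module with them – something like this:

# Enable deduplication on the data volume (not the OS volume) and run an optimisation job now
Import-Module Deduplication
Enable-DedupVolume -Volume "D:"
Start-DedupJob -Volume "D:" -Type Optimization

# Check how much space has been reclaimed
Get-DedupStatus -Volume "D:" | Format-List Volume, FreeSpace, SavedSpace, OptimizedFilesCount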

Have I encountered any problems yet? No. All my VMs run fine. I have a scheduled dedupe job running at noon to keep things tidy that has given no problems so far.

It is important to reiterate Mike’s point, however: If you enable dedupe on a volume and reinstall Windows 8.1 you will not be able to access any data on the drive until you re-enable dedupe (or stick the volume in a Windows Server 2012 or 2012 R2 machine). I’m happy with that – it’s no big deal for me. I would not, however, allow any of our developers to do this on their workstations, for example.

Doing all this, however, has got me thinking… Homegroup support is missing from Server 2012 and 2012 R2. I wonder if the same process might be used to enable features in the opposite direction…?

Gary Lapointe to the rescue: Using his Office 365 PowerShell tools to recover from a corrupted master page

I also need to give credit to the Office 365 support team over this. They were very quick in their response to my support incident, but I was quicker!

Whilst working on an Office 365 site for a customer today I had a moment of blind panic. The site is using custom branding and I was uploading a new version of the master page to the site when things went badly wrong. The upload appeared to finish OK, but the dialog that was shown post-upload was not the usual content type/fill-in-the-fields form, but a plain white box. I left it for a few minutes but nothing changed. Unperturbed, I returned to the master page gallery… Except I couldn’t. All I got was a white page. No errors, nothing. No pages worked at all – no settings pages, no content pages, nothing at all.

After some screaming, I tried SharePoint Designer. Unfortunately, this was disabled (it is by default) and I couldn’t reach the settings page to enable it. I logged a support call and then suddenly remembered a recent post from Gary Lapointe about a release of some PowerShell tools for Office 365.

Those tools saved my life. I connected to the Office 365 system with:

Connect-SPOSite -Credential "<my O365 username>" -url "<my sharepoint online url>"

Success!

First of all I used Set-SPOWeb to set the MasterUrl and CustomMasterUrl properties of the failed site. That allowed me back into the system (phew!):

Set-SPOWeb -Identity "/" -CustomMasterUrl "/_catalogs/masterpage/seattle.master"

Once in, I thought all was well, but I could only access content pages. Every time I tried to access the master page library or one of the site settings pages I got an error, even using Seattle.master.

Fortunately, Gary also has a command that will upload a file to a library, so I attempted to overwrite my corrupted master page:

New-SPOFile -List "https://<my sharepoint online>.sharepoint.com/_catalogs/masterpage" -Web "/" -File "<local path to my master page file>" –Overwrite

Once I’d done that, everything snapped back into life.

The moral of the story? Keep calm and always have PowerShell ISE open!

You can download Gary’s tools here and instructions on their use are here.

Big thanks, Gary!

Declaratively create Composed Looks in SharePoint 2013 with elements.xml

This is really a follow-up to my earlier post about tips with SharePoint publishing customisations. Composed looks have been a part of a couple of projects recently. In the first, a solution for on-premise, we used code in a feature receiver to add a number of items to the Composed Looks list. In the second, for Office 365, a bit of research offered an alternative approach with no code.

What are Composed Looks

A composed look is a collection of master page, colour scheme file, font scheme file and background image. There is a site list called Composed Looks that holds them, and they are shown in the Change the Look page as the thumbnail options you can choose to apply branding in one hit.

In order to get your new composed look working there are a few gotchas you need to know:

  1. When you specify a master page in your composed look, there must be a valid .preview file with the same name. This file defines the thumbnail image – if you look at an existing file (such as seattle.preview or oslo.preview) you will find HTML and styling rules, along with some clever token replacement that references colours in the colour scheme file.
  2. A composed look must have a master page and colour scheme (.spcolor) file, but font scheme and background image are optional.
  3. When using sites and site collections, files are split between local and root gallery locations:
    1. The Composed Looks list is local to the site – it doesn’t inherit from the parent site.
    2. Master pages go in the site Master Page Gallery.
    3. .spcolor, .spfont and background image files go in the site collection theme gallery.

If any of the files you specify in your composed look don’t exist (or you get the url wrong), the thumbnail won’t display. If any of the files in your composed look are invalid, the thumbnail won’t display. If your master page exists but has no .preview file, the thumbnail won’t display. Diligence is important!

Adding Composed Looks using Elements.xml

In researching whether this was indeed possible, I came across an article by Tom Daly. All credit should go to him – I’ve simply tidied up a bit around his work. I already knew that it was possible to create lists as part of a feature using only the elements.xml, and to place items in that new list. I hadn’t realised that adding items to an existing list also works.

In Visual Studio 2013 the process is easy – simply add a new item to your project, and in the Add New Item dialog select Office/SharePoint in the left column and Empty Element in the right. Visual Studio will create the new element with an Elements.xml ready and waiting for you.

To create our composed looks we simply edit that elements.xml file.

First we need to reference our list. As per Tom’s post, we need to add a ListInstance element to our file:

<ListInstance FeatureId="{00000000-0000-0000-0000-000000000000}" TemplateType="124" Title="Composed Looks" Url="_catalogs/design" RootWebOnly="FALSE">
</ListInstance>

That XML points to our existing list, and the url is a relative path, so it will reference the list in the current site for our feature, which is what we want.

Now we need to add at least one item. To do that we need to add Data and Rows elements to hold as many Row elements as we have items:

<ListInstance FeatureId="{00000000-0000-0000-0000-000000000000}" TemplateType="124" Title="Composed Looks" Url="_catalogs/design" RootWebOnly="FALSE">
          <Data>
                    <Rows>
                    </Rows>
          </Data>
</ListInstance>

Then we add the following code for a single composed look:

<Row>
          <Field Name="ContentTypeId">0x0060A82B9F5D2F6A44A6A5723277B06731</Field>
          <Field Name="Title">My Composed Look</Field>
          <Field Name="_ModerationStatus">0</Field>
          <Field Name="FSObjType">0</Field>
          <Field Name="Name">My Composed Look</Field>
          <Field Name="MasterPageUrl">~site/_catalogs/masterpage/MyMasterPage.master, ~site/_catalogs/masterpage/MymasterPage.master</Field>
          <Field Name="ThemeUrl">~sitecollection/_catalogs/theme/15/MyColorTheme.spcolor, ~sitecollection/_catalogs/theme/15/MyColorTheme.spcolor</Field>
          <Field Name="ImageUrl"></Field>
          <Field Name="FontSchemeUrl"></Field>
          <Field Name="DisplayOrder">1</Field>
</Row>

There are two parts to the url fields – before the comma is the path to the file and after the comma is the description shown in the list dialog. I set both to the same, but the description could be something more meaningful if you like.

Note that the master page url uses ~site in the path, whilst the theme url uses ~sitecollection. Both of these will be replaced by SharePoint with the correct paths for the current site or site collection.

Note also that I have only specified master page and colour theme. The other two are optional, and SharePoint will use the default font scheme and no background image, respectively. The colour theme would appear to be mandatory because it is used in generating the thumbnail image in conjunction with the .preview file.

The DisplayOrder field affects where in the list of thumbnails our composed look appears. The out-of-the-box SharePoint themes start at 10 and the current theme is always 0. If more than one item has the same DisplayOrder they are displayed in the same order as in the composed looks list. Since I want my customisations to appear first I usually stick a value of 1 in there.

I have removed a couple of fields from the list that Tom specified, most notably the ID field, which SharePoint will generate a value for and (I believe) should be unique, so better to let it deal with that than potentially muck things up ourselves.

Deploying the Composed Look

Once we’ve created our elements.xml, getting the items deployed to our list is easy – simply create a feature and add that module to it. There are a few things I want to mention here:

  1. Tom suggests that the declarative approach does not create items more than once if a feature is reactivated. I have not found this to be the case – deactivate and reactivate the feature and you will end up with duplicate items. Not terrible, but worth knowing.
  2. You need a site level feature to add items to the composed looks list. As some of the things that list item references are at a site collection level, I suggest the following feature and module structure:
    1. Site Collection Feature
      1. Module: Theme files, containing .spcolor, .spfont and background image files. Deploys to _catalogs/Theme/15 folder.
      2. Module: Stylesheets. Deploys to Style Library/Themable folder or a subfolder thereof.
      3. Module: CSS Images. Deploys to Style Library/Themable folder or a subfolder thereof. Separating images referenced by my CSS is a personal preference as I like tidy VS projects!
      4. If you have web parts or search display templates I would put those in the site collection feature as well.
    2. Site Feature
      1. Module: Master pages. Contains .master and associated .preview files. Deploys to _catalogs/masterpage folder.
      2. Module: Page layouts. Contains .aspx layout files. Deploys to _catalogs/masterpage folder.
      3. Module: Composed Looks: Contains the list items in our elements.xml file. Deploys to Composed Looks list.