Six tips when deploying SharePoint 2013 masterpages, page layouts and display templates

I’ve been hat-swapping again since just before Christmas (which explains the lack of Azure IaaS posts, I’m afraid). I’ve been working on a large SharePoint 2013 project, most recently on customising a number of elements around publishing. Getting those custom elements into SharePoint from my solution raised a number of little snags, most of which were solved by the great internet hive mind. It took me a long time to find some of those fixes, however, so I thought I’d collect them here and reference the original posts where appropriate.

1. Overwrite existing files reliably

This has been an old chestnut for as long as I have been working with SharePoint. Your solution deploys a file to the masterpage gallery or style library. You deploy an updated version and none of your changes are visible because SharePoint hasn’t replaced the file with your new version. In previous versions, careful use of things like Type="GhostableInLibrary" in the elements.xml when you deployed the file helped – files that are ghostable generally seem to be updated, unless you manually edit the file, thus ‘unghosting’ it.

In SharePoint 2013, however, we appear to have a new property that we can specify in our elements.xml for deployable files, ReplaceContent:

<File Path="myfile.aspx" Url="myfile.aspx" Type="GhostableInLibrary" ReplaceContent="TRUE" />

As far as I can tell, this does what it says on the tin: it overwrites any existing copy of the file when the feature provisions it again.

2. Provision Web Parts into page layouts safely as part of a feature

This is one I’d never personally tried before. I’ve seen many people struggling, pasting web part code into a masterpage or page layout and having problems during deployment. The way to do it (the ‘right’ way, as far as I know) is to let the feature do it. When you list your masterpage or page layout in the elements.xml you can add a property that deploys a web part, AllUsersWebPart:

<File Path="myfile.aspx" Url="myfile.aspx" Type="GhostableInLibrary" ReplaceContent="TRUE">
  <AllUsersWebPart WebPartZoneID="TopZone" WebPartOrder="0">
    <![CDATA[
    ]]>
  </AllUsersWebPart>
</File>

Simply specify the name of the web part zone in your page and the web part will be added during the deploy. The WebPartOrder setting should allow you to define where it appears, although when adding multiple web parts I have had more success setting it to zero for each one and just listing them in the order I want. As you might have guessed, for multiple web parts you add multiple AllUsersWebPart sections.

But where’s the web part, I hear you cry! In that CDATA block, paste the XML for your web part. Getting that is easy – simply export the web part from SharePoint and paste the resulting XML straight in there. There are a couple of tweaks you may need to apply that I’ll list next.

3. Substitute &#126; for ~ in paths within web parts in CDATA blocks

This one stumped me for a while and I was fortunate to come across a post by Chris O’Brien that solved it for me. I was trying to add a custom Content By Search web part to a page. That web part had custom control and display templates specified, which reference the current site collection in their path (~sitecollection/_catalogs). The problem is that the tilde gets stripped out by SharePoint when the page is deployed, breaking the setting.

The solution turns out to be one of those typical off-the-wall ‘I would never have thought of that!’ solutions that crop up all the time with SharePoint: swap the ~ character for its XML entity reference: &#126;.

<property name="GroupTemplateId" type="string">&#126;sitecollection/_catalogs/masterpage/Display Templates/Content Web Parts/MyTemplate.js</property>

4. Use <value> to include content in the Content Editor web part in CDATA blocks

Export a Content Editor web part and you will see that the HTML content that is displayed within it is in the Content element, wrapped in a CDATA block. The problem is that when deploying this web part into the page using the technique above you can’t nest a CDATA block within a CDATA block.

The solution? Change the CDATA wrapper to be the <value> element. The snag? I have found that I need to swap the < and > symbols for their HTML entity counterparts: &lt; and &gt;.

<Content xmlns="http://schemas.microsoft.com/WebPart/v2/ContentEditor"><value>&lt;h2&gt;My Content&lt;/h2&gt;</value></Content>

5. Provision Search Display Templates as draft and publish them with a feature receiver

This one is a bit contentious, as far as I can tell. I derived my (simple) approach from an article by Waldek Mastykarz. The crux of the matter is this: you can edit either the HTML part of a search display template or the JavaScript. Which is the ‘correct’ way is another matter, though. If you have publishing features enabled then, when you save and publish the HTML file, SharePoint generates the JavaScript file via an event receiver. If you don’t have publishing enabled, as far as I can tell only the JavaScript files are there and the event receiver doesn’t appear to be active.

So… which way to jump? Well, in my case I am creating customisations that depend on publishing features, so I decided to deploy just the HTML file and let SharePoint generate the JavaScript. If I needed to use these things without publishing I would probably have extracted the JavaScript from my development SharePoint and deployed that instead.

The first part of my simple approach is to deploy the files as draft using the options available to me in elements.xml:

<File Path="MyTemplate.html" Url="MyTemplate.html" Type="GhostableInLibrary" Level="Draft" ReplaceContent="TRUE" />

I then use a fairly simple function that is called by the feature receiver on activation, once per file:

public static void CheckInFile(SPWeb web, string fileUrl)
{
    // get the file
    SPFile file = web.GetFile(fileUrl);
    // depending on the settings of the parent document library we may need to check in and/or (publish or approve) the file
    if (file.Level == SPFileLevel.Checkout) file.CheckIn("", SPCheckinType.MajorCheckIn);
    if (file.Level == SPFileLevel.Draft)
    {
        if (file.DocumentLibrary.EnableModeration) file.Approve("");
        else file.Publish("");
    }
}
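
If you just want to push a couple of provisioned files to their published state by hand rather than from a feature receiver, the same logic translates fairly directly to the SharePoint Management Shell. This is only a sketch – the site URL and file path below are placeholders, not values from my project:

$web = Get-SPWeb "http://intranet.contoso.com"
$file = $web.GetFile("_catalogs/masterpage/Display Templates/Content Web Parts/MyTemplate.html")
# check in and/or approve/publish, mirroring the feature receiver above
if ($file.Level -eq [Microsoft.SharePoint.SPFileLevel]::Checkout) { $file.CheckIn("", [Microsoft.SharePoint.SPCheckinType]::MajorCheckIn) }
if ($file.Level -eq [Microsoft.SharePoint.SPFileLevel]::Draft)
{
    if ($file.DocumentLibrary.EnableModeration) { $file.Approve("") } else { $file.Publish("") }
}
$web.Dispose()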

If you look at the original article, the solution suggested by Waldek is jolly clever, but much cleverer than I needed for a couple of display templates.

6. Make your masterpages appear in ‘Change the look’ with a preview file

In the new SharePoint 2013 world site admins have a great deal of flexibility over how their site looks. I wanted to enable users of my custom masterpages to continue to use the theming engine – selecting their own colours and fonts – but to keep the custom masterpage I had built. Again, it’s actually really easy. Simply deploy a .preview file with the same name as your masterpage (e.g. mymaster.master and mymaster.preview). The .preview is actually a clever combination of settings, HTML and CSS that allows you to specify the default colour palette file (.spcolor) and font file (.spfont) as well as draw a little preview of your page. I was lucky on that last one, as my look was the same as the default, so I simply copied seattle.preview.

I could go a step further and create a Composed Look that would show my layout as a tile in the ‘Change the look’ UI, but that involves adding items to a SharePoint list and was more than I needed for this particular project. I will need to do that for my next one, however…

Dealing with AD sync issues in an Azure hybrid deployment

I’ve been building demo environments for Tech.Days Online for the past few days. I had been blogging as I built, but then I hit problems and time pressure meant I had to pause my series on building the hybrid network. I will pick up the remainder of those posts in the near future but in the meantime, I want to give you all the heads up on one of my problems.

When I came to install the directory synchronisation tool on my Azure-hosted dirsync server I couldn’t get the configuration wizard to run. It kept telling me that the domain admin user I entered (the domain administrator!) was not a valid domain account. I first tried removing the machine from the domain and re-adding it, to no avail, and then started looking at domain replication.

There were no errors in the event log, save for one solitary warning on each DC about time. DCDIAG spat out huge lists of errors on both servers. Repadmin refused to admit that the domain controllers knew about each other.

So, for ref, I was seeing:

  • RPC errors when trying to talk between domain controllers.
  • DNS servers were not synchronising.
  • DCDIAG on one server threw up a range of errors:
    • EventID: 0x80000B46 relating to Kerberos.
    • EventID: 0x00001695 relating to failed registration of DNS addresses in the domain.
    • EventID: 0x00001695 relating to failed registration of DNS addresses in the ForestDNSZones subdomain.
    • EventID: 0x00001695 relating to failed registration of DNS addresses in the DomainDNSZones subdomain.
    • EventID: 0x0000168E relating to failed registration of the DNS record _ldap._tcp.ForestDnsZones.<domain>.
    • EventID: 0x0000271A relating to a failed DCOM registration.

The solution was to fix time on the domain. I ran through a number of articles on the subject, and at the back of my mind was a recollection that time can drift when running in a VM.

In the end I followed the settings in this TechNet article. I set both domain controllers to sync time with an NTP source (time.windows.com) and then left the whole thing to settle for an hour or so while we went for food. When I came back the domain had settled.
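
The TechNet article has the detail, but the general shape of that configuration, run in an elevated prompt on each DC (use whatever NTP source suits you), is something like this:

w32tm /config /manualpeerlist:time.windows.com /syncfromflags:manual /update
# on the PDC emulator you would typically also add /reliable:yes to the line above
Restart-Service w32time
w32tm /resync /rediscover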

Sadly, my dirsync box was a casualty of the conundrum. Because I had removed it from the domain while AD sync was not working, it had a broken trust relationship. I removed it from the domain again, rejoined it and everything worked just fine.

As always, hopefully this will save somebody some time and grief when doing the same thing as I was.

Building an Azure IaaS and on-premise hybrid environment Part 2: DC and servers in the cloud

This is part 2 of a series of posts about building a hybrid network connecting Windows Azure and on-premise. For more background on what the goals are, and for information on how to create the Azure Network and connect the VPN tunnel between on-premise and cloud, see part 1.

Creating a DC on our Azure Network

I’m going to create a new VM on Azure using the VM gallery. One important point when doing this is that you should add a second drive to the VM for domain controllers. This is down to how read/write caching works on the primary drive (it’s enabled), which means there is a risk that a write operation may make it to the cache but not to the disk in the event of a failure. That would cause problems with AD synchronisation, so we add a second drive, disable caching on it and use it to host the AD database.

Before we create the new machine it’s a good idea to create a storage account. If we leave Azure to do it the account gets the usual random name. I prefer order and convention in these things, so I’ll create one myself.

storage 1

When you create a storage account, Azure now creates a container within it named vhds and it uses that to hold the virtual hard disks for your VMs.

We can now create a virtual machine using the VM Gallery.

new vm 1

The Virtual Machine creation wizard will appear and show the numerous VM templates we can start from. I want a Server 2012 R2 DC so I’m going to choose Windows Server 2012 R2 Datacenter from the list.

new vm 2

The next screen allows us to set the VM name. This is also used for the Azure Endpoint and must be unique within Azure. We can also choose a size for the VM from the available Azure VMs. This is a lab so I’m happy with a small VM. In production you would size the VM according to your AD.

We also need to provide a username and password that Azure will configure when it deploys the VM. We’ll use that to connect to the machine in order to join it to the domain.

new vm 3

The next screen asks for a whole bunch of information about where the new VM will be placed and what networks it will be connected to. The wizard does a pretty good job of selecting the right defaults for most settings.

I created two subnets in my virtual network so I could have internal and external subnets. The DC shouldn’t have connections from outside our network so it’s going on subnet-1.

new vm 4

The final screen allows us to configure the ports that will be available through the Azure endpoints. If we remove these then we will only be able to connect to the new VM via our internal network. That’s exactly what I want, so I will click the big X at the right hand side of each endpoint to remove it.

new vm 5

When we click the final button Azure will show us that our new VM is provisioning.

new vm 6
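
As an aside, the whole VM can be created without the wizard. A rough sketch using the Azure PowerShell module of the time is below; the image filter, password, virtual network and affinity group names are placeholders rather than values from my lab:

# pick an image, then build and deploy the VM into the existing virtual network
$img = (Get-AzureVMImage | Where-Object { $_.Label -like "Windows Server 2012 R2 Datacenter*" } | Select-Object -First 1).ImageName
New-AzureVMConfig -Name "azureucdc" -InstanceSize Small -ImageName $img |
    Add-AzureProvisioningConfig -Windows -AdminUsername "builduser" -Password "<admin password>" |
    Set-AzureSubnet -SubnetNames "subnet-1" |
    New-AzureVM -ServiceName "azureucdc" -VNetName "<virtual network name>" -AffinityGroup "<affinity group>"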

Once the VM is running you can click on it to view the dashboard. You will see from mine that the new VM has no public IP address and that it has been given an internal IP address of 172.16.1.4 – on the Azure network I created earlier. The first server that you connect to a virtual network subnet in Azure will always get .4 as its address; the second gets .5, and so on. An important point to note here is that if a virtual machine is deallocated (shutting it down from the Azure portal will do this) the DHCP-given IP address is released and another server could get that address. It’s important to be careful about the order you start machines in for this reason.

vm dash

 

I haven’t added a second hard disk to the VM, so that’s our next step. At the bottom of the dashboard there is an Attach button that allows us to add an empty disk to the VM.

attach disk 1

In the screen that appears we can give our new disk a name and size and, importantly, set the type of caching we want on the disk. As I mentioned, everything I have read and heard tells me that caching on the disk holding the AD database should be turned off.

new disk 1
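
The same disk can be attached from PowerShell if you prefer. A rough equivalent with the Azure module follows; the size and label are examples, and the important part is HostCaching None:

Get-AzureVM -ServiceName "azureucdc" -Name "azureucdc" |
    Add-AzureDataDisk -CreateNew -DiskSizeInGB 20 -DiskLabel "ADData" -LUN 0 -HostCaching None |
    Update-AzureVM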

Now we’ve got the second disk attached, the next step is to make an RDP connection to our new server. We can do that from one of the machines on our on-premise network just by entering the IP address of the Azure-hosted server into the Remote Desktop Connection dialog.

Remember to use the credentials you set when you created the VM: e.g. azureucdc\builduser

rdp connection 1

The first thing we need to do is bring the additional disk online, create a volume and assign a drive letter. I’ve used S for sysvol.

dc add disk

Next, we need to join the server to our AD domain, which will need a reboot. After that we can add the Active Directory Domain Services role in order to promote the server to be a domain controller. It’s important when doing this to set the paths for the AD databases to the second drive (S: in my case).

azure dcpromo
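
If you would rather script that step than use the Server Manager wizard, a rough PowerShell equivalent is below. The domain name is a placeholder, you will be prompted for the DSRM password, and the important part is pointing the database, log and SYSVOL paths at the S: drive:

Install-WindowsFeature AD-Domain-Services -IncludeManagementTools
Install-ADDSDomainController -DomainName "mylab.example.com" -InstallDns -Credential (Get-Credential) `
    -DatabasePath "S:\NTDS" -LogPath "S:\NTDS" -SysvolPath "S:\SYSVOL"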

Once we’ve got our new DC and DNS up and running, we should configure our Azure network so it knows the IP address of our new DNS and hands it to other servers in our network.

To do that we register the DNS with Azure first.

azure dns 2

Next we modify the configuration of our Azure virtual network to add the new DNS. The DNS addresses are handed out in the order they are specified in the Azure network, so I’ve removed the on-premise DNS then added first the one hosted in Azure and then the on-premise one.

azure dns 3

We now have a functioning Azure network with services that will support any other machines we host there even if the VPN link goes down.

We’ll need some more VMs for our other services to support our connected Azure ADFS. We’ll deal with those in part 3.

Building an Azure IaaS and on-premise hybrid environment Part 1: The plan and Azure Network Connection

I’ve been meaning to build a test lab to kick the tyres of Windows Azure Networks for a while. Two things combined together to make me get it done, however: First was the need to build exactly that for a customer as part of proof-of-concepts; the second was an invitation to present at Tech.Days Online on the subject.

I’ve built and rebuilt said lab a few times now. I am about to build it again in order to have a demo environment for Tech.Days and I thought it would be a good opportunity to blog the steps involved.

I had planned this as one blog post, but it’s going to end up as a series of posts just because of sheer length. In this post I’m going to describe the environment and walk through creating the Azure network and connecting it to my on-premise network.

Constraints

Let’s get the big problem out of the way first: In order to do this you need a static, world-routable IP address. You have to have something at your end that will act as one end of a site-to-site VPN tunnel and that will only work if you have an IP that doesn’t randomly change.

The second issue is the bit of equipment you need to get that VPN working. The easiest is a Windows Server 2012 or 2012 R2 machine, or equipment from Juniper or Cisco, which means you should be able to use one of the configuration scripts provided by Microsoft. After that, you’re on your own. I’ve already blogged about getting the connection working with a SonicWall. In theory, anything that supports IKEv2 connections can probably be made to work, but you’ll need to have a good understanding of the technology.

The third issue revolves around whether you are using trial subscriptions of things like Windows Azure and Office 365. I have an MSDN subscription to Azure, so it’s not going to vanish quickly, although I can burn through my MSDN credit quite quickly with this lab. If you are using a trial you need to be aware that you might hit problems. In this post I’m not going to touch Office 365, although I am going to get single sign-on working with Azure AD. The reason for that is I don’t want to connect my Azure AD to an Office 365 trial subscription that will vanish in a few short weeks.

What are we building?

I wanted to build a meaningful test of what Azure Networks could do for me. It turns out that it’s also something that interests a good many of my customers and colleagues in the industry:

  1. Create a Windows Azure virtual network and connect that to my on-premise network.
  2. Place a domain controller into the Azure network.
  3. Build a server in the Azure network to run Dirsync and push my domain users into Azure AD (step one of single sign-on for Office 365 as well)
  4. Build an ADFS server in the Azure network and connect it to Azure AD (step two of single-sign on for Office 365)
  5. Build an ADFS Proxy (or a Server 2012 Web Proxy) for internet-access to the federated sign-on mechanism (also needed for Office 365)

What aren’t we doing?

We’re not going to look at Azure Backup in this post. We already use Azure for backup at Black Marble as part of our DPM configuration. There is also a simpler backup client for Windows Server. I’ll blog on those separately.

Why would we want to do this?

I have had many conversations about the practicality of moving virtualised servers from an on-premise environment into Azure. For many organisations this can work out cheaper than running their own tin. A hybrid approach allows those VMs to be moved across over time and is important to enable testing.

Description of my rig

I am lucky in that I have a small number of static IP addresses through the internet provision at Black Marble. For my previous builds of this lab I added a USB network adapter to my laptop to connect to the outside world. For this build I am using a server hosted at Black Marble. Why? I need both ends of the lab to be running for my Tech.Days talks and it’s very hard to get a static world-routable IP at the event.

Very importantly, however, none of this lab will be connected to my main networks. I will create an internal Hyper-V network for the on-premise end of the lab and a second Hyper-V network that will connect to the outside world. The Hyper-V host will not be connected to any of these networks.
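
For anyone scripting the host setup, the two switches can be created along these lines. The switch and adapter names here are just examples; a Private switch keeps the Hyper-V host itself off the lab network, which is the behaviour I described above:

New-VMSwitch -Name "Lab-OnPremise" -SwitchType Private
New-VMSwitch -Name "Lab-Internet" -NetAdapterName "Ethernet 2" -AllowManagementOS $false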

On my laptop I have Windows 8.1 and built all the VMs as Generation 2 virtual machines. My server, however, runs Server 2012 so the VMs are Generation 1. Does this matter? Not a bit, but I like the simplicity of the generation 2 virtual machines more than generation 1.

On-Premise VMs

For my on-premise network I have two virtual machines:

PremUCDC

Role: Domain Controller, DNS
OS: Windows Server 2012 R2
CPU: 2 core
RAM: Dynamic, min 512MB, max 2048MB
Disk: 120Gb Dynamic
Network: 1 adapter on internal network

Once the OS has been installed I will add the Active Directory Domain Services role and promote the server to be a domain controller.
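
For reference, the PowerShell equivalent of that role install and promotion is roughly the following sketch. The domain name is a placeholder (see the note on suffixes below) and you will be prompted for the DSRM password:

Install-WindowsFeature AD-Domain-Services -IncludeManagementTools
Install-ADDSForest -DomainName "mylab.example.com" -DomainNetbiosName "MYLAB" -InstallDns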

For this lab I am using an internet-registered domain and I will use that as the AD domain suffix. This is an important point: If you create an AD forest with a .local suffix then you will have to jump through some more hoops to get single sign-on with Azure AD working.
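
By way of illustration, one of those hoops for a .local forest is registering a routable UPN suffix and moving users onto it so that Azure AD will accept them. A rough sketch, with example names, run on a DC:

Get-ADForest | Set-ADForest -UPNSuffixes @{add="mylab.example.com"}
Get-ADUser -Filter * -SearchBase "OU=Lab Users,DC=mylab,DC=local" |
    ForEach-Object { Set-ADUser $_ -UserPrincipalName "$($_.SamAccountName)@mylab.example.com" }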

PremUCRRAS

Role: Remote Access/VPN
OS: Windows Server 2012
CPU: 2 core
RAM: Dynamic, min 512MB, max 2048MB.
Networking: 1 adapter on internal network; 1 adapter on internet-facing network

I’ve tried Server 2012 R2 for the RRAS server a couple of times and I have not yet managed to get the VPN connection working, so this lab will use Server 2012.

I won’t add any roles to this server. Once I’ve configured my Azure network I will use the configuration script from Microsoft to add the necessary roles and configure the VPN link.

My on-premise network will use the 192.168.0.0 non-routable address space. I will then use 172.16.0.0 for my Azure network address space.

Azure Network

We need to create a new network in Azure to hold our VMs and we need to tell Azure what our on-premise network looks like, and what DNS servers we already have.

We start by creating a new Local Network in Windows Azure:

new internal network

We will be asked what to call the new network and we are also asked to provide the IP address of the machine at the on-premise end of the VPN tunnel.

internal network 1

Next, we are asked to specify what address spaces we have on our on-premise network. This is important as it will be used for routing traffic between Azure and our on-premise networks.

As I said earlier, my on-premise network uses the 192.168 address space. I configure the Azure local network as 192.168.0.0/16. I can still subnet that network down on-premise (and I will).

internal network 2

Now we’ve defined what our on-premise network looks like we need to register our DNS servers. We need to do this so that Azure can hand them out via DHCP and machines on the Azure network will then be able to communicate with our on-premise network systems.

new dns

Our on-premise network has a single DNS – PremUCDC, 192.168.1.1

new dns 2

Now we can define our Azure network.

new internal network

First we give it a name, and we need to associate it with an affinity group.

azure network 1

Next, we associate the new network with the DNS server we added earlier.

azure network 2

We know we want to connect our on-premise network so next we select the Configure site-to-site VPN option. Azure will helpfully add the local network we configured earlier.

azure network 3

The next step is to define our address range and subnets on Azure. I’ve chosen to create 172.16.0.0/16 so I can subnet that down. I want two subnets – 172.16.1.0/24 and 172.16.2.0/24. I’ll add the gateway subnet in a moment.

azure network 4

The gateway subnet is the one that has the internal IP address of the VPN endpoint in Azure. I want that to be 172.16.254.0/24.

azure network 5

The network will take a little while to create. Once it’s done we need to create a Gateway that will provide our VPN connection.

The dashboard for the new network will look something like the image below and we can click the button at the bottom to create our gateway. There are two options for the gateway – static or dynamic routing. For the lab I’m using dynamic routing.

gateway 1

Creating a gateway can take up to fifteen minutes so this is a good time for a coffee break.

Once the gateway is provisioned your dashboard will update to show you the IP address of the Azure end of the VPN and metrics for data in and out. I’ve removed the last two octets of my gateway address in the screenshot.

gateway 2

To bring up the connection to on-premise we need to configure our RRAS server. Conveniently, we can click the Download VPN Device Script link to grab some powershell to do that for us.

The VPN scripts are also available for a range of Cisco and Juniper devices. Simply select the appropriate options in the menus and download the script you need. You will need to edit the scripts to set a number of parameters before running them.

gateway 3

Before we make those script changes we need to know the shared secret that is used in the VPN handshaking. To find that, click the Manage Key button at the bottom of the dashboard.

The shared key will be displayed. Copy it to clipboard and save it somewhere.

gateway 4

Next, we need to edit the powershell script to add the parameters.

As a side note, why the guys at Microsoft didn’t put a variable block at the top of the script to make this easier I don’t know. Search and replace it is, then…

vpnscript 1

There are a number of edits we need to make:

  1. Replace <SP_AzureGatewayIpAddress> with the gateway IP address that is displayed in large type on the Azure network dashboard. This applies to lines 75, 80, 81 and 87.
  2. Replace <SP_AzureVnetNetworkCIDR> with the Azure network address range (172.16.0.0/16 in my lab)
  3. Replace <SP_PresharedKey> with the key we access through Manage Key.

I usually leave line 87 commented out and run the command manually after the configuration is complete.

Copy that script onto the VM that’s going to be the RRAS server. I’m assuming that you’ve already joined it to the domain and configured the network adapters for internal and internet connections.

In order to run the script we will need to modify the execution policy on the server to allow unsigned scripts.

The command for this is set-executionpolicy unrestricted

Now run the powershell script to configure the VPN connection. The script will add the necessary roles to the server and configure the vpn connection.

vpnscript 2

Once the script is run, assuming you don’t need to restart the server, it’s time to bring up the connection.

We need to enable the Azure gateway first by clicking the Connect button on the dashboard. Once that’s done, execute the last line of the PowerShell script (Connect-VpnS2SInterface …).

We should be able to see the state of the connection from both the RRAS server and the Azure dashboard. On the RRAS server, open the Routing and Remote Access console.

The VPN is listed in Network Interfaces and the connection status should be Connected.

vpn state 1

The Azure Network dashboard should also show that the connection has been made.

vpn state 2

 

 

Next steps…

Now we’ve got our VPN connected the next step is to create a new VM on Azure, join it to our domain and promote it to be a DC and DNS server.

Once that’s done there are more VMs to create in order to configure single sign-on with Azure AD, which also needs to be configured to use the internet-registered DNS domain.

With the VMs in place we can configure directory synchronisation and then configure ADFS for single sign-on.

Each of these will be covered in later articles in this series.

Connecting Azure Network Site-Site VPN to a SonicWall Appliance

I am with a customer this week, building a test Azure Network+IaaS/Azure AD/Office 365 environment. We struggled to get the site-site VPN connection up for a while and there wasn’t a great deal on the greater internet to help, save for a couple of posts in a discussion forum by the marvellous Marcus Robinson. We finally got it working when we found a tech note from SonicWall, published just a few days ago on the 7th October.

It turns out that we had created a gateway on Azure that used dynamic routing (I had a working lab environment using Server 2012 RRAS done that way). In SonicWall terms, that is not a site-site VPN, and as we had configured the appliance for one of those we were completely adrift. When we deleted the Azure gateway and created a static routing one, everything worked.

For anyone embarking down this road with a SonicWall device I can report that when we followed the instructions everything appeared to connect just fine. The tech note is available on the SonicWall site for all to enjoy.

Miracast with Surface Pro, Windows 8.1 release and Netgear Push2TV

One of the most useful features of Windows 8.1 for me is the native support for Miracast (which is compatible with Intel Widi) for connecting to a wireless projector or display. Being able to wander around with my tablet whilst speaking is really handy.

Sadly, whilst this worked for a little while during the preview, everything stopped working for no apparent reason. Searching the internet hive mind suggested that a Windows Defender update during the preview release had borked it, but nobody could confirm.

When the release media arrived on MSDN I upgraded my Surface Pro. Sadly, no joy with the Miracast feature. However, a new firmware update has been released by Microsoft (see Mr Thurrott for details) and that has fixed the issue. I suspect it’s actually a set of updated display drivers, as a connection could always be made to the device but nothing would show on screen.

The Push2TV is a great little device – it’s tiny (a couple of inches long, about an inch wide and less than half an inch deep) and will draw power from a USB port on the TV or projector. I got it for testing but I’d really like to be able to use it at our events. The universality of Miracast support in Windows 8.1 might just let me do that.

This isn’t our first rodeo, however. Thanks to a recommendation from Messrs May and Fryer I also have a Belkin Screencast. I couldn’t get that working during the preview of Windows 8.1 at all. I will test that when I get some time. I personally prefer the Netgear, but the Belkin isn’t a bad device. It’s bigger and has a separate PSU, but the big difference for me is that the Belkin insists on fiddling with firmware updates via the Widi connection and it’s a bit of a pain, frankly. The Netgear takes a much friendlier manual update over normal wifi.

Take care installing firmware updates on your Surface Pro if it’s bitlocker encrypted

A quick tip, this one. I downloaded the latest firmware update to my Surface Pro this evening. It rebooted and promptly requested my bitlocker unlock code. I don’t keep those to hand – they’re stored in our Active Directory. Fortunately I had another laptop with DirectAccess so I could find the key. Be ready with your recovery key if you too have enabled bitlocker and perform firmware updates.
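
If, like me, your recovery keys live somewhere you might not be able to reach after a reboot, one option is to list the protectors – including the numerical recovery password – from an elevated prompt before applying the update:

manage-bde -protectors -get C: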

Installing .Net 3.5 onto Windows 8 and 8.1 using DISM

This is one of those posts to save me searching the web every time I need to install .Net 3.5 on a Windows 8 (and now 8.1) system. If the automated installation via add/remove features fails then you need the correct DISM command.

For those who have not yet encountered it, DISM allows you to perform actions on Windows image files in a process called Offline Servicing. However, it also allows you to perform the same functions online – on your current windows system.

There is a handy TechNet post on the various ways of installing .Net 3.5 on Windows 8. It’s a useful reference.

For those, like me, who just want the quick steps:

  • Grab your Windows 8 media – USB stick, mounted ISO or DVD.
  • Open an Administrator-level command prompt.
  • Type: Dism /online /enable-feature /featurename:NetFx3 /All /LimitAccess /Source:x:\sources\sxs (where x is the drive letter of your source media). A PowerShell alternative is sketched after this list.
  • Watch the installation progress. Job done.
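
If you prefer PowerShell over plain DISM, the same feature can be enabled with the DISM module cmdlet; the same caveat about the source drive letter applies:

Enable-WindowsOptionalFeature -Online -FeatureName NetFx3 -All -LimitAccess -Source "x:\sources\sxs"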

Editing Windows Server 2012 Group Policies for Direct Access with Windows 8.1 Enterprise Preview

I finally got time to upgrade my Surface Pro to Windows 8.1 Enterprise. One of the things I most want to test is DirectAccess, as I live and die by this on my main laptop. However, despite the computer object for my machine being in the group that the DA group policies are applied to, no DA settings appeared.

TIP: On Windows 8.1, use Get-DAClientExperienceConfiguration in a PowerShell window to check your settings.

It turned out the policy wasn’t being applied because of the default Windows Server 2012 option of creating a WMI filter to only apply the Direct Access group policy to laptops. That filter had a bunch of Windows version statements in it.

To fix:

Open the Group Policy Management tool (on your DC or laptop with remote admin tools installed).

Find the group policy object “DirectAccess Client Settings”

At the bottom of the policy is WMI Filtering. You will see a filter called “DirectAccess – Laptop only WMI Filter”

Click the button to the right to open the filter. You should see something like the panel below. Click Edit Filter
image

Select the second entry. Click Edit.
image

 

The original filter text is:

Select * from Win32_OperatingSystem WHERE (ProductType = 3) OR (Version LIKE '6.2%' AND (OperatingSystemSKU = 4 OR OperatingSystemSKU = 27 OR OperatingSystemSKU = 72 OR OperatingSystemSKU = 84)) OR (Version LIKE '6.1%' AND (OperatingSystemSKU = 4 OR OperatingSystemSKU = 27 OR OperatingSystemSKU = 70 OR OperatingSystemSKU = 1 OR OperatingSystemSKU = 28 OR OperatingSystemSKU = 71))

Windows 8.1 is version 6.3.x, so you need to change the filter to read as follows (the change is the additional Version LIKE '6.3%' clause):

Select * from Win32_OperatingSystem WHERE (ProductType = 3) OR ((Version LIKE '6.2%' OR Version LIKE '6.3%') AND (OperatingSystemSKU = 4 OR OperatingSystemSKU = 27 OR OperatingSystemSKU = 72 OR OperatingSystemSKU = 84)) OR (Version LIKE '6.1%' AND (OperatingSystemSKU = 4 OR OperatingSystemSKU = 27 OR OperatingSystemSKU = 70 OR OperatingSystemSKU = 1 OR OperatingSystemSKU = 28 OR OperatingSystemSKU = 71))
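
To sanity-check the filter against a particular machine, a quick WMI query on the laptop itself shows the values the amended clauses test against:

Get-WmiObject Win32_OperatingSystem | Select-Object Version, OperatingSystemSKU, ProductType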

 

Give AD a few minutes to catch up then run gpupdate /force in a command prompt on your laptop. If you run the PowerShell cmdlet again, you should see a full complement of DA settings. The network panel takes a few minutes to catch up, but you should soon see your DirectAccess connection listed.

Installing Windows 8.1 Enterprise on Surface Pro

Windows 8.1 Enterprise preview was released a week or two ago. Being on holiday prevented me trying it out until I returned to the office. Everyone has different methods for installing Windows 8/8.1 on a Surface Pro. It’s actually pretty simple. Windows 8 can be done in the same way as I list here. However, you will need to download the Surface Pro Driver pack from Microsoft – Windows 8 doesn’t automatically find all the hardware; Windows 8.1 does.

The first thing you need is a set of USB installation media that the Surface can read. Sadly, the Windows 7 ISO utility from Microsoft doesn’t create UEFI-bootable media. Enter, stage left, Rufus – a magnificent tool!

Grab your downloaded ISO file, find a nice fast USB3 drive that’s at least 4Gb in size and start the tool. Use the settings as in the screenshot, below. Select your ISO and hit go.

image

Once you’ve got your media you need to boot your Surface Pro from it. There are different notes on the internet about this. Some tell you to boot the machine whilst holding down the volume up button to enter the BIOS and change the secure boot options.

You don’t need to do this.

Instead, with your Pro switched off, plug in your USB drive. Hold down the volume down button and press the power button. Keep the volume down button held down until you see the Surface start to boot from your USB setup volume. That’s all there is to it.

Once your Surface Pro has started setup you should be on familiar ground. Choose to do a full install, not an upgrade. However, when setup shows you a long list of partitions and asks where to install Windows, pause.

You can scrub the drive and install clean. If you do that, you lose the nice original install of Windows 8 that you can fall back to when you stuff your machine. If you just install to the OS partition, you can use Windows’ really nice Refresh your PC function to restore the original factory image.

If you want to install clean, go ahead. If, like me, you want to be more gentle, select Drive 0 partition 4. On my Surface Pro it was around 110.2Gb. Select the option to format the partition and then choose that for your installation.

After that, setup will chug for a few minutes, your Surface Pro will reboot and presto! A new Windows 8.1 install.

It took a reboot or two for all the devices to populate on my Pro, but at no point did I need to hunt down drivers. It all just works! Lovely.

Next stop, domain join to my domain and then bitlocker the hard drive and check out DirectAccess!