Andy Dawson's Blog

The blog of Andy Dawson

DPM 2012 R2 SQL Backups Failing After November 2014 OS Updates

Following the installation of the November 2014 OS updates and the updated DPM agent (Update Rollup 4) onto a number of servers, I saw failures when attempting to perform a backup of SQL Server 2012 databases hosted on Windows Server 2012. These manifested as ‘replica is inconsistent’ in the DPM console:

DB Replica Inconsistent

On the affected SQL Server, the following error was appearing in the server event log each time a consistency check was triggered:

Event 4354 - Failed to fire the RequestWriterInfo method

The COM+ Event System failed to fire the RequestWriterInfo method on event class {FAF53CC4-BD73-4E36-83F1-2B23F46E513E} for publisher  and subscriber . The display name of the subscription is "{E6057DCA-9DE3-42FC-9A6E-A057156277B4}". The HRESULT was 80042318.

I tried a number of things to resolve the issue:

  1. Modified the protection group. Initially, the protection group wizard did not show the ‘SQL Server’ option and I could not re-add the SQL Server databases. The agent required an update (again) and following this I could re-add the SQL Server databases; however, when synchronised, the databases still reported ‘replica is inconsistent’.
  2. Removed the protection from the affected server, removed the DPM agent, rebooted, then reinstalled and attached the DPM agent and re-added the SQL Server databases to the protection group. This also had no effect, with the databases still reporting as ‘replica is inconsistent’.
  3. Updated the affected server to SQL Server 2012 SP2 CU2, as the cumulative update mentioned VSS-related fixes. Again this had no effect.

I have a Microsoft support call open at the moment in an effort to determine the root cause of the issue, but have found that removing a specific update appears to resolve it. The update in question is KB3000853. KB3000853 is an update rollup for Windows 8, Windows RT and Windows Server 2012 and contains a number of fixes, including KB2996928, a backup-related hotfix that references VSS. My suspicion is that this update is what has caused the issue, but I will update this post when I get a final resolution from Microsoft support.

Update:

I've had this confirmed as an issue with KB3000853 by the Microsoft engineer I've been working with.
The workaround for the moment is either to change the account that runs the SQL VSS Writer service to the domain admin account (I assume any account with local admin permissions would do, but I have not tested this so far), or to remove KB3000853, at which point the backups start functioning correctly again.
There is currently no confirmed release date for an updated version of KB3000853.
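For reference, removing the rollup can also be done from an elevated command prompt using wusa (a reboot is required afterwards):

wusa /uninstall /kb:3000853 /quiet /norestart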

Update 2:

There’s now a revised hotfix to correct the issue experienced above. The steps to correct the issue are:

  1. Install KB3000853.
  2. Install hotfix KB2996928.

This combination corrects the issues on the servers on which I have installed them.

Configuring SharePoint 2013 Apps and Multiple Web Applications on SSL with a Single IP Address

Background

Traditionally, the approach to multiple SSL IIS websites hosted on a server involved multiple sites, each with its own certificate bound to a single IP address/port combination. If you didn’t mind using non-standard SSL ports, you could use a single IP address on the server, but the experience was not necessarily pleasant for the end user. Assuming you wanted to use the standard SSL port (443), the servers in the farm could potentially consume large numbers of IP addresses, especially if using large numbers of websites and large numbers of web front end servers. This approach also carried over to SharePoint where multiple SSL web applications were to be provisioned.

Using a wildcard SSL certificate allows multiple SSL web applications (IIS websites) to be bound to the same IP address/port combination, as long as host headers are used to allow IIS to separate the traffic as it arrives. This works because IIS uses the same certificate to decrypt traffic no matter what URL is being requested (assuming all the URLs conform to the domain named in the wildcard certificate) and the host header then allows IIS to route the traffic appropriately.

With the introduction of SharePoint 2013 apps however, there is a requirement for the use of at least two different SSL certificates on the same server; one (in the case of a wildcard, or more if using individual certificates) for the content web applications and a second for the SharePoint app domain that is to be used (the certificate for the apps domain must be a wildcard certificate). The current recommended configuration is to use a separate apps domain (for example, if the main wildcard certificate references *.domain.com, the apps domain should be something along the lines of *.domain-apps.com rather than a subdomain of the main domain, as a subdomain could lead to cookie attacks on other non-SharePoint web-based applications in the same domain space).

For some organisations, the proliferation of IP addresses for the traditional approach to SSL is not an issue. For some organisations however, either the number of IP addresses that they have available is limited, or they wish to reduce the administration overhead involved in the use of multiple IP addresses on servers hosting SharePoint. Other scenarios also encourage the use of a single IP address on a server, for example the use of Hyper-V replication, where the system can handle the reassignment of the primary IP address of the server on failover, but additional IP addresses require that some automation be put in place to configure them upon failover.

Note: The following is not a step-by-step set of instructions for configuring apps; there are a number of good blog posts (e.g. http://sharepointchick.com/archive/2012/07/29/setting-up-your-app-domain-for-sharepoint-2013.aspx) and of course the TechNet documentation at http://technet.microsoft.com/en-us/library/fp161236(v=office.15).aspx to lead you through the required steps.

SharePoint Apps Requirements

To configure Apps for SharePoint 2013 using a separate domain (rather than a subdomain) for apps, the following requirements must be met:

  • An App domain needs to be determined. If our main domain is ‘contoso.com’, our apps domain could be ‘contosoapps.com’ for example. If SharePoint is available outside the corporate network and apps will be used, the external domain will need to be purchased.
  • An Apps domain DNS zone and wildcard CNAME entry.
  • An Apps domain wildcard certificate.
  • An App Management Service Application and a Subscription Settings Service Application created, configured and running. Note that both of these Service Applications should be in the same proxy group.
  • App settings should be configured in SharePoint.
  • A ‘Listener’ web application with no host header to receive apps traffic.

It is also assumed that the following are already in place:

  • A functional SharePoint 2013 farm.
  • At least one functional content web application configured to use SSL and host header(s).

Infrastructure Configuration

Each App instance is self-contained, with a unique URL, in order to enforce isolation and prevent cross-domain JavaScript calls through the same-origin policy in web browsers. The format of the App URL is:

App URL

The App domain to be used should be determined based on domains already in use.

Instructions for creating a new DNS zone, the wildcard DNS CNAME entry and a wildcard certificate can be found at http://technet.microsoft.com/en-us/library/fp161236(v=office.15).aspx. As we’re planning to use a single IP address for all web applications and Apps, point the CNAME wildcard entry at either the load balanced IP address (VIP) in use for the content web applications, or the IP address of the SharePoint server (if you’ve only got one).
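As an illustration, on a Windows Server 2012 DNS server the new zone and wildcard CNAME entry could also be created with the DnsServer PowerShell module (the zone name and target host below are examples only; substitute your own):

Add-DnsServerPrimaryZone -Name "contosoapps.com" -ReplicationScope "Forest"

Add-DnsServerResourceRecordCName -ZoneName "contosoapps.com" -Name "*" -HostNameAlias "sharepoint.contoso.com"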

Farm Configuration

To be able to use Apps within SharePoint 2013, both the App Management Service Application and the Subscription Settings Service Application need to be created, configured and running and the App prefix and URL need to be configured. Instructions for getting these two Service Applications configured and running are again available at http://technet.microsoft.com/en-us/library/fp161236(v=office.15).aspx.
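For convenience, here is a condensed PowerShell sketch of those TechNet steps (the account, pool, database, domain and prefix names are all examples; adjust them to suit your farm):

$account = Get-SPManagedAccount "contoso\sp_services"

$pool = New-SPServiceApplicationPool -Name "SettingsServiceAppPool" -Account $account

$sub = New-SPSubscriptionSettingsServiceApplication -ApplicationPool $pool -Name "Subscription Settings Service Application" -DatabaseName "SubscriptionSettingsDB"

New-SPSubscriptionSettingsServiceApplicationProxy -ServiceApplication $sub

$appMgmt = New-SPAppManagementServiceApplication -ApplicationPool $pool -Name "App Management Service Application" -DatabaseName "AppManagementDB"

New-SPAppManagementServiceApplicationProxy -ServiceApplication $appMgmt

Set-SPAppDomain "contosoapps.com"

Set-SPAppSiteSubscriptionName -Name "app" -Confirm:$false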

In addition to the Service Application configuration, a ‘Listener’ web application with no host header is required to allow traffic for SharePoint Apps to be routed correctly. Without this ‘Listener’ web application, and assuming all other web applications in the farm are configured to use host headers, we have the following scenario:

SP 2013 Farm with Apps - No Listener Web App

In the above diagram, a DNS lookup for the SharePoint App URL is performed; this points to the NLB address for the content web applications, so traffic is directed to the farm. The host header of the request does not, however, match any of the web applications configured on the farm, so SharePoint doesn’t know how to deal with the request.

We could try configuring SharePoint and IIS so that SharePoint App requests are sent to one of the existing web applications; however, when using SSL we cannot bind more than one certificate to the same IIS web site, and we cannot have an SSL certificate containing multiple domain wildcards. With non-SSL web applications, SharePoint could, in theory, do some clever routing by using the App Management Service Application to work out which web application hosts the SharePoint App web, if one of the existing web applications were configured with no host header (I see another set of experiments on the horizon…).

To get around this issue with SSL traffic, a ‘Listener’ web application needs to be created. This web application should have no host header and therefore acts as a catchall for traffic not matching any of the other host headers configured. Note that if you already have a web application without a host header in SharePoint, you’ll have to recreate it with a host header before SharePoint will allow you to create another one. This results in the following scenario:

SP 2013 Farm with Apps - Listener App Config

A DNS lookup for the SharePoint App URL is performed; this points to the NLB address for the content web applications, so traffic is directed to the farm. This time, however, there is a ‘Listener’ web application that receives all traffic not bound for the main content web applications, and internally the SharePoint HTTP module knows where to direct this traffic by using the App Management Service Application to work out where the SharePoint App web is located.

Note: The account used for the application pool for the ‘Listener’ web application must have rights to all the content databases for all of the web applications to which SharePoint Apps traffic will be routed. You could use the same account/application pool for all content web applications, but I’d recommend granting the rights per web application as required using the ‘SPWebApplication.GrantAccessToProcessIdentity’ method instead, as sketched below.
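A minimal sketch of that grant from the SharePoint Management Shell (the URL and account below are placeholders for your own web application and ‘Listener’ application pool identity):

$webApp = Get-SPWebApplication "https://intranet.contoso.com"

$webApp.GrantAccessToProcessIdentity("contoso\svc_listenerpool")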

As we need to use a different certificate on this ‘Listener’ web application, it used to be the case that it would have to be on its own IP address; however, with Windows Server 2012 and 2012 R2 a new feature, Server Name Indication (SNI), was introduced that allows us to get around this limitation. To configure the above scenario using a single server IP address, the following steps need to be completed (note that in my scenario I’ve deleted the default IIS web site; if it is only bound to port 80, it should not need to be deleted):

  1. Open IIS Manager on the first of the WFE servers.
  2. Select the first of the content web applications and click ‘Bindings…’ in the actions panel at the right of the screen.
  3. Select the HTTPS binding and click ‘Edit…’
  4. Ensure that the ‘Host name’ field is filled in with the host header and that the ‘Require Server Name Indication’ checkbox is selected.
  5. Ensure that the correct SSL certificate for the URL in the ‘Host name’ field is selected.
  6. Ensure that ‘All Unassigned’ is selected in the ‘IP address’ field.
  7. Click OK to close the binding dialog and close the site bindings dialog.
  8. Repeat the above steps for all of the other content web applications with the exception of the ‘Listener’ web app.
  9. Ensure that the bindings for the ‘Listener’ web application do not have a host header. You will not be able to save the binding details for this web application if ‘Require Server Name Indication’ is selected, so leave it unselected for this web application. Select the Apps domain certificate for this web application.
  10. Start any required IIS SharePoint web applications that are stopped.
  11. Repeat the above steps for all of the other WFE servers.
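If you would rather script the bindings than click through IIS Manager on each WFE, something along the following lines should produce an equivalent result (the site names, host header and certificate thumbprints are examples, and this is a sketch rather than a tested recipe, so verify the resulting bindings in IIS Manager afterwards). SslFlags 1 corresponds to the ‘Require Server Name Indication’ checkbox:

Import-Module WebAdministration

New-WebBinding -Name "SharePoint - intranet443" -Protocol https -Port 443 -HostHeader "intranet.contoso.com" -SslFlags 1

(Get-WebBinding -Name "SharePoint - intranet443" -Protocol https).AddSslCertificate("<content certificate thumbprint>", "My")

New-WebBinding -Name "SharePoint - listener443" -Protocol https -Port 443 -SslFlags 0

(Get-WebBinding -Name "SharePoint - listener443" -Protocol https).AddSslCertificate("<apps domain certificate thumbprint>", "My")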

The result of the steps above is that all of the content web applications, with the exception of the ‘Listener’ web application, should have a host header, be listening on port 443 on the ‘all unassigned’ IP address, have ‘Require SNI’ selected and have an appropriate certificate bound. The ‘Listener’ web application should have neither a host header nor ‘Require SNI’ selected, should be listening on port 443 on the ‘all unassigned’ IP address, and should have the Apps domain certificate bound to it. This configuration allows:

  • Two wildcard certificates to be used, one for all of the content web applications, the other for the Apps domain bound to the ‘Listener’ web application with all applications listening for traffic on the same IP address/port combination, or
  • Multiple certificates to be bound, one per content web application, plus the Apps domain wildcard to be bound to the ‘Listener’ web application with all applications listening for traffic on the same IP address/port combination, or
  • Some combination of the above.

There are some limitations to using SNI, namely that a few browsers don’t support the feature. At the time of writing, IE on Windows XP (but then, you’re not using Windows XP, are you?) and the Android 2.x default browser don’t seem to support it, nor do a few more esoteric browsers and libraries. All of the up-to-date browsers seem to work happily with it.

Windows Home Server 2011 Backup of UEFI/GPT Windows 8.1

Since upgrading to Windows 8.1 at home, I’ve had issues with backing up the computer using my Home Server (not that I helped by introducing a GPT disk and a UEFI rig at the same time…). The symptoms were that the client backup process appeared stuck at 1% progress for a long time before eventually failing.

I finally got a bit of time to look at the machines in question over the weekend; here are the issues that appeared to be causing problems and for which I needed to find solutions:

  • The PC is a UEFI machine.
  • The PC uses a GPT hard disk.
  • A VSS error was appearing in the event log on the PC being backed up.
  • A CAPI2 error was appearing in the event log on the PC being backed up.

The first two issues were dealt with quickly by a hotfix for Home Server 2011: http://support.microsoft.com/kb/2781272. Note that the same issue also affects Windows Storage Server 2008 R2 Essentials and Windows Small Business Server 2011 Essentials. More information for these platforms can be found at http://support.microsoft.com/kb/2781278

The VSS error manifests as the event 8194 appearing in the event log of the PC that the backup attempt is run on:

VSS Error 8194

Volume Shadow Copy Service error: Unexpected error querying for the IVssWriterCallback interface. hr = 0x80070005, Access is denied.
. This is often caused by incorrect security settings in either the writer or requestor process.

Examination of the binary data for event 8194 indicates that ‘NT AUTHORITY\NETWORK SERVICE’ is the account receiving the access denied error:

VSS Error Binary Data

Event 8194 is caused by the inability of one or more VSS system writers to communicate with the backup application VSS requesting process via the COM calls exposed in the IVssWriterCallback interface. The issue is not caused by a functional error in the backup application, but rather is a security issue caused by the selected VSS writers running as a service under the ‘Network Service’ (or ‘Local Service’) account, not the Local System or Administrator account. By default, in order for a Windows service to perform a COM activation it must be running as Local System or as a member of the Administrators group.

There are two ways to fix this issue: either change the account under which the erroring VSS writers run from Network Service to Local System (at which point the service will be running with higher privileges than originally designed), or add the Network Service account to the list of default COM activation permissions, allowing this account to activate the IVssWriterCallback interface. The latter option is the preferred one and can be performed by completing the following steps:

  1. Run dcomcnfg to open the Component Services dialog.
  2. Expand Component Services, then Computers and then right-click on My Computer and select Properties:
    Component Services
  3. Select the COM Security tab and click the Edit Default… button in the Access Permissions area at the top of the dialog.
  4. Click Add and enter Network Service as the account to be added.
  5. Click OK and ensure that only the Local Access checkbox is selected.
  6. Click OK and ensure that only the Local Access checkbox is selected, then click OK to close the Access Permission dialog and OK again to close the My Computer Properties dialog.
  7. Close the Component Services dialog and restart the computer to apply the changes. Event 8194 should no longer appear in the event log for the Home Server backup.

The CAPI2 error manifests as the event 513 appearing in the event log of the PC that the backup attempt is run on:

CAPI2 Error 513

Cryptographic Services failed while processing the OnIdentity() call in the System Writer Object.
Details: AddLegacyDriverFiles: Unable to back up image of binary Microsoft Link-Layer Discovery Protocol.
System Error:
Access is denied.
.

The Microsoft Link-Layer Discovery Protocol binary is located at C:\Windows\System32\drivers\mslldp.sys. During the backup process, the VSS process running under the Network Service account calls cryptcatsvc!CSystemWriter::AddLegacyDriverFiles(), which enumerates all the driver records in the Service Control Manager database and tries to open each one of them. The function fails on the MSLLDP record with an ‘Access Denied’ error.

The mslldp.sys configuration registry key is HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\MsLldp and the binary security descriptor for the record is located at HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\MsLldp\Security.

Examining the security descriptor for mslldp using AccessChk (part of the SysInternals suite, available at http://technet.microsoft.com/en-us/sysinternals/bb664922) gives the following result (note: your security descriptor may differ from the permissions below):

C:\>accesschk.exe -c mslldp

Accesschk v5.2 - Reports effective permissions for securable objects
Copyright (C) 2006-2014 Mark Russinovich
Sysinternals - www.sysinternals.com

mslldp
  RW NT AUTHORITY\SYSTEM
  RW BUILTIN\Administrators
  RW S-1-5-32-549
  R  NT SERVICE\NlaSvc

Checking the access rights of another driver in the same location gives the following result:

C:\>accesschk.exe -c mspclock

Accesschk v5.2 - Reports effective permissions for securable objects
Copyright (C) 2006-2014 Mark Russinovich
Sysinternals - www.sysinternals.com

mspclock
  RW NT AUTHORITY\SYSTEM
  RW BUILTIN\Administrators
  R  NT AUTHORITY\INTERACTIVE
  R  NT AUTHORITY\SERVICE

In the case of mslldp.sys, there is no entry for ‘NT AUTHORITY\SERVICE’, therefore no service account will have access to the mslldp driver, hence the error.

To correct this issue, complete the following steps:

  1. From an elevated command prompt, run
    sc sdshow mslldp
    You should receive the following output, or something similar:
    D:(D;;CCDCLCSWRPWPDTLOCRSDRCWDWO;;;BG)(A;;CCDCLCSWRPWPDTLOCRSDRCWDWO;;;SY)(A;;CCDCLCSWRPDTLOCRSDRCWDWO;;;BA)(A;;CCDCLCSWRPWPDTLOCRSDRCWDWO;;;SO)(A;;LCRPWP;;;S-1-5-80-3141615172-2057878085-1754447212-2405740020-3916490453)S:(AU;FA;CCDCLCSWRPWPDTLOCRSDRCWDWO;;;WD)
    Note: Details on Security Descriptor Definition Language can be found at http://msdn.microsoft.com/en-us/library/windows/desktop/aa379567(v=vs.85).aspx
  2. Add the ‘NT AUTHORITY\SERVICE’ entry immediately before the S:(AU;FA;CCDCLCSWRPWPDTLOCRSDRCWDWO;;;WD) entry and use the result with the sdset option; for example, using the output from the sdshow option above, this would be:
    sc sdset MSLLDP D:(D;;CCDCLCSWRPWPDTLOCRSDRCWDWO;;;BG)(A;;CCDCLCSWRPWPDTLOCRSDRCWDWO;;;SY)(A;;CCDCLCSWRPDTLOCRSDRCWDWO;;;BA)(A;;CCDCLCSWRPWPDTLOCRSDRCWDWO;;;SO)(A;;LCRPWP;;;S-1-5-80-3141615172-2057878085-1754447212-2405740020-3916490453)(A;;CCLCSWLOCRRC;;;SU)S:(AU;FA;CCDCLCSWRPWPDTLOCRSDRCWDWO;;;WD)
    Note: The above should all be on a single line when entering/pasting it; do not include line breaks in the command. It’s also important to use the output you receive from the command rather than that which I got as yours may be different.
  3. Check the access permissions again with:
    accesschk.exe -c mslldp
    You should now see a list of permissions that includes ‘NT AUTHORITY\SERVICE’:
    C:\>accesschk.exe -c mslldp

    Accesschk v5.2 - Reports effective permissions for securable objects
    Copyright (C) 2006-2014 Mark Russinovich
    Sysinternals - www.sysinternals.com

    mslldp
      RW NT AUTHORITY\SYSTEM
      RW BUILTIN\Administrators
      RW S-1-5-32-549
      R  NT SERVICE\NlaSvc
      R  NT AUTHORITY\SERVICE

  4. Now that the ‘NT AUTHORITY\SERVICE’ permission has been added, Network Service should be able to access the mslldp.sys driver file.
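If you would rather script the descriptor edit than paste it by hand, a short PowerShell sketch along these lines should work; it inserts the service ACE just before the SACL marker. Note the use of sc.exe, as ‘sc’ alone is a PowerShell alias for Set-Content, and as always check the output before applying it:

$sddl = (sc.exe sdshow mslldp | Where-Object { $_ -match '^D:' }).Trim()

$patched = $sddl -replace 'S:\(', '(A;;CCLCSWLOCRRC;;;SU)S:('

sc.exe sdset mslldp $patched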

Following the above fixes, my computer is now being successfully backed up using Home Server 2011.

Publishing ADFS using Web Application Proxy behind TMG

During a recent upgrade of ADFS from 2.0 to 3.0, we saw an interesting issue publishing the ADFS 3.0 proxy through TMG 2010.

The ADFS 2.0 proxy was published via TMG using a non-preauthenticating web publishing rule, which had worked happily since ADFS was first used. When ADFS 3.0 was installed and configured, the firewall rule was modified to change the IP address used to direct traffic, pointing it at the ADFS 3.0 proxy instead of the old ADFS 2.0 proxy. When tested, this generated an error in the browser of any user attempting to access the ADFS proxy to sign in to their organisation account:

Error Code 64: Host not available

“The page cannot be displayed. Error Code 64: Host not available”

In addition, the test of the firewall rule failed with the error “Connectivity error. Error details: 64 – The specified network name is no longer available.”

This obviously meant that users could not sign in to access services authenticated using ADFS.

The solution is to use a non-web server publishing rule on TMG to simply forward all traffic to the ADFS proxy/Web Application Proxy; however, this requires either that a dedicated external IP address is available on TMG, or that all applications are published using the Web Application Proxy instead of TMG.

Workflow Manager 1.0 doesn't successfully install on Windows Server 2012 R2 unless VC Redist 11.0 or 12.0 already present

There seems to be an issue installing Workflow Manager 1.0 Refresh on Windows Server 2012 R2. Upon completion, when clicking through to configure Workflow Manager, you are informed that neither Service Bus 1.0 nor (obviously) CU1 for Service Bus 1.0 has been installed.

Digging into the event log on the machine in question shows that VC Redist 11.0 or greater is required, and that this is not installed automatically by the WebPI.

On Windows Server 2012, VC Redist 12.0 is installed automatically by WebPI and the installation of Workflow Manager 1.0 Refresh completes successfully.

The solution, obviously, is to install VC Redist 11.0 or 12.0 before attempting to install Workflow Manager 1.0 Refresh on Windows Server 2012 R2.
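If you are scripting the build of the server, the redistributable can be installed silently beforehand; the file name below is the VC Redist 11.0 x64 package, so adjust the path and version as appropriate:

vcredist_x64.exe /install /quiet /norestart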

Upgrading Data Protection Manager from 2012 SP1 to 2012 R2

Our recent upgrade of Data Protection Manager (DPM) from 2012 SP1 to 2012 R2 generated one issue which wasn’t mentioned in the documentation on TechNet.

The documentation on TechNet is good and the procedure for upgrading DPM was very quick and easy. Pay attention to the upgrade sequencing documentation, as not following it can result in component failures for which no rollback procedure exists.

Once the upgrade is complete, the agents on each protected client need to be upgraded and a consistency check run against all protected sources. Depending upon the volume of data protected, this may take an extended period of time. Following this procedure I saw two errors, one on each of our DPM servers.

In each case, the DPM database that was being protected on the other DPM server would not show as consistent. Running a consistency check would change the status icon to green for a few seconds before the consistency check would again fail.

The error was occurring because during the DPM upgrade procedure, the DPM database names had been changed. Originally the DPM database names were of the form

DPMDB_ServerName

Following the upgrade, the DPM database names were of the form

DPMDB_ServerNameGUID

In each case it was a simple task to modify the protection group to include the new database name and once a backup had been taken, remove the protection on the original database name.

Edit: I've come across another issue that occurs during the upgrade: the notification settings were reset, removing the e-mail addresses that I had entered. This meant that we were no longer receiving e-mail notifications for DPM issues. Again, this was quick and easy to resolve, but it would have been useful for the upgrade documentation to flag it.

Importing Hyper-V Virtual Machines Used With SCVMM into Hyper-V Manager

I recently ran into an issue importing some virtual machines that had been used with SCVMM into Hyper-V. I needed to export the virtual machines for use with a development environment while still leaving the originals where they were in SCVMM. The procedure I was using was:

  1. Shut down the virtual machines to be exported
  2. Export the virtual machines using Hyper-V on the virtual host
  3. Copy the exported virtual machines to another host not connected to SCVMM
  4. Attempt to import the virtual machines (copy, create a new ID)

This failed with the following error message:

Hyper-V did not find virtual machines to import from location c:\virtualisation\server212\

This is because SCVMM adds a security section to the virtual machine XML file, which stops the Hyper-V import process. An example from one of the virtual machines I was attempting to import is:

<security>
  <sd type="string">O:S-1-0-0D:(OA;;CC;5cf72d6e-61d5-4fbe-a05c-1e3c28d742fa;;S-1-5-21-583907252-842925246-1060284298-16695)(OA;;CC;5cf72d6e-61d5-4fbe-a05c-1e3c28d742fa;;S-1-5-21-583907252-842925246-1060284298-16726)(OA;;CC;5cf72d6e-61d5-4fbe-a05c-1e3c28d742fa;;S-1-5-21-583907252-842925246-1060284298-18157)(OA;;CC;5cf72d6e-61d5-4fbe-a05c-1e3c28d742fa;;S-1-5-21-583907252-842925246-1060284298-18645)(OA;;CC;5cf72d6e-61d5-4fbe-a05c-1e3c28d742fa;;DA)(OA;;CC;5cf72d6e-61d5-4fbe-a05c-1e3c28d742fa;;SY)(OA;;CC;5cf72d6e-61d5-4fbe-a05c-1e3c28d742fa;;S-1-5-21-583907252-842925246-1060284298-500)(OA;;CC;5cf72d6e-61d5-4fbe-a05c-1e3c28d742fa;;S-1-5-21-583907252-842925246-1060284298-1703)(OA;;CC;5cf72d6e-61d5-4fbe-a05c-1e3c28d742fa;;S-1-5-21-583907252-842925246-1060284298-1110)(OA;;CC;5cf72d6e-61d5-4fbe-a05c-1e3c28d742fa;;S-1-5-21-583907252-842925246-1060284298-15767)(OA;;CC;5cf72d6e-61d5-4fbe-a05c-1e3c28d742fa;;S-1-5-21-583907252-842925246-1060284298-1694)</sd>
</security>

Manually removing the security section from the virtual machine XML file allowed Hyper-V Manager to complete the import process successfully.
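If there are several virtual machines to fix up, the edit can be scripted; here is a rough PowerShell sketch (the export path is an example, and I would suggest backing up the XML files before modifying them):

$vmConfigs = Get-ChildItem 'C:\virtualisation\server212\Virtual Machines' -Filter *.xml

foreach ($file in $vmConfigs) {
    [xml]$config = Get-Content $file.FullName
    $security = $config.SelectSingleNode('//security')
    if ($security -ne $null) {
        # Remove the SCVMM-added security section and save the file in place
        $security.ParentNode.RemoveChild($security) | Out-Null
        $config.Save($file.FullName)
    }
}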

Cross Flashing a Dell PERC H200 BIOS to Support Larger SATA Disks

The latest firmware for the Dell PERC H200 still doesn’t support SATA disks larger than 2.2TB; in fact, the card cannot even detect SATA drives larger than this. As the PERC H200 is essentially a rebadged LSI 9211-8i card, however, the firmware and BIOS from that card can be flashed onto the PERC H200 to provide support for larger SATA hard drives.

The procedure is as follows:

  1. Download the latest 9211-8i firmware package from LSI. I got my copy from http://www.lsi.com/products/storagecomponents/Pages/LSISAS9211-8i.aspx (click on the ‘Support & Downloads’ tab, then expand the firmware section). I downloaded the ‘9211_8i_Package_P16_IR_IT_Firmware_BIOS_for_MSDOS_Windows’ package and extracted the contents.
  2. Copy the required files from the extracted archive onto bootable media. I created a Windows 98SE boot USB stick and copied the files onto it. The required files are: 
    sas2flsh.exe – this is the flash application. Copy the version from the sas2flash_dos_rel folder if using a DOS boot disk.
    2118ir.bin – this is the firmware for the 9211-8i.
    mptsas2.rom – this is the card’s BIOS.
  3. Boot the server containing the PERC H200 from the bootable media. I’d recommend disconnecting any drives from the RAID card before flashing the firmware and BIOS.
  4. Once booted, change to the folder containing the files copied to the media, above, and issue the following commands:
    sas2flsh -o -f 2118ir.bin
    sas2flsh -o -b mptsas2.rom
    sas2flsh -o -reset
    Each command should report success before you move onto the next one. If any indicate a failure, double check that you copied the correct files onto the bootable media.
  5. Reboot the server and test the card.

The above procedure allowed the H200 I was using to detect and use two 3TB disks.

SharePoint 2013 on Windows Server 2012 R2 Preview and SQL Server 2014 CTP1

Following the recent release of Windows Server 2012 R2 Preview and SQL Server 2014 CTP1, I thought it would be an interesting experiment to see if I could get SharePoint 2013 running on the combination of these two previews. Most of the issues I encountered were around the product installers for SharePoint and the SharePoint pre-requisites:

  1. The SharePoint 2013 prereqinstaller.exe installer would not run and gave the error “This tool does not support the current operating system”.
    Pre-req_error
  2. The SharePoint binary installer would insist that not all of the server features required by SharePoint had been installed.
  3. The SharePoint binary installer failed when installing the oserver.msi file.
    Binary_setup_error
    Followed by:
    Setup_bootstrapper_stopped
    Examination of the setup logs (located at C:\Users\<username>\AppData\Temp\SharePoint Server Setup(<date-time>).log) showed the following error:
    “Error: Failed to install product: C:\MediaLocation\SharePoint2013InstallationMedia\global\oserver.MSI ErrorCode: 1603(0x643).”

SQL Server 2014 CTP1 seemed to install and work fine, although I did experience a couple of crashes during the installation procedure.

The following are workarounds for the issues seen above:

Preparing the server manually instead of using the prereqinstaller.exe involves adding the required server features and then manually installing the SharePoint pre-req files.

To add the required server features, use the following PowerShell commands in an elevated PowerShell prompt:

Import-Module ServerManager

Add-WindowsFeature Net-Framework-Features,Web-Server,Web-WebServer,Web-Common-Http,Web-Static-Content,Web-Default-Doc,Web-Dir-Browsing,Web-Http-Errors,Web-App-Dev,Web-Asp-Net,Web-Net-Ext,Web-ISAPI-Ext,Web-ISAPI-Filter,Web-Health,Web-Http-Logging,Web-Log-Libraries,Web-Request-Monitor,Web-Http-Tracing,Web-Security,Web-Basic-Auth,Web-Windows-Auth,Web-Filtering,Web-Digest-Auth,Web-Performance,Web-Stat-Compression,Web-Dyn-Compression,Web-Mgmt-Tools,Web-Mgmt-Console,Web-Mgmt-Compat,Web-Metabase,Application-Server,AS-Web-Support,AS-TCP-Port-Sharing,AS-WAS-Support, AS-HTTP-Activation,AS-TCP-Activation,AS-Named-Pipes,AS-Net-Framework,WAS,WAS-Process-Model,WAS-NET-Environment,WAS-Config-APIs,Web-Lgcy-Scripting,Windows-Identity-Foundation,Server-Media-Foundation,Xps-Viewer

If the server is not connected to the internet, the following PowerShell commands can be used (assuming that the installation media is available on D:\):

Import-Module ServerManager

Add-WindowsFeature Net-Framework-Features,Web-Server,Web-WebServer,Web-Common-Http,Web-Static-Content,Web-Default-Doc,Web-Dir-Browsing,Web-Http-Errors,Web-App-Dev,Web-Asp-Net,Web-Net-Ext,Web-ISAPI-Ext,Web-ISAPI-Filter,Web-Health,Web-Http-Logging,Web-Log-Libraries,Web-Request-Monitor,Web-Http-Tracing,Web-Security,Web-Basic-Auth,Web-Windows-Auth,Web-Filtering,Web-Digest-Auth,Web-Performance,Web-Stat-Compression,Web-Dyn-Compression,Web-Mgmt-Tools,Web-Mgmt-Console,Web-Mgmt-Compat,Web-Metabase,Application-Server,AS-Web-Support,AS-TCP-Port-Sharing,AS-WAS-Support, AS-HTTP-Activation,AS-TCP-Activation,AS-Named-Pipes,AS-Net-Framework,WAS,WAS-Process-Model,WAS-NET-Environment,WAS-Config-APIs,Web-Lgcy-Scripting,Windows-Identity-Foundation,Server-Media-Foundation,Xps-Viewer –Source D:\sources\sxs

Scripts are available to download the SharePoint pre-reqs; the one I used is located at http://gallery.technet.microsoft.com/office/Script-to-SharePoint-2013-702e07df

I chose to install each of the pre-reqs manually; however, the Windows Server App Fabric installer should be run from the command line rather than the GUI, as I couldn’t successfully install it with the required options using the GUI. To install Windows Server App Fabric, open an admin PowerShell console and use the following commands (assuming the installation file is located at c:\downloads):

$file = "c:\downloads\WindowsServerAppFabricSetup_x64.exe"

& $file /i CacheClient","CachingService","CacheAdmin /gac

Note the locations of the " marks in the second command line; they should be around the commas.

Once this is installed, the Windows Server AppFabric update (AppFabric1.1-RTM-KB2671763-x64-ENU.exe) can also be installed. For reference, the other pre-reqs that I manually installed were:

  • MicrosoftIdentityExtensions-64.msi
  • setup_msipc_x64.msi
  • sqlncli.msi
  • Synchronization.msi
  • WcfDataServices.msi

In each of the above cases, I accepted the default installation options.

Following the installation of the SharePoint 2013 pre-reqs, the SharePoint 2013 binary installer insisted that not all of the required server features were installed. Shutting the server down and restarting it (sometimes twice) seemed to solve this issue.

To solve the issue experienced during the binary installation of SharePoint 2013, a modification of the oserver.msi file is required. This can be achieved using ‘Orca’. Orca is available as part of the Windows Software Development Kit (SDK) for Windows 8, which can be downloaded from http://msdn.microsoft.com/en-us/library/windows/desktop/hh852363.aspx

Once the SDK is installed, start Orca, then use it to open the oserver.msi file located in the ‘global’ folder of the SharePoint 2013 installation media (taking a backup of the original file first, of course…), then navigate to the ‘InstallExecuteSequence’ table and drop the ‘ArpWrite’ line:

Orca_oserver_msi_modification

Save the file over the original, then start the binary installation in the usual way.

Here’s a shot of SharePoint 2013 working on Windows Server 2012 R2 Preview with SQL Server 2014 CTP1:

SP2013_on_2012R2Preview_SQL2014CTP1

Please note, all of the above is done entirely at your own risk and is for testing purposes only. Don’t even think of using it in production…

Changing from Incandescent to LED Lighting

Posts on this blog are usually IT related so I thought it might make a change to write about something that, while still technology related, is a little different…

As some of my colleagues will attest to, I firmly believe in a well insulated house and buying products for the home that are as efficient as possible. My Home Server, for example, runs on a low power CPU in an effort to reduce the day-to-day running costs.

Many of our lights at home have already been converted to use CFL bulbs, replacing the original incandescent bulbs. I’m not a great fan of CFL bulbs however for a couple of reasons:

  • The start-up time of some bulbs still seems to be long (not that that is necessarily an issue on a winter morning when I don’t want to be immediately blinded when switching the lights on)
  • The bulbs contain mercury (albeit in small amounts); if you break a bulb, both the bulb and the items you use to clean up should be treated as hazardous waste (see http://archive.defra.gov.uk/environment/business/products/roadmaps/lightbulbs.htm for details)

On the plus side however, exchanging the incandescent bulbs for CFL ones has significantly reduced the number of bulbs I have to change. Our living room light, for example, used to need a bulb changing on average once per month as opposed to about once every 2-3 years for CFL bulbs. In addition, my (albeit approximate) calculations suggest that we save about £5-7 per bulb per year in energy costs, so in general the CFL bulbs pay for themselves within a few months.

Many of our new lights, however, use GU10 incandescent bulbs, and while GU10 CFL bulbs are available, they are typically longer than the original incandescent bulbs and so will not fit in many of the housings (e.g. in spotlights etc.).

LED bulbs are however available and with recent improvements to LED technology, can now match the light output of the incandescent bulbs they replace. Even better, many of the GU10 LED replacements are exactly the same size as the original bulbs and are therefore direct replacements. I like LED bulbs for the following reasons:

  • Fast start – LED lights are at full output almost immediately. In fact they beat an incandescent bulb, one of the reasons that they are used in brake lights on many cars these days
  • ‘Warm white’ bulbs are now available. White LEDs always used to be ‘cold white’, i.e. showed a significant blue cast, making them a very harsh light. Great for some specific applications, but not so nice for everyday use. Dimmable bulbs are also available.

The bulbs I favour are as follows:

LED_GU10

These have 20 SMD LEDs and produce a light output equivalent to a 50W incandescent. Hopefully, if individual LEDs within the bulb fail, the failure will be gradual rather than the bulb suddenly stopping working completely, giving us time to source replacements.

I have, however, found an issue when replacing a set of incandescent GU10 bulbs with their LED equivalents. We replaced a set of 4 bulbs in a kitchen fitting with LED bulbs and found that when the lights were switched off, the bulbs still glowed gently. With a single incandescent bulb and 3 LED bulbs in the fitting, the issue didn’t occur. Following a little research, it became obvious that even with a properly earthed system, the capacitively coupled power from live to switched live is enough to cause an LED bulb to glow gently. With an incandescent bulb in the fitting, the resistance of this bulb was low enough to effectively absorb the leakage current and stop the LEDs from glowing.

There is a solution to the issue, which is to fit a resistor-capacitor combination (a contact suppressant; not designed for this purpose, but it works perfectly well) across the terminals of the light fitting. This can be a DIY solution, but I have found a product recommended for this purpose, which combines a 0.1uF capacitor and a 100 ohm resistor in a package suitable for use with 240V AC:

Capacitor_Resistor_Package

The link goes to a Farnell page, but I am sure that these are also available from other quality electronics resellers. There is also a 0.22uF version for longer circuits, should that be required. I’d strongly recommend using a bit of heat-shrink tubing on each lead of the package to ensure that the supply cannot come into contact with the light casing.
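As a rough sanity check on why this works: at 50 Hz, a 0.1uF capacitor has an impedance of 1/(2 x pi x 50 x 0.0000001), roughly 32 kilohms, so the suppressor can sink a few milliamps at 240V. The live to switched-live coupling capacitance of the wiring is only of the order of tens to hundreds of picofarads (an impedance of tens of megohms), so almost all of the leakage current flows through the suppressor rather than the LED driver, and the voltage across the bulb drops well below its turn-on threshold.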

Adding one of these devices to our kitchen light has completely solved the issue of the bulbs glowing even when switched off. The LED bulbs are saving something like 90% of the energy (and therefore the running costs) that would be consumed by incandescent bulbs and hopefully they will have a very long life span.