BM-Bloggers

The blogs of Black Marble staff

Test-SPContentDatabase False Positive

I was recently performing a SharePoint 2013 to 2016 farm upgrade and noticed an interesting issue when performing tests on content databases to be migrated to the new system.

As part of the migration of a content database, it’s usual to perform a ‘Test-SPContentDatabase’ operation against each database before attaching it to the web application. On the farm that I was migrating, I got mixed responses to the operation, with some databases passing the check successfully and others giving the following error:

PS C:\> Test-SPContentDatabase SharePoint_Content_Share_Site1

Category        : Configuration
Error           : False
UpgradeBlocking : False
Message         : The [Share WebSite] web application is configured with
                  claims authentication mode however the content database you
                  are trying to attach is intended to be used against a
                  windows classic authentication mode.
Remedy          : There is an inconsistency between the authentication mode of
                  target web application and the source web application.
                  Ensure that the authentication mode setting in upgraded web
                  application is the same as what you had in previous
                  SharePoint 2010 web application. Refer to the link
                  "
http://go.microsoft.com/fwlink/?LinkId=236865" for more
                  information.
Locations       :

This was interesting, as all of the databases were attached to the same content web application and had been created on the current system (i.e. not migrated to it from an earlier version of SharePoint), so they should all have been in claims authentication mode. Also of note is the reference to SharePoint 2010 in the error message; I guess the cmdlet hasn’t been updated in a while…

After a bit of digging, it turned out that the databases that threw the error had all been created and given some initial configuration, but nothing more. Looking into that configuration, no users had been granted permissions to the site (except for the default admin accounts that had been added as the primary and secondary site collection administrators when the site collection was created), but an Active Directory group had also been given site collection administrator permissions.

A quick peek at the UserInfo table for the database concerned revealed the following (the screen shot below is from a test system used to replicate the issue):

UserInfo Table

The tp_Login entry highlighted corresponds to the Active Directory group that had been added as a site collection administrator.

Trevor Seward’s ‘Test-SPContentDatabase Classic to Claims Conversion’ blog post showed what was happening. When the Test-SPContentDatabase cmdlet runs, it looks for the first entry in the UserInfo table that matches all of the following rules:

  • tp_IsActive = 1 AND
  • tp_SiteAdmin = 1 AND
  • tp_Deleted = 0 AND
  • tp_Login NOT LIKE 'i:%'

In our case, having an Active Directory group assigned as a site collection administrator matched this set of rules exactly, so the query returned a result and the message was displayed, even though the database was indeed configured for claims authentication rather than classic-mode authentication.
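If you want to see which accounts in a particular content database would trip this check, you can run the same filter against the UserInfo table yourself. The sketch below uses Invoke-Sqlcmd from the SQL Server PowerShell module and is read-only; the server/instance name is a placeholder, and the database name is the one from the example above:

# Sketch: list the UserInfo rows that match the rules Test-SPContentDatabase checks.
# Read-only query; the server/instance name below is a placeholder.
$query = @"
SELECT tp_Login
FROM dbo.UserInfo
WHERE tp_IsActive = 1
  AND tp_SiteAdmin = 1
  AND tp_Deleted = 0
  AND tp_Login NOT LIKE 'i:%'
"@

Invoke-Sqlcmd -ServerInstance "SQLServerName\Instance" -Database "SharePoint_Content_Share_Site1" -Query $query

Any row returned (for example a Windows group added as a site collection administrator) is enough for the cmdlet to report the classic-mode warning.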

For the organisation concerned, having an Active Directory group configured as a site collection administrator for some of their site collections makes sense, so they’ll likely see the same message the next time they upgrade. Obviously, in this case it was a false positive and could safely be ignored; indeed, attaching the databases that threw the error to a SharePoint 2016 web application didn’t generate any issues.

Steps to reproduce (a PowerShell sketch of these steps follows the list):

  1. Create a new content database (to keep everything we’re going to test out of the way).
  2. Create a new site collection in the new database adding site collection administrators as normal.
  3. Add a domain group to the list of site collection administrators.
  4. Run the Test-SPContentDatabase cmdlet against the new database.
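The steps above can be scripted from the SharePoint Management Shell. This is a minimal sketch only; the web application URL, database name, site URL and account/group names are all placeholders for your own environment:

# Sketch of the reproduction steps; all URLs, database and account names are placeholders.
# 1. Create an isolated content database
New-SPContentDatabase -Name "SharePoint_Content_Test" -WebApplication "http://sharepoint.contoso.com"

# 2. Create a site collection in it with the usual site collection administrators
New-SPSite -Url "http://sharepoint.contoso.com/sites/test" -ContentDatabase "SharePoint_Content_Test" -OwnerAlias "CONTOSO\spadmin" -SecondaryOwnerAlias "CONTOSO\spadmin2" -Template "STS#0"

# 3. Add a domain group as an additional site collection administrator
New-SPUser -UserAlias "CONTOSO\SPSiteAdmins" -Web "http://sharepoint.contoso.com/sites/test" -SiteCollectionAdmin

# 4. Test the database - the claims/classic message should appear
Test-SPContentDatabase -Name "SharePoint_Content_Test" -WebApplication "http://sharepoint.contoso.com"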

DPM 2012 R2 SQL Backups Failing After November 2014 OS Updates

Following the installation of the November 2014 OS updates and the updated DPM agent (for Update Rollup 4) onto a number of servers, I saw failures when attempting to back up SQL Server 2012 databases hosted on Windows Server 2012. These manifested as ‘replica is inconsistent’ errors in the DPM console:

DB Replica Inconsistent

On the affected SQL Server, the following error was appearing in the server event log each time a consistency check was triggered:

Event 4354 - Failed to fire the RequestWriterInfo method

The COM+ Event System failed to fire the RequestWriterInfo method on event class {FAF53CC4-BD73-4E36-83F1-2B23F46E513E} for publisher  and subscriber . The display name of the subscription is "{E6057DCA-9DE3-42FC-9A6E-A057156277B4}". The HRESULT was 80042318.

I tried a number of things to resolve the issue:

  1. Modified the protection group. Initially, the protection group wizard did not show the ‘SQL Server’ option and I could not re-add the SQL Server databases. The agent required an update (again), and following this I could re-add the SQL Server databases; however, once synchronised, the databases were still reporting ‘replica is inconsistent’.
  2. Removed the protection from the affected server, removed the DPM agent, rebooted, then reinstalled and attached the DPM agent and re-added the SQL Server databases to the protection group. This also had no effect, with the databases still reporting ‘replica is inconsistent’.
  3. Updated the affected server to SQL Server 2012 SP2 CU2, as the cumulative update mentioned VSS-related fixes. Again this had no effect.

I have a Microsoft support call open at the moment in an effort to determine the root cause of the issue, but I have found that removing a specific update appears to resolve it. The update in question is KB3000853, an update rollup for Windows 8, Windows RT and Windows Server 2012 that contains a number of fixes, including KB2996928, a backup-related hotfix that references VSS. My suspicion is that this update is what has caused the issue; I will update this post when I get a final resolution from Microsoft support.

Update:

I've had this confirmed as an issue with KB3000853 by the Microsoft engineer I've been working with.
The workaround for the moment is either to change the account that runs the SQL VSS Writer service to the domain admin account (I assume any account with local admin permissions would work, but I have not tested this so far), or to remove KB3000853, at which point the backups start functioning correctly again.
There is currently no confirmed release date for an updated version of KB3000853.
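If you go the removal route, a quick way to check for and uninstall the update on an affected server is sketched below; run it from an elevated PowerShell prompt and expect a reboot to be required afterwards:

# Sketch: check whether KB3000853 is present and, if so, uninstall it.
if (Get-HotFix -Id KB3000853 -ErrorAction SilentlyContinue) {
    # Drop /norestart if you are happy for the server to reboot immediately
    Start-Process -FilePath wusa.exe -ArgumentList "/uninstall /kb:3000853 /norestart" -Wait
}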

Update 2:

There’s now a revised hotfix that corrects the issue described above. The steps are:

  1. Install KB3000853.
  2. Install hotfix KB2996928.

This combination has corrected the issue on the servers on which I have installed it.

WSUS Not Downloading Updates

During a recent WSUS upgrade from an old server to a new virtual machine running Windows Server 2008 R2, I saw an issue with the new server not downloading updates. The server appeared to synchronise correctly, but no updates were then downloaded.

We originally saw an issue like this when we started using Microsoft Threat Management Gateway, and the errors reported in the application event log on the new WSUS server were the same, namely:

Error 10032: The server is failing to download some updates

Error 364: Content file download failed. Reason: The server does not support the necessary HTTP protocol. Background Intelligent Transfer Service (BITS) requires that the server support the Range protocol header.

Microsoft KB article 922330 provides a solution for this specific issue. In our case we were using a pre-existing SQL Server, so we went with:

"%programfiles%\Update Services\Setup\ExecuteSQL.exe" -S %Computername% -d "SUSDB" -Q "update tbConfigurationC set BitsDownloadPriorityForeground=1"

However, this didn’t solve the issue.

In our case, the existing instance of SQL was on another server, so the command should have been:

"%programfiles%\Update Services\Setup\ExecuteSQL.exe" -S SQLServerName\Instance -d "SUSDB" -Q "update tbConfigurationC set BitsDownloadPriorityForeground=1"

Once the revised command had been run and the WSUS server restarted, the update downloads started automatically.
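If you want to confirm that the flag described in KB922330 has actually been set in SUSDB, it can be read back with a simple query. The sketch below uses Invoke-Sqlcmd from the SQL Server PowerShell module; the server and instance name is a placeholder:

# Sketch: confirm the BITS foreground-download setting in SUSDB (expect a value of 1).
Invoke-Sqlcmd -ServerInstance "SQLServerName\Instance" -Database "SUSDB" -Query "SELECT BitsDownloadPriorityForeground FROM tbConfigurationC"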

CRM 2011 Update Rollup 6 (and beyond) timeout

On an instance of CRM 2011 that was being patched to Update Rollup 6 prior to patching to Update Rollup 8, the following error occurred:

System.Exception: Action Microsoft.Crm.Setup.Common.Update.DBUpdateAction failed. ---> System.Data.SqlClient.SqlException: Timeout expired.  The timeout period elapsed prior to completion of the operation or the server is not responding.

The error was displayed both as a dialog on screen and in the log file KB2600640.log, located at %APPDATA%\Microsoft\MSCRM\Logs.

The solution was to raise the timeout that CRM applies to OLEDB connections from the original 30 seconds by creating two new registry values under HKLM\SOFTWARE\Microsoft\MSCRM:

OLEDBTimeout (DWORD), value 86400 (decimal)
ExtendedTimeout (DWORD), value 1000000 (decimal)

Then restart the CRM application pool within IIS.
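These changes can be scripted; the sketch below assumes it is run as administrator on the CRM server and that the application pool is named ‘CRMAppPool’ (an assumption – check IIS Manager for the name used in your deployment):

# Sketch: create the two timeout values and recycle the CRM application pool.
# The app pool name 'CRMAppPool' is an assumption - confirm it in IIS Manager.
$path = "HKLM:\SOFTWARE\Microsoft\MSCRM"
New-ItemProperty -Path $path -Name "OLEDBTimeout" -PropertyType DWord -Value 86400 -Force
New-ItemProperty -Path $path -Name "ExtendedTimeout" -PropertyType DWord -Value 1000000 -Force

Import-Module WebAdministration
Restart-WebAppPool -Name "CRMAppPool"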

Once the new settings were in place, the upgrade proceeded normally.

Renaming the PerformancePoint Service Application database in SharePoint 2010

When creating the PerformancePoint Service Application there is no way to control the name of the database that is created, not even when using PowerShell to create the Service Application. The database that gets created has a name in the form

<Service Application Name>_GUID

which for some reason a good many DBAs are not too keen on!
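If you want to check the name that was generated for your farm, the farm databases can be listed from the SharePoint 2010 Management Shell. A minimal sketch, assuming the Service Application name contains ‘PerformancePoint’ (adjust the filter if yours does not):

# Sketch: list farm databases whose names suggest they belong to PerformancePoint.
Get-SPDatabase | Where-Object { $_.Name -like "*PerformancePoint*" } | Select-Object Name, Server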

The database can however be renamed by following these steps:

  • Stop the PerformancePoint service on all SharePoint servers in the farm that are running the service, using the ‘services on server’ area of Central Administration:
    Stopping the PerformancePoint Service
  • Rename the database and log file – there are two ways of completing this; I prefer the second option of the two outlined below as it completely renames all of the references to the database, but it is a more involved process:
    1. Open SQL Server Management Studio on the SQL Server for the farm.
      Select the PerformancePoint Service Application database and then click again to allow renaming:
      Rename PerformancePoint DB in GUI
      Rename the database to match the naming convention you wish to use for farm databases. Note that this only renames the database friendly name as shown in SQL Server Management Studio and not the file names or the logical database and log file names.
    2. Alternatively:
      Open SQL Server Management Studio on the SQL Server for the farm.
      If you wish to, you can change the recovery mode of the PerformancePoint database to ‘simple’; this saves having to backup and restore a log file as well as the database file.
      Backup the PerformancePoint database created during the Service Application creation process.
      Restore the PerformancePoint database from the backup just completed to a new database name that matches the naming convention you wish to use for farm databases (a scripted sketch of this backup, restore and rename flow is shown after this list). Note that the default naming convention for the log file on restore appends ‘_1’ to the database name to form the log file name; you may wish to change this to ‘_log’ to match the other log files that the database server hosts.
      The backup and restore will change the file names used for the database and the display name shown in SQL Server Management Studio, but not the logical database and log file names. To change the logical names, first find the logical names of the database and log file for the database you wish to change; you can find this information either by taking note of the original database name when it was created, or from the ‘Files’ section of the database properties screen within SQL Server Management Studio:
      Database Logical Names
      Execute the following two SQL queries:

      ALTER DATABASE <new PerformancePoint database name> MODIFY FILE (NAME="<original logical database name>", NEWNAME="<new PerformancePoint database name>")

      ALTER DATABASE <new PerformancePoint database name> MODIFY FILE (NAME="<original logical log file name>", NEWNAME="<new PerformancePoint database name>_log")

      If you changed the database recovery mode to ‘simple’, change it back to ‘full’.
  • On one of the SharePoint servers in the farm, open an instance of the SharePoint 2010 Management Shell, ensuring that it is run as administrator, and issue the following PowerShell commands:

    $newdatabasename = "<new PerformancePoint database name>"
    Set-SPPerformancePointServiceApplication -Identity "<name of the PerformancePoint Service Application>" -SettingsDatabase $newdatabasename
  • Restart the PerformancePoint service on the servers in the farm on which it was originally running.
  • Using SQL Server Management Studio, delete the original PerformancePoint database that was created during the Service Application creation.
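For reference, the second rename option (backup, restore under the new name, then fix the logical file names) can also be scripted end to end. The sketch below uses Invoke-Sqlcmd and assumes, as is typical for SharePoint databases, that the logical data file name matches the original database name and that the logical log file name carries a ‘_log’ suffix; check the ‘Files’ page mentioned above if unsure. The database names, backup path and data/log file paths are all placeholders:

# Sketch of the backup/restore/rename flow (option 2). All names and paths are placeholders.
$server = "SQLServerName\Instance"
$oldDb  = "PerformancePoint Service Application_<GUID>"   # the GUID-suffixed name created by SharePoint
$newDb  = "SharePoint_PerformancePoint"
$backup = "D:\Backups\PerformancePoint.bak"

# Back up the original database
Invoke-Sqlcmd -ServerInstance $server -Query "BACKUP DATABASE [$oldDb] TO DISK = N'$backup'"

# Restore it under the new name, moving the files to matching physical names
Invoke-Sqlcmd -ServerInstance $server -Query @"
RESTORE DATABASE [$newDb] FROM DISK = N'$backup'
WITH MOVE N'$oldDb' TO N'D:\Data\$newDb.mdf',
     MOVE N'$($oldDb)_log' TO N'D:\Logs\$($newDb)_log.ldf'
"@

# Rename the logical database and log file names to match the new database name
Invoke-Sqlcmd -ServerInstance $server -Query @"
ALTER DATABASE [$newDb] MODIFY FILE (NAME = N'$oldDb', NEWNAME = N'$newDb');
ALTER DATABASE [$newDb] MODIFY FILE (NAME = N'$($oldDb)_log', NEWNAME = N'$($newDb)_log');
"@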