Publishing ADFS using Web Application Proxy behind TMG

During a recent upgrade of ADFS from 2.0 to 3.0, we saw an interesting issue publishing the ADFS 3.0 proxy through TMG 2010.

The ADFS 2.0 proxy was published via TMG using a non-preauthenticating web publishing rule, which had worked happily since ADFS was first used. When ADFS 3.0 was installed and configured, the firewall rule was modified to direct traffic to the IP address of the new ADFS 3.0 proxy instead of the old ADFS 2.0 proxy. When tested, this generated an error in the browser of any user attempting to access the ADFS proxy to sign in to their organisation account:

“The page cannot be displayed. Error Code 64: Host not available”

In addition, the test of the firewall rule failed with the error “Connectivity error. Error details: 64 – The specified network name is no longer available.”

This obviously meant that users could not sign in to access services authenticated using ADFS.

The solution is to use a non-web server publishing rule on TMG that simply forwards all traffic to the ADFS proxy/Web Application Proxy. This does, however, require that a dedicated external IP address is available on TMG; alternatively, all applications can be published using the Web Application Proxy instead of TMG.
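
Before changing the publishing approach, it is worth confirming that the Web Application Proxy itself is responding on HTTPS from wherever TMG forwards the traffic. A minimal check from PowerShell 3.0 or later (the federation service name below is a placeholder for your own):

# Request the federation metadata document from the proxy; a successful response shows the
# Web Application Proxy is reachable and serving the federation service.
Invoke-WebRequest -Uri "https://adfs.example.com/federationmetadata/2007-06/federationmetadata.xml" -UseBasicParsing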

Cannot remove folder “SharePoint site name”

I recently ran into this issue while trying to move a series of sub-sites to site collections on a SharePoint farm. As part of the process, I’d scripted the deletion of the sub-site tree using the stsadm -o deleteweb command once each sub-site had been exported; there were quite a number of sub-sites and I wanted to try to save a little time.
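
For reference, the scripted deletion doesn’t need to be anything more than a simple loop along these lines (a PowerShell sketch; the sub-site URLs are placeholders, and each sub-site should already have been exported):

# Delete each exported sub-site in turn using stsadm
$subSites = "http://portal/sites/projects/sub1", "http://portal/sites/projects/sub2"
foreach ($url in $subSites) {
    & stsadm -o deleteweb -url $url
}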

I’d see the error message ‘cannot remove folder “sub-site name”’ for a couple of the sub-sites at a particular level, then the next couple might well be removed correctly, then more ‘cannot remove…’ errors.

To diagnose the issue a little more, I tried using the alternate options for the deleteweb stsadm command, namely

stsadm -o deleteweb -webid <sub-site ID> -databasename <content web application DB name> -databaseserver <DB server> -force

Note: To find a list of the sub-site IDs, run the following command:
stsadm -o enumallwebs -databasename <content web application DB name>
This will produce a list of all of the sub-sites, including their URLs and IDs.

Using the alternate options for the deleteweb stsadm command provided me with a lot more information on the issue that was stopping me from deleting some of the sub-sites. In my case, the error shown was ‘The transaction log for database <DB name> is full. To find out why space in the log cannot be reused, see the log_reuse_wait_desc column in sys.databases’.

Checking the autogrowth options for the log file for the content database in question did indeed show that the file size was limited to quite a small value. Increasing the autogrowth limit on the transaction log to a more reasonable size immediately got me up and running again.
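
For completeness, both the check suggested by the error message and the change to the log file’s maximum size can be scripted. A sketch using Invoke-Sqlcmd from the SQL Server PowerShell tools, where the server, database and logical log file names are all placeholders:

# See why space in the transaction log cannot be reused
Invoke-Sqlcmd -ServerInstance "SQLSERVER01" -Query "SELECT name, log_reuse_wait_desc FROM sys.databases WHERE name = 'WSS_Content';"
# Raise the maximum size of the transaction log file to something more reasonable
Invoke-Sqlcmd -ServerInstance "SQLSERVER01" -Query "ALTER DATABASE [WSS_Content] MODIFY FILE (NAME = N'WSS_Content_log', MAXSIZE = 20GB);"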

Manually removing a SQL Reporting Services instance from a scale out deployment

If you need to remove an instance of SQL Reporting Services from a scale out deployment, but for some reason cannot contact the instance you wish to remove (e.g. the computer has failed or been rebuilt without removing it from the scale out deployment first) you can do this manually from one of the other computers in the scale out deployment by following these steps:

  • To list the announced report servers currently in the database:
    RSKeyMgmt -l
  • This will provide you with a list in the format
    MachineName\Instance - <GUID of the instance>
    Note the GUID of the instance you wish to remove
  • To remove the instance of SQL Reporting Services:
    RSKeyMgmt -r <GUID of the instance noted above>
  • You will be asked to confirm that you wish to delete the key you have entered.

Note that RSKeyMgmt.exe is located in C:\Program Files (x86)\Microsoft SQL Server\100\Tools\Binn\ on a 64-bit SQL 2008 Reporting Services instance.
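
Putting the steps together, a typical removal session looks something like the following; running the listing again afterwards confirms the instance has gone:

cd "C:\Program Files (x86)\Microsoft SQL Server\100\Tools\Binn"
RSKeyMgmt -l
RSKeyMgmt -r <GUID of the instance to remove>
RSKeyMgmt -l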

Guest NLB issues on Hyper-V (Windows Server 2008 R2)

One of the issues I’ve seen during our migration of virtual machines to our new Windows Server 2008 R2 Hyper-V cluster relates to network load balancing (NLB).  We have a number of NLB setups running which will need migrating in time.  My first test migration of a pair of NLB virtual machines (actually, technically a trio of servers making up a SharePoint farm) didn’t go as smoothly as I’d hoped.

The machines in question have been running on a Windows Server 2008 Hyper-V machine quite happily for some time.  I followed the procedure we’ve used to migrate other machines to our new Windows Server 2008 R2 Hyper-V cluster, connecting both network adaptors to the appropriate network when the import had completed.  When I looked at the network settings in the GUI, two network adaptors showed up and the configuration at first glance seemed okay.  When looking at the network configuration using ipconfig however, only the values for one network adaptor (the primary, i.e. non-NLB, adaptor) were shown, with the NLB adaptor missing in action.

In addition, NLB manager showed the following error when I tried to reconfigure the cluster:

(screenshot: NLB Manager showing the adaptor as misconfigured)

The solution to the issue is actually simple; in the Hyper-V VM settings for the NLB network adaptor, turn on MAC address spoofing:

(screenshot: the ‘Enable spoofing of MAC addresses’ option in the VM’s network adaptor settings)

This immediately fixed the issues we were seeing with the NLB adaptor of the machines we were migrating.
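
On Server 2008 R2 this is a checkbox in the virtual machine’s settings as shown above; for reference, on Windows Server 2012 and later the same setting can also be applied with the Hyper-V PowerShell module (the VM and adaptor names are placeholders):

# Enable MAC address spoofing on the NLB adaptor of the guest
Set-VMNetworkAdapter -VMName "SP-WFE01" -Name "NLB adaptor" -MacAddressSpoofing On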

Importing Hyper-V machines into a Hyper-V 2008 R2 cluster

At Black Marble, we’re in the process of migrating some of our virtual machines to a Windows Server 2008 R2 Hyper-V cluster.  The process of migrating machines from a single Hyper-V host to a Hyper-V cluster is not quite as straightforward as migration of a machine from one single host to another.  In addition, our migrations are made slightly more interesting as our Hyper-V cluster is built on Server 2008 R2 Core machines, so no GUI interface on those machines to help us!

Due to our cluster being Server 2008 R2 Core machines, we do all of our administration remotely.  Once the cluster is built, we rarely spend much time directly connected to the cluster machines.  Most of the administration for virtual machines is done from the Failover Cluster Manager on another server we use as an application server.  While the Failover Cluster Manager allows us to create new virtual guests directly from the interface, there is no apparent way to import virtual machines that already exist onto the cluster directly from the interface.

Importing pre-existing virtual guests onto the cluster therefore becomes a two-stage process: first import the machines using Hyper-V Manager, then make them highly available.

To import the virtual machine, the following steps need to be taken:

  1. On the Hyper-V host running the machine you wish to migrate, export the virtual guest.  A few of our machines were built using differencing disks, and we took the decision to merge the disks so we didn’t have base disk stacks littered all over the place.  As our virtual machines were hosted on Windows Server 2008 Hyper-V, this meant deleting any snapshots, switching off the machines and allowing the background disk merge this triggers to finish before we could merge the differencing disk stack we’d created.
  2. Once the export had completed, copy the resultant files to an appropriate location on the CSV disk on the new Windows Server 2008 R2 Hyper-V cluster.  The use of a CSV location is required to allow us to make the virtual guest highly available later.
  3. Using Hyper-V Manager connected to the cluster node on which the migrated machine should initially run, import the virtual machine (a scripted sketch of the export and import is shown after this list).  Note that with Hyper-V R2, you can choose to duplicate the files so that the virtual machine can be imported again should you need to.
  4. Once the virtual machine has been imported, you’ll need to check the settings and may need to connect the network adaptor(s) to the appropriate virtual network(s).  Note that the required virtual networks need to be created individually on each of the Hyper-V cluster nodes.
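
As a reference sketch only: on Windows Server 2012 and later, the export and import in steps 1 and 3 can also be scripted with the Hyper-V PowerShell module (on 2008/2008 R2 these steps are performed through the Hyper-V Manager GUI as described above; the names and paths below are placeholders):

# On the source host: export the guest to a local folder
Export-VM -Name "SP-WFE01" -Path "D:\Exports"
# After copying the exported folder to the CSV, register the guest in place on the target node
Import-VM -Path "C:\ClusterStorage\Volume1\SP-WFE01\Virtual Machines\<VM GUID>.xml" -Register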

At this point, you have a virtual guest that has been migrated to its new host, but has not been made highly available.  To achieve this, the following steps need to be taken:

  1. Connect to the Windows Server 2008 R2 Hyper-V cluster using Failover Cluster Manager.
  2. Right-click on the ‘Services and applications’ header in the left pane of the Cluster Manager and select ‘Configure a Service or Application…’
  3. A new window, the High Availability Wizard, will open. Click next on the first page, then select ‘Virtual Machine’ from the list of available service and application types on the next screen and click next
    (screenshot: High Availability Wizard, selecting the ‘Virtual Machine’ service type)
  4. The imported virtual machines that have not been made highly available will be presented as a list with checkboxes beside them. Select the virtual machines you wish to make highly available and click next
    (screenshot: High Availability Wizard, selecting the virtual machines to make highly available)
  5. Click next on the confirmation screen and wait until the wizard completes. Click finish on the summary page, unless you wish to view the more detailed report (if, for example, any issues were encountered during the HA wizard). A scripted alternative to these steps is sketched below.
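
As mentioned in step 5, the same result can also be achieved from PowerShell using the Failover Clustering module that ships with Windows Server 2008 R2 (the VM and cluster names are placeholders):

# Make an imported virtual machine highly available on the cluster
Import-Module FailoverClusters
Add-ClusterVirtualMachineRole -VMName "SP-WFE01" -Cluster "HVCLUSTER01"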

Your migrated, highly available virtual machines should now be available via the Failover Cluster Manager.  You may wish to modify the properties of the migrated high availability virtual machines to set items such as preferred owner and failover/failback settings before starting them.
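
Again as a sketch, the preferred owners for a migrated virtual machine’s cluster group can be set from the same Failover Clustering module rather than the GUI (the group and node names are placeholders):

# Restrict the preferred owners of the VM's cluster group to two nodes
Set-ClusterOwnerNode -Group "SP-WFE01" -Owners "HVNODE01","HVNODE02"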