The blogs of Black Marble staff

Audio problem on Windows 8 RP and Lenovo W520 with Lync 2013

I have been really pleased with Windows 8 RP on my Lenovo W520; I have had no major problems. I have seen the slow start-up of networking after a sleep that others have reported, but nothing else.

However, today I tried to make an audio call with Lync 2013 Beta and found I had no audio. Up to now I had just used the drivers Windows 8 installed, which had seemed OK. It turns out I had to install the Conexant Audio Software from the Lenovo site. Once I did this and restarted Lync, the audio leapt into life. As I remember, I had a similar issue getting audio to work correctly on my Lenovo base station under Windows 7.

Top tip: use up-to-date drivers.

Two problems editing TFS2012 build workflows with the same solution

Updated 30th Aug 2012: This post is specific to TFS/VS 2012 RC - for details on the RTM see this updated post

Whilst moving over to our new TFS 2012 system I have been editing build templates, pulling the best bits from the selection of templates we used in 2010 into one master build process to be used for most future projects. Doing this I have hit a couple of problems; it turns out the cure is the same for both.

Problem 1 : When adding custom activities to the toolbox Visual Studio crashes

See the community activities documentation for the process to add items to the toolbox; when you get to the step to browse for the custom assembly, Visual Studio crashes.


Problem 2: When editing a process template in any way the process is corrupted and the build fails

When the build runs you get the error (amongst others)

The build process failed validation. Details:
Validation Error: The private implementation of activity '1: DynamicActivity' has the following validation error:   Compiler error(s) encountered processing expression "BuildDetail.BuildNumber".
Type 'IBuildDetail' is not defined.





The Solution

Turns out the issue that caused both these problems was that the Visual Studio class library project I was using to host the XAML workflow for editing was targeting .NET 4.5, the default for VS2012. I changed the project to target .NET 4.0, rolled the XAML file back to an unedited version, reapplied my changes and all was OK.

Yes, I know it is strange, as you never build the containing project, but the targeted .NET version is passed around Visual Studio for building lists and the like, hence the problem.
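For reference, the target framework is set as a property in the class library's .csproj file; a minimal sketch of the relevant fragment (the project name here is hypothetical):

```xml
<!-- BuildProcessTemplates.csproj (hypothetical project name) -->
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <!-- Change v4.5 (the VS2012 default) to v4.0 so the workflow
         designer can resolve the TFS build assemblies correctly -->
    <TargetFrameworkVersion>v4.0</TargetFrameworkVersion>
  </PropertyGroup>
</Project>
```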

Problems I had to address when setting up TFS 2012 Lab Environments with existing Hyper-V VMs

Whilst moving all our older test Hyper-V VMs into a new TFS 2012 Lab Management instance I have had to address a few problems. I have already posted about the main one, cross domain communications. This post aims to list the other workarounds I have used.

MTM can’t communicate with the VMs

When setting up an environment that includes existing VMs it is vital that the PC running MTM (Lab Center) can communicate with all the VMs involved. The best indication I have found that you will not have problems is a simple Ping. If you are creating a SCVMM environment you need to be able to Ping the fully qualified machine name as it has been picked up by Hyper-V, e.g. server1.test.local. If creating a standard environment you only need to be able to Ping the name you specify for the machine, e.g. server1.

If Ping fails then you can be sure that the MTM create environment verify step will also fail. The most likely reasons for failure are:

  • There are DNS issues: the VM names are missing, leases have expired, they are not in the expected domains or are just plain wrong. I found the best solution for me is to edit the local hosts file on the PC running MTM. Just add the name and fully qualified name with the correct IP address. You should then be able to Ping the VM (unless there is a firewall issue, see below). The hosts file entry is only needed on the MTM PC whilst the environment is created; once the environment is set up it is not needed.
  • File and print sharing needs to be opened through the firewall on the VM (Control Panel > Firewall > allow applications through firewall).
  • Missing/out of date Hyper-V Integration Services on the VM. This only matters if a SCVMM environment is being created, as this is how the fully qualified name is found. This is best spotted in MTM as you get an error on the Machine properties tab. The fix is to reinstall the Integration Services via the Hyper-V Manager (Actions > Insert Integration Services Disk, and maybe run the setup on the VM if it does not start).
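The checks and the hosts file workaround described above can be sketched as follows (machine names and the IP address are hypothetical examples):

```cmd
rem From the PC running MTM, check connectivity first.
rem A SCVMM environment needs the fully qualified name to resolve:
ping server1.test.local
rem A standard environment only needs the short name:
ping server1

rem If DNS is wrong, add both names to
rem C:\Windows\System32\drivers\etc\hosts with the correct IP:
rem   192.168.10.21   server1
rem   192.168.10.21   server1.test.local
```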

Can’t see a running VM in the list of available VMs

When composing an environment from running VMs, one problem I had was that though a VM was running it did not appear in the list in MTM. This turned out to be because the VM had metadata associating it with a different environment (in my case one dating back to our TFS 2010 instance).

This is easy to fix: in SCVMM or Hyper-V Manager open the VM settings and make sure the name/note field (red box below) is empty.


Once the settings are saved you will have to wait a little while before SCVMM picks up the changes and your copy of MTM shows the VM as available.

Getting TFS 2012 Agents to communicate cross domain

I don’t know about your systems, but historically we have had VMs running in test domains that are connected to our corporate LAN, allowing our staff and external testers to access them from their development PCs or through our firewall after providing suitable test domain credentials. These test setups are great candidates for the new TFS Lab Management 2012 feature, standard environments. It does not matter if they are hosted as physical devices, on Hyper-V or on VMware.

However, the use of separate domains raises issues of cross domain authentication, irrespective of the virtualisation technology; it is always a potentially confusing area. If we want to use the deployment and testing features of Lab Management, what we need is a Test Agent on each VM that talks to a Test Controller, which is registered to a TFS Team Project Collection. Not too easy when spread across multiple domains.

With TFS 2012 the whole process of getting agents to talk to their controller was greatly eased. Lab Management does it for you much of the time, if you provide it with a corp\tfslab domain account that is a member of the Project Collection Test Service Accounts group in TFS.

The summary of the scenarios is as follows:

  • Test VMs in a SCVMM managed or standard environment, joined to your corp domain: Lab Management wires it all up automatically using your corp\tfslab account.
  • Test VMs in a SCVMM managed or standard environment that is not domain joined (i.e. just in a workgroup): Lab Management wires it all up automatically using your corp\tfslab account.
  • Test VMs in a SCVMM managed network isolated environment: Lab Management wires it all up automatically using your corp\tfslab account.
  • Test VMs in a SCVMM managed (not network isolated) or standard environment in their own test domain: you have to do some work.

If, like me, you end up with the fourth scenario, the key is to provide a test controller within the test domain, configured to talk back to TFS on the corp domain. This can all be done with local machine accounts on the test controller and TFS server with matching names and passwords, what I think of as shadow accounts.

So for example, we have the following scenario of a corp domain with a DC and various TFS servers and controllers and a test domain containing three servers.



So the process to get the test agents on the test domain talking to TFS on the corp domain is as follows:

  1. On the TFS server (called tfsserver in the above graphic)
    1. Open Control Panel > Computer Management and create a new local user called tfslabshadow. Set the password, and make sure the user does not need to change it at first login and that it does not expire
    2. In the TFS administration console add the new user tfsserver\tfslabshadow to the Project collection test service accounts group
  2. On a machine (called server.test.local in the above graphic) within the test domain (this can be any VM in the domain running Windows other than the DC)
    1. Open Control Panel > Computer Management and create a new local user called tfslabshadow with the same password as the account on the tfsserver
    2. Add this user to the local administrators group for that server.
    3. Login as this user
    4. Install the Visual Studio 2012 Test controller
    5. When the installation is complete the configuration tool will launch. Set the service to run as tfslabshadow and register it to connect to the TFS server with this account too.
      Note - When you first load the configuration tool you need to browse for the TFS server and enter its URL. If you have your shadow accounts working correctly you should not need to enter any other credentials at this point.
      Note - You can enter the local user name in either the .\tfslabshadow or server\tfslabshadow format

    6. If you have all the settings correct then you should be able to apply the changes without any errors and the new test controller will be registered. Any errors are usually fairly clear when you look in the log; you probably forgot to place a user in some group somewhere.
  3. From a PC running Test Manager 2012 (MTM) on the corp domain
    1. Go into the Lab Center
    2. Create a new environment (can be SCVMM or Standard) containing the machines in the test domain (or open an existing environment if you have one that was not correctly configured)
    3. On the Advanced tab you should be able to select the new test controller server that is hosted within the test domain
    4. You can make any other setting changes you require (remember, on the Machines tab, to enter the test domain login credentials; they will have defaulted to your current ones). When you are done, select Verify. I had a problem here due to DNS entries: from the PC running MTM I could ping server, but MTM was trying to communicate using the name server.test.local. To get around this I added an entry to my local hosts file. I have also seen VMs that are not registered in DNS at all; again a local hosts file entry fixes the problem. This is only required for the initial verification and deployment/configuration; once that is done the hosts entries can be removed if you want.
    5. Once verification has passed save the changes and after a short wait the environment should finish configuring itself showing no errors
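The shadow account creation in steps 1.1 and 2.1 can also be scripted from an elevated command prompt; a sketch using the built-in account management commands (the password shown is a placeholder you must replace):

```cmd
rem Create the local shadow account; use the SAME name and password
rem on both the TFS server and the test controller machine
net user tfslabshadow P@ssw0rd123 /add /passwordchg:no

rem Stop the password expiring
wmic useraccount where "name='tfslabshadow'" set PasswordExpires=false

rem On the test controller machine only: make it a local administrator
net localgroup Administrators tfslabshadow /add
```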

So I hope I have provided a step-by-step guide to help you get around issues with cross domain testing in Lab Management. However, it is still important to remember the exceptions:

  1. As we are using local machine accounts, you cannot have the TFS server or the test controller running on a domain controller (as a DC cannot have local machine accounts). If your environment is a single box that is a DC then you either have to set up a two way cross domain trust between test and corp, or rebuild the environment as a workgroup or network isolated environment.
  2. The shadow account cannot have the same name as the corp\tfslab account, i.e. tfslab. If you try to use the same name for the local machine and domain accounts, the matching of the two local machine accounts will fail, as on the TFS server end it will not be able to decide whether to use corp\tfslab or tfsserver\tfslab.

For more details on this general area see MSDN

WSUS Not Downloading Updates

During a recent WSUS upgrade from an old server to a new virtual machine running Windows Server 2008 R2, I saw an issue with the server not downloading updates correctly. The server appeared to synchronise correctly, but then no updates were downloaded.

We originally saw an issue like this when we started using Microsoft Threat Management Gateway, and the errors reported in the application event log on the new WSUS server were the same, namely:

Error 10032: The server is failing to download some updates

Error 364: Content file download failed. Reason: The server does not support the necessary HTTP protocol. Background Intelligent Transfer Service (BITS) requires that the server support the Range protocol header.

Microsoft KB article 922330 provides a solution for this specific issue. In our case we're using a pre-existing SQL Server, so we went with:

"%programfiles%\Update Services\Setup\ExecuteSQL.exe" -S %Computername% -d "SUSDB" -Q "update tbConfigurationC set BitsDownloadPriorityForeground=1"

However, this didn't solve the issue.

In our case, the existing instance of SQL was on another server, so the command should have been:

"%programfiles%\Update Services\Setup\ExecuteSQL.exe" -S SQLServerName\Instance -d "SUSDB" -Q "update tbConfigurationC set BitsDownloadPriorityForeground=1"

Once the revised command had been run and the WSUS server restarted, the update downloads started automatically.

Where did my custom Word templates go in 2013?

I installed the Office 2013 customer preview yesterday and all seemed good. Today I needed to create a new document using one of our company standard templates. I opened Word, went to the new document section and was presented with a list of great looking templates, but not my custom ones. The page suggested I use search to find more templates; it did find more templates from the Internet, but did not find my locally stored ones.

Turns out the issue is that the path to my custom templates had been removed as part of the upgrade (from memory, in previous versions of Word this path defaulted to %APPDATA%\Microsoft\Templates, so you did not usually need to set it).

So I opened the options and added the path to my existing templates folder.


I restarted Word and tried to create a new document, and this time I got a Personal tab on the new document page.


When I clicked this I got my list of templates. I really must get round to resaving them with previews so they look better in the list.


Update 22 Jan 2014: If you are looking for the workgroup templates location, as opposed to the personal one, it is now set in Word via File > Options > Advanced; in the General section (near the bottom) press the File Locations button.

_atomic_fetch_sub_4 error running VS2012RC after Office 2013 Customer Preview is installed

When I installed the Office 2013 customer preview all seemed good: loads of new Metro-look Office features. However, when I tried to load my previously working Visual Studio 2012 RC I got the error

“The procedure entry point _atomic_fetch_sub_4 could not be located in the dynamic link library devenv.exe”.



This is a known issue with the C++ runtime, and a patch was released last week; install this and all should be OK.

Cannot access the ‘site settings’ on a reporting services instance using IE

On site recently I had a problem where I could not access the site settings in Reporting Services if I used Internet Explorer from a client PC. IE worked fine on the server, and other browsers were OK on the client; just not IE. Initially I thought it was just a rights issue, but that was not the case.

Turns out this is down to Kerberos negotiation, as discussed in the MSDN article. To fix the issue on this site, where we did not need Kerberos, we just disabled Kerberos negotiation in the [Program files]\Microsoft SQL Server\MSRS10.MSSQLSERVER\Reporting Services\ReportServer\RSreportServer.config file, e.g.

              <!-- <RSWindowsNegotiate /> -->
              <!-- <RSWindowsKerberos /> -->
              <RSWindowsNTLM />

If you need Kerberos, you need to sort out the SPNs as detailed in the MSDN post.
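If you do need Kerberos, registering the SPNs for the account running Reporting Services looks something like the sketch below (the server and account names are hypothetical; run from an elevated prompt on a domain-joined machine):

```cmd
rem Register HTTP SPNs for the Reporting Services service account
rem (-s checks for duplicate SPNs before adding)
setspn -s HTTP/reportserver CORP\RSServiceAccount
setspn -s HTTP/reportserver.corp.local CORP\RSServiceAccount
```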