But it works on my PC!

The random thoughts of Richard Fennell on technology and software development

High CPU utilisation on the data tier after a TFS 2010 to 2013 upgrade

There have been significant changes in the DB schema between TFS 2010 and 2013. This means that as part of an in-place upgrade process a good deal of data needs to be moved around. Some of this is done as part of the actual upgrade process, but to get you up and running more quickly, some is done post upgrade using SQL SPROCs. Depending on how much data there is to move this can take a while, maybe many hours. This is the cause of the high SQL load.

A key factor in how long this takes is the size of your pre-upgrade tbl_attachmentContent table; this is where, amongst other things, test attachments are stored. So if you have a lot of test attachments it will take a while, as these are moved to their new home in tbl_content.

If you want to minimise the time this takes it can be a good idea to remove any unwanted test attachments prior to doing the upgrade. This is done with the Test Attachment Cleaner from the appropriate version of the TFS Power Tools for your TFS server. However, beware that if you don’t have a suitably patched SQL server there can be issues with ghost files (see Terje’s post).

If you cannot patch your SQL server to a suitable version to avoid this problem, then it is best to clean out old test attachments only after the whole TFS migration has completed, i.e. wait until the high SQL CPU utilisation caused by the SPROC-based migration has finished. You don’t want to be trying to clean out old test attachments at the same time TFS is trying to migrate them.

Upgraded older Build and Test Controllers to TFS 2013

All has been going well since our upgrade from TFS 2012 to 2013, no nasty surprises.

As I had a bit of time I thought it a good idea to start the updates of our build and lab/test systems. We had only upgraded our TFS 2012.3 server to 2013. We had not touched our build system (one 2012 controller and 7 agents on various VMs) or our Lab Management/test controller. Our plan, after a bit of thought, was to do a slow migration, putting in new 2013 generation build and test controllers alongside our 2012 ones. We would then decide on an individual build agent VM basis what to do, probably upgrading the build agents and connecting them to the new controller. There seems to be no good reason to rebuild the whole build agent VMs with the specific SDKs and tools they each need.

So we created a new pair of Windows 2012R2 domain joined server VMs; on one we installed a test controller and on the other a build controller and a single build agent.

Note: I always tend to favour a single build agent per VM, usually using a single core VM. I tend to find most builds are IO bound not CPU bound, so having more, smaller VMs is, I think, easier to manage at the VM hosting resource level.

Test Controller

Most of the use of our Test Controller is as part of our TFS Lab Management environments. If you load MTM 2013 you will see that it cannot manage a 2012 Test Controller; it appears as offline. Lab Management is meant to keep the test agents upgraded, so it should upgrade an agent from one point release to another e.g. 2012.3 to 2012.4. However, this upgrade feature does not extend to major release upgrades such as 2012 to 2013. Also I have always found the automated deployment/upgrade of test agents as part of an environment deployment problematic at best; you often seem to suffer DNS and timeout issues. Easily the most reliable method is to make sure the correct (or at least compatible) test agents are installed on all the environment VMs prior to their configuration at the deployment/restart stage.

Given this, the process that seems to work for getting an environment’s test agents talking to the new 2013 Test Controller is:

  1. In MTM stop the environment
  2. Open the environment settings and change the controller to the new one
  3. Restart the environment; you will see the VMs show as not ready, as the test agents won’t configure.
  4. Connect to each VM
    1. Uninstall the 2012 Test Agent
    2. Install the 2013 Test Agent
  5. Stop and restart the environment and all should work – with luck the VMs will configure properly and show as ready
  6. If they don’t:
    1. Try a second restart, which sometimes sorts it.
    2. You can try a repair, re-entering the various passwords.
    3. Updated 5 Feb 2014: I have found I always need to do this. If problems really persist, run the Test Agent Configuration tool on each VM, press next, next, next etc. and it will try to configure. It will probably fail, but hopefully it will have done enough port opening etc. to allow the next environment restart to work correctly.
    4. If it still fails you need to check the logs, but I would suspect a DNS issue.

Obviously you could move step 4 to the start if you make the fair assumption that manual intervention is going to be needed.

Build Controller

Swapping your builds over from 2012 to 2013 will have site-specific issues. It all depends on which build activities you are using. If they are bound to the TFS 2012 API they may not work unless you rebuild them. However, from my first tests I have found my 2012 build process templates seem to work, whether I set my build controller’s ‘custom assemblies path’ to my 2012 DLL versions or their 2013 equivalents. So .NET is managing to resolve usable DLLs to get the build working.
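
To illustrate the kind of dependency involved, a typical custom build activity is compiled against version-specific TFS assemblies such as Microsoft.TeamFoundation.Build.Client, which is why a 2012-built activity may need rebuilding (or a binding redirect) before a 2013 controller can load it happily. A minimal, hypothetical sketch of such an activity (the class name and message argument are made up for illustration):

using System.Activities;
using Microsoft.TeamFoundation.Build.Client;
using Microsoft.TeamFoundation.Build.Workflow.Activities;

// Hypothetical custom activity; the attribute and base types are resolved from
// the TFS assemblies it was compiled against, hence the version binding issue
[BuildActivity(HostEnvironmentOption.All)]
public sealed class ExampleLoggingActivity : CodeActivity
{
    // A value passed in from the build process template
    public InArgument<string> Message { get; set; }

    protected override void Execute(CodeActivityContext context)
    {
        // Write the supplied message to the build log
        context.TrackBuildMessage(this.Message.Get(context));
    }
}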

Obviously there is still more to do here: checking all my custom build assemblies, and maybe revising the whole build process to make use of 2013 features, but that can wait.

What I have now allows me to upgrade our Windows 8.1 build agent VM so it can connect to our 2013 Build Controller, thus allowing us to run fully automated builds and tests of Windows 8.1 applications. Up to now, with TFS 2012, we had only been able to get a basic build working by hacking the build process, as you need Visual Studio 2013 generation tools to fully build and test Windows 8.1 applications.

 

So we are going to have 2012 build and test controllers around for a while, but we have proved the migration is not going to be too bad. Maybe it just needs a bit of thought over some custom build assemblies.

How long is my TFS 2010 to 2013 upgrade going to take?

Update 27 Jun 2013: See the updated version of this post with more data.

I seem to be involved with a number of TFS 2010 to 2013 upgrades at present. I suppose people are looking at TFS 2013 in the same way as they have historically looked at the first service pack for a product, i.e. the time to upgrade, when most of the main issues have been addressed. That said, TFS 2013 is not TFS 2012 SP1!

A common question is how long will the process take to upgrade each Team Project Collection? The answer is that it depends, a good consultant’s answer. Factors include the number of work items, the size of the code base, the number of changesets, the volume of test results, and the list goes on. The best I have been able to come up with is to record some timings of previous upgrades and use this data to make an educated guess.

In an upgrade of a TPC from TFS 2010 to 2013 there are 793 steps to be taken. Not all of these take the same length of time; some are very slow, as can be seen in the chart. I have plotted the points where the upgrade seems to pause the longest. These are mostly towards the start of the process, where I assume the main DB schema changes are being made.

[Image: chart of TPC upgrade step timings for the sample servers]

To give some more context

  • Client C was a production quality multi tier setup and took about 3 hours to complete.
  • Client L, though with a similar sized DB to Server A, was much slower to upgrade, around 9 hours. However, it was on a slower single tier test VM and also had a lot of historic test data attachments (70%+ of the DB contents).
  • Demo VM was my demo/test TFS 2010 VM; this had 4 TPCs and the timings are for the largest, at 600Mb. In reality this server had little ‘real’ data. It is also interesting to note that though there were four TPCs, the upgrade did three in parallel and started the fourth when the first finished. Worth remembering if you are planning an upgrade of many TPCs.

Given this chart, if you know how long it takes to get to Step 30 of 793 you can get an idea of which of these lines most closely matches your system.

I will continue to update this post as I get more sample data. I hope it will be of use to others to gauge how long upgrades may take, but remember your mileage may vary.

Fix for intermittent connection problem in lab management – restart the test controller

Just had a problem with a TFS 2012 Lab Management deployment build. It was working this morning, deploying two web sites via MSDeploy and a DB via a DacPac, then running some CodedUI tests. However, when I tried a new deployment this afternoon it kept failing with the error:

The deployment task was aborted because there was a connection failure between the test controller and the test agent.

[Image: build report showing the connection failure error]

If you watched the build deployment via MTM you could see it start OK, then the agent went offline after a few seconds.

Turns out the solution was the old favourite, a reboot of the Test Controller. I would like to know why it was giving this intermittent problem though.

Update 14th Jan: An alternative solution to rebooting is to add a hosts file entry, on the VM running the test agent, for the IP address of the test controller. It seems the problem is name resolution, but I am not sure why it occurs.

Fix for Media Center library issue after Christmas tree lights incident

Twas the night before Christmas and….

To cut a long story short, the PC that runs my Windows Media Center (MCE) got switched on and off at the wall twice whilst the Christmas tree lights were being put up.

Now the PC is running Windows 8.1 on modern hardware, so it should have been OK, and mostly was. However I found a problem: MCE was not showing any music, video or pictures in its libraries, but the recorded TV library was fine. I suspected the issue was that my media is on an external USB3 RAID unit, so there was a chance that on one of the unintended reboots the drives had not spun up in time and MCE had ‘forgotten’ about the external drive.

So I tried to re-add the missing libraries via MCE > Tasks > Settings > Media Libraries. The wizard ran OK allowing me to select the folders on the external disk, but when I got to the end the final dialog closed virtually instantly. I would normally have expected it to count up all the media files as they were found. Also if I went back into the wizard I could not see the folder I had just added.

A bit of searching on the web told me that MCE shares its libraries with Windows Media Player, and there was a good chance they were corrupted. In fact running the Windows Media Player trouble-shooter told me as much. So I deleted the contents of the %LOCALAPPDATA%\Microsoft\Media Player folder as suggested. It had no useful effect on the problem. The only change was that the final dialog in the wizard did now appear to count the media files it found, taking a few minutes before it closed. But the results of the scan were not saved.

So I switched my focus to Media Player (WMP). I quickly saw this was showing the same problems. If I selected WMP > Organise > Manage libraries no dialog was shown for music, video or pictures. However the dialog did appear for Recorded TV which we know was working in MCE.

[Image: WMP Organise > Manage libraries menu]

Also if I selected WMP > Organise > Options… > Rip Music, there was no rip location set, and you could not set it if you pressed the Change button.

[Image: WMP Rip Music options with no rip location set]

The web quickly showed me I was not alone with this problem, as shown in this post and others on the Microsoft forums. It is worth noting that this thread, and the others, do seem to focus on Windows 7 or Vista. Remember I was on a PC that was a new install of Windows 8, upgraded in place to 8.1 via the Windows Store, but I don’t think that was the issue.

Anyway I tried everything I could find in the posts:

  • Restarted services
  • Deleted the WMP databases (again)
  • Uninstalled and re-installed WMP via the Windows Control Panel > Install Products > Windows feature
  • Checked the permissions on the folder containing the media

Everything seemed to point to a missing folder. The threads talked about WMP being set to use a Rip folder that it could not find. As my data was on an external RAID this seemed reasonable. However on checking [HKEY_CURRENT_USER\Software\Microsoft\MediaPlayer\Preferences\HME\LastSharedFolders] there were no paths that could not be resolved.
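
For anyone wanting to script that check rather than eyeball regedit, here is a rough sketch of the idea, assuming the values under that key are plain folder paths (the key name is the one above; everything else is illustrative):

using System;
using System.IO;
using Microsoft.Win32;

// Illustrative sketch (run from a small console app's Main): list the shared
// folder paths WMP has recorded and flag any that no longer exist,
// e.g. folders on an external drive that was offline
using (var key = Registry.CurrentUser.OpenSubKey(
    @"Software\Microsoft\MediaPlayer\Preferences\HME\LastSharedFolders"))
{
    if (key != null)
    {
        foreach (var valueName in key.GetValueNames())
        {
            var path = key.GetValue(valueName) as string;
            if (!string.IsNullOrEmpty(path))
            {
                Console.WriteLine("{0} -> {1}", path, Directory.Exists(path) ? "OK" : "MISSING");
            }
        }
    }
}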

So I decided to have a good look at what was going on under the covers with Sysinternals Procmon, but could see nothing obvious: no missing folders, no failed registry key calls.

In the end the pointer to the actual fix was on page 8 of the thread, from Tim de Baets. Turns out the issue was with the media libraries in C:\Users\<your username>\AppData\Roaming\Microsoft\Windows\Libraries. If I tried to open any of these in Windows Explorer I got an error dialog of the form ‘Music-library-ms’ is no longer working. So I deleted the Pictures, Music and Video library folders in C:\Users\<your username>\AppData\Roaming\Microsoft\Windows\Libraries, which was not a problem as they were all empty.

When I reloaded WMP I could now open the WMP > Organise > Manage libraries dialogs and re-add the folders on my RAID disk; I could also set the Rip folder.

As these settings were shared with MCE my problem was fixed, ready for a Christmas of recording TV, looking at family photos and playing music.

Whether it was the power outages that caused the problem I have my doubts, as power cuts have not been an issue in the past. Maybe it is some strange permission hangover from the Windows 8 > 8.1 upgrade. I doubt I will ever find out.

Getting the domain\user when using versionControl.GetPermissions() in the TFS API

 

If you are using the TFS API to get a list of users who have rights in a given version control folder you need to be careful, as you don’t get back the domain\user name you might expect from the GetPermissions(..) call. You actually get the display name. Now that might be fine for you, but I needed the domain\user format as I was trying to populate a people picker control.

The answer is that you need to make a second call, to the TFS IIdentityManagementService, to get the name in the form you want.

This might not be the best code, but it shows the steps required:

private List<string> GetUserWithAccessToFolder(IIdentityManagementService ims, VersionControlServer versionControl, string path)
{
    var users = new List<string>();

    // Get the permission entries for the version control folder; the entries
    // hold display names, not domain\user names
    var perms = versionControl.GetPermissions(new string[] { path }, RecursionType.None);
    foreach (var perm in perms)
    {
        foreach (var entry in perm.Entries)
        {
            // Look the identity up by display name to get its unique (domain\user) name
            var userIdentity = ims.ReadIdentity(IdentitySearchFactor.DisplayName,
                                                entry.IdentityName,
                                                MembershipQuery.None,
                                                ReadIdentityOptions.IncludeReadFromSource);
            if (userIdentity != null)
            {
                users.Add(userIdentity.UniqueName);
            }
        }
    }

    return users;
}
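
For context, a minimal usage sketch; the collection URL and server path below are placeholders, not real values:

using System;
using Microsoft.TeamFoundation.Client;
using Microsoft.TeamFoundation.Framework.Client;
using Microsoft.TeamFoundation.VersionControl.Client;

// ... then, inside a calling method:
var tpc = new TfsTeamProjectCollection(new Uri("http://myserver:8080/tfs/DefaultCollection"));
var versionControl = tpc.GetService<VersionControlServer>();
var ims = tpc.GetService<IIdentityManagementService>();

// Names come back in the domain\user form needed for a people picker
foreach (var user in GetUserWithAccessToFolder(ims, versionControl, "$/MyTeamProject/Main"))
{
    Console.WriteLine(user);
}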

A hair in the gate

My Arc mouse started behaving strangely today, very jumpy. It felt like the cursor was being pulled left. Turns out the problem was a tiny hair caught in the LED sensor slot.

[Image: hair caught in the mouse sensor slot]

You could see there was a problem as the LED was flashing a lot, when it is normally solidly on if you turn the mouse over and look into the slot.

Once I got it out all was fine again

Fix for 0xc00d36b4 error when playing MP4 videos on a Surface 2

Whilst in the USA last week I bought a Surface 2 tablet. Upon boot it ran around 20 updates, as you expect, but unfortunately one of these seemed to remove its ability to play MP4 videos, giving a 0xc00d36b4 error whenever you try. A bit of a pain as one of the main reasons I wanted a tablet was for watching training videos and PluralSight on the move.

After a bit of fiddling and hunting on the web I found I was not alone, so I added my voice to the thread, and eventually an answer appeared. It seems the Nvidia Audio Enhancements were the problem. I guess they got updated within the first wave of updates.

So the fix, according to the thread, is as follows:

  1. Go to the desktop view on your Surface
  2. Tap and hold the volume icon. 
  3. Select Sounds from the pop-up menu - I only had to go this far, as a dialog appeared asking if I wished to disable audio enhancements (maybe it found they were corrupt)
  4. Go to the playback tab
  5. Highlight the speakers option
  6. Select properties
  7. Go to the enhancements tab
  8. Check the "Disable all enhancements" box
  9. Tap OK.

And videos should now play

Updated 2 Dec 2013: It seems you have to make this change for each audio device, i.e. for speakers AND headphones.

Fixing a WCF authentication schemes configured on the host ('IntegratedWindowsAuthentication') do not allow those configured on the binding 'BasicHttpBinding' ('Anonymous') error

Whilst testing a WCF web service I got the error

The authentication schemes configured on the host ('IntegratedWindowsAuthentication') do not allow those configured on the binding 'BasicHttpBinding' ('Anonymous'). Please ensure that the SecurityMode is set to Transport or TransportCredentialOnly. Additionally, this may be resolved by changing the authentication schemes for this application through the IIS management tool, through the ServiceHost.Authentication.AuthenticationSchemes property, in the application configuration file at the <serviceAuthenticationManager> element, by updating the ClientCredentialType property on the binding, or by adjusting the AuthenticationScheme property on the HttpTransportBindingElement.

Now this sort of made sense, as the web service was meant to be secured using Windows Authentication, so the IIS setting was correct; anonymous authentication was off.

[Image: IIS Authentication settings with Anonymous Authentication disabled]

Turns out the issue was, as you might expect, an incorrect web.config entry

  <system.serviceModel>
    <bindings>
      <basicHttpBinding>
        <binding name="windowsSecured"> <!-- this was the problem -->
          <security mode="TransportCredentialOnly">
            <transport clientCredentialType="Windows" />
          </security>
        </binding>
      </basicHttpBinding>
  </bindings>
    <services>
      <service behaviorConfiguration="CTAppBox.WebService.Service1Behavior" name="CTAppBox.WebService.TfsService">
        <endpoint address="" binding="basicHttpBinding"  contract="CTAppBox.WebService.ITfsService">
          <identity>
            <dns value="localhost"/>
          </identity>
        </endpoint>
        <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange"/>
      </service>
    </services>
    <behaviors>
      <serviceBehaviors>
        <behavior name="CTAppBox.WebService.Service1Behavior">
          <!-- To avoid disclosing metadata information, set the value below to false before deployment -->
          <serviceMetadata httpGetEnabled="true"/>
          <!-- To receive exception details in faults for debugging purposes, set the value below to true.  Set to false before deployment to avoid disclosing exception information -->
          <serviceDebug includeExceptionDetailInFaults="true"/>
        </behavior>
      </serviceBehaviors>
    </behaviors>
  </system.serviceModel>

The problem was that the basicHttpBinding had a named binding, windowsSecured, and no un-named default. When the service endpoint was bound to basicHttpBinding it did not use the named binding, just the defaults (which are not shown in the config file).

The solution was to remove the name="windowsSecured" entry; alternatively we could have referenced the named binding from the service’s endpoint via its bindingConfiguration attribute.
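
For completeness, the same Windows-secured binding can also be built up in code, which side-steps the named/default config issue entirely. A minimal self-hosted sketch, reusing the service and contract names from the config above (the base address is a placeholder):

using System;
using System.ServiceModel;
using CTAppBox.WebService;

// Inside a console host's Main; the base address is a placeholder
var binding = new BasicHttpBinding();
binding.Security.Mode = BasicHttpSecurityMode.TransportCredentialOnly;
binding.Security.Transport.ClientCredentialType = HttpClientCredentialType.Windows;

using (var host = new ServiceHost(typeof(TfsService), new Uri("http://localhost:8000/TfsService")))
{
    // The binding instance is passed explicitly, so there is no named/default mismatch
    host.AddServiceEndpoint(typeof(ITfsService), binding, "");
    host.Open();
    Console.WriteLine("Service running, press Enter to stop");
    Console.ReadLine();
}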

When your TFS Lab test agents can’t start check the DNS

Lab Management has a lot of moving parts, especially if you are using SCVMM based environments. All the parts have to communicate if the system is to work.

One of the most common problems I have seen is due to DNS issues. A slowly propagating DNS can cause chaos, as the test controller will not be able to resolve the names of the dynamically registered lab VMs.

The best fix is to sort out your DNS issues, but that is not always possible (some things just take the time they take, especially on large WANs).

An immediate fix is to use the local hosts file on the test controller to define IP addresses for the lab[guid].corp.domain names created when using network isolation. Once this is done the handshake between the controller and agent is usually possible.

If it isn’t, then you are back to all the usual diagnostics tools.