But it works on my PC!

The random thoughts of Richard Fennell on technology and software development

Error adding a new widget to our BlogEngine.NET 2.8.0.0 server

Background

If you use Twitter on any website you will probably have noticed that they have switched off the 1.0 API; you now have to use the 1.1 version, which is stricter over OAuth. This meant the Twitter feeds into our blog server stopped working on the 10th of June. The old call of

http://api.twitter.com/1/statuses/user_timeline.rss?screen_name=blackmarble

did not work, and simply changing the 1 to 1.1 did not work either.
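For reference, the 1.1 equivalent looks something like the request below. Note that 1.1 dropped the RSS/XML formats (it is JSON only) and requires every request to be signed with OAuth 1.0a, which is why a plain URL dropped into a widget no longer works (the token values shown are placeholders):

```
GET https://api.twitter.com/1.1/statuses/user_timeline.json?screen_name=blackmarble
Authorization: OAuth oauth_consumer_key="...",
               oauth_token="...",
               oauth_signature_method="HMAC-SHA1",
               oauth_signature="..."
```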

So I decided to pull down a different widget for BlogEngine.NET to do the job, choosing Recent Tweets.

The Problem

However, when I tried to access our root/parent blog site and go to the customisation page to add the new widget I got

Ooops! An unexpected error has occurred.

This one's down to me! Please accept my apologies for this - I'll see to it that the developer responsible for this happening is given 20 lashes (but only after he or she has fixed this problem).

Error Details:

Url : http://blogs.blackmarble.co.uk/blogs/admin/Extensions/default.cshtml
Raw Url : /blogs/admin/Extensions/default.cshtml
Message : Exception of type 'System.Web.HttpUnhandledException' was thrown.
Source : System.Web.WebPages
StackTrace : at System.Web.WebPages.WebPageHttpHandler.HandleError(Exception e)
at System.Web.WebPages.WebPageHttpHandler.ProcessRequestInternal(HttpContextBase httpContext)
at System.Web.HttpApplication.CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute()
at System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously)
TargetSite : Boolean HandleError(System.Exception)
Message : Item has already been added. Key in dictionary: 'displayname' Key being added: 'displayname'

Looking at the discussion forums, it seemed others had hit some DB issues.

The Fix

I could not find anybody with exactly the same problem, so I pulled down the source code from CodePlex and had a look at DBBlogProvider.cs (line 2350), where the error was reported. I think the issue is that when a blog site is set to ‘Is for site aggregation’, as our root site where I needed to install the new widget is, the SQL query that generates the user profile list is not filtered by blog, so it sees duplicates.

I disabled ‘Is for site aggregation’ for our root blog and was then able to load the customisation page and add my widget.

Interestingly, I then switched ‘Is for site aggregation’ back on and all was still OK. I assumed the act of opening the customisation page once had fixed the problem.

Update: it turns out this is not the case; after a reboot of my client PC the error returned. It must have been some caching that made it appear to work.

Also worth noting ….

In case you had not seen it (I hadn’t), there is a patch for 2.8.0.0 that fixes a problem where the slug (the URL generated for a post) was not being created correctly, so multiple posts on the same day got grouped as one. This caused search and navigation issues. It is worth installing if you are likely to write more than one post a day on a blog.

Using SYSPREP’d VM images as opposed to Templates in a new TFS 2012 Lab Management Environment

An interesting change with Lab Management 2012 and SCVMM 2012 is that templates become a lot less useful. In the SCVMM 2008 versions you had a choice when you stored VMs in the SCVMM library:

  • You could store a fully configured VM
  • or a generalised template.

When you added the template to a new environment you could enter details such as the machine name, domain to join, product key etc. If you try this with SCVMM 2012 you just see the message ‘These properties cannot be edited from Microsoft Test Manager’.

image

So you are meant to use SCVMM to manage everything about the templates; not great if you want to do everything from MTM. However, is that the only solution?

An alternative is to store a SYSPREP’d VM as a Virtual Machine in the SCVMM library. This VM can be added to an environment as many times as required (though if added more than once you are asked if you are sure).

image

This method does, however, bring problems of its own. When the environment is started, assuming it is network isolated, the second network adaptor is added as expected. However, as there is no agent on the VM it cannot be configured; for a template, Lab Management would normally sort all this out, but because the VM is SYSPREP’d it is left sitting at the mini-setup ‘Pick your region’ screen.

You need to manually configure the VM. The best process I have found is:

  1. Create the environment with your standard VMs and the SYSPREP’d one
  2. Boot the environment; the standard ready-to-use VMs get configured OK
  3. Manually connect to the SYSPREP’d VM and complete the mini setup. You will now have a PC on a workgroup
  4. The PC will have two network adaptors, neither connected to your corporate network; both are connected to the network-isolated virtual LAN. You have a choice:
    • Connect the legacy adaptor to your corporate LAN, to get at a network share via SCVMM
    • Mount the TFS Test Agent ISO
  5. Either way you need to manually install the Test Agent and run the configuration (just select the defaults; it should know where the test controller is). This will configure the network-isolated adaptor onto the 192.168.23.x network
  6. Now you can manually join the isolated domain
  7. Reboot the VM (or the environment) and all should be OK

All a bit long-winded, but it does mean it is easier to build generalised VMs from MTM without having to play around in SCVMM too much.

I think it would all be a good deal easier if the VM had the agents on it before the SYSPREP. I have not tried this yet, but in my opinion that is true of all VMs used for Lab Management: get the agents on as early as you can; it just speeds everything up.

Where did my parameters go when I edited that standard TFS report?

I have been doing some editing of the standard scrum TFS 2012 Sprint Burndown report in SQL 2012 Report Builder. When I ran the report after editing the MDX query in the dsBurndown DataSet to return an extra column I got an error:

  • on a remote PC it just said error with dsBurndown dataset
  • on the server hosting Reporting Services, or in Report Builder, I got a bit more information: it said the TaskName parameter was not defined.

On checking the state of the dataset parameters before and after my edit, I could see that the TaskName parameter had been lost.

image

Manually re-adding it fixed the problem.

Interestingly, which parameters were lost seemed to depend on the MDX query edit I made; I assume something is inferring the parameters from the MDX query.

So it is certainly one to keep an eye on. I suspect this is a feature of Report Builder; maybe I am better off just using trusty Notepad to edit the .RDL file. Oh, how I love editing XML in Notepad!

DHCP does not seem to work on Ubuntu for wireless based Hyper-V virtual switches

If running an Ubuntu guest VM on Windows 8 Hyper-V you have a problem if you want to make use of a wireless network on the host machine. DHCP does not seem to work.

Firstly you have to create a virtual switch in Hyper-V

image

and connect it to your wireless card

image

You can then connect a network adaptor on the Ubuntu guest VM to the new switch.

image

Now, for most operating systems this is all you need to do: the guest VM uses DHCP to pick up an IP address and all is good. However, on Ubuntu 12.04 (and other versions, judging from other posts), if you are using a virtual switch connected to a wireless card, DHCP does not work. The problem appears to lie in the way Windows/Hyper-V does the bridging to the Wifi.

You have to set the network settings manually. You need to track down the correct connection using the MAC address. Remember that, as the system is having network problems, you might need to enable the connection (with the slider at the top right of the dialog, if using the UI) before you can set the options.

image

Once this is all set you should have a working network.
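If you would rather do this outside the UI, the same static settings can be written to /etc/network/interfaces on Ubuntu 12.04. The interface name and addresses below are only examples; substitute the interface you matched by MAC address and values appropriate for your own network:

```
# /etc/network/interfaces – static settings for the Hyper-V adaptor
auto eth0
iface eth0 inet static
    address 192.168.1.50
    netmask 255.255.255.0
    gateway 192.168.1.1
    dns-nameservers 192.168.1.1
```

Then restart networking with sudo /etc/init.d/networking restart (or just reboot the VM) for the settings to take effect.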

Now, some posts suggest that you can avoid this problem by using a ‘legacy network adaptor’ when you create the VM in Hyper-V, but this did not work for me. In fact, even manually setting the IP address did not help on the legacy adaptor.

Lenovo W520 problems with Wifi and Windows 8

My Windows 8 based Lenovo W520 has an Intel Centrino Ultimate-N 6300 Wifi chipset, and it has been giving me problems for a while.

The most usual problem is that if I sleep or hibernate the PC, when I restart it in a different location there is a chance I cannot connect to Wifi networks. I can see them and get a limited connection, but no IP address as DHCP fails. Sometimes using the hardware Wifi switch on the front left of the PC helps; sometimes switching into Windows 8 airplane mode and back out does, but not always. Often I need to restart the PC.

I have also had problems in Lync video calls; for example, on Friday I was having all sorts of problems with a call, which worked for a few minutes then locked. When we swapped to a colleague’s supposedly identical W520 all was OK.

So I think I have a problem. Time for some digging.

So I thought it was worth trying newer drivers. I had the default 15.1.x drivers provided by Windows Update, which are about 12 months old, so I got the latest 15.6.x from Lenovo.

After reading around the subject I also set my Wifi adaptor to not be controlled by power management (right-click the network tray icon, open the Network and Sharing Centre, then change adaptor settings).

image

At first I thought this was helping, but I found that if my PC screen locked due to inactivity I got a blue screen of death when I logged in again, with a watchdog timer error. So I switched the power management back on and this seems to have fixed that problem.

So with the new drivers I now get a new set of behaviours.

  • The PC seems to come out of sleep OK
  • However, whilst streaming BBC iPlayer over the weekend to watch the triathlon (well done Non and Jonny) my Wifi link kept dropping, only getting a limited connection as it reconnected. This happened with two different systems: my Netgear N600 based BT ADSL router, and a Virgin cable connection with one of their Superhubs (also Netgear based) at another house. Interestingly, once I swapped from the 5GHz to the 2.4GHz Wifi on my home N600 ADSL system I had no further problems with disconnects. Maybe a router problem here as opposed to the PC? But I did check: there were no router firmware updates and no errors reported.

It then occurred to me to ask: what was different between my PC and my colleague’s? I had a Hyper-V virtual switch configured to use Wifi. I tried deleting this, but it appeared to have no effect on the 5GHz problem. So maybe a red herring.

So now I think my system is more stable, but only time will tell if it is working well enough. The biggest test will be the Lync based webinar I am doing on DevOps in a couple of weeks.

 

Updated 4 Nov 2013: I have had an offline discussion about this issue, the summary of which is that the problem appears not to be Lenovo’s but Intel’s and/or Netgear’s. It seems the laptop is not listening properly to the instructions it gets, or the router isn’t sending them correctly, and this happens on many brands of kit. In some cases the solution is to right-click on the wireless connection in use, choose Properties, change the encryption from AES to TKIP, wait a few minutes and you are back on the internet; or just restart the Wifi stack.

 

Getting Wix 3.6 MajorUpgrade working

Why is everything so complex to get going with Wix, then so easy in the end when you get the syntax correct?

If you want to allow your MSI installer to upgrade a previous version, there are some things you have to get right if you don’t want the ‘Another version of this product is already installed’ dialog appearing:

  • The Product Id should be set to * so that a new Guid is generated each time the product is rebuilt
  • The Product UpgradeCode should be set to a fixed Guid for all releases
  • The Product Version should increment one of the first three numbers; by default the final (fourth) number is ignored for upgrade checking
  • The Package block should not have an Id set – this will allow it to be auto generated
  • You need to add the MajorUpgrade block to your installer

So you end up with

<Wix xmlns="http://schemas.microsoft.com/wix/2006/wi" xmlns:netfx="http://schemas.microsoft.com/wix/NetFxExtension" xmlns:util="http://schemas.microsoft.com/wix/UtilExtension" xmlns:iis="http://schemas.microsoft.com/wix/IIsExtension">
  <Product Id="*" Name="My App v!(bind.FileVersion.MyExe)" Language="1033" Version="!(bind.FileVersion.MyExe)" Manufacturer="My Company" UpgradeCode="6842fffa-603c-40e9-bedd-91f6990c43ed">
    <Package InstallerVersion="405" Compressed="yes" InstallScope="perMachine" InstallPrivileges="elevated"  />

    <MajorUpgrade DowngradeErrorMessage="A later version of [ProductName] is already installed. Setup will now exit." />

……

So it is simpler than pre-Wix 3.5, but there are still places to trip up.

Problem with CollectionView.CurrentChanged event not being fired in a WPF application

I had an interesting issue with one of our WPF applications that uses MVVM Light.

image

This application is a front end to upload and download folder structures to TFS. On my development PC all was working fine, i.e. when we uploaded a new folder structure to the TFS back end the various combos on the download tab were also updated. However, on another test system they were not.

After a bit of tracing we could see that in both cases the RefreshData method was being called OK, and the CollectionViews were recreated and bound without errors.

private void RefreshData()
{
    this.dataService.GetData(
        (item, error) =>
        {
            if (error != null)
            {
                logger.Error("MainViewModel: Cannot find dataservice");
                return;
            }

            this.ServerUrl = item.ServerUrl.ToString();
            this.TeamProjects = new CollectionView(item.TeamProjects);
            this.Projects = new CollectionView(item.Projects);
        });

    this.TeamProjects.CurrentChanged += new EventHandler(this.TeamProjects_CurrentChanged);

    this.TeamProjects.MoveCurrentToFirst();
}

So what was the problem? To me it was not obvious.

It turned out that on my development system the TeamProjects CollectionView contained two items, but on the test system only one.

This meant that even though we had recreated the CollectionView and rebound the data and events, calling MoveCurrentToFirst (or any other MoveCurrentTo… method for that matter) had no effect, as there was nowhere to move to in a collection of one item. Hence the CurrentChanged event never fired, and this in turn stopped the calling of the methods that repopulated the other combos on the download tab.

The solution was to add the following line at the end of the method, and all was OK:

            this.TeamProjects.Refresh();

Setting up a TFS 2012 proxy in a cross domain system

Today I have been setting up a cross-domain TFS proxy; the developers are in one domain and the TFS server in another. Given there is no trust between these domains, you have to use a trick to get it to work.

So I created a local user tfsproxy.local on both the TFS server and proxy with the same password on each. At the proxy end I made this local user a local admin.

Next I ran the TFS 2012.2 wizard, setting the proxy account to the tfsproxy.local user. It all passed verification, but then I got an error:

TF400371: Failed to add the service account 'TFSPROXY\TFSProxy.local' to Proxy Service Accounts Group. Details: TF14045: The identity with type 'System.Security.Principal.WindowsIdentity' and identifier 'S-1-5-21-4198714966-1643845615-1961851592-1024' could not be found..

It seems this is a known issue with TFS 2012. It is meant to be fixed in TFS 2012.3, so I pulled down the ‘go live’ CTP and installed it on the proxy. It made no difference; I assume it actually needs to be installed on the server end and not just the proxy, as this is where the user lookup occurs. However, I did not have access to do that upgrade today.

I was about to follow the workaround of removing the proxy from the domain, configuring it and then putting it back, but then I had an idea: the step it was failing on was granting rights, so I did it manually. On the TFS server end I added the tfsproxy.local user to the ‘Proxy Service Accounts Group’. Once this was done the configuration completed without error.

A quick test showed the proxy was working as expected.

Accessing TFS work item tags via the API

With TFS 2012.2, Microsoft have added tags to work items. These provide a great way to add custom information to work items without the need to customise the process template to add custom fields. This is important for users of the hosted http://tfs.visualstudio.com, as this does not, at this time, allow any process customisation.

It is easy to add tags to any work item via the TFS web client: just press the Add.. button and either select an existing tag or add a new one. In the following PBI work item example I have added two tags, Tag1 and Tag2.

image

However, the problem with tags, at present, is that they can only be used as filters within the result of a work item query in the web client, as shown below.

image

They are not available inside work item queries and are not published to the TFS warehouse/cube for reporting purposes. Hopefully these limitations will be addressed in the future, but not today.

Given all this, I was recently asked by a client if they could use tags to mark PBI work items scheduled for a given release with a view to using this information to produce release notes. Obviously given the current limitations this cannot be done via work item queries or reporting, but you can use the TFS 2012.2 API to do this easily in .NET or Java.

The tags are stored as a semicolon-separated list in a string field. In C# there is a property in the API to get the tags …

using System;
using Microsoft.TeamFoundation.Client;
using Microsoft.TeamFoundation.WorkItemTracking.Client;
using System.Linq;

namespace BlackMarble
{
    public class TFSDemo
    {
        public static string[] GetTagsForWorkItem(Uri tfsUri, int workItemId)
        {
            // get a reference to the team project collection
            using (var projectCollection = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(tfsUri))
            {
                // get a reference to the work item tracking service
                var workItemStore = projectCollection.GetService<WorkItemStore>();

                // and get the work item
                var wi = workItemStore.GetWorkItem(workItemId);
                return wi.Tags.Split(';');
            }
        }
    }
}

but in Java you have to get the field yourself …

import java.net.URI;
import java.net.URISyntaxException;

import com.microsoft.tfs.core.TFSTeamProjectCollection;
import com.microsoft.tfs.core.clients.workitem.WorkItem;
import com.microsoft.tfs.core.clients.workitem.WorkItemClient;
import com.microsoft.tfs.core.httpclient.Credentials;
import com.microsoft.tfs.core.httpclient.DefaultNTCredentials;

public class TFSDemo {
    
      public static String[] GetTagsForWorkItem(URI tfsUri, int workItemId) 
      {
          // get a reference to the team project collection
          Credentials credentials = new DefaultNTCredentials();
         
          TFSTeamProjectCollection projectCollection = new TFSTeamProjectCollection(tfsUri, credentials);
         
          // get a reference to the work item tracking service
          WorkItemClient wic = projectCollection.getWorkItemClient();
         
          // get the work item and return the tags
          WorkItem wi = wic.getWorkItemByID(workItemId);
         
          // there is no method for the tags, but can pull it out of the fields
          return wi.getFields().getField("Tags").getValue().toString().split(";");
      }

}
        

Given these methods it is possible to write a tool that can select matching work items, allowing you to generate any output you require.
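As a sketch of the selection logic such a tool might use (this helper is hypothetical, not part of the TFS API; it just works on the semicolon-separated tags string returned by the methods above):

```java
public class TagFilter {
    // Returns true if the semicolon-separated tags field contains the given tag.
    // The comparison is case-insensitive and ignores spaces around each tag.
    public static boolean hasTag(String tagsField, String tag) {
        if (tagsField == null || tagsField.length() == 0) {
            return false;
        }
        for (String candidate : tagsField.split(";")) {
            if (candidate.trim().equalsIgnoreCase(tag)) {
                return true;
            }
        }
        return false;
    }
}
```

A release notes tool could then run a flat-list work item query and keep only the items where hasTag returns true for the release tag in question.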

Update 14 May 2013

I have just had it confirmed that at present there is no API to write tags. I had not tried; I only needed a read-only solution. Keep an eye open for future releases of the SDKs for a write method.

Workaround for TF900546: An unexpected error occurred while running the RunTests activity

The problem

I have been working on a project that contains SharePoint 2010 WSP packages and an MSI-distributed WPF application. These projects are all in a single solution with their unit tests. I have been getting a problem with our TFS 2012.2 build system: all the projects compile, but at the test point I get the unhelpful

TF900546: An unexpected error occurred while running the RunTests activity: 'Unable to load one or more of the requested types. Retrieve the LoaderExceptions property for more information.'.

If I remoted onto the build box, loaded the solution in Visual Studio (which was luckily installed on the build box) and tried to run the tests in Test Explorer, I got

image

and the event log showed

Application: vstest.executionengine.x86.exe
Framework Version: v4.0.30319
Description: The process was terminated due to an unhandled exception.
Exception Info: System.InvalidProgramException
Stack:
   at System.ServiceModel.ServiceHost..ctor(System.Object, System.Uri[])
   at Microsoft.VisualStudio.TestPlatform.TestExecutor.TestExecutorMain.Run(System.String[])
   at Microsoft.VisualStudio.TestPlatform.TestExecutor.ServiceMain.Main(System.String[])

and

Faulting application name: vstest.executionengine.x86.exe, version: 11.0.60315.1, time stamp: 0x5142b4b6
Faulting module name: KERNELBASE.dll, version: 6.1.7601.18015, time stamp: 0x50b83c8a
Exception code: 0xe0434352
Fault offset: 0x0000c41f
Faulting process id: 0x700
Faulting application start time: 0x01ce4663824905bd
Faulting application path: C:\PROGRAM FILES (X86)\MICROSOFT VISUAL STUDIO 11.0\COMMON7\IDE\COMMONEXTENSIONS\MICROSOFT\TESTWINDOW\vstest.executionengine.x86.exe
Faulting module path: C:\Windows\syswow64\KERNELBASE.dll
Report Id: c2689d15-b256-11e2-80aa-00155d0a5201

Next I tried creating a simple new test project with one unit test; this failed with the same error.

As all the tests worked locally on my development PC, it was all pointing to a corrupted install of Visual Studio (and/or the components installed as part of TFS Build) on the build box. It should be noted that this was a build box with a good number of additional packages installed to support SharePoint, so patching order could be an issue.

The workaround

Robert, another of our ALM consultants, said he had seen a similar problem at a client and suggested changing the test runner.

So in the build definition > Process > Basic > Automated Tests I edited the test run settings and changed from the default to the MSTest VS2010 runner.

image

Once this was done my tests ran. However, I then got a publishing error:

API restriction: The assembly 'file:///C:\Builds\2\BM\CTAppBox.Main.CI\Binaries\_PublishedWebsites\WebServiceTestClient\bin\WebServiceTestClient.dll' has already loaded from a different location. It cannot be loaded from a new location within the same appdomain.

The problem was that the build was set to the default test search criteria of *test*. This meant it picked up a project it should not have, due to a poor naming convention. As soon as I changed the filter to *.tests all was OK.

I did retry the VS2012 test runner after fixing the naming issue; it had no effect.

I know I need to sort out (rebuild) this build box, but now is not the time; I have a working solution that will do for now.