But it works on my PC!

The random thoughts of Richard Fennell on technology and software development

‘Showing a modal dialog box or form when the application is not running in UserInteractive mode’ error when upgrading the TFS build extensions

Whilst upgrading a TFS 2010 build today to the new 1.2 release of the Community TFS Build Extensions we hit an issue. All seemed to go OK until the build tried to use the StyleCop activity, which failed with the error:

Showing a modal dialog box or form when the application is not running in UserInteractive mode is not a valid operation. Specify the ServiceNotification or DefaultDesktopOnly style to display a notification from a service application.

After a bit of pointless fiddling we decided the only option was to set the build service in question to run interactively (set in the build service properties in the TFS administration console on the build box). Once this was done, the following dialog popped up:


On checking the assemblies copied into the CustomAssemblies folder referenced by the build controller we found we had an older version of this file (from the previous release of the build extensions).

Once we replaced this file we got a bit further: we did not get a dialog, but the build failed with this error in the log:

Error: Could not load file or assembly 'StyleCop, Version=, Culture=neutral, PublicKeyToken=f904653c63bc2738' or one of its dependencies. The system cannot find the file specified.
Stack Trace:
   at TfsBuildExtensions.Activities.CodeQuality.StyleCop.Scan()
   at TfsBuildExtensions.Activities.CodeQuality.StyleCop.InternalExecute() in D:\Projects\teambuild2010contrib\CustomActivities\VS2010\MAIN\Source\Activities.StyleCop\Stylecop.cs:line 134
   at TfsBuildExtensions.Activities.BaseCodeActivity.Execute(CodeActivityContext context) in D:\Projects\teambuild2010contrib\CustomActivities\VS2010\MAIN\Source\Common\BaseCodeActivity.cs:line 67.

The issue was that we had not upgraded the StyleCop assemblies in the CustomAssemblies folder to match the ones this release of the build extensions was built against (it needed 4.6.30; note, not the latest 4.7.x.x). Once we swapped these files for the correct release the build worked.

Note that the file names changed between the 4.4.x.x and 4.6.x.x releases of StyleCop, from Microsoft.StyleCop.*.dll to just StyleCop.*.dll, so make sure you delete the old files in the CustomAssemblies folder to avoid confusion.


So the top tip here is to make sure you update all of the assemblies involved in your build to avoid dependency issues.

TF260073 incompatible architecture error when trying to deploy an environment in Lab Manager

I got a TF260073, incompatible architecture error when trying to deploy a new virtual lab environment using a newly created VM and template. I found the fix in a forum post.

The issue was that when I had built the VMs, I had installed the Lab Management agents from a VMprep DVD ISO mounted using the ‘share image instead of copying it’ option. This, as the name implies, means the ISO is mounted from a share rather than copied to the server running the VM, which saves time and disk resources. When I had stored my VM into the SCVMM Library I had left this option selected, i.e. the VMPrep.iso was still mounted. All I had to do to fix this issue was open the settings of the VM stored in the SCVMM Library and dismount the ISO, as shown below.


Interestingly, the other VM I was using in my environment was stored as a template and did not suffer this problem. When creating the template I was warned that it could not be created if an ISO was mounted in this manner. So the fact that I had a problem with my VM image should not have been a surprise.

Getting a ‘File Download’ dialog when trying to view TFS build report in Eclipse 3.7 with TEE

When using TEE in Eclipse 3.7 on Ubuntu 11.10 there is a problem viewing a TFS build report. If you click on the report in the Build Explorer you would expect a new tab to open and the report to be shown. This is what you see in Eclipse on Windows and in older versions of Eclipse on Linux. However, on Ubuntu 11.10 with Eclipse 3.7 you get a File Download dialog.


I understand from Microsoft this is a known issue, thanks again to the team for helping get to the bottom of this.

The problem is due to how Eclipse manages its internal web browser. Until version 3.7 it used the Mozilla stack (which is still the stack used internally by TEE for all its calls), but Eclipse 3.7 on Linux now uses WebKit as the stack to open requested URLs such as the build report. For some reason this causes the dialog to be shown.

There are two workarounds:

Set Eclipse to use an external browser

In Eclipse, under Window –> Preferences –> General –> Web Browser, select ‘Use external web browser’



When you now click on the build details an external browser is launched showing the results you would expect.



Switch Eclipse back to using Mozilla as its default

You can switch Eclipse back to using Mozilla as its default. In your eclipse.ini set
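The setting itself was lost with the original screenshot; a minimal sketch of the relevant eclipse.ini lines, assuming the SWT default-browser system property documented in the SWT FAQ:

```
-vmargs
-Dorg.eclipse.swt.browser.DefaultType=mozilla
```

Keep any -vmargs lines already present in your eclipse.ini; everything after -vmargs is passed straight to the JVM.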


Once this is done Eclipse should behave as expected, opening a tab to show the build report within Eclipse.



Problems finding XULRunner when running TEE11 CTP1 on Ubuntu and connecting to TFS Azure – a solution

I recently got round to taking a look at Team Explorer Everywhere 11 CTP1. This is the version of TEE that allows you to access the Azure-hosted preview of the next version of TFS using Eclipse as a client. I decided to start with a clean OS, so:

  1. Downloaded the Ubuntu 32bit ISO
  2. Used this ISO to create a test VM in my copy of VirtualBox (I am currently using VirtualBox as it allows me to create 64-bit and 32-bit guest VMs on my Windows 7 laptop without having to reboot into my dual-boot Windows 2008 partition to access Hyper-V)
  3. Selected default installation options for Ubuntu
  4. When completed used the Ubuntu Software Centre tool to install Eclipse 3.7
  5. Downloaded Team Explorer Everywhere 11 CTP1 and installed the Eclipse plug-in as detailed on the download page.
  6. Once installed, I tried to connect to our in-house TFS 2010 server from within Eclipse – it all worked fine

I next tried to connect to my project collection on https://tfspreview.com and this is where I hit a problem….

Instead of getting the expected LiveID login screen I got an error dialog saying ‘No more handles [Could not detect registered XULRunner to use]’


A quick search showed this is a known issue: basically, Ubuntu has stopped distributing XULRunner, so it needs to be installed manually as detailed in the post. The problem was that, unlike in the post, following this process had no effect, so it was time for more digging, with the excellent assistance of Shaw from the TEE team at Microsoft.

The first suspect was the environment variable MOZILLA_FIVE_HOME which, according to the SWT FAQ, needs to be set to let Eclipse know where to find XULRunner. Checking the Eclipse Help->Team Explorer Support… dialog


seemed to show the correct setting had been picked up automatically. So, as expected, setting the environment variable had no effect on the problem. Just to make sure, I also set the variable in the eclipse.ini file using the setting
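The screenshot of the exact setting has not survived; as a hedged illustration only, the SWT FAQ documents a system property for pointing Eclipse at a specific XULRunner, which in eclipse.ini would look something like this (the path is an assumption for a typical Ubuntu install):

```
-vmargs
-Dorg.eclipse.swt.browser.XULRunnerPath=/usr/lib/xulrunner-1.9.2
```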


This changed the error message, and gave the hint to the real problem.


XULRunner was failing to load because other dependencies were missing.

At this point I could have started to chase down all these dependencies. However, I realised the issue was the Ubuntu distribution of Eclipse; it simply had too many of the bits missing that you need to log in to TFS Azure. So I removed the Ubuntu-sourced Eclipse installation and downloaded the current version of Eclipse directly from the Eclipse home site.

  1. I unzipped this distribution
  2. Installed TEE CTP1 as before
  3. Checked I could access our TFS 2010
  4. And checked I could login via http://tfspreview.com


So, success. The tip is to use the official Eclipse distribution, as you never know what another distribution might have removed.

Moving Environments between TPCs when using TFS Lab Management


One area of TFS Lab Management that I think can be confusing is that all environments are associated with specific Team Projects (TP) within Team Project Collections (TPC). This is not what you might expect if you think of Lab Management as just a big Hyper-V server. When configured, you end up with a number of TPC/TP-related silos, as shown in the diagram below.



This becomes a major issue for us as each TP stores its own environment definitions in its own silo; they cannot be shared between TPs and hence TPCs. So it is hard to re-use environments without recreating them.

This problem affects companies like ourselves, as we have many TPCs because we tend to have one per client, an arrangement not that uncommon for consultancies.

It is not just in Lab Management that this is an issue for us. The isolated nature of TPCs, a great advantage for client security, has left us with an ever-growing number of Build Controllers and Test Controllers, which we are regularly reassigning to whichever of our TPCs are active. Luckily multiple Build Controllers can be run on the same VM (I discussed this unsupported hack here), but unfortunately there is no similar workaround for Test Controllers.

MTM is not your friend when storing environments for use beyond the current TP

What I want to discuss in this post is how, when you have a working environment in one TP you can get it into another TP with as little fuss as possible.

Naively you would think that you use the Store in Library option within MTM that is available for a stopped environment.


This does store the environment in the SCVMM Library, but it is only available to the TP it was stored from; it is stored in the A1 silo in the SCVMM Library. Now you might ask why: the SCVMM Library is just a share, so shouldn't anything in it be available to all? It turns out it is not just a share. It is true the files are on a UNC share (you can see the stored environments as a number of Lab_[guid] folders), but there is also a DB that stores metadata, and this is the problem: the metadata associates the stored environment with a given TP.

The same is true if you choose to store just a single VM from within MTM, whether you store it as a VM or a template.

Why is this important, you might ask? Well, it is all well and good that you can build your environment from VMs and templates in the SCVMM Library, but these will not be fully configured for your needs. You will build the environment, making sure TFS agents are in place, maybe putting extra applications, tools or test data on the systems. It is all work you don’t want to have to repeat for what is in effect the same environment in another TP or TPC. This is a problem we see all the time. We do SharePoint development, so we want a standard environment (a couple of load-balanced servers and a client) that we can use for many client projects in different TPCs (OK, VM Factory can help, but that is not my point here).

A workaround of sorts

The only way I have found to ease this problem is, when I have a fully configured environment, to clone the key VMs (the servers) into the SCVMM Library using SCVMM, NOT MTM:

  1. Using MTM stop the environment you wish to work with.
  2. Identify the VM you wish to store; you need its Lab name. This can be found in MTM if you connect to the lab and check the system info for the VM

  3. Load SCVMM admin console, select Virtual Machines tab and find the correct VM

  4. Right click on the VM and select Clone
  5. Give the VM a new meaningful name, e.g. ‘Fully configured SP2010 Server’
  6. Accept the hardware configuration (unless you wish to change it for some reason)
  7. IMPORTANT On the destination tab select the option to ‘store the virtual machine in the library’. This appears to be the only means of getting a VM into the library such that it can be imported into any TPC/TP.

  8. Next select the library share to use
  9. And let the wizard complete.
  10. You should now have a VM in the SCVMM Library that can be imported into new environments.


You do at this point have to recreate the environment in your new TP, but at least the servers you import into this environment are already configured. If, for example, you have a pair of SP2010 servers, a DC and an NLB, then as long as you drop them into a new isolated environment they should just leap into life as they did before. You should not have to do any extra re-configuration.

The same technique could be used for workstation VMs, but it might be as quick to just use templated (sysprep’d) clients. You just need to take a view on this for your environment requirements.

When you try to run a test in MTM you get a dialog ‘Object reference not set to an instance of an object’

When trying to run a newly created manual test in MTM I got the error dialog

‘You cannot run the selected tests, Object reference not set to an instance of an object’.



On checking the windows event log I saw

Detailed Message: TF30065: An unhandled exception occurred.

Web Request Details Url: http://……/TestManagement/v1.0/TestResultsEx.asmx

So not really that much help in diagnosing the problem!

It turns out the problem was that I had been editing the test case work item type. Though it had saved/imported without any errors (it is validated during these processes), something was wrong with it. I suspect it was to do with filtering the list of users in the ‘assigned to’ field, as this is what I last remember editing, but I might be wrong; it was on a demo TFS instance I had not used for a while.

The solution was to revert the test case work item type back to a known good version and recreate the failing test(s). It seems that once a test was created from the bad template there was nothing you could do to fix it.

Once this was done MTM ran the tests without any issues.

When I have some time I will do an XML compare of the exported good and bad work item types to see what the problem really was.

The battle of the Lenovo W520 and projectors

My Lenovo W520 is the best laptop I have owned, but I have had one major issue with it: external projectors. The problem is that it does not like to duplicate the laptop screen output to a projector; it works fine when extending the desktop, just not when duplicating.

Every time I have tried to use it with a projector I have either ended up only showing on the projector and looking over my shoulder, or fiddling for ages until it suddenly works, usually at a low resolution. By then I don’t know what I did to get to this point, so I don’t dare fiddle any more and use it anyway. A bit of a problem given the number of presentations I do. A quick search shows I am not alone with this problem.

The issue it seems is down to the fact the Lenovo has two graphics systems, an integrated (Intel) one and a discrete (Nvidia) one. The drivers in Windows 7 allow it to switch dynamically between the two to save power. This is called Nvidia Optimus Switching.

The answer to the problem is to disable this Optimus feature in the BIOS. This comes at the cost of some battery life, but it is better to have a system that works as I need and has to be plugged in, than one that does not work at most client sites.

So to make the change

  1. Reboot into BIOS (press the ThinkVantage button)
  2. Select the Discrete graphics option (the Nvidia 1000M)
  3. Disable the Optimus features
  4. Save and Reboot
  5. Windows 7 re-detects all the graphics drivers and then all seems OK (so far…)

One more point worth noting: I again fell for the problem that, as my Windows 7 partition is BitLockered, you have to enter your recovery key if you change anything in the BIOS; see my past post for details of how to fix this issue. I was a bit surprised by this as I thought BitLocker would only care about changes to the master boot record, but you live and learn.

Error 0x80070490 when trying to make any purchase on WP7 MarketPlace

Recently I have had a problem with my LG E900 Windows Phone 7 running Mango. Whenever I tried to make a purchase on Marketplace I got the error “There has been a problem completing your request. Try again later” with the error code 0x80070490. A search on the web, asking around everyone I thought might have an answer, and placing a question on Microsoft Answers got me nowhere.

The phone had been working fine until a few days ago. The problem started when I tried to run the Amazon Kindle app (now my primary platform for reading; yes, I decided not to buy an actual Kindle at this point). It failed to start, just kept returning to the phone home page. A power cycle of the phone had no effect. I have seen this before and fixed it with a remove and re-install of the app. However, though the remove was fine, whenever I tried to reinstall I got the 0x80070490 error.

I tried installing another WP7 application (not a reinstall) but I got the same error.

As this is a development phone I was able to try to deploy an app XAP file I created from my PC. This worked without a problem.

I checked my account in Zune, I could login and see the applications I have purchased in the past, so I suspected the issue was corruption of the local catalogue on the phone, but I had no way to prove it.

At this point I was out of ideas, so I did a reset to factory settings on the phone. This was a bit of a pain as my phone is one of the ones from the PDC last year, which Microsoft sourced in Germany. So it was off to Google Translate to help me through enough German screens to set the language to English. But on the plus side I have learnt ‘notruf’ is German for ‘emergency call’.

So I had to

  • Sync with Zune to get my data off the phone
  • Factory reset (Settings|About)
  • Set to English
  • Reinstall the apps I had previously purchased
  • Re-Sync with Zune and put back any music, podcasts etc.
  • Set the APN (Settings|Mobile Network), as with Vodafone UK the phone does not seem to pick this up automatically
  • Set things like ring tones, screen locks
  • And I am sure there are things I will notice I missed over the next few days…..

So this took about 30 minutes to get my phone back to something like my settings. Not a great owner experience, but we repave our PCs regularly to get rid of the accumulated rubbish, so why not our phones?

When you forget to save a word document

We have all done it: opened Word, typed all morning, not bothering to save the file as we go along, and then for some mad reason exited Word saying we did not want to save. So we lose the morning’s work.

Now, we know that Word does an auto-save, but if you are foolish enough to confirm the exit without saving, how do you get the auto-backup file? Does Word even keep a backup if you never saved the file in the first place?

This is just the problem I had recently.

It used to be that to get back an auto-recovery file you would hunt around in the

C:\Users\[user]\AppData\Roaming\Microsoft\Word\

folder (or wherever the auto-recover location was set in the Word options). Hopefully Word would do this for you, but remember Word will not look for these files if it exited without error; it only tries to recover files if it crashed.

What I did not know was that there is a way to hunt for these files via the menus in Word 2010.

  1. In Word click the "File" menu, and select the option for "Recent."

  2. Click the option for "Recover Unsaved Documents."

  3. You should get the following dialog and your file(s) should be listed


Isn’t it amazing how many features there are in products you use every day that you don’t know about? This one saved me a good few hours this week!

My experiences moving to BlogEngine.NET


I have recently moved this blog server from using Community Server 2007 (CS2007) to BlogEngine.NET.

We started blogging in 2004 using .Text, moving through the free early versions of Community Server, then purchased Community Server Small Business edition in 2007. This cost a few hundred pounds. We recently decided that we had to bring this service up to date, if for no other reason than to bring the underlying ASP.NET system up to date. We checked how much it would cost to move Community Server to the current version and were shocked by the cost: many thousands of dollars. Telligent, the developers, have moved to only servicing enterprise customers; they have no small business offering. So we needed to find a new platform.

Being a SharePoint house, we considered SharePoint as the blog host. However, we have always had the policy that systems with external content creation (i.e. where anyone can post a comment) should not be on our primary business servers. As we did not want to install a dedicated SharePoint farm just for the blogs, we decided to use another platform, remembering we needed one that could support multiple blogs that we could aggregate to provide a BM-Bloggers shared service.

We looked at what appears to be the market leader, WordPress, but to host this we needed a MySQL DB, which we did not want to install; we don’t need another DB technology on our LAN to support. So we settled on BlogEngine.NET, the open source .NET 4 blogging platform that can use many different storage technologies; we chose SQL Server 2008 to use our existing SQL Server investment.


So we did a default install of BlogEngine.NET. We did it manually as I knew we were going to use a custom build of the code, but we could have used the Web Platform Installer.

We customised a blog as a template and then used this to create all the child blogs we needed. If we were not bringing over old content we would have been finished here. It really would have been quick and simple.

Content Migration

To migrate our data we used BlogML. This allowed us to export CS2007 content as XML files which we then imported to BlogEngine.NET.

BlogEngine.NET provides support for BlogML out of the box, but we had to install a plug-in for CS2007.

This was all fairly straightforward; we exported each blog and imported it to the new platform, but as you would expect we did find a few issues.

Fixing Image Paths (Do this prior to import)

The images within blog posts are hard-coded as URLs in the export file. If you copied the image files (which are stored on the blog platform) from the old platform to the new server at matching URLs, then there should be no problems.

However, I decided I wanted images in the location they are meant to be in, i.e. the [blog]\files folder, using BlogEngine.NET’s image.axd handler to load them. It was easiest to fix these in the BlogML XML file prior to importing it. The basic edit was to change the old hard-coded image URLs into image.axd references.




I did these edits with simple find and replace in a text editor, but you could use regular expressions.
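As a sketch of the regular-expression route, here is how the rewrite could be scripted in Python. Both the old URL prefix and the target image.axd form are assumptions for illustration; adjust them to match your own export:

```python
import re

# Hypothetical old blog host -- adjust this pattern to match your own export.
OLD_IMAGE_URL = re.compile(
    r'http://blogs\.example\.com/blogs/rfennell/([\w.-]+\.(?:png|jpg|gif))')

def fix_image_urls(blogml_xml: str) -> str:
    """Rewrite hard-coded image URLs as image.axd references (the target
    form /image.axd?picture=<file> is also an assumption)."""
    return OLD_IMAGE_URL.sub(r'/image.axd?picture=\1', blogml_xml)
```

Run over the whole exported BlogML file before importing it, exactly as the find-and-replace in a text editor was.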

Remember also that the images need to be copied from the old server (…\blogs\rfennell\image_file.png) to the new server (…\App_Data\blogs\rfennell\files\image_file.png).

We also had posts written with older versions of LiveWriter. This placed images in a folder structure (e.g. ..\blogs\rfennell\livewriter\postsname\image_file.png). We also needed to move these to the new platform and fix the paths appropriately.

Post Ownership

All the imported posts were shown as having an owner ID rather than the author’s name, e.g. 2103 as opposed to Richard. The simplest fix for this was a SQL update after import, e.g.

update [BlogEngine].[dbo].[be_Posts] set [Author] = 'Richard' where [Author]='2103'

The name set should match the name of a user account created on the blog

Comment Ownership

Due to the issues over spam we had forced all users to register on CS2007 to post a comment. These external accounts were not pulled over in the export. However, BlogEngine.NET did not seem that bothered by this.

However, no icons for these users were shown.


These icons should be rendered via websnapr.com as an image of the commenter’s homepage, but this was failing. This, it turned out, was due to their recent API changes: you now need to pass a key. As an immediate solution I just removed the code that calls websnapr so the default noavatar.jpg image is shown. I intend to revisit this when the next release of BlogEngine.NET appears, as I am sure it will have a solution to the websnapr API change.

There was also a problem with many of the comment author hyperlinks: they all seemed to be just http://. To fix the worst of this I ran a SQL query.

update be_PostComment set author = 'Anon' where Author = 'http://'

I am sure I could have done a better job with a bit more SQL, but our blog has few comments so I felt I could get away with this basic fix.


CS2007 displays tag clouds that are based on categories. BlogEngine.NET does the more obvious thing and uses categories as categories and tags as tags.

To allow BlogEngine.NET to show tag clouds, the following SQL can be used to duplicate categories as tags:

insert into be_PostTag (BlogID, PostID, Tag)
select be_PostCategory.BlogID, PostID, CategoryName
from be_PostCategory, be_Categories
where be_PostCategory.CategoryID = be_Categories.CategoryID
  and be_PostCategory.BlogID = '[a guid from be_blogs table]'

A workaround for what could not be exported

Where we had a major problem was with the posts made to the original .Text site that was upgraded to Community Server; these were posts from 2004 to 2007.

Unlike all the other blogs, these posts would not export via the CS BlogML exporter; we just got a zero-byte XML file. I suspect some flag/property was missing on these posts, so the CS2007 internal API was having problems, throwing an internal exception and stopping.

To get around this I had to use the BlogML SDK and some raw SQL queries into the CS2007 database. There was a good bit of trial and error here, but by looking at the source of the BlogML CS2007 exporter and swapping API calls for my best guess at the SQL, I got the posts and comments out. It was a bit rough, but am I really that worried about posts that are over five years old?
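As a rough illustration of the approach (sketched in Python rather than the C# BlogML SDK code actually used, ignoring BlogML's XML namespaces, and with an entirely hypothetical row shape; the real CS2007 schema differs), the idea is just to turn the rows from the raw SQL into BlogML-shaped post elements:

```python
import xml.etree.ElementTree as ET

def rows_to_blogml(rows):
    """Build a minimal, namespace-free BlogML-style document from
    hypothetical (id, title, body, date) tuples pulled out with raw SQL."""
    root = ET.Element("blog")
    posts = ET.SubElement(root, "posts")
    for post_id, title, body, date in rows:
        post = ET.SubElement(posts, "post",
                             {"id": str(post_id), "date-created": date})
        ET.SubElement(post, "title").text = title
        # Post bodies are HTML; ElementTree escapes them as text content.
        ET.SubElement(post, "content").text = body
    return ET.tostring(root, encoding="unicode")
```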

Blog Relationships

Parent/Child Relationship

When a child blog is created, an existing blog is copied as a template. This includes all its pages, posts and users. For this reason it is a really good idea to keep a ‘clean’ template blog that has as many of the settings correct as possible, so that when a new child blog is created you basically only have to create new user accounts and set its name/template.

Remember, no user accounts are shared between blogs, so the admin on the parent is not the admin on the child; each blog has its own users.

Content Aggregation

A major problem for Black Marble was the lack of aggregation of child blogs. At present BlogEngine.NET allows child blogs, but has no built-in way to roll up the content to the parent. This is a feature I understand the developers plan to add in a future release.

To get around this problem, I looked to see if it was easy to modify the FillPosts methods to return all posts irrespective of the blog. This would, in my opinion, have taken too much hacking/editing due to the reliance on the current context to refer to the current blog, so I decided on a more simplistic fix:

  1. I created a custom template for the parent site that removes all the page/post lists and menu options
  2. Replaced the link to the existing syndication.axd with a hand-crafted syndication.ashx
  3. Added the Rssdotnet.com open source project to the solution and used this to aggregate the RSS feeds of each child blog in the syndication.ashx page

This solution will be reviewed at each new release of BlogEngine.NET in case it is no longer required.
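To illustrate the aggregation idea (sketched here in Python for brevity; the real handler was a C# syndication.ashx using the Rssdotnet.com library), the core of it is just merging the item elements of each child blog's RSS feed into one list, newest first:

```python
from email.utils import parsedate_to_datetime
from xml.etree import ElementTree as ET

def aggregate_items(feeds):
    """Given raw RSS 2.0 XML strings, one per child blog, return all
    their <item> elements sorted newest first by pubDate."""
    items = []
    for xml in feeds:
        items.extend(ET.fromstring(xml).findall("./channel/item"))
    items.sort(key=lambda i: parsedate_to_datetime(i.findtext("pubDate")),
               reverse=True)
    return items
```

A real handler would then re-serialise the merged items into a single RSS document to return to the caller.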


So how was the process? Not as bad as I expected; frankly, other than our pre-2007 content, it all moved without any major issues.

It is a good feeling to now be on a platform we can modify as we need, but which has the backing of an active community.