But it works on my PC!

The random thoughts of Richard Fennell on technology and software development

TF260073 incompatible architecture error when trying to deploy an environment in Lab Manager

I got a TF260073, incompatible architecture error when trying to deploy a new virtual lab environment using a newly created VM and template. I found the fix in a forum post.

The issue was that when I had built the VMs, I had installed the Lab Management agents from the VMPrep DVD ISO, mounting it using the ‘share image instead of copying it’ option. As the name implies, this means the ISO is mounted from a share rather than copied to the server running the VM, which saves time and disk space. When I stored my VM in the SCVMM Library I had left this option selected, i.e. the VMPrep.iso was still mounted. All I had to do to fix the issue was open the settings of the VM stored in the SCVMM Library and dismount the ISO, as shown below.

image

Interestingly, the other VM I was using in my environment was stored as a template and did not suffer this problem. When creating the template I was warned that it could not be created if an ISO was mounted in this manner, so the fact that I had a problem with my VM image should not have been a surprise.

Getting a ‘File Download’ dialog when trying to view TFS build report in Eclipse 3.7 with TEE

When using TEE in Eclipse 3.7 on Ubuntu 11.10 there is a problem trying to view a TFS build report. If you click on the report in the Build Explorer you would expect a new tab to open and the report to be shown. This is what you see in Eclipse on Windows and on older versions of Eclipse on Linux. However, on Ubuntu 11.10 with Eclipse 3.7 you get a File Download dialog.

image

I understand from Microsoft this is a known issue, thanks again to the team for helping get to the bottom of this.

The problem is due to how Eclipse manages its internal web browser. Until version 3.7 it used the Mozilla stack (which is still the stack used internally by TEE for all its calls), but Eclipse 3.7 on Linux now uses WebKit as the stack for opening requested URLs such as the build report. For some reason this causes the dialog to be shown.

There are two workarounds:

Set Eclipse to use an external browser

In Eclipse, Window –> Preferences –> General –> Web Browser, select ‘Use external web browser’

image

 

When you now click on the build details an external browser is launched showing the results you would expect.

image

 

Switch Eclipse back to using Mozilla as its default

You can switch Eclipse back to using Mozilla as its default browser stack. In your eclipse.ini set:

-Dorg.eclipse.swt.browser.DefaultType=mozilla
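If you are not sure where to put this, it needs to go with the other JVM arguments after -vmargs. As a rough sketch (the surrounding entries will vary with your install; only the last line is the addition), the end of eclipse.ini looks something like:

      -vmargs
      -Dosgi.requiredJavaVersion=1.5
      -Xms40m
      -Xmx384m
      -Dorg.eclipse.swt.browser.DefaultType=mozilla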

Once this is done Eclipse should behave as expected, opening a tab to show the build report within Eclipse.

 

image

Problems finding XULRunner when running TEE11 CTP1 on Ubuntu and connecting to TFS Azure – a solution

I recently got round to taking a look at Team Explorer Everywhere 11 CTP1. This is the version of TEE that allows you to access the Azure-hosted preview of the next version of TFS using Eclipse as a client. I decided to start with a clean OS, so I:

  1. Downloaded the Ubuntu 32bit ISO
  2. Used this ISO to create a test VM on my copy of VirtualBox (currently using VirtualBox as it allows me to create 64bit and 32bit guest VMs on my Windows 7 laptop without having to reboot to my dual-boot Windows 2008 partition to access Hyper-V)
  3. Selected the default installation options for Ubuntu
  4. When that completed, used the Ubuntu Software Centre tool to install Eclipse 3.7
  5. Downloaded Team Explorer Everywhere 11 CTP1 and installed the Eclipse plug-in as detailed on the download page.
  6. Once installed, tried to connect to our in-house TFS 2010 server from within Eclipse – it all worked fine

I next tried to connect to my project collection on https://tfspreview.com and this is where I hit a problem….

Instead of getting the expected LiveID login screen I got an error dialog saying ‘No more handles [Could not detect registered XULRunner to use]’.

clip_image002

A quick search showed this is a known issue: basically, Ubuntu has stopped distributing XULRunner, so it needs to be installed manually as detailed in the post. The problem was that, unlike in the post, following this process had no effect on the problem, so it was time for more digging with the excellent assistance of Shaw from the TEE team at Microsoft.

The first suspect was the MOZILLA_FIVE_HOME environment variable which, according to the SWT FAQ, needs to be set to let Eclipse know where to find XULRunner. Checking the Eclipse Help->Team Explorer Support… dialog

clip_image002[5]

seemed to show the correct setting had been picked up automatically, so, as expected, setting the environment variable had no effect on the problem. Just to make sure, I also set the path in the eclipse.ini file using the setting

-Dorg.eclipse.swt.browser.XULRunnerPath=/usr/lib/xulrunner-1.9.2.24

This changed the error message and gave a hint to the real problem.

clip_image002[7]

XULRunner was failing to load as other dependencies were missing.
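If you want to see exactly which libraries are missing, pointing ldd at the main XULRunner library gives a quick (if rough) answer; the path here is the install location mentioned above:

      ldd /usr/lib/xulrunner-1.9.2.24/libxul.so | grep "not found"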

At this point I could have started to chase down all these dependencies. However, I realised the real issue was the Ubuntu distribution of Eclipse: it was simply missing too many of the bits you need to log in to TFS Azure. So I removed the Ubuntu-sourced Eclipse installation and downloaded the current version of Eclipse directly from the Eclipse home site.

  1. I unzipped this distribution
  2. Installed TEE CTP1 as before
  3. Checked I could access our TFS 2010 server
  4. And checked I could log in via http://tfspreview.com

image

So, success. The tip is to use the official Eclipse distribution, as you never know what another distribution might have removed.

Moving Environments between TPCs when using TFS Lab Management

Background

One area of TFS Lab Management that I think people find confusing is that all environments are associated with specific Team Projects (TPs) within Team Project Collections (TPCs). This is not what you might first expect if you think of Lab Management as just a big Hyper-V server. Once configured, you end up with a number of TPC/TP-related silos, as shown in the diagram below.

image

 

This becomes a major issue for us as each TP stores its own environment definitions in its own silo; they cannot be shared between TPs and hence TPCs. So it is hard to re-use environments without recreating them.

This problem affects companies like ours: we have many TPCs because we tend to have one per client, an arrangement that is not uncommon for consultancies.

It is not just in Lab Management that this is an issue for us. The isolated nature of TPCs, a great advantage for client security, has left us with an ever-growing number of Build Controllers and Test Controllers which we are regularly reassigning to whichever TPCs are currently active. Luckily multiple Build Controllers can be run on the same VM (I discussed this unsupported hack here), but unfortunately there is no similar workaround for Test Controllers.

MTM is not your friend when storing environments for use beyond the current TP

What I want to discuss in this post is how, when you have a working environment in one TP, you can get it into another TP with as little fuss as possible.

Naively, you would think you could use the Store in Library option within MTM that is available for a stopped environment.

 image

This does store the environment in the SCVMM Library, but it is only available to the TP it was stored from; it ends up in the A1 silo in the SCVMM Library. Now you might ask why: the SCVMM Library is just a share, so surely anything in it should be available to all? It turns out it is not just a share. It is true the files are on a UNC share, where you can see the stored environments as a number of Lab_[guid] folders, but there is also a database that stores metadata, and this is the problem: that metadata associates the stored environment with a given TP.

The same is true if you store a single VM from within MTM, whether you store it as a VM or as a template.

Why is this important, you might ask? It is all well and good that you can build your environment from VMs and templates in the SCVMM Library, but these will not be fully configured for your needs. You will build the environment, making sure the TFS agents are in place, and maybe putting extra applications, tools or test data on the systems. That is all work you don’t want to have to repeat for what is, in effect, the same environment in another TP or TPC. This is a problem we see all the time: we do SharePoint development, so we want a standard environment (a couple of load-balanced servers and a client) that we can use for many client projects in different TPCs (OK, VM Factory can help, but that is not my point here).

A workaround of sorts

The only way I have found to ease this problem is, once I have a fully configured environment, to clone the key VMs (the servers) into the SCVMM Library using SCVMM, NOT MTM:

  1. Using MTM, stop the environment you wish to work with.
  2. Identify the VM you wish to store; you need its lab name. This can be found in MTM if you connect to the lab and check the system info for the VM

    image
  3. Load the SCVMM admin console, select the Virtual Machines tab and find the correct VM

    image
  4. Right-click on the VM and select Clone
  5. Give the VM a new, meaningful name e.g. ‘Fully configured SP2010 Server’
  6. Accept the hardware configuration (unless you wish to change it for some reason)
  7. IMPORTANT: On the destination tab, select the option to ‘store the virtual machine in the library’. This appears to be the only way to get a VM into the library such that it can be imported into any TPC/TP.

    image
  8. Next select the library share to use
  9. And let the wizard complete.
  10. You should now have a VM in the SCVMM Library that can be imported into new environments.

 

You do, at this point, have to recreate the environment in your new TP, but at least the servers you import into it are already configured. If, for example, you have a pair of SP2010 servers, a DC and an NLB, then as long as you drop them into a new isolated environment they should just leap into life as they did before; you should not have to do any extra reconfiguration.

The same technique could be used for workstation VMs, but it might be as quick to just use templated (sysprep’d) clients. You just need to take a view on this based on your environment requirements.

Debugging CodedUI tests when launching the application under test as a different user

If you are working with CodedUI tests in Visual Studio you sometimes get unexpected results, such as the wrong field being selected during replays. When trying to work out what has happened, the logging features are really useful. These are probably already switched on, but you can check by following the details in this post.
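From memory (and worth verifying against that post before relying on it), the relevant switch is EqtTraceLevel in QTAgent32.exe.config, set under system.diagnostics along these lines:

      <system.diagnostics>
        <switches>
          <!-- 4 = verbose playback logging; lower values log less -->
          <add name="EqtTraceLevel" value="4" />
        </switches>
      </system.diagnostics>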

Assuming you make no logging level changes from the default, if you look in the

      %Temp%\UITestLogs\LastRun

you should see a log file containing warning level messages in the form

Playback - {1} [SUCCESS] SendKeys "^{HOME}" - "[MSAA, VisibleOnly]ControlType='Edit'"

E, 11576, 113, 2011/12/20, 09:55:00.344, 717559875878, QTAgent32.exe, Msaa.GetFocusedElement: could not find accessible object of foreground window

W, 11576, 113, 2011/12/20, 09:55:00.439, 717560081047, QTAgent32.exe, Playback - {2} [SUCCESS] SendKeys "^+{END}" - "[MSAA, VisibleOnly]ControlType='Edit'"

E, 11576, 113, 2011/12/20, 09:55:00.440, 717560081487, QTAgent32.exe, Msaa.GetFocusedElement: could not find accessible object of foreground window

W, 11576, 113, 2011/12/20, 09:55:00.485, 717560179336, QTAgent32.exe, Playback - {3} [SUCCESS] SendKeys "{DELETE}" - "[MSAA, VisibleOnly]ControlType='Edit'"

A common problem with CodedUI tests can be who you are running the test as. It is possible to launch the application under test as a different user using the following call at the start of a test:

     ApplicationUnderTest.Launch(@"c:\my.exe",
                                 @"c:\my.exe",
                                 "",
                                 "username",
                                 securepassword,
                                 "domain");

 

I have found that this launch mechanism can cause problems with fields not being found in the CodedUI test unless you run Visual Studio as administrator (using the right-click ‘Run as administrator’ option in Windows). This is down to who is allowed to access whose UI thread in Windows when a user is not an administrator.

So if you want to use ApplicationUnderTest.Launch to run a CodedUI test against an application started as a different user, it is best if the process running the test is an administrator.
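For completeness, here is a minimal sketch of what the launch looks like inside a test method; the executable path, user name, password and domain are all placeholders, and it assumes the usual CodedUI test project references are in place:

      using System.Security;
      using Microsoft.VisualStudio.TestTools.UITesting;
      using Microsoft.VisualStudio.TestTools.UnitTesting;

      [CodedUITest]
      public class LaunchAsDifferentUserTests
      {
          [TestMethod]
          public void LaunchAppAsTestUser()
          {
              // Build a SecureString for the password (placeholder value)
              var securepassword = new SecureString();
              foreach (var c in "P@ssw0rd!")
              {
                  securepassword.AppendChar(c);
              }

              // Launch the application under test as the given user; remember the
              // process running the test needs to be an administrator for replay to work
              var app = ApplicationUnderTest.Launch(@"c:\my.exe",   // application to launch
                                                    @"c:\my.exe",   // alternate file name
                                                    "",             // command line arguments
                                                    "username",     // placeholder user
                                                    securepassword,
                                                    "domain");      // placeholder domain

              // ... UI actions and assertions against 'app' go here ...
          }
      }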

When you try to run a test in MTM you get a dialog ‘Object reference not set to an instance of an object’

When trying to run a newly created manual test in MTM I got the error dialog

‘You cannot run the selected tests, Object reference not set to an instance of an object’.

image

 

On checking the Windows event log I saw:

Detailed Message: TF30065: An unhandled exception occurred.

Web Request Details Url: http://……/TestManagement/v1.0/TestResultsEx.asmx

So not really that much help in diagnosing the problem!

It turns out the problem was that I had been editing the Test Case work item type. Though it had saved/imported without any errors (it is validated during both these processes), something was wrong with it. I suspect it was to do with filtering the list of users in the ‘assigned to’ field, as this is what I last remember editing, but I might be wrong; it was on a demo TFS instance I had not used for a while.

The solution was to revert the Test Case work item type back to a known good version and recreate the failing test(s). It seems that once a test has been created from the bad definition there is nothing you can do to fix it.

Once this was done MTM ran the tests without any issues.

When I have some time I will do an XML compare of the exported good and bad work item types to see what the problem really was.
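Both the export for that comparison and the re-import of a known good definition are just witadmin calls; a sketch, with the collection URL, project and file names as placeholders:

      witadmin exportwitd /collection:http://tfsserver:8080/tfs/DefaultCollection /p:MyProject /n:"Test Case" /f:TestCase-current.xml
      witadmin importwitd /collection:http://tfsserver:8080/tfs/DefaultCollection /p:MyProject /f:TestCase-known-good.xml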