But it works on my PC!

The random thoughts of Richard Fennell on technology and software development

Did I delete the right lab?

It was bound to happen in the end: the wrong environment got deleted on our TFS Lab Management instance. The usual mix of rushing, minor mistakes, misunderstandings and not reading the final dialog properly, and BANG, you get that sinking feeling as you watch the wrong set of VMs being deleted. Well, this happened yesterday, so was there anything that could be done? Luckily the answer is yes, if you are quick.

Firstly, we knew SCVMM operations are slow, so I RDP'd onto the Hyper-V host and quickly copied the folders that contained the VMs scheduled for deletion. We now had a copy of the VHDs.

On the SCVMM host I cancelled the delete jobs. It turns out this did not really help, as the jobs just get rescheduled. In fact it may make matters worse: the failing and restarting of jobs seems to confuse SCVMM. It took hours before it was happy again, kept giving 'can't run job as XXX in use' errors and kept losing sight of the Hyper-V hosts (I needed to restart the VMM service in the end).

So I now had a copy of three network isolated VMs, and from there I:

  • Created new VMs on a Hyper-V host using Hyper-V Manager, with the saved VHDs as their disks, then made sure they ran and were not corrupted
  • In SCVMM, cleared down the saved state so they were stopped (I forgot to do this the first time I went through this process, which meant I could not deploy the stored VMs into an isolated environment. That wasted hours!)
  • In SCVMM, put them into the library on a path our Lab Management server knows about (the gotcha here is that SCVMM deletes the VM after putting it into the library; this is unlike MTM Lab Center, which leaves the original in place. It always scares me when I forget)
  • In MTM Lab Center, imported the new VMs from the library
  • Created a new network isolated environment with the VMs
  • Waited……………………….

When it eventually started I had a network isolated environment back in the state it was in when we, in effect, pulled the power out. It all took about 24 hours, but most of that was waiting for copies to and from the library to complete.

So the top tip is to try to avoid the problem in the first place; frankly this comes down to process:

  • Use the 'mark as in use' feature to record who is using a VM
  • Put a process in place to manage the lab resources. It does not matter how much Hyper-V resource you have, you will run out in the end and be unable to add that extra VM. You need a way to delete/archive out what is not currently needed
  • Read the confirmation dialogs, they are there for a reason

Why does ‘Send to > email link’ in SharePoint open Chrome on my PC?

I must have clicked something in error on my Windows 8 PC, because when I open one of our SharePoint 2010 sites, select a file, right click and choose Send To > Email Link, instead of an Outlook email opening my PC tries to open Chrome.

A bit of quick digging showed the issue was that the file association for mailto: was wrong. You can check this setting in IE > Internet Options > Programs > Internet programs > Set programs (button).

image

Once I changed this to Outlook I got the behaviour I expected
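
As an aside, if you want to double check which handler is registered without clicking through the IE dialogs, the per-user protocol choice should be visible in the registry (I am assuming here that Windows 8 stores it in the usual UrlAssociations location):

  reg query HKCU\Software\Microsoft\Windows\Shell\Associations\UrlAssociations\mailto\UserChoice

The ProgId value it returns tells you which application (Outlook, Chrome etc.) currently owns mailto: for your account.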

TF900546, can’t run Windows 8 App Store unit tests in a TFS build

Today has been a day of purging build system problems. On my TFS 2012 Windows 8 build box I was getting the following error when trying to run Windows 8 App Store unit tests

TF900546: An unexpected error occurred while running the RunTests activity: 'Unable to load one or more of the requested types. Retrieve the LoaderExceptions property for more information.'.

On further investigation, I am not really sure anything was working too well on this box. To give a bit of background

  • I have one build controller, build2012
  • It has a number of build agents spread across various VMs; I use tags to target the correct agent e.g. SUR40 or WIN8

In the case of Windows 8 builds (where the TFS build agent has to run on a Windows 8 box) the build seemed to run, but the tests failed with the TF900546 'it's broken, but I am not saying why' error. As usual there was nothing in the logs to help.

To try to debug the error I added a build controller to this box and eventually noticed, just like Martin did in his post, and after far too long, that I was getting an error on the build service on the Windows 8 box and that the agent was not fully online.

image

The main symptom is that the build agent says 'ready' but shows a red box (stopped). If you hit the details link that appears you get the error dialog. Martin had a 500 error; I was getting a 404. I had seen similar problems before; I really should read (or at least remember) my own blog posts.

I can't stress this enough: if you don't see a green icon on your build controllers and agents you have a problem. It might not be obvious at that point, but it will bite you later!

For me the fix was the URL I was using to connect to the TFS server. I was using HTTPS (SSL); as soon as I switched to HTTP all was OK. In this case that was fine, as both the TFS server and the build box were in the same rack, so SSL was not really needed. I suspect that the solution, if I had wanted SSL, would have been as Martin outlined: a config file edit to sort out the bindings.

But remember….

That having a working build system is not enough for Windows 8 App Store unit tests. You also have to manually install the application certificate for the test assembly, as detailed on MSDN, as well as get the build service running in interactive mode.

Once this was done my application build and the tests ran OK

More thoughts on addressing TF900546 ‘Unable to load one or more of the requested types’ on TFS2012

A while ago I posted about seeing the TF900546 error when running unit tests in a previously working TFS 2012 build. The full error being:

TF900546: An unexpected error occurred while running the RunTests activity: 'Unable to load one or more of the requested types. Retrieve the LoaderExceptions property for more information.'.

Well, late last week this problem came back with a vengeance on a number of builds run on the same build controller/agent(s). Irritatingly, I first noticed it after a major refactor of a codebase, so I had plenty of potential root causes, as assemblies had been renamed and it was possible they might not be found. However, after a bit of testing there were no obvious candidates: all tests worked fine locally on my development PC, and a new, very simple test application showed the same issues. It was definitely an issue on the build system.

I can still find no good way to debug this error. StackOverflow mentions Fuslogvw and WinDbg, as well as various 'copy local' settings and the like. Again, this all seems too much, as the build was working in the past and just seemed to stop. I tried a couple of the suggestions but got no real information, and the error logs were empty.

In the end I just tried what I did before (as I could think of no better tactic to pin down the true issue). I went into the build controller configuration, removed the reference to the custom assemblies, saved the settings (causing a controller restart), then put the reference back (another restart of the controller).

image

After this my tests started working again, with no other changes.

Interestingly, a restart of the VM running the build controller did not fix the problem. However, this does somewhat chime with comments in the StackOverflow thread that forcing the AppPool in MVC apps to rebuild completely, ignoring any cached assemblies, seems to fix the issue.
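
If this workaround is needed regularly, the same nudge can probably be scripted against the TFS build client API rather than clicking through the dialog each time. This is only a sketch of the idea; the collection URL is assumed, the controller name is the one from above, and I have not verified that saving via the API has exactly the same side effect as saving the settings in the UI.

  // Sketch: clear and re-set the custom assembly path on a build controller,
  // saving after each change so the controller restarts and reloads assemblies.
  using System;
  using Microsoft.TeamFoundation.Build.Client;
  using Microsoft.TeamFoundation.Client;

  class ResetCustomAssemblyPath
  {
      static void Main()
      {
          var tfs = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(
              new Uri("http://build2012:8080/tfs/DefaultCollection")); // assumed URL
          var buildServer = tfs.GetService<IBuildServer>();

          var controller = buildServer.GetBuildController("build2012");
          var originalPath = controller.CustomAssemblyPath;

          controller.CustomAssemblyPath = null;          // remove the reference...
          controller.Save();                             // ...forcing a controller restart

          controller.CustomAssemblyPath = originalPath;  // put it back
          controller.Save();                             // and restart again
      }
  }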

Change in the System.AssignedTo in TFS SOAP alerts with TFS 2012

Ewald's post explains how to create a WCF web service to act as an end point for TFS alerts. I have been using this model with TFS 2010 to check for work item changed events, using the work item's System.AssignedTo field to retrieve the owner of the work item (via the TFS API) so I can send an email, as well as perform other tasks (I know I could just send the email with a standard alert).

In TFS 2010 this worked fine: if the work item was assigned to me I got back the name richard, which I could use as the 'to' address for the email by appending our domain name.

When I moved this WCF event receiver onto a TFS 2012 server (using the TFS 2012 API) I had not expected any problems, but the emails did not arrive. On checking my logging I saw they were being sent to fennell@blackmarble.co.uk. It turns out the issue was that the API call

value = this.workItem.Fields["System.AssignedTo"].Value.ToString();

was returning the display name ‘Richard Fennell’, which was not a valid part of the email address.

The best solution I found, thus far, was to check whether the value was a display name in AD, using the method I found on StackOverflow. If I got a user name back I used that; if I got an empty string (because I had been passed something that was not a display name) I just used the initial value, assuming it was a valid address.
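
For reference, the kind of AD lookup involved is sketched below. This is not the exact StackOverflow code, just a minimal illustration of the approach using System.DirectoryServices.AccountManagement; the class and method names are mine.

  // Sketch: resolve a display name such as 'Richard Fennell' to a SAM account
  // name via Active Directory. Returns an empty string if no match is found,
  // so the caller can fall back to the value it was originally given.
  using System.DirectoryServices.AccountManagement;

  static class AdNameResolver
  {
      public static string GetAccountNameFromDisplayName(string displayName)
      {
          using (var context = new PrincipalContext(ContextType.Domain))
          using (var filter = new UserPrincipal(context) { DisplayName = displayName })
          using (var searcher = new PrincipalSearcher(filter))
          {
              var user = searcher.FindOne() as UserPrincipal;
              return user == null ? string.Empty : user.SamAccountName;
          }
      }
  }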

It seems to work, but is there an easier solution?

Cannot run coded ui test on a TFS lab due to lack of rights to the drops folder

Whilst setting up a TFS 2012 Lab Management environment for Coded UI testing we hit the problem that none of the tests were running; in fact we could see that no tests were even being passed to the agent in the lab.

image

On the build report I clicked on the 'View Test Results' link, which loaded the results in Microsoft Test Manager (MTM)

image

and viewed the test run log, and we saw

image

The issue, it claimed, was that the build controller did not have rights to access the drop folder containing the assembly with the Coded UI tests.

Initially I thought the issue was that the test controller was running as 'local service', so I changed it to the domain\tfsbuild account (which obviously has rights to the drops folder, as it put the files there), but I still got the same error. I was confused.

So I checked the event log on the build controller and found the following

image

The problem was my tfslab account, not the local service or tfsbuild one, so the message shown in the build report was just confusing, mentioning the wrong user. The lab account is the one configured in the test controller (yes, you have to ask how I had missed that, given I had been into the same tool to change the user the test controller ran as!)

image

As soon as I granted this tfslab user rights to the drops folder all was OK
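
For completeness, granting the rights was just a normal folder permission change. Something along these lines would do it (the share path and domain here are made up for illustration, and the share-level permissions obviously need to allow access too):

  icacls \\store\drops /grant "DOMAIN\tfslab:(OI)(CI)R"

Read access should be enough for the test controller to pick up the assemblies from the drop.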

Using IISExpress for addresses other than localhost

I recently had the need to use IISExpress on Windows 8 to provide a demo server to a number of Surface RT clients. I found this took longer than I expected; it might be me, but the documentation did not leap out.

So, as a summary, this is what I had to do. Let us say, for example, that I want to serve out http://mypc:1234

  • Make sure you have a project MyWebProject in Visual Studio that works for http://localhost:1234 using IISExpress
  • Open TCP port 1234 on the PC in Control Panel > Admin Tools > Firewall
  • Edit C:\Users\[current user]\Documents\IISExpress\config\applicationhost.config and find the site section named for your Visual Studio project, then change <binding protocol="http" bindingInformation="*:1234:localhost" /> to <binding protocol="http" bindingInformation="*:1234:*" />. This means IISExpress can now listen on this port for any IP address
  • You finally need to run IISExpress with administrative privileges. I did this by opening a PowerShell prompt with administrative privileges and running the command & "C:\Program Files\IIS Express\iisexpress.exe" /site:MyWebProject

Once all this was done my client PCs could connect
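
As a footnote, running IISExpress elevated is not the only option. Reserving the URL up front (once, from an elevated prompt) should let IISExpress bind to it as a normal user afterwards; I did not try this here, so treat it as an untested alternative:

  netsh http add urlacl url=http://mypc:1234/ user=Everyone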

403 and 413 errors when publishing to a local Nuget Server

We have an internal Nuget Server we use to manage our software packages. As part of our upgrade to TFS2012 this needed to be moved to a new server VM and I took the chance to upgrade it from 1.7 to 2.1.

The problem

Now we had had a problem that we could publish to the server via a file copy to its underlying Packages folder (a UNC share) but could never publish using the Nuget command e.g.

Nuget push mypackage.nupkg -s http://mynugetserver

I had never had the time to get around to sorting this out until now.

The reported error if I used the URL above was

Failed to process request. 'Access denied for package 'Mypackage.'.
The remote server returned an error: (403) Forbidden..

If I changed the URL to

Nuget push mypackage.nupkg -s http://mynugetserver/nuget

The error became

Failed to process request. 'Request Entity Too Large'.
The remote server returned an error: (413) Request Entity Too Large..

Important: This second error was a red herring; you don't need the /nuget on the end of the URL

The solution

The solution was actually simple, and in the documentation, though it took me a while to find it.

I had not specified an APIKey in the web.config on my server; obvious really, my access was blocked as I did not have the shared key. The 413 errors just caused me to waste loads of time looking at WCF packet sizes, because I had convinced myself I needed to use the same URL as you enter in Visual Studio > Tools > Options > Package Management > Add Source, which you don't.

Once I had edited my web.config file to add the key (or I could have switched off the requirement as an alternative solution)

  <appSettings>
    <!--
            Determines if an Api Key is required to push\delete packages from the server.
    -->
    <add key="requireApiKey" value="true" />
    <!--
            Set the value here to allow people to push/delete packages from the server.
            NOTE: This is a shared key (password) for all users.
    -->
    <add key="apiKey" value="myapikey" />
    <!--
            Change the path to the packages folder. Default is ~/Packages.
            This can be a virtual or physical path.
        -->
    <add key="packagesPath" value="" />
  </appSettings>

I could then publish using

Nuget push mypackage.nupkg myapikey -s http://mynugetserver
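
As a small follow-up, NuGet.exe can also remember the key per feed so you don't have to pass it on every push (assuming the version of NuGet.exe in use supports the setApiKey command):

Nuget setApiKey myapikey -Source http://mynugetserver
Nuget push mypackage.nupkg -s http://mynugetserver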

Problems re-publishing an Access site to SharePoint 2010

After applying SP1 to a SharePoint 2010 farm we found we were unable to run any macros in an Access Services site; it gave a -4002 error. We had seen this error in the past, but the solutions that worked then did not help. As this site was critical we moved it, as a workaround, to a non-patched SP2010 instance. This was done via a quick site collection backup and restore, which allowed us to dig into the problem at our leisure.

Eventually we fixed the problem by deleting and recreating the Access Services application within SharePoint on the patched farm. We assume some property was changed/corrupted/deleted in the application of the service pack.

So we now had a working patched farm, but also a duplicate of the Access Services site with changed data. We could not just back up and restore this, as other sites in the collection had also changed. It turns out getting this data back onto the production farm took a bit of work, more than we expected. This is the process we used:

  1. Open the Access Services site in a browser on the duplicate server
  2. Select the open in Access option; we used Access 2010, which the site had originally been created in
  3. When Access had opened the site, use the 'save as' option to save a local copy of the DB. We now had a disconnected local copy on a PC. We thought we could just re-publish this; how wrong we were.
  4. We ran the web compatibility checker expecting no errors, but it reported a couple. In one form and one query, extra column references had been added that referenced the auto-created SharePoint library columns (date and id stamps basically). These had to be deleted by hand.
  5. We could then publish back to the production server
  6. We watched as the structure and data were published
  7. Then it errored. On checking the log we saw that it claimed a lookup reference had invalid data (though we could not see the offending rows and it was working OK). Luckily the table in question contained temporary data we could just delete, so we tried to publish again
  8. Then it errored again. On checking the logs we saw it reported that it could not copy to http://127.0.0.1 (no idea why it was looking for localhost!). Interestingly, if we tried to publish back to another site URL on the non-patched server it worked! Very strange
  9. On a whim I repeated the whole process using Access 2013 RC and, strangely, it worked

So I now had my Access Services site re-published and fully working on a patched farm. That was all a bit too complex for my tastes

Moving to an SSD on Lenovo W520

[Also see http://blogs.blackmarble.co.uk/blogs/rfennell/post/2013/01/22/More-on-HDD2-boot-problems-with-my-Crucial-M4-mSATA.aspx]

I have just reinstalled Windows 8 (again) on my Lenovo W520. This time it was because I moved to a Crucial m4 256Gb 2.5” internal SSD as my primary disk. There is a special slot for this type of drive under the keyboard, so I could also keep my 750Gb Hybrid SATA drive to be used for storing virtual machines.

I had initially planned to backup/restore my previous installation using IMAGEX, as I had all I needed in my PC, but after two days of fiddling I had got nowhere. The problems were:

  • The IMAGEX capture from my hybrid drive to an external disk (only 150Gb of data after I had moved out all my virtual PCs) took well over 10 hours. I thought this was due to using an old USB1 (maybe it was USB2 at a push) disk caddy, but it was just as slow over eSATA. The restore from the same hardware only took an hour or so. One suggestion made, which I did not try, was to enable compression in the image, as this would reduce the bandwidth needed on the disk connection; it is not as if my i7 CPU could not handle the compression load.
  • When the images were restored, we had to fiddle with Bcdedit to get the PC to boot
  • Eventually the Windows 8 SSD-based image came up; you could open the login page with no issues, but got no cursor for a long time and it was sloooow to do anything. I have no idea why.

So in the end I gave up and installed anew; including Visual Studio and Office it took about 30-45 minutes. There were still a couple of gotchas though

  1. I still had to enable the Nvidia Optimus graphics mode in the BIOS, thus enabling both the Intel and Nvidia graphics subsystems. I usually run only on the discrete Nvidia one, as this does not get confused by projectors. If you don't enable the Intel-based one then the Windows 8 install hangs after installing drivers and before the reboot that then allows you to log in and choose colours etc. As soon as this stage is passed you can switch back to discrete graphics as you wish. I assume the Windows 8 media is missing some Nvidia bits that it finds after this first reboot or via Windows Update.
  2. Windows 8 is still missing a couple of drivers, for the Ricoh card reader and power management, but these are both available on the http://support.lenovo.com/en_US/ site. You do have to download and install these manually. All the other Lenovo drivers (including the updated audio driver I have mentioned before) come down from Windows Update.

So the moral of the story is: reinstall, don't try to move disk images. Make sure your data is in SkyDrive, Dropbox, SharePoint, source control etc., so it is just applications you are missing, and those are quick to sort out. The only painful job I had was sorting out my podcasts, but even that was not too bad