But it works on my PC!

The random thoughts of Richard Fennell on technology and software development

When trying to load an Office document from SharePoint I keep ending up in the Office Web Application

Whenever I tried to load an Office 2013 document from our SharePoint 2010 instance I kept ending up in the Office Web Application; the Office desktop application was never launched.

If I tried to use the ‘Open in Word’ context menu I got the following error (and before you ask, yes I was in IE, IE11 in fact, and Office 2013 was installed).

image

My PC had been built from our standard System Center managed image, and others using the same base image seemed OK, so what had gone wrong for me?

The launching of the Office applications is managed by the ‘SharePoint OpenDocument Class’ IE add-in (IE > Settings > Manage add-ons). On my PC this add-in was missing altogether; I don’t know why.

image

The fix, it turns out, was to go into Control Panel > Add/Remove Programs > Office 2013 > Change, run a repair, and then reboot. Once this was done Office launched as expected.

Fix for ‘An unexpected error occurred. Close the windows and try again’ error adding Azure subscription to Visual Studio Release Management Tools

In preparation for my Techdays session next month, I have been sorting demos using the various Release Management clients.

When I tried to create a release from within Visual Studio using the ‘Release Management tools for Visual Studio’ I found I could not add my Azure subscriptions. I saw the error ‘An unexpected error occurred. Close the windows and try again’.

image

I could download and import the subscription file, and it showed the available storage accounts, but when I pressed save I got the rather unhelpful error ‘Object reference not set to an instance of an object’.

image

It turned out the issue was a simple one: rights. The LiveID I had signed into Visual Studio with had no rights for Release Management on the VSO account running the Release Management service, even though it was a TPC administrator.

It is easier to understand the problem in the Release Management client. When, signed in as the LiveID I was using in Visual Studio, I tried to set the Release Management Server Url (RM > Administration > Settings) to the required VSO Url, I got the nice clear error shown below.

image

The solution was to use the LiveID of the VSO account owner in the Release Management client. I could then connect to the Url in the Release Management client and add my previously failing LiveID as a user for the release service.

image

Once this was done I was able to use this original LiveID in Visual Studio without a problem.

How to edit registered Release Management deployment agent IP addresses if a VM’s IP address changes

I have posted in the past that we have a number of agent based deployments using Release Management 2013.4 that point to network isolated Lab Management environments. Over Christmas we did some maintenance on our underlying Hyper-V servers, so everything got fully stopped and restarted. When the network isolated environments were restarted, their DHCP assigned IP addresses on our company domain all changed (maybe we should have set longer DHCP lease times?).

image

Worse still, some addresses were reused and were actually swapped between environments, so an IP address that used to connect to Server1 in environment Lab1 could now be assigned to Server2 in environment Lab2. So basically all our deployments failed, usually because the server could not connect to the agent, but sometimes because the wrong VM responded.

Now for general Lab Management operations this was not an issue; inside each environment nothing had changed, the network range was still 192.168.23.x, and externally SCVMM, MTM and the Test Controllers all knew what was going on and sorted themselves out. The problem was the Release Management deployment agents’ registration with the Release Management server. As I detailed in my previous post, you have to manually register the agents using shadow accounts. This means they are registered with their IP address at the time of registration, and that registration does not change if the VM’s IP address is reassigned by DHCP. It is up to you to fix it.

But how?

And that is the problem: there is no way to edit the IP addresses of the registered servers’ deployment agents inside the Release Management admin tool. The only option I could find was to delete the registered servers and re-add them, but this requires them to be removed from any release pipelines first. Something I did not want to do; too much work when I just wanted to fix an IP address.

The solution I found was to edit the IPAddress column in the underlying Server table in the ReleaseManagement DB. I did this with SQL Management Studio, nothing special. The only thing to note is that you cannot have duplicate IP addresses, so they had to be edited in an order that avoided duplication, using a temporary IP address during the edit process as I shuffled addresses around.
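
For reference, the sort of update I ran looked something like this. It is only a sketch: the instance, server names, addresses and the column I match on are all made up, I am assuming the default ReleaseManagement database name and that Invoke-Sqlcmd is available, and you should obviously back the database up first. The temporary address is needed because of the duplicate IP address restriction.

# Sketch only: shuffle two swapped agent IP addresses in the RM database.
# Instance, server names, matching column and addresses are examples.
$instance = "RMSQLSERVER"        # SQL instance hosting the Release Management DB
$db = "ReleaseManagement"

# Park Server1 on an unused temporary address to avoid the duplicate IP check
Invoke-Sqlcmd -ServerInstance $instance -Database $db -Query "UPDATE dbo.Server SET IPAddress = '192.168.100.250' WHERE Name = 'Server1'"

# Give Server2 the address Server1 previously held
Invoke-Sqlcmd -ServerInstance $instance -Database $db -Query "UPDATE dbo.Server SET IPAddress = '192.168.100.101' WHERE Name = 'Server2'"

# Finally give Server1 its correct new address
Invoke-Sqlcmd -ServerInstance $instance -Database $db -Query "UPDATE dbo.Server SET IPAddress = '192.168.100.102' WHERE Name = 'Server1'"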

image

Once this was done everything leapt into life. I did not even need to restart the Release Management Server; I just pressed the refresh button on the Server tab and saw all the agents had reconnected.

image

So a good dirty fix, but something I would have hoped would be easier if the tools provided a means to edit the IP addresses.

Note: This problem is specific to agent based deployment in Release Management. If you are using vNext DSC based deployment to network isolated VMs, the VMs are registered using their DNS names on the corporate LAN e.g. VSLM-1344-e7858e28-77cf-4163-b6ba-1df2e91bfcab.lab.blackmarble.co.uk, so the problem does not occur.

Failing to unblock downloaded ZIP files causes really strange errors

Twice recently I have hit the problem that I needed to unblock ZIP files downloaded from a VSO source repository before I extracted the contents. One was a DSC module, the other a PowerShell script with associated .NET assemblies.

In both cases the error messages I got were confusing and misleading. In the case of the DSC module the error was "cannot be loaded because you opted not to run this software now". The other project just suffered mixed .NET assembly loading errors.

So do try to remember, after a download, to right click the file, open its properties and check whether the ZIP file needs unblocking.

image

If it does, click the Unblock button prior to extracting the ZIP, else you will see strange errors similar to the ones I have seen.
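
If you find yourself doing this a lot, PowerShell can do the unblocking for you; a quick sketch (the paths are examples, and Unblock-File needs PowerShell 3.0 or later):

# Unblock the downloaded ZIP before extracting it
Unblock-File -Path C:\Downloads\MyDscModule.zip

# Or, if the contents have already been extracted, unblock everything in the folder
Get-ChildItem C:\Scripts\MyDscModule -Recurse | Unblock-File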

vNext Release Management and Network Isolation

If you are trying to use Release Management, or any deployment tool, with a network isolated Lab Management setup you will have authentication issues. Your isolated domain is not part of your production domain, so you have to provide credentials. In the past this meant shadow accounts or the simple expedient of running a NET USE at the start of your deployment script to provide a login to the drops location.
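
The NET USE trick is nothing more than a line at the top of the deployment script, something like this sketch (the share, domain, account name and password are all made up):

# Authenticate against the TFS drops share from inside the isolated domain
# before anything tries to copy files from it (names and password are examples)
net use \\tfsserver\Drops /user:CORP\LabDeploySvc P@ssw0rd!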

In Release Management 2013.4 we get a new option to address this issue if you are using DSC based deployment: ‘Deploy from a build drop using a shared UNC path’. In this model the Release Management server copies the contents of the drops folder to a known share and passes the credentials needed to access it down to the DSC client (you set these as parameters on the server).

This is a nice formalisation of the tricks we had to pull by hand in the past, and something I had missed when Update 4 came out.

Can’t build SSDT projects in a TFS build

Whilst building a new TFS build agent VM using our standard scripts I hit a problem that SSDT projects would not build, though they were fine on our existing agents. The error was:

C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v12.0\SSDT\Microsoft.Data.Tools.Schema.SqlTasks.targets (513): The "SqlBuildTask" task failed unexpectedly.
System.MethodAccessException: Attempt by method 'Microsoft.Data.Tools.Schema.Sql.Build.SqlTaskHost.OnCreateCustomSchemaData(System.String, System.Collections.Generic.Dictionary`2<System.String,System.String>)' to access method 'Microsoft.Data.Tools.Components.Diagnostics.SqlTracer.ShouldTrace(System.Diagnostics.TraceEventType)' failed.

The problem was fixed by updating SSDT via Visual Studio > Tools > Extensions and Updates. Once this was completed the build was fine.

It seems there may have been an issue with the Update 3 generation of the SSDT tools; older and newer versions seem OK. Our existing agents had already been patched.

Living with a DD-WRT virtual router – one month on

I posted a month or so ago about my ‘Experiences using a DD-WRT router with Hyper-V’; well, I have been living with it for over a month now. How has it been going?

Like the curate’s egg, ‘good in parts’. It seems OK for a while and then everything gets a bit slow, or stops altogether.

Just as a reminder this is what I had ended up with

image

In essence, a pair of virtual switches: one internal, using DHCP from the DD-WRT virtual router, and a second connected to an active external network (usually Ethernet, as DHCP with virtual switches and WiFi in Hyper-V seems a very hit and miss affair).
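
For anyone recreating this, the pair of switches is nothing special and can be created with the Hyper-V PowerShell module along these lines (the switch names and the physical adaptor name are examples):

# Internal switch that the DD-WRT virtual router serves DHCP on
New-VMSwitch -Name "DD-WRT Internal" -SwitchType Internal

# External switch bound to the physical Ethernet adaptor
New-VMSwitch -Name "DD-WRT External" -NetAdapterName "Ethernet"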

From my Hyper-V VMs’ point of view the virtual router seems to be fine; they all have a single network adaptor linked to the virtual switch, which issues IP addresses via DHCP. The issues have been for the host operating system. I wanted to connect this to the internal virtual switch to allow easy access to my VMs (without the management complexity of punching holes in the router firewall), but when I did this I got inconsistent performance (made harder to diagnose because I had moved house from a fast Virgin cable based Internet connection to a slow BT ADSL based link whose performance profile varies greatly with the hour of the day; I was never sure if it was a problem with my router or BT’s service).

The main problem I saw was that the first time I accessed a site it was slow, but after that it was often OK. So a lookup issue, DNS?

Reaching back into my distant memory as a network engineer (early 90s, some IP but mostly IPX and NETBIOS) I suspected a routing or DNS lookup issue. Routing you can do something about via routing tables and metrics, but DNS is harder to control with multiple network connections.

The best option to manage DNS appeared to be changing the binding order for my various physical and virtual network adaptors so the virtual switches were the lowest priority.

image

This at least made most DNS requests go via physical devices.

Note: I also told the Virtual Network Switch adaptor on the host machine not to use the DNS settings provided by the virtual router, but this seemed to have little effect; when using nslookup it still picked the virtual router, until I changed the binding order.
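
A quick way to see which DNS server is actually being consulted is nslookup; the Server and Address lines at the top of its output show which adaptor’s DNS settings won, e.g.:

# The first Server/Address lines of the output show the DNS server in use;
# before the binding order change this was the virtual router (192.168.1.1)
nslookup www.bing.com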

On the routing front, I set the manual metric on IPv4 traffic via the virtual router adaptor to a large number, to make it the least likely route anywhere. Doing this should mean only traffic to the internal 192.168.1.x network uses that adaptor.

image
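
The same change can be scripted if, like me, you rebuild machines often; a sketch, assuming the host’s adaptor for the internal switch is called ‘vEthernet (DD-WRT Internal)’:

# Push the virtual router adaptor to the bottom of the routing preference
# by giving its IPv4 interface a deliberately large metric
Set-NetIPInterface -InterfaceAlias "vEthernet (DD-WRT Internal)" -AddressFamily IPv4 -InterfaceMetric 5000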

This meant the routing table on my host operating system looked as follows when the system was working OK:

image

Outstanding Issues

Routing

I did see some problems if the route via the virtual switch appeared first in the list; this can happen when you change WiFi hotspot. The fix is to delete the unwanted route (0.0.0.0 to 192.168.1.1):

route delete 0.0.0.0 MASK 0.0.0.0 192.168.1.1

But most of the time fixing the binding order seemed enough, so I did not need to do this.

External DHCP Refresh

If you swap networks, going from work to home, your external network will have a different IP address. You do have to restart the router VM (or manually renew DHCP) to get a new address.
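
Restarting the router VM is easy enough to script from the host (the VM name here is an example):

# Bounce the virtual router so its external interface picks up a fresh DHCP lease
Restart-VM -Name "DD-WRT-Router" -Force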

DHCP and WIFI

There is still the problem of getting DHCP working over Hyper-V virtual switches. You can do some tricks with bridging, but it is not great.

The solution I have used is Hyper-V checkpoints on my router VM: one for DHCP and another with the static IP settings for my home network. Again not great, but workable for me most of the time. I am happier editing the router VM rather than many guest VMs.
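
Flipping between the two configurations is then just a checkpoint restore; a sketch, with the VM and checkpoint names as examples (take each checkpoint after setting the router’s WAN side appropriately):

# Taken once per WAN configuration, after changing the router settings
Checkpoint-VM -Name "DD-WRT-Router" -SnapshotName "WAN via DHCP"
Checkpoint-VM -Name "DD-WRT-Router" -SnapshotName "WAN static home settings"

# Switch to the static configuration when back on the home network
Restore-VMSnapshot -VMName "DD-WRT-Router" -Name "WAN static home settings" -Confirm:$false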

Why am I getting ‘cannot access outlook.ost’ issues with Office 365 Lync?

We use O365 to provide Lync messaging, so when I rebuilt my PC I thought I needed to re-install the client; I logged into the O365 web site and selected the install option. It turns out this was a mistake: I had Office 2013 installed, so I already had the client, I just had not noticed.

If you do install the O365 Lync client (as well as the Office 2013 one) you get file access errors reported against your outlook.ost file. If this occurs, just uninstall the O365 client and use the one in Office 2013; the errors go away.

Errors running tests via TCM as part of a Release Management pipeline

Whilst getting integration tests running as part of a Release Management pipeline within Lab Management I hit a problem that TCM triggered tests failed: the tool claimed it could not access the TFS build drops location, and no .TRX (test results) files were being produced. This was strange as it used to work (the RM system had worked when it was 2013.2; it seems to have started to be an issue with 2013.3 and 2013.4, but this might be a coincidence).

The issue was twofold.

Permissions/Path Problems accessing the build drops location

The build drops location is passed into the component using the argument $(PackageLocation). This is pulled from the component properties; it is the TFS provided build drop location with a \ appended on the end.

image 

Note that the \ in the text box is there because the textbox cannot be empty. It tells the component to use the root of the drops location. This is the issue: when you are in a network isolated environment and have had to use NET USE to authenticate with the TFS drops share, the trailing \ causes a permissions error (it might occur in other scenarios too, I have not tested it).

Removing the slash, or adding a . (period) after the \, fixes the path issue, so:

  • \\server\Drops\Services.Release\Services.Release_1.0.227.19779        -  works
  • \\server\Drops\Services.Release\Services.Release_1.0.227.19779\      - fails 
  • \\server\Drops\Services.Release\Services.Release_1.0.227.19779\.     - works 

So the answer is either to add a . (period) in the pipeline workflow component, so the build location is $(PackageLocation). as opposed to $(PackageLocation), or to edit the PS1 file that is run to do some validation and strip out any trailing characters. I chose the latter, making the edit:

if ([string]::IsNullOrEmpty($BuildDirectory))
{
    $buildDirectoryParameter = [string]::Empty
}
else
{
    # make sure we remove any trailing slashes as they cause permission issues
    $BuildDirectory = $BuildDirectory.Trim()
    while ($BuildDirectory.EndsWith("\"))
    {
        $BuildDirectory = $BuildDirectory.Substring(0, $BuildDirectory.Length - 1)
    }
    $buildDirectoryParameter = "/builddir:""$BuildDirectory"""
}

Cannot find the TRX file even though it is present

Once the tests were running I still had an issue: even though TCM had run the tests, produced a .TRX file and published its contents back to TFS, the script claimed the file did not exist and so could not pass the test results back to Release Management.

The issue was the call being used to check for the file’s existence:

[System.IO.File]::Exists($testRunResultsTrxFileName)

As soon as I swapped to the recommended PowerShell way to check for files

Test-Path($testRunResultsTrxFileName)

it all worked.
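
I suspect the difference is relative path handling: the static .NET call resolves a relative path against the process working directory, which PowerShell does not update when you change location, whereas Test-Path resolves against the current PowerShell location. A small sketch of the mismatch (the folder and file names are examples):

Set-Location C:\TestResults                   # changes the PowerShell location only

Test-Path "results.trx"                       # resolved against Get-Location, so it finds the file
[System.IO.File]::Exists("results.trx")       # resolved against [Environment]::CurrentDirectory,
                                              # which has not changed, so this can return $false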

‘Test run must be created with at least one test case’ error when using TCM

I have been setting up some integration tests as part of a release pipeline. I am using TCM.EXE to trigger tests from the command line, with something along the lines of:

TCM.exe run /create /title:"EventTests" /collection:"http://myserver:8080/tfs" /teamproject:myteamproject /testenvironment:"Integration" /builddir:"\\server\Drops\Build_1.0.226.1975" /include /planid:26989 /suiteid:27190 /configid:1

I kept getting the error

‘A test run must be created with at least one test case’

The strange thing was my test suite did contain a number of tests, and they were marked as active.

The issue was actually the configid: it was wrong. There is no easy way to check config IDs from the UI; use the following command to get a list of valid IDs:

TCM.exe configs /list /collection:"http://myserver:8080/tfs" /teamproject:myteamproject

Id        Name
--------- ----------------------------------------------------------------
35        Windows 8.1 ARM
36        Windows 8.1 64bit
37        Windows 8.1 ATOM
38        Default configuration created @ 11/03/2014 12:58:15
39        Windows Phone 8.1

You can now use the correct ID, rather than one you had to guess.