But it works on my PC!

The random thoughts of Richard Fennell on technology and software development

Build arguments are not returned for a build definition via the TFS API if they are left as default values

We use my TFS Alerts DSL to perform tasks when our TFS builds complete. One of these is a job to increment the minor version number and reset the version start date (the value that generates the third field, the days since a point in time) if a build is set to the quality ‘release’; e.g. 1.2.99.[unique build id], where 99 is the day count since some past date, could change to 1.3.0.[unique build id] (see this old post on how we do this in the build process).

I have just found a bug (feature?) in the way the DSL does this; it turns out that if you did not set the major and minor version argument values in the build editor (you just left them at their default values of 1 and 0) then the DSL fails, as defaulted arguments are not returned in the property set of the build definition we process in the DSL. You would expect to get a 0 back, but you in fact get a null.
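A minimal PowerShell sketch of the behaviour is shown below, assuming the TFS 2013 client object model is installed locally (the assembly paths may vary with your Visual Studio install, and the collection URL, team project and build definition names are placeholders):

Add-Type -Path "C:\Program Files (x86)\Microsoft Visual Studio 12.0\Common7\IDE\ReferenceAssemblies\v2.0\Microsoft.TeamFoundation.Client.dll"
Add-Type -Path "C:\Program Files (x86)\Microsoft Visual Studio 12.0\Common7\IDE\ReferenceAssemblies\v2.0\Microsoft.TeamFoundation.Build.Client.dll"
Add-Type -Path "C:\Program Files (x86)\Microsoft Visual Studio 12.0\Common7\IDE\PrivateAssemblies\Microsoft.TeamFoundation.Build.Workflow.dll"

# Connect to the collection and load the build definition (placeholder names)
$tpc = [Microsoft.TeamFoundation.Client.TfsTeamProjectCollectionFactory]::GetTeamProjectCollection([System.Uri]"http://myserver:8080/tfs/DefaultCollection")
$buildServer = $tpc.GetService([Microsoft.TeamFoundation.Build.Client.IBuildServer])
$definition = $buildServer.GetBuildDefinition("MyTeamProject", "MyBuild.Release")

# ProcessParameters only holds arguments whose values differ from the process template defaults
$parameters = [Microsoft.TeamFoundation.Build.Workflow.WorkflowHelpers]::DeserializeProcessParameters($definition.ProcessParameters)

# So never assume the key is present; fall back to the template default if it is missing
$minorVersion = if ($parameters.ContainsKey("MinorVersion")) { $parameters["MinorVersion"] } else { 0 }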

So if you have a build where you expect the version to increment and it does not, check the build definition and make sure the MajorVersion, MinorVersion (or whatever you called them) and the version start date are all in bold.

 

clip_image002

I have updated the code on CodePlex so that it gives a better error message in the event log if a problem occurs with a build.

Fix for timeout exporting a SQL Azure DB using PowerShell or SQLPackage.exe

I have been trying to export a SQL Azure DB as a .BACPAC using the command line

"C:\Program Files (x86)\Microsoft SQL Server\120\DAC\bin\SqlPackage.exe"
                              /action:Export
                             /sourceservername:myserver.database.windows.net
                             /sourcedatabasename:websitecontentdb
                             /sourceuser:sa@myserver /sourcepassword:<password> /targetfile:db.bacpac

The problem is the command times out after around an hour, at the ‘Extracting schema from database’ stage.

I got exactly the same issue when I used PowerShell, as discussed in Sandrino Di Mattia’s post.

The issue turned out to be the Azure service tier the SQL DB is running on.

image

If it is set to Basic I get the error; if it is set to Standard (even at the lowest setting) it works, and in my case the backup takes a couple of minutes.
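If, like me, you only need the higher tier for the export, the change can be scripted rather than done in the portal. The rough sketch below uses Invoke-Sqlcmd (from the SQL Server PowerShell tools) to change the service objective with plain T-SQL against the master database; the server, database and credentials are the same placeholders as in the SqlPackage command above, and the tier change can take a little while to complete before the export will succeed:

# Bump the DB from Basic to Standard S0 before the export (placeholder server/database/credentials)
Invoke-Sqlcmd -ServerInstance "myserver.database.windows.net" -Database "master" `
    -Username "sa@myserver" -Password "<password>" `
    -Query "ALTER DATABASE [websitecontentdb] MODIFY (EDITION = 'Standard', SERVICE_OBJECTIVE = 'S0');"

# ... run the SqlPackage.exe export here ...

# Drop it back to Basic once the .BACPAC has been taken
Invoke-Sqlcmd -ServerInstance "myserver.database.windows.net" -Database "master" `
    -Username "sa@myserver" -Password "<password>" `
    -Query "ALTER DATABASE [websitecontentdb] MODIFY (EDITION = 'Basic', SERVICE_OBJECTIVE = 'Basic');"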

I have seen a similar problem trying to deploy a DACPAC to SQL Azure, and as I said in that post:

‘Now the S0 instance is just over 2x the cost of a Basic, so if I was really penny pinching I could consider moving it back to Basic now the deployment is done.’

So the choice is mine: change the tier each time I want an export, or pay the extra cost.

Wrong package location when reusing a Release Management component

Whilst setting up a new agent based deployment pipeline in Release Management I decided to reuse an existing component, as it already had the correct package location set and the correct transforms for the MSDeploy package. Basically this pipeline was a new copy of an existing website with different branding (CSS file etc.), but the same configuration options.

I had just expected this to work, but I kept getting ‘file not found’ errors when MSDeploy was run. On investigation I found that the package location for the component was wrong; it was the build drop root, not the sub folder I had specified.

image 

I have no idea why.

The fix was to copy the component and use this copy in the pipeline. That is probably what I should have done anyway, as I expect this web site to diverge from the original one, so I will need to edit the web.config transforms; it is just not something I thought I would have to do now to get it working.

Fix for cannot run Windows 8.1 unit tests on a TFS 2013 Build Agent

I recently hit a problem where we could not run Windows 8.1 unit tests on one of our TFS 2013 build agents. Now, as we know, the build agent needs some care and attention to build Windows 8.1 at all, but we had followed this process. However, we still saw the issue that the project compiled but the tests failed with the error:

‘Unit tests for Windows Store apps cannot be run with Limited User Account disabled. Enable it to run tests.’

image

I checked the UAC settings and the build account’s rights (it ran as a local admin), all to no effect.

The answer, it seems (thanks to the product group for the pointer), is that you have to check the registry setting

HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System

"EnableLUA" =  1

On my failing VM this was set to zero.

I then had to reboot the VM and also delete all the contents of the C:\builds folder on it, as due to the change in UAC setting these old files had become read-only to the build process.
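A minimal sketch of the whole fix from an elevated PowerShell prompt is below; the C:\builds path is just where our agent happens to keep its working folders, so adjust as needed:

# Check the current value (it was 0 on my failing VM)
Get-ItemProperty -Path "HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System" -Name EnableLUA

# Re-enable LUA/UAC; the change only takes effect after a reboot
Set-ItemProperty -Path "HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System" -Name EnableLUA -Value 1

# Clear out the old build workspaces created while UAC was off
Remove-Item -Path "C:\builds\*" -Recurse -Force

Restart-Computer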

Once this was all done my Windows 8.1 builds worked correctly. Hope this post saves some other people some time.

Living with a DD-WRT virtual router – three months and one day on (static DHCP leases)

image

When using a DD-WRT virtual router, I have realised it is worth setting a static MAC address in Hyper-V and a static DHCP lease on the router for any server VMs you want to access from your base OS. In my case this is a TFS demo VM I connect to all the time.

If you don’t do this the address of the VM seems to vary more than you might expect, so you keep having to edit the HOSTS file on your base OS to reference the VM by name.

You set the static MAC address in the Hyper-V settings

image

And the DHCP lease in the router’s Services tab; to make it a permanent lease, leave the time field empty

image

And finally add an entry to the hosts file

# For the VM 00:15:5d:0b:27:05
192.168.1.99        typhoontfs
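Both ends of this can be scripted if you prefer; a minimal sketch, using my VM name and addresses purely as examples:

# On the Hyper-V host: pin the VM's network adapter to a fixed MAC address (the VM must be off)
Set-VMNetworkAdapter -VMName "TyphoonTFS" -StaticMacAddress "00155D0B2705"

# On the base OS (elevated): add the hosts entry for the permanent lease handed out by the router
Add-Content -Path "$env:windir\System32\drivers\etc\hosts" -Value "192.168.1.99`ttyphoontfs"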

The one downside of this is that if you are using snapshots, as I am, to address the DHCP WiFi issues, you need to add the lease to any old snapshots you have; but once it is set there should be no more hosts file editing.

Living with a DD-WRT virtual router – three months on

I posted in the past on my experience with a DD-WRT router running in Hyper-V to allow my VMs internet access. A couple of months on I am still using it and I think I have got around the worst of the issues.

The big problem is not with the DD-WRT router, but with the way Hyper-V virtual switches use WiFi for some operating systems. The summary is, basically, that DHCP does not work for Linux VMs.

The best solution I have found to this problem is to use Hyper-V snapshots in which I hard code the correct IP settings for various networks, thus removing the need for DHCP.

At present I have three snapshots that I swap between as needed

image

  • One is set to use DHCP – I use this when my ‘external’ virtual switch is linked to a non-WiFi adaptor, usually the Ethernet in the office
  • One is hard coded for an IP address on my home router’s network, with suitable gateway and DNS settings
  • The final one is hard coded for my phone when it is being a MiFi

I can add more as I need them, but as I am using hotel and client WiFi less and less, now that I am on an ‘all you can eat’ 4G mobile contract, I doubt I will need many more.
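Swapping between the snapshots is only a couple of PowerShell calls on the Hyper-V host; a rough sketch, with made-up VM and snapshot names:

# Take a named checkpoint after hard coding the IP settings for a given network
Checkpoint-VM -Name "DDWRT" -SnapshotName "Home router - static WAN IP"

# Later, switch the router VM to whichever network configuration is needed
Restore-VMSnapshot -VMName "DDWRT" -Name "Office Ethernet - DHCP" -Confirm:$false
Start-VM -Name "DDWRT"   # only needed if the checkpoint was taken while the VM was off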

It seems to be working; I will report back if I learn more.

When trying to load an Office document from SharePoint I keep ending up in the Office Web Application

Whenever I tried to load an Office 2013 document from our SharePoint 2010 instance I kept ending up in the Office Web Application; the Office application was not being launched.

If I tried to use the ‘Open in Word’ context menu I got the following error (and before you ask, yes I was in IE, IE11 in fact, and Office 2013 was installed)

image

My PC had been built from our standard System Center managed image, and others using the same base image seemed OK, so what had gone wrong for me?

The launching of the Office applications is managed by the ‘SharePoint OpenDocument Class’ IE add-in (IE > Settings > Manage Add-ins). On my PC this whole add-in was missing; I don’t know why.

image

The fix, it turns out, was to go into Control Panel > Add or Remove Programs > Office 2013 > Change, do a repair and then reboot. Once this was done Office launched as expected.

Fix for ‘An unexpected error occurred. Close the windows and try again’ error adding Azure subscription to Visual Studio Release Management Tools

In preparation for my Techdays session next month, I have been sorting out demos using the various Release Management clients.

When I tried to create a release from within Visual Studio using the ‘Release Management tools for Visual Studio’ I found I could not add my Azure subscriptions; I saw the error ‘An unexpected error occurred. Close the windows and try again’.

image

I could download and import the subscription file, and it showed the available storage accounts, but when I pressed save I got the rather unhelpful error ‘Object reference not set to an instance of an object’.

image

It turns out the issue was a simple one: rights. The LiveID I had signed into Visual Studio with had no rights for Release Management on the VSO account running the Release Management service, even though it was a TPC administrator.

It is easier to understand the problem in the Release Management client. When I tried to set the Release Management Server Url (RM > Administration > Settings) to the required VSO Url as the LiveID I was using in Visual Studio, I got the nice clear error shown below.

image

The solution was, in the Release Management client, to use the LiveID of the VSO account owner. I could then connect to the Url in the Release Management client and add my previously failing LiveID as a user for the release service.

image

Once this was done I was able to use this original LiveID in Visual Studio without a problem.

How to edit registered Release Management deployment agent IP addresses if a VM’s IP address changes

I have posted in the past that we have a number of agent based deployments using Release Management 2013.4 that point to network isolated Lab Management environments. Over Christmas we did some maintenance on our underlying Hyper-V servers, so everything got fully stopped and restarted. When the network isolated environments were restarted, their DHCP-assigned IP addresses on our company domain all changed (maybe we should have had longer DHCP lease times set?).

image

Worst of all, some addresses were reused and actually swapped between environments, so an IP address that used to connect to Server1 in environment Lab1 could now be assigned to Server2 in environment Lab2. So basically all our deployments failed, usually because the server could not connect to the agent, but sometimes because the wrong VM responded.

Now for general Lab Management operations this was not an issue; inside the environments nothing had changed, the network range was still 192.168.23.x, and externally SCVMM, MTM and the Test Controllers all knew what was going on and sorted themselves out. The problem was the Release Management deployment agents’ registration with the Release Management server. As I detailed in my previous post, you have to manually register the agents using shadow accounts. This means they are registered with their IP address at the time of registration, and this does not change if the VM’s IP address is reassigned by DHCP. It is up to you to fix it.

But how?

And that is the problem: there is no way to edit the IP addresses of the registered servers’ deployment agents inside the Release Management admin tool. The only option I could find was to delete the registered servers and re-add them, but this requires them to be removed from any release pipelines first. That was something I did not want to do; too much work when I just wanted to fix an IP address.

The solution I found was to edit the IPAddress column in the underlying Server table in the ReleaseManagement DB. I did this with SQL Management Studio, nothing special. The only thing to note is that you cannot have duplicate IP addresses, so they had to be edited in an order that avoided duplication, using a temporary IP address during the edit process as I shuffled addresses around.

image
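The edits themselves are just UPDATE statements; a rough sketch of the shuffle is below (run here via Invoke-Sqlcmd, though SQL Management Studio is just as good). The SQL server name, environment server names and addresses are made up, and I am assuming the agent rows can be identified by their Name column:

# Park Server1 on a spare temporary address so the two real addresses can be swapped without creating a duplicate
Invoke-Sqlcmd -ServerInstance "rmsqlserver" -Database "ReleaseManagement" -Query "
    UPDATE dbo.[Server] SET IPAddress = '10.0.0.250' WHERE Name = 'LAB1-SERVER1';
    UPDATE dbo.[Server] SET IPAddress = '10.0.0.101' WHERE Name = 'LAB2-SERVER2';
    UPDATE dbo.[Server] SET IPAddress = '10.0.0.102' WHERE Name = 'LAB1-SERVER1';"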

Once this was done everything leapt into life. I did not even need to restart the Release Management server; I just pressed the refresh button on the Server tab and saw that all the agents had reconnected.

image

So a good dirty fix, but something I would have hoped would be easier if the tools provided a means to edit the IP addresses.

Note: This problem is specific to agent based deployment in Release Management. If you are using vNext DSC based deployment to network isolated environments, the VMs are registered using their DNS names on the corporate LAN, e.g. VSLM-1344-e7858e28-77cf-4163-b6ba-1df2e91bfcab.lab.blackmarble.co.uk, so the problem does not occur.

Failing to unblock downloaded ZIP files causes really strange errors

Twice recently I have hit the problem that I needed to unblock ZIP files downloaded from a VSO source repository before I extracted their contents. One was a set of DSC modules, the other a PowerShell script with associated .NET assemblies.

In both cases the error messages I got were confusing and misleading. In the case of the DSC module the error was "cannot be loaded because you opted not to run this software now". The other project just suffered mixed .NET assembly loading errors.

So really try to remember, after a download, to right click into the file’s properties and check whether the ZIP file needs unblocking

image

If it does, click the Unblock button prior to extracting the ZIP, else you will see strange errors similar to the ones I have seen.
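Alternatively, PowerShell can do the unblocking for you, which is easier to remember as part of a download script; a minimal sketch, with the paths purely as examples:

# Remove the 'downloaded from the internet' zone flag before extracting the ZIP
Unblock-File -Path "C:\Downloads\MyDscModules.zip"

# Or, if the contents have already been extracted, unblock everything under the target folder
Get-ChildItem -Path "C:\Program Files\WindowsPowerShell\Modules\MyDscModule" -Recurse -File | Unblock-File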