But it works on my PC!

The random thoughts of Richard Fennell on technology and software development

My DSC session is up at TechDays Online 2015 On-Demand

A couple of weeks ago I presented on DSC and Release Management as part of the Microsoft UK TechDays Online 2015 event. All the sessions from this three-day event are now available on demand at TechDays Online 2015 on-demand sessions.

You do seem to have to register/login to see the content, so I can’t deep link to my session, but browsing the catalogue is worthwhile as there are some great sessions.

Build arguments are not returned for a build definition via the TFS API if they are left as default values

We use my TFS Alerts DSL to perform tasks when our TFS builds complete. One of these is a job to increment the minor version number and reset the version start date (the value that generates the third field, the days since a point in time) if a build is set to the quality ‘Release’; e.g. 1.2.99.[unique build id], where 99 is the day count since some past date, could change to 1.3.0.[unique build id] (see this old post on how we do this in the build process).
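
To make the numbering concrete, the third field is just a day count. A trivial PowerShell sketch (the start date shown is illustrative):

    # The third field of the version is the number of days since the version start date
    $versionStartDate = Get-Date "2015-01-01"   # illustrative start date
    $daysSinceStart = (New-TimeSpan -Start $versionStartDate -End (Get-Date)).Days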

I have just found a bug (feature?) in the way the DSL does this. It turns out that if you did not set the major and minor version argument values in the build editor (you just left them at their default values of 1 and 0), the DSL fails, as defaulted arguments are not returned in the property set of the build definition we process in the DSL. You would expect to get a 0 back, but you in fact get a null.
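
You can see this behaviour for yourself by deserialising a definition’s process parameters. A minimal PowerShell sketch using the TFS 2013 client assemblies (assumed to be in the GAC; the collection URL, team project and build definition names are illustrative):

    # Load the TFS 2013 (version 12) client assemblies from the GAC
    Add-Type -AssemblyName "Microsoft.TeamFoundation.Client, Version=12.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a"
    Add-Type -AssemblyName "Microsoft.TeamFoundation.Build.Client, Version=12.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a"
    Add-Type -AssemblyName "Microsoft.TeamFoundation.Build.Workflow, Version=12.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a"

    # Illustrative collection URL, team project and build definition names
    $tpc = [Microsoft.TeamFoundation.Client.TfsTeamProjectCollectionFactory]::GetTeamProjectCollection([uri]"http://myserver:8080/tfs/DefaultCollection")
    $buildServer = $tpc.GetService([Microsoft.TeamFoundation.Build.Client.IBuildServer])
    $definition = $buildServer.GetBuildDefinition("MyTeamProject", "MyBuildDefinition")

    # Deserialise the process parameters XML into a dictionary
    $parameters = [Microsoft.TeamFoundation.Build.Workflow.WorkflowHelpers]::DeserializeProcessParameters($definition.ProcessParameters)

    # Arguments left at their defaults are simply absent from the dictionary,
    # so guard for the missing key rather than assuming a value comes back
    $minorVersion = if ($parameters.ContainsKey("MinorVersion")) { $parameters["MinorVersion"] } else { 0 }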

So if you have a build where you expect the version to increment and it does not, check the build definition and make sure the MajorVersion, MinorVersion (or whatever you called them) and version start date arguments are all shown in bold, i.e. explicitly set.


I have updated the code on CodePlex so that it gives a better error message in the event log if a problem occurs with a build.

Fix for timeout exporting a SQL Azure DB using PowerShell or SQLPackage.exe

I have been trying to export a SQL Azure DB as a .BACPAC using the command line

"C:\Program Files (x86)\Microsoft SQL Server\120\DAC\bin\SqlPackage.exe"
                              /action:Export
                             /sourceservername:myserver.database.windows.net
                             /sourcedatabasename:websitecontentdb
                             /sourceuser:sa@myserver /sourcepassword:<password> /targetfile:db.bacpac

The problem is the command times out after around an hour, at the ‘Extracting schema from database’ stage.

I got exactly the same issue if I use PowerShell as discussed in Sandrino Di Mattia’s post.

The issue is the Azure service tier the SQL DB is running on.


If it is set to Basic I get the error; if it is set to Standard (even at the lowest setting) it works, and in my case the export takes a couple of minutes.

I have seen a similar problem trying to deploy a DACPAC to SQL Azure, and as I said in that post:

‘Now the S0 instance is just over 2x the cost of a Basic, so if I was really penny pinching I could consider moving it back to Basic now the deployment is done.’

So the choice is mine: change the tier each time I want an export, or pay the extra cost.
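
If you do choose to flip the tier for each export, it can be scripted. A sketch using the current Az PowerShell module (not the service management cmdlets that were current when this was written); the resource group name is illustrative and it assumes you have already run Connect-AzAccount:

    # Temporarily scale the database up from Basic to Standard S0 so the export does not time out
    Set-AzSqlDatabase -ResourceGroupName "MyResourceGroup" -ServerName "myserver" `
        -DatabaseName "websitecontentdb" -Edition "Standard" -RequestedServiceObjectiveName "S0"

    # Run the export as before
    $password = "<password>"
    & "C:\Program Files (x86)\Microsoft SQL Server\120\DAC\bin\SqlPackage.exe" `
        /action:Export `
        /sourceservername:myserver.database.windows.net `
        /sourcedatabasename:websitecontentdb `
        /sourceuser:sa@myserver `
        /sourcepassword:$password `
        /targetfile:db.bacpac

    # Drop back to Basic once the .BACPAC has been produced
    Set-AzSqlDatabase -ResourceGroupName "MyResourceGroup" -ServerName "myserver" `
        -DatabaseName "websitecontentdb" -Edition "Basic" -RequestedServiceObjectiveName "Basic"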

Wrong package location when reusing a Release Management component

Whilst setting up a new agent-based deployment pipeline in Release Management I decided to reuse an existing component, as it already had the correct package location set and the correct transforms for the MSDeploy package. Basically this pipeline was a new copy of an existing website with different branding (CSS file etc.), but the same configuration options.

I had just expected this to work, but I kept getting ‘file not found’ errors when MSDeploy was run. On investigation I found that the package location for the component was wrong; it was the build drop root, not the sub folder I had specified.


I have no idea why.

The fix was to copy the component and use the copy in the pipeline. That is probably what I should have done anyway, as I expect this web site to diverge from the original one, so I will need to edit the web.config transforms; but it is not something I thought I would have to do just to get it working.

Fix for cannot run Windows 8.1 unit tests on a TFS 2013 Build Agent

I recently hit a problem where on one of our TFS 2013 build agents we could not run Windows 8.1 unit tests. Now, as we know, the build agent needs some care and attention to build Windows 8.1 at all, but we had followed this process. However, we still saw the issue that the project compiled but the tests failed with the error:

‘Unit tests for Windows Store apps cannot be run with Limited User Account disabled. Enable it to run tests.’


I checked the UAC settings and the build account’s rights (it ran as a local admin), all to no effect.

The answer, it seems (thanks to the product group for the pointer), is that you have to check the registry setting:

HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System

"EnableLUA" =  1

On my failing VM this was set to zero.
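
From an elevated PowerShell prompt you can check, and if need be fix, this value (a quick sketch; as noted below, a reboot is still needed for the change to take effect):

    # Check the current value (0 = UAC disabled, 1 = enabled)
    $key = "HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System"
    (Get-ItemProperty -Path $key -Name EnableLUA).EnableLUA

    # Set it back to 1; the change only takes effect after a reboot
    Set-ItemProperty -Path $key -Name EnableLUA -Value 1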

I then had to reboot the VM and also delete all the contents of the c:\builds folder on the VM as, due to the change in UAC setting, these old files had become read-only to the build process.

Once this was all done my Windows 8.1 builds worked correctly. Hope this post saves some other people some time.

Living with a DD-WRT virtual router – three months and one day on (static DHCP leases)

Updated 28 Feb 2015 – Added a bit on static addresses


When using a DD-WRT virtual router, I have realised it is worth setting a static MAC address in Hyper-V and a static DHCP lease on the router for any server VMs you want to access from your base system OS. In my case this is a TFS demo VM I connect to all the time.

If you don’t do this the address of the VM seems to vary more than you might expect, so you keep having to edit the HOSTS file on your base OS to reference the VM by name.

You set the static MAC address in the Hyper-V settings.
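
This can also be scripted with the Hyper-V PowerShell module. A sketch, assuming the VM name matches the host name in my example and the VM is shut down first:

    # Pin the VM's network adapter to a fixed MAC address (run while the VM is off)
    Set-VMNetworkAdapter -VMName "typhoontfs" -StaticMacAddress "00155D0B2705"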


And the DHCP lease on the router’s Services tab; to make the lease permanent, leave the time field empty.


And finally, in the hosts file, add an entry:

# For the VM 00:15:5d:0b:27:05
192.168.1.99        typhoontfs

One downside of this is that if you are using snapshots, as I am, to address the DHCP WiFi issues, you need to add the lease to any old snapshots you have; but once it is set there should be no more hosts file editing.

Updated 28 Feb 2015

I have still found problems with strange routes in my routing table due to the internal switch issuing an address (and gateway) via DHCP; these seem to cause problems for my Microsoft DirectAccess (a VPN). Today I realised I can avoid this problem by using a static address for my host PC’s connection to the internal router (e.g. 192.168.1.50), set on the Windows adaptor, as opposed to DHCP. By making it static I avoid the issue of extra routes or DNS entries by simply not adding them.
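
The static address can be set from PowerShell too. A minimal sketch, assuming the adaptor for the internal switch is named ‘vEthernet (Internal)’ on your machine:

    # Give the host's adaptor on the internal switch a fixed address; deliberately
    # set no default gateway or DNS servers so no extra routes or DNS entries appear
    New-NetIPAddress -InterfaceAlias "vEthernet (Internal)" -IPAddress 192.168.1.50 -PrefixLength 24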


Living with a DD-WRT virtual router – three months on

I posted in the past on my experience with a DD-WRT router running in Hyper-V to allow my VMs internet access. A couple of months on I am still using it, and I think I have got around the worst of the issues.

The big problem is not with the DD-WRT router, but with the way Hyper-V virtual switches use WiFi for some operating systems; the summary is that DHCP does not work for Linux VMs.

The best solution I have found to this problem is to use Hyper-V snapshots in which I hard code the correct IP settings for various networks, thus removing the need for DHCP.

At present I have three snapshots that I swap between as needed:


  • One is set to use DHCP – I use this when my ‘external’ virtual switch is linked to a non-WiFi adaptor, usually the Ethernet in the office
  • One is hard coded for an IP address on my home router’s network, with suitable gateway and DNS settings
  • The final one is hard coded for my phone when it is acting as a MiFi

I can add more as I need them, but as I find I am using hotel and client WiFi less and less now I am on an ‘all you can eat’ 4G mobile contract, I doubt I will need many more.

It seems to be working; I will report back if I learn more.