But it works on my PC!

The random thoughts of Richard Fennell on technology and software development

Visual Studio crashes when trying to add an item to a TFS build workflow

There has for a long time been an issue where Visual Studio can crash when you try to add a new activity to the toolbox while editing a TFS build workflow. I have seen it many times and never got to the bottom of it. It seems to be machine specific, as one machine can work while another, supposedly identical, one will fail, but I could never track down the cause.

Today I was on a machine that was failing, but …

But this time I found a workaround in a really old forum post. The workaround is to load the IDE from the command line with the /safemode flag:

C:\Program Files (x86)\Microsoft Visual Studio 12.0\Common7\IDE\devenv.exe /safemode

Once you do this you can edit the contents of your toolbox without crashes, and also your template if you wish. The best part is that once you exit the IDE and reload it as normal your new toolbox contents are still there.

Not perfect, but a good workaround.

Getting Release Management to fail a release when using a custom PowerShell component

If you have a custom PowerShell script you wish to run, you can create a tool in Release Management (Inventory > Tools) for the script which deploys the .PS1, .PSM1 files etc. and defines the command line to run it.

The problem we hit was that our script failed, but did not fail the deployment step, as the PowerShell.EXE running the script exited without an error code. The script had thrown an exception which was in the output log file, but the step was marked as completed.

The solution was to use a try/catch in the .PS1 script that, as well as writing a message via Write-Error, also sets the exit code to something other than 0 (zero). So you end up with something like the following in your .PS1 file:

param
(
    [string]$Param1,
    [string]$Param2
)

try
{
    # some logic here
}
catch
{
    Write-Error $_.Exception.Message
    exit 1 # to get an error flagged so it can be seen by RM
}

Once this change was made, an exception in the PowerShell script caused the release step to fail as required. The output from the script appeared as the Command Output.
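
For completeness, the tool's command line in Release Management ended up along the lines of the following (the script name and parameters here are just examples; RM substitutes the double underscore tokens with the values configured on the component):

powershell.exe -ExecutionPolicy Bypass -File MyDeployScript.ps1 -Param1 __Param1__ -Param2 __Param2__

Running the script via -File means the exit code set by the script becomes the exit code of PowerShell.EXE itself, which is what RM checks.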

ALM Rangers release DevOps Guidance for PowerShell DSC – perfectly timed for DDDNorth

In a beautiful synchronicity, the ALM Rangers DevOps guidance for PowerShell DSC has been released at the same time as I am doing my DDDNorth session ‘What is Desired State Configuration and how does it help me?’

This Rangers project has been really interesting to work on, and provided much of the core of my session for DDDNorth.

Well worth a look if you want to create your own DSC resources.
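
If you have not seen DSC before, a configuration is just a PowerShell block that declares the state you want rather than the steps to get there. A minimal sketch using the built-in File resource looks something like this (the paths and contents are made up for illustration):

Configuration DemoConfig
{
    Node 'localhost'
    {
        # ensure a file exists with the given contents
        File DemoFile
        {
            DestinationPath = 'C:\Temp\demo.txt'
            Contents        = 'Hello from DSC'
            Ensure          = 'Present'
        }
    }
}

DemoConfig -OutputPath 'C:\Dsc'                      # compile the configuration to a MOF
Start-DscConfiguration -Path 'C:\Dsc' -Wait -Verbose # apply it to the node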

"The handshake failed due to an unexpected packet format" with Release Management

Whilst configuring a Release Management 2013.3 system I came across a confusing error. All seemed OK: the server, client and deployment agents were all installed and seemed to be working, but when I tried to select a build to deploy, both the Team Projects and Build drop downs were empty.

A check of the Windows event log on the server showed the errors:

The underlying connection was closed: An unexpected error occurred on a send
The handshake failed due to an unexpected packet format

It turned out the issue was an incorrectly set value when the Release Management server was configured: HTTPS had been selected in error. In fact there was no SSL certificate on the box, so HTTPS could not have worked.
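
If you are not sure whether a server actually has a certificate bound for HTTPS to use, a quick check from an elevated command prompt is the following, which lists the SSL certificate bindings HTTP.SYS knows about:

netsh http show sslcert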

As this had been done in error, we had not used HTTPS at any other point in the installation; we always used the URL http://typhoontfs:1000. The strange part of the problem was that the only place this mistake caused an issue was the Team Project drop down; everything else seemed fine, and the clients and deployment agents could all see the server.

Once the Release Management server was reconfigured with the correct HTTP setting, all was OK.

Cannot build an SSRS project in TFS build due to an expired license

If you want your TFS build process to produce SSRS RDL files you need to call the vsDevEnv custom activity to run Visual Studio (just like for SSIS packages). On our new TFS 2013.3 based build agents this step started to fail. It turned out the issue was not incorrect versions of DLLs or some badly applied update, but that the license for Visual Studio on the build agent had expired.

I found it by looking at diagnostic logs in the TFS build web UI.

To be able to build BI projects with Visual Studio you do need a licensed copy of Visual Studio on the build agent. You can use a trial license, but it will expire. Also remember that if you license VS by logging in with your MSDN Live ID, that too needs to be refreshed from time to time (that is what got me), so it is better to use a product key.

‘Unable to reconnect to database: Timeout expired’ error when using SQLPackage.exe to deploy to Azure SQL

I have been trying to update an Azure hosted SQL DB using Release Management and the SSDT SQLPackage tools. All worked fine on my test Azure instance, but when I wanted to deploy to production I got the following error:

*** An error occurred during deployment plan generation. Deployment cannot continue.
Failed to import target model [dbname]. Detailed message Unable to reconnect to database: Timeout expired.  The timeout period elapsed prior to completion of the operation or the server is not responding.
Unable to reconnect to database: Timeout expired.  The timeout period elapsed prior to completion of the operation or the server is not responding.
Timeout expired.  The timeout period elapsed prior to completion of the operation or the server is not responding.
The wait operation timed out
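
For reference, the deployment step was running SQLPackage.exe with arguments along these lines (the server, database and credentials shown are placeholders):

SqlPackage.exe /Action:Publish /SourceFile:"MyDb.dacpac" /TargetServerName:"myserver.database.windows.net" /TargetDatabaseName:"mydb" /TargetUser:"deployuser" /TargetPassword:"********"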

It turned out SQLPackage.exe was connecting OK; if I entered an invalid password it gave a different error, so it had made the connection and then died.

It seems I am not alone in seeing this problem, and most people suggest changing a timeout in the registry or in your exported DACPAC. However, neither of these techniques worked for me.

I compared my test and production Azure DB instances, and found the issue. My test SQL DB had been created using a SQL export from the production Azure subscription imported into my MSDN one. I had done a quick ‘next > next > next’ import and the DB had been set up on the Standard (S2) service tier. The production DB had been running on the old, now retired, Web service tier, but had recently been swapped to the Basic tier (it is a very small DB). When I re-imported my backup, but this time set it to the Basic tier, I got exactly the same error message.

So on my test DB I changed its service tier from Basic to Standard (S0) and my deployment worked. The same solution worked for my production DB.

Now an S0 instance is just over twice the cost of a Basic one, so if I was really penny pinching I could consider moving it back to Basic now the deployment is done. I did try this, and the deployment error returned; so I think that would be a false economy, as I want a stable release pipeline, at least until Microsoft sort out why I cannot use SSDT to deploy to a Basic instance.

Experiences using a DD-WRT router with Hyper-V

I have been playing around with the idea of using a DD-WRT router on a Hyper-V VM to connect my local virtual machines to the Internet, as discussed by Martin Hinshelwood in his blog post. I learned a few things that might be of use to others trying the same setup.

What I used to do

Prior to using the router I had been using three virtual switches on my Windows 8 Hyper-V setup, with multiple network adaptors to connect both my VMs and the host machine to the switches and networks.

So I had

  • One internal virtual switch only accessible on my host machine and my VMs
  • Two external virtual switches
    • one linked to my physical Ethernet adaptor
    • the other linked to my physical WiFi adaptor
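
For reference, an equivalent set of switches can be created with the Hyper-V PowerShell module; something like the following, where the switch and adapter names are just examples:

# one internal switch, only visible to the host and the VMs
New-VMSwitch -Name 'Internal' -SwitchType Internal
# two external switches, one bound to each physical adaptor
New-VMSwitch -Name 'External-Ethernet' -NetAdapterName 'Ethernet'
New-VMSwitch -Name 'External-WiFi' -NetAdapterName 'Wi-Fi'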

Arguably I could have had just one ‘public’ virtual switch and connected it to either my Ethernet or WiFi as needed. However, I found it easier to swap the virtual switch in the VM settings rather than swap the network adaptor inside the virtual switch settings. I cannot really think of a compelling reason to pick one method over the other; just personal taste or habit I guess.

This setup had worked OK. If I needed to access a VM from my host PC I used the internal switch. This switch had no DHCP server on it, so I used the alternate configuration IP addresses assigned by Windows, managing machine IP addresses via a local hosts file. To allow the VMs to access the Internet I added a second network adaptor to each VM, which I bound to one of the externally connected switches, the choice depending on which Internet connection I had at any given time.
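
The hosts file entries themselves were nothing clever; appending them from an elevated PowerShell prompt looks something like this (the VM names and addresses are made up, use whatever matches your own alternate configuration settings):

# add name resolution for the VMs on the internal switch
Add-Content -Path "$env:windir\System32\drivers\etc\hosts" -Value "192.168.200.10  buildvm"
Add-Content -Path "$env:windir\System32\drivers\etc\hosts" -Value "192.168.200.11  testvm"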

However, all was not perfect. I have had problems with some Linux distributions running in Hyper-V not getting an IP address via DHCP over WiFi. There was also the complexity of having to add a second network adaptor to each VM.

So would a virtual router help? I thought it worth a try, so I followed Martin’s process, but hit a few problems.

Setting up the DD-WRT router

As Martin said in his post, more work was needed to fully configure the router to allow external access. The problem I had for a long time was that as soon as I enabled the WAN port I seemed to lose the connection. After much fiddling, this was the process that worked for me:

  1. Install the router as detailed in Martin’s post
  2. Link your internal Hyper-V switch to the first Ethernet (Eth0) port on the router VM. This seems a bit counter-intuitive, as the DD-WRT wiki says the first port is for the WAN – more on that later

  3. Boot the router; you should be able to log in at the address 192.168.1.1 as root with the password admin, either on the console or via a web browser from your host PC
  4. On the basic setup tab (the default page) enable the WAN by selecting ‘Automatic Configuration (DHCP)’ and save the change

    It is at this point I kept getting disconnected. I then realised it was because the ports were being reassigned; at this point Eth0 had indeed become the WAN port and Eth1 the internal port
  5. So in Hyper-V Manager:
    • Re-assign the first Ethernet port (Eth0) to the external Hyper-V switch (in turn connected to your Internet connection)
    • Assign the second Ethernet port (Eth1) to the internal virtual switch

  6. You can now re-connect to 192.168.1.1 in a browser from your host machine to complete your configuration

So now all my VMs connected to the virtual switch could get a 192.168.1.x address via DHCP (using their single network adaptors), but they could not see the Internet. On the plus side, DHCP seemed to work OK for all operating systems, so my Linux issues appeared to be fixed.

It is fair to say I now had a fairly complex network, so it was not surprising I had routing issues.

The issue seems to have been that the VMs were not being passed the correct default gateway and DNS entries by DHCP. I had expected these to be set by default by the router, but that was not the case; they needed to be set by hand on the router.

Once these were set and the change saved on the router, the VMs renewed their DHCP settings and had Internet access.
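
To confirm the change has taken on a given Windows VM, a renew and a quick check of what was assigned is all that is needed (Linux equivalents vary by distribution):

ipconfig /renew
ipconfig /all   # the Default Gateway and DNS Servers entries should now show what you set on the router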

At one point I thought I had also lost Internet access from my host PC, or that its Internet access was much slower. I thought I had created a routing loop, with all traffic passing through the router whether it needed to or not. However, once the above router gateway IP settings were set these problems went away.

When I checked my Windows 8 host’s routing table using netstat -r it showed two default routes (0.0.0.0): my primary one via my home router (192.168.0.1) and one via my Hyper-V router (192.168.1.1). The second one had a much higher metric, so it should not be used except for traffic destined for the virtual 192.168.1.x network; all other traffic should go out via the primary link.
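
On Windows 8 the same information is also available from PowerShell, which makes the metrics a little easier to compare:

# list just the default routes, lowest metric (i.e. preferred) first
Get-NetRoute -DestinationPrefix '0.0.0.0/0' | Sort-Object RouteMetric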

It was at this time I noticed that the problem of getting a DHCP based IP address over WiFi had not gone away completely. If I had my router’s WAN port connected to my WiFi virtual switch then, depending on the model/setup of the WiFi router, DHCP sometimes worked and sometimes did not. I think this was mostly down to an authentication issue; not a major issue, as thus far the only place I have a problem is our office, where our WiFi is secured via RADIUS server based AD authentication. Here I just switched to using either our guest WiFi or our Ethernet, which both worked.

So is this a workable solution?

It seems to be OK thus far, but there were more IP address/routing issues during the setup than I would like; you need to know your IPv4.

There are many options on the DD-WRT console I am unfamiliar with. By default it is running just like a home router, in Network Address Translation (NAT) mode. This has the advantage of hiding the internal switch, but I was wondering whether it would be easier to run the DD-WRT as a simple router?

The problem with that mode of operation is that I would need to make sure my internal virtual LAN does not conflict with anything on the networks I connect to, and with automated routing protocols such as RIP that could get interesting fast; making me a few enemies with the IT managers whose networks I connect to.

A niggle is that whenever I connect my PC to a new network I need to remember to do a DHCP renew of the WAN port (Status > WAN > DHCP Renew); it does not automatically detect the change in connection.

Also I still need to manage my VMs’ IP addresses with a hosts file on the host Windows PC. As I don’t want to edit this file too often, it is a good idea to increase the DHCP lease time on the router (Setup > Basic Setup) to a few days instead of a day.

As to how well this works we shall see, but it seems OK for now.

Version stamping Windows 8 Store App manifests in TFS build

We have for a long time used the TFSVersion custom build activity to stamp all our TFS builds with a unique version number that matches our build number. However, this only edits the AssemblyInfo.cs file. As we are now building more and more Windows 8 Store Apps, we also need to edit the XML in the Package.appxmanifest files used to build the packages. Just like a WiX MSI project, it is a good idea for the package version to match some aspect of the assemblies it contains. We need to automate the update of this manifest, as people too often forget to increment the version, causing confusion all down the line.

Now I could have written a new TFS custom activity to do the job, or edited the existing one, but both options seemed a poor choice. We all know that custom activity writing is awkward and a pain to support going forward. So I decided to use the hooks in the 2013 generation build process template to call a custom PowerShell script to do the job.

I added a PreBuildScript.PS1 file as a solution item to my solution.

I placed the following code in the file. It uses the TFS environment variables to get the build location and version, using these to find and edit the manifest files. The only gotcha is that files on the build box are read only (it is a server workspace), so the manifest file has to be set to read/write before it can be saved back.

# get the build number, we assume the format is Myproject.Main.CI_1.0.0.18290
# where the version is set using the TFSVersion custom build activity (see other posts)
$buildnum = $env:TF_BUILD_BUILDNUMBER.Split('_')[1]

# get the manifest file paths
$files = Get-ChildItem -Path $env:TF_BUILD_BUILDDIRECTORY -Filter "Package.appxmanifest" -Recurse

foreach ($filepath in $files)
{
    Write-Host "Updating the Store App Package '$filepath' to version '$buildnum'"
    # update the identity value
    $XMLfile = New-Object XML
    $XMLfile.Load($filepath.Fullname)
    $XMLfile.Package.Identity.Version = $buildnum
    # clear the read-only flag so the file can be written back
    Set-ItemProperty $filepath.Fullname -Name IsReadOnly -Value $false
    $XMLfile.Save($filepath.Fullname)
}
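
If you want to try the script outside of a build, the two TFS environment variables it relies on can be faked locally; something like the following, where the build number and path are examples:

$env:TF_BUILD_BUILDNUMBER = 'MyProject.Main.CI_1.0.0.18290'
$env:TF_BUILD_BUILDDIRECTORY = 'C:\code\MyProject'
.\PreBuildScript.ps1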

Note that any output sent via Write-Host will only appear in the diagnostic log of TFS. If you use Write-Error (or errors are thrown) these messages will appear in the build summary; the build will not fail, but it will be marked as a partial success.

Once this file was checked in I was able to reference the file in the build template.

The build could now be run, and I got my Windows 8 Store packages with the required version number.

Updated blog server to BlogEngine.NET 3.1

Last night I upgraded this blog server to BlogEngine.NET 3.1. I used the new built-in automated update tool, on an offline backup copy of course.

It did most of the job without any issues. The only extra things I needed to do were:

  • Removed an <add name="XmlRoleProvider" …> entry in the web.config. I have had to do this before on every install.
  • Run the SortOrderUpdate.sql script to add the missing column and index (see issue 12543)

Once done and tested locally, I uploaded the site to my production server. Just a point to note: the upgrade creates some backup ZIPs of your site before upgrading; you don’t need to copy these around, as they are large.