But it works on my PC!

The random thoughts of Richard Fennell on technology and software development

Cannot see a TFS drops location from inside a network isolated environment for Release Management

I have posted before about using Release Management with Lab Management network isolation. The key is that you must issue a NET USE command at the start of the pipeline so that the VMs in the isolated environment can see the TFS drops location.

I hit a problem today where a deployment failed even though I had issued the NET USE.

I got the error

Package location '\\store\drops\Sabs.Main.CI\Sabs.Main.CI_2.5.178.19130\\' does not exist or Deployer user does not have access.

Turns out the problem was that I had issued the NET USE as \\store.blackmarble.co.uk\drops, not \\store\drops; it is vital that the names match exactly. Once I changed my NET USE all was OK.
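For illustration, the working command ended up along these lines (the account and password are placeholders; the key point is that the UNC path in the NET USE matches the drops location name exactly):

net use \\store\drops SomePassword /user:DOMAIN\deployaccount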

Ordering rows that use the format 1.2.3 in a SQL query

Whilst working on an SSRS-based report I hit a sort order problem. Entries in the main report tablix needed to be sorted by their OrderNumber, but this was in the form 1.2.3, so neither a numeric nor an alpha sort gave the correct order i.e. a numeric sort fails because 1.2.3 is not a number, while an alpha sort works but gives the incorrect order 1.3, 1.3.1, 1.3.10, 1.3.11, 1.3.12, 1.3.2.

When I checked the underlying SQL it turned out the OrderNumber was being generated; it was not a table column. The raw data was in a single table that contained all the leaf nodes in the hierarchy, and the returned data was built by a SPROC using a recursive common table expression.

The solution was to calculate a SortOrder as well as the OrderNumber. I did this by applying a Power function to each block of the OrderNumber and adding the results together, so that each block gets its own group of digits in the sort key. In the code shown below we can have any number of entries in the first block and up to 999 entries in the second or third blocks; you could allow more by altering the Power function parameters.
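To make the arithmetic concrete, this is how the SortOrder for the OrderNumber 1.3.10 seen in the results below is built up:

1 * Power(10, 6) + 3 * Power(10, 3) + 10 * Power(10, 0) = 1000000 + 3000 + 10 = 1003010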

declare @EntryID as nvarchar(50) = 'ABC1234';

WITH SummaryList AS
    (
        SELECT
        NI.ItemID,
        NI.CreationDate,
        'P' as ParentOrChild,
        cast(ROW_NUMBER() OVER(ORDER BY NI.CreationDate) as nvarchar(100)) as OrderNumber,
        cast(ROW_NUMBER() OVER(ORDER BY NI.CreationDate) *  Power(10,6) as int) as SortOrder
        FROM dbo.NotebookItem AS NI
        WHERE NI.ParentID IS NULL AND NI.EntryID = @EntryID

        UNION ALL

        SELECT
        NI.ItemID,
        NI.CreationDate,
        'L' as ParentOrChild,
        cast(SL.OrderNumber + '.' + cast(ROW_NUMBER() OVER(ORDER BY NI.CreationDate) as nvarchar(100)) as nvarchar(100)) as OrderNumber,
        SL.SortOrder + (cast(ROW_NUMBER() OVER(ORDER BY NI.CreationDate) as int) * power(10 ,6 - (3* LEN(REPLACE(sl.OrderNumber, '.', ''))))) as SortOrder
        FROM dbo.NotebookItem AS NI
        INNER JOIN SummaryList as SL
            ON NI.ParentID = SL.ItemID
    )

    SELECT
        SL.ItemID,
        SL.ParentOrChild,
        SL.CreationDate,
        SL.OrderNumber,
        SL.SortOrder
    FROM SummaryList AS SL
    ORDER BY SL.SortOrder

This query returns the following with all the data correctly ordered

ItemID ParentOrChild CreationDate OrderNumber SortOrder
22F72F9E-E34C-45AB-A4D9-C7D9B742CD2C P 29 October 2014 1 1000000
E0B74D61-4B69-46B0-B0A9-F08BE2886675 L 29 October 2014 1.1 1001000
CB90233C-4940-4312-81D1-A26CB540DF2A L 29 October 2014 1.2 1002000
35CCC2A1-E00F-43C6-9CB3-732342EE18DA L 29 October 2014 1.3 1003000
7A920ABE-A2E2-4CF1-B36E-DE177A7B8681 L 29 October 2014 1.3.1 1003001
C5E863A1-5A92-4F64-81C6-6946146F2ABA L 29 October 2014 1.3.2 1003002
23D89CFF-C9A3-405E-A7EE-7CAACCA58CC2 L 29 October 2014 1.3.3 1003003
CE4F9F6B-3A58-4F78-9C1F-4780883F6995 L 29 October 2014 1.3.4 1003004
8B2A137F-C311-419A-8812-76E87D8CFA40 L 29 October 2014 1.3.5 1003005
F8487463-302E-4225-8A06-7C8CDCC23B45 L 29 October 2014 1.3.6 1003006
D365A402-D3CC-4242-B1B9-356FB41AABC1 L 29 October 2014 1.3.7 1003007
DFD4D688-080C-4FF0-B1D0-EBE63B6F99FD L 29 October 2014 1.3.8 1003008
272A46C6-E326-47E8-AEE4-952AF746A866 L 29 October 2014 1.3.9 1003009
F073AFFA-F9A1-46ED-AC4B-E92A7160EB21 L 29 October 2014 1.3.10 1003010
140744E7-4950-43F8-BA0C-0E541550F14B L 29 October 2014 1.3.11 1003011
93AA3C05-E95A-4201-AE03-190DDBF31B47 L 29 October 2014 1.3.12 1003012
5CED791D-4695-440F-ABC4-9127F1EE2A55 L 29 October 2014 1.4 1004000
FBC38F00-E2E8-4724-A716-AE419907A681 L 29 October 2014 1.5 1005000
2862FED9-8916-4139-9577-C858F75C768A P 29 October 2014 2 2000000
8265A2BE-D2DD-4825-AE0A-7930671D4641 P 29 October 2014 3 3000000

Visual Studio crashes when trying to add an item to a TFS build workflow

There has for a long time been an issue where Visual Studio can crash when you try to add a new activity to the toolbox while editing a TFS build workflow. I have seen it many times and never got to the bottom of it. It seems to be machine specific, as one machine can work while another, supposedly identical, will fail; I could never track down the cause.

Today I was on a machine that was failing, but this time I found a workaround in a really old forum post. The workaround is to load the IDE from the command line with the /safemode flag

C:\Program Files (x86)\Microsoft Visual Studio 12.0\Common7\IDE\devenv.exe /safemode

Once you do this you can edit the contents of your toolbox without crashes, and also your template if you wish. The best part is that once you exit the IDE and reload it as normal your new toolbox contents are still there.

Not perfect, but a good workaround

Getting Release Management to fail a release when using a custom PowerShell component

If you have a custom PowerShell script you wish to run, you can create a tool in Release Management (Inventory > Tools) for the script which deploys the .PS1, .PSM1 files etc. and defines the command line to run it.

The problem we hit was that our script failed but did not fail the release step, as the PowerShell.exe running the script exited without an error code. The script had thrown an exception, which was in the output log file, but the step was marked as completed.

The solution was to use a try/catch in the .PS1 script that, as well as writing a message via Write-Error, sets the exit code to something other than 0 (zero). So you end up with something like the following in your .PS1 file

param
(
    [string]$Param1,
    [string]$Param2
)

try
{
    # some logic here
}
catch
{
    Write-Error $_.Exception.Message
    exit 1 # to get an error flagged so it can be seen by RM
}

Once this change was made an exception in the PowerShell caused the release step to fail as required. The output from the script appeared as the Command Output.
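For completeness, the command line defined for such a tool would be along these lines (the script name and parameters are illustrative, not the actual tool definition):

powershell.exe -NoProfile -ExecutionPolicy Bypass -File .\Deploy.ps1 -Param1 "value1" -Param2 "value2"

Running the script via -File means the exit code set in the catch block becomes the exit code of PowerShell.exe itself, which is what makes the failure visible to Release Management.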

"The handshake failed due to an unexpected packet format" with Release Management

Whilst configuring a Release Management 2013.3 system I came across a confusing error. All seemed OK: the server, client and deployment agents were all installed and seemed to be working, but when I tried to select a build to deploy, both the Team Projects and Build drop downs were empty.


A check of the Windows event log on the server showed the errors

The underlying connection was closed: An unexpected error occurred on a send
The handshake failed due to an unexpected packet format

Turns out the issue was an incorrectly set value when the Release Management server was configured. HTTPS had been selected in error; in fact there was no SSL certificate on the box, so HTTPS could not work


As this had been done in error we did not use HTTPS at any other point in the installation; we always used the URL http://typhoontfs:1000. The strange part of the problem was that the only place this mistake caused a problem was the Team Project drop down; everything else seemed fine, the clients and deployment agents could all see the server.

Once the Release Management server was reconfigured with the correct HTTP setting all was OK


Cannot build an SSRS project in TFS build due to expired license

If you want your TFS build process to produce SSRS RDL files you need to call the vsDevEnv custom activity to run Visual Studio (just as for SSIS packages). On our new TFS 2013.3 based build agents this step started to fail; it turned out the issue was not incorrect versions of DLLs or a badly applied update, but that the license for Visual Studio on the build agent had expired.

I found it by looking at diagnostic logs in the TFS build web UI.


To be able to build BI projects with Visual Studio you do need a licensed copy of Visual Studio on the build agent. You can use a trial license, but it will expire. Also remember that if you license VS by logging in with your MSDN Live ID, that too needs to be refreshed from time to time (that is what got me), so it is better to use a product key.
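As a quick check on the agent itself, you can run what amounts to the same build from a command prompt; if the Visual Studio license has expired this should surface the licensing problem directly (the solution name is illustrative):

"C:\Program Files (x86)\Microsoft Visual Studio 12.0\Common7\IDE\devenv.exe" MyReports.sln /build Release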

‘Unable to reconnect to database: Timeout expired’ error when using SQLPackage.exe to deploy to Azure SQL

I have been trying to update an Azure-hosted SQL DB using Release Management and the SSDT SQLPackage tooling. All worked fine on my test Azure instance, but when I went to deploy to production I got the following error

*** An error occurred during deployment plan generation. Deployment cannot continue.
Failed to import target model [dbname]. Detailed message Unable to reconnect to database: Timeout expired.  The timeout period elapsed prior to completion of the operation or the server is not responding.
Unable to reconnect to database: Timeout expired.  The timeout period elapsed prior to completion of the operation or the server is not responding.
Timeout expired.  The timeout period elapsed prior to completion of the operation or the server is not responding.
The wait operation timed out
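For context, the deployment was running SQLPackage.exe with arguments along these lines (the server, database and credentials shown are placeholders):

SqlPackage.exe /Action:Publish /SourceFile:MyDb.dacpac /TargetServerName:myserver.database.windows.net /TargetDatabaseName:dbname /TargetUser:user@myserver /TargetPassword:MyPassword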

Turns out SQLPackage.exe was connecting OK; if I entered an invalid password it gave a different error, so it had made the connection and then died.

It seems I am not alone in seeing this problem, and most people suggest changing a timeout in the registry or in your exported DACPAC. However, neither of these techniques worked for me.

I compared my test and production Azure DB instances, and found the issue. My test SQL DB was created using a SQL export from the production Azure subscription imported into my MSDN one. I had done a quick ‘next > next > next’ import, and the DB had been set up on the Standard (S2) service tier. The production DB had been running on the old retired Web service tier, but had recently been swapped to the Basic tier (it is a very small DB). When I re-imported my backup, but this time set it to the Basic tier, I got exactly the same error message.

So on my test DB I changed its service tier from Basic to Standard (S0) and my deployment worked. The same solution worked for my production DB.

Now an S0 instance is just over twice the cost of a Basic one, so if I were really penny pinching I could consider moving it back to Basic now the deployment is done. I did try this, and the deployment error returned; so it feels a false economy, as I want a stable release pipeline, until Microsoft sort out why I cannot use SSDT to deploy to a Basic instance

Experiences using a DD-WRT router with Hyper-V

I have been playing around with the idea of using a DD-WRT router on a Hyper-V VM to connect my local virtual machines to the Internet, as discussed by Martin Hinshelwood in his blog post. I learned a few things that might be of use to others trying the same setup.

What I used to do

Prior to using the router I had been using three virtual switches on my Windows 8 Hyper-V setup, with multiple network adaptors connecting both my VMs and the host machine to the switches and networks


So I had

  • One internal virtual switch only accessible on my host machine and my VMs
  • Two external virtual switches
    • one linked to my physical Ethernet adaptor
    • the other linked to my physical WiFi adaptor

Arguably I could have had just one ‘public’ virtual switch and connected it to either my Ethernet or WiFi as needed. However, I found it easier to swap the virtual switch in the VM settings rather than swap the network adaptor inside the virtual switch settings. I cannot really think of a compelling reason to pick one method over the other; just personal taste or habit I guess.

This setup had worked OK: if I needed to access a VM from my host PC I used the internal switch. This switch had no DHCP server on it, so I used the alternate configuration IP addresses assigned by Windows, managing machine IP addresses via a local hosts file. To allow the VMs to access the Internet I added a second network adaptor to each VM, which I bound to one of the externally connected switches, the choice depending on which Internet connection I had at any given time.

However, all was not perfect. I have had problems with some Linux distributions running in Hyper-V not getting an IP address via DHCP over WiFi. There was also the complexity of having to add a second network adaptor to each VM.

So would a virtual router help? I thought it worth a try, so I followed Martin’s process, but hit a few problems.

Setting up the DD-WRT router

As Martin said in his post, more work was needed to fully configure the router to allow external access. The problem I had for a long time was that as soon as I enabled the WAN port I seemed to lose the connection. After much fiddling this was the process that worked for me

  1. Install the router as detailed in Martin’s post
  2. Link your internal Hyper-V switch to the first Ethernet (Eth0) port on the router VM. This seems a bit counter intuitive as the DD-WRT wiki says the first port is for the WAN – more on that later

  3. Boot the router; you should be able to login at the address 192.168.1.1 as root with the password admin, either on the console or via a web browser from your host PC
  4. On the basic setup tab (the default page) enable the WAN by selecting ‘Automatic Configuration (DHCP)’ and save the change

    It is at this point I kept getting disconnected. I then realised it was because the ports were being reassigned; at this point Eth0 had indeed become the WAN port and Eth1 the internal port
  5. So in Hyper-V Manager:
    • Re-assign the first Ethernet port (Eth0) to the external Hyper-V switch (in turn connected to your Internet connection)
    • Assign the second Ethernet port (Eth1) to the internal virtual switch

  6. You can now re-connect to 192.168.1.1 in a browser from your host machine to complete your configuration

So now all my VMs connected to the internal virtual switch could get a 192.168.1.x address via DHCP (using their single network adaptors), but they could not see the Internet. However, on the plus side, DHCP seemed to work OK for all operating systems, so my Linux issues appeared to be fixed

It is fair to say I now had a fairly complex network, so it was not surprising that I had routing issues.


The issue seems to have been that the VMs were not being passed the correct default gateway and DNS entries by DHCP. I had expected these to be set by default by the router, but that was not the case; they had to be set by hand in the router settings.


Once these were set, the changes saved on the router, and the VMs’ DHCP settings renewed, the VMs had Internet access

At one point I thought I had also lost Internet access from my host PC, or at least that its Internet access was much slower. I thought I had created a routing loop, with all traffic passing through the router whether it needed to or not. However, once the above router gateway IP settings were set these problems went away.

When I checked my Windows 8 host’s routing table using netstat -r it showed two default routes (0.0.0.0): my primary one (192.168.0.1, my home router) and my Hyper-V router (192.168.1.1). The second one had a much higher metric, so it should not be used unless sending packets to the 192.168.1.x network; all other traffic should go out via the primary link.


It was at this time I noticed that the problem of getting a DHCP-based IP address over WiFi had not gone away completely. If I had my router’s WAN port connected to my WiFi virtual switch then, depending on the model/setup of the WiFi router, DHCP worked sometimes and sometimes not. I think this was mostly down to an authentication issue; not a major issue, as thus far the only place I have a problem is our office, where our WiFi is secured via RADIUS server based AD authentication. There I just switched to using either our guest WiFi or our Ethernet, which both worked.

So is this a workable solution?

It seems to be OK thus far, but there were more IP address/routing issues during the setup than I would like; you need to know your IPv4.

There are many options on the DD-WRT console I am unfamiliar with. By default it is running just like a home router, in Network Address Translation (NAT) mode. This has the advantage of hiding the internal switch, but I was wondering whether it would be easier to run the DD-WRT as a simple router?

The problem with that mode of operation is that I would need to make sure my internal virtual LAN does not conflict with anything on the networks I connect to, and with automated routing protocols such as RIP it could get interesting fast; making me a few enemies among the IT managers whose networks I connect to.

A niggle is that whenever I connect my PC to a new network I need to remember to do a DHCP renew on the WAN port (Status > WAN > DHCP Renew); the router does not automatically detect the change of connection.

Also, I still need to manage my VMs’ IP addresses with a hosts file on the host Windows PC. As I don’t want to edit this file too often, it is a good idea to increase the DHCP lease time on the router (Setup > Basic Setup) to a few days instead of one day.
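As an aside, an entry can be appended to the hosts file from an elevated PowerShell prompt rather than editing it by hand; the address and VM name here are just examples:

Add-Content -Path "$env:SystemRoot\System32\drivers\etc\hosts" -Value "192.168.1.50`tmytestvm"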

As to how well this works we shall see, but it seems OK for now

Version stamping Windows 8 Store App manifests in TFS build

We have for a long time used the TFSVersion custom build activity to stamp all our TFS builds with a unique version number that matches the build number. However, this only edits the AssemblyInfo.cs files. As we are now building more and more Windows 8 Store Apps we also need to edit the XML in the Package.appxmanifest files used to build the packages. Just as with a WiX MSI project, it is a good idea for the package version to match some aspect of the assemblies it contains. We needed to automate the update of this manifest, as people too often forget to increment the version, causing confusion all down the line.

Now I could have written a new TFS custom activity to do the job, or edited the existing one, but both options seemed a poor choice. We all know that custom activity writing is awkward and a pain to support going forward. So I decided to use the hooks in the 2013 generation build process template to call a custom PowerShell script to do the job.

I added a PreBuildScript.PS1 file as a solution item to my solution.

I placed the following code in the file. It uses the TFS environment variables to get the build location and version, using these to find and edit the manifest files. The only gotcha is that files on the build box are read only (it is a server workspace), so the manifest file has to be set to read/write before it can be saved back.

# get the build number, we assume the format is Myproject.Main.CI_1.0.0.18290
# where the version is set using the TFSVersion custom build activity (see other posts)
$buildnum = $env:TF_BUILD_BUILDNUMBER.Split('_')[1]

# get the manifest file paths
$files = Get-ChildItem -Path $env:TF_BUILD_BUILDDIRECTORY -Filter "Package.appxmanifest" -Recurse

foreach ($filepath in $files)
{
    Write-Host "Updating the Store App Package '$filepath' to version '$buildnum'"
    # update the identity value
    $XMLfile = New-Object XML
    $XMLfile.Load($filepath.FullName)
    $XMLfile.Package.Identity.Version = $buildnum
    # clear the read-only flag so the file can be saved back
    Set-ItemProperty $filepath.FullName -Name IsReadOnly -Value $false
    $XMLfile.Save($filepath.FullName)
}
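For reference, the node the script is updating is the Identity element near the top of the Package.appxmanifest file, which looks something like this (the name and publisher values are illustrative):

<Identity Name="MyCompany.MyStoreApp" Publisher="CN=MyCompany" Version="1.0.0.18290" />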

Note that any output sent via Write-Host will only appear in the diagnostic log of TFS. If you use Write-Error (or errors are thrown) these messages will appear in the build summary; the build will not fail, but it will be marked as a partial success.

Once this file was checked in I was able to reference the file in the build template


The build could now be run, and it produced my Windows 8 Store packages with the required version number

Swapping the Word template in a VSTO project

We have recently swapped the Word template we use to make sure all our proposals and other documents are consistent. The changes are all cosmetic (fonts, footers etc. to match our new website); it still makes use of the same VSTO automation to do much of the work. The problem was that I needed to swap the .DOTX file within the VSTO Word add-in project; we had not been editing the old .DOTX template in the project, but had created a new one based on a copy outside of Visual Studio.

To swap in the new .DOTX file for the VSTO project I had to...

  • Copy the new TEMPLATE2014.DOTX file to project folder
  • Open the VSTO Word add-in .CSPROJ file in a text editor and replace all the occurrences of the old template name with the new one, e.g. TEMPLATE.DOTX for TEMPLATE2014.DOTX (this swap can also be scripted; see the one-liner after this list)
  • Reload the project in Visual Studio 2013; there should be no errors and the new template is listed in place of the old one
  • However, when I tried to compile the project I got a DocumentAlreadyCustomizedException. I did not know this, but the template in a VSTO project needs to be a copy with no association with any VSTO automation. The automation links are applied during the build process, which makes sense when you think about it. As we had edited a copy of our old template outside of Visual Studio, our copy already had the old automation links embedded. These needed to be removed; the fix was to
    • Open the .DOTX file in Word
    • On the File menu > Info > Right click on Properties (top right) to get the advanced list of properties

    • Delete the _AssemblyName and _AssemblyLocation custom properties
    • Save the template
    • Open the VSTO project in Visual Studio and you should be able to build the project
  • The only other thing I had to do was make sure my VSTO project was the start-up project for the solution. Once this was done I could F5/debug the template and VSTO combination
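As an aside, the find and replace in the .CSPROJ file (the second step above) can be done with a PowerShell one-liner rather than a text editor; the project file name is illustrative, and note that -replace is case-insensitive by default:

(Get-Content MyWordAddIn.csproj) -replace 'TEMPLATE\.DOTX', 'TEMPLATE2014.DOTX' | Set-Content MyWordAddIn.csproj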