But it works on my PC!

The random thoughts of Richard Fennell on technology and software development

How long is my TFS 2010 to 2013 upgrade going to take – Part 2

Back in January I did a post How long is my TFS 2010 to 2013 upgrade going to take? I have now done some more work with one of our clients and have more data. Specifically, the initial trial was a 2010 > 2013 RTM upgrade on a single tier test VM; we have now done a test upgrade from 2010 > 2013.2 on the same VM, and also one on a production quality dual tier system.


The key lessons are

  • There are 150 more steps to go from 2013 RTM to 2013.2, so it takes a good deal longer.
  • The dual tier production hardware is nearly twice as fast to do the upgrade, though the initial step (step 31, moving the source code) is not that much faster; it is the steps after this that are faster. We put it down to far better SQL throughput.

Cloning a TFS repository with git-tf gives "a server path must be absolute"

I am currently involved in moving some TFS TFVC hosted source to a TFS Git repository.  The first step was to clone the source for a team project from TFS using the command

git tf clone --deep http://tfsserver01:8080/tfs/defaultcollection ‘$My Project’ localrepo1

and it worked fine. However the next project I tried to move had no space in the source path

git tf clone --deep http://tfsserver01:8080/tfs/defaultcollection ‘$MyProject’ localrepo2

This gave the error

git-tf: A server path must be absolute.

Turns out the problem was the single quotes. Remove these and the command worked as expected

git tf clone --deep http://tfsserver01:8080/tfs/defaultcollection $MyProject localrepo2

Seems you should only use the quotes when there are spaces in a path name.

Updated 11 June – After a bit more thought I think I have tracked down the true cause. It is not actually the single quote, but the fact the command line had been cut and pasted from Word. This meant the quote was a ‘ not a '. Cutting and pasting from Word can always lead to similar problems, but it is still a strange error message; I would have expected an invalid character message.
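If you want to check a pasted command for stray Word characters, a quick PowerShell sketch like this will list anything outside plain ASCII (command.txt is a hypothetical file holding the pasted text):

# List any non-ASCII characters in the pasted command line; Word's curly
# quotes show up as U+2018/U+2019 rather than the plain apostrophe U+0027
$cmd = [System.IO.File]::ReadAllText('command.txt')
$cmd.ToCharArray() |
    Where-Object { [int]$_ -gt 127 } |
    ForEach-Object { 'Suspect character: {0} (U+{1:X4})' -f $_, [int]$_ }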

Our TFS Lab Management Infrastructure

After my session at Techorama last week I have been asked some questions about how we built our TFS Lab Management infrastructure. Well, here is a bit more detail; thanks to Rik for helping correct what I had misremembered and providing much of the detail.


For SQL we have two physical servers with Intel processors. Each has a pair of mirrored disks for the OS and a RAID5 group of disks for data. We use SQL 2012 Enterprise Always On for replication to keep the DBs in sync. The servers are part of a Windows cluster (needed for Always On) and we use a VM to give a third server in the witness role. This is hosted on a production Hyper-V cloud. We have a number of availability groups on this platform, basically one per service we run. This allows us to split the read/write load between the two servers (unless they have failed over to a single box). If we had only one availability group for all the DBs, one node would be doing all the read/write and the other read only, so not that balanced.

SCVMM runs on a physical server with a pair of hardware-mirrored 2TB disks giving 2TB of storage. That’s split into two partitions, as you can’t use data de-duplication on the OS volume of Windows. This allows us to have something like 5TB of Lab VM images stored on the SCVMM library share that’s hosted on the SCVMM server. This share is for lab management use only.

We also have two physical servers that make up a Windows Cluster with a Cluster Shared Volume on an iSCSI SAN. This hosts a number of SCVMM libraries for ISO images, production VM images and test stuff. Data de-duplication again is giving us an 80% space saving on the SAN (ISO images of OSes and VHDs of installed OSes dedupe _really_ well).
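As an aside, for anyone wanting to try the same, data de-duplication is enabled per volume via PowerShell on Windows Server 2012 and later. A minimal sketch (the E: volume is just an example, not our real layout):

# Install the de-duplication feature, enable it on the library volume,
# then run an optimisation job and check the space savings
Add-WindowsFeature FS-Data-Deduplication
Enable-DedupVolume -Volume 'E:'
Start-DedupJob -Volume 'E:' -Type Optimization
Get-DedupStatus -Volume 'E:'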

Our Lab cloud currently has three AMD based servers. They use the same disk setup as the SQL boxes, with a mirrored pair for OS and RAID5 for VM storage.

Our Production Hyper-V also has three servers, but this time in a Windows Cluster using a Cluster Shared Volume on our other iSCSI SAN for VM storage so it can do automated failover of VMs.

Each of the SQL servers, SCVMM servers and Lab Hyper-V servers uses Windows Server 2012 R2 NIC teaming to combine 2 x 1Gbit NICs, which gives us better throughput and failover. The lab servers have one team for VM traffic and one team for the Hyper-V management traffic that is used when deploying VMs. That means we can push VMs around pretty much as fast as the disks will move data in either direction, without needing expensive 10Gbit Ethernet.
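Setting up one of these teams is a one-liner in PowerShell on Server 2012 R2; a sketch, with made-up team and NIC names:

# Combine two 1Gbit NICs into a switch-independent team for VM traffic
New-NetLbfoTeam -Name 'VMTeam' -TeamMembers 'NIC1','NIC2' -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic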

So I hope that answers any questions.

Getting ‘The build directory of the test run either does not exist or access permission is required’ error when trying to run tests as part of a Release Management deployment

Whilst running tests as part of a Release Management deployment I started seeing the error ‘The build directory of the test run either does not exist or access permission is required’, and hence all my tests failed. It seems there are a number of issues that can cause this problem, as mentioned in the comments on Martin Hinshelwood’s post on running tests in deployment; in particular, spaces in the build name can cause it, but this was not the case for me.

The strangest point was that it used to work; what had I changed?

To debug the problem I logged into the test VM as the account the deployment service was running as (a shadow account, as the environment was network isolated). I got the command line that the component was trying to run by looking at the messages in the deployment log.


I then went to the deployment folder on the test VM

%localappdata%\temp\releasemanagement\[the release management component name]\[release number]

and ran the same command line. Strange thing was this worked! All the tests ran and passed OK, TFS was updated, everything was good.

It seemed I only had an issue when triggering the tests via a Release Management deployment, very strange!

A side note here: when I say the script ran OK, it did report an error and did not export and unpack the test results from the TRX file to pass back to the console/Release Management log. Turns out this is because the MTMExec.ps1 script uses the command [System.IO.File]::Exists(..) to check if the .TRX file has been produced. This fails when the script is run manually because it relies on [Environment]::CurrentDirectory, which is not set the same way when a script is run manually as when it is called by the deployment service. When run manually it seems to default to c:\windows\system32, not the current folder.

If you are editing this script, and want it to work in both scenarios, then it is probably best to use the PowerShell Test-Path(..) cmdlet as opposed to [System.IO.File]::Exists(..).
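Something along these lines (a sketch of the idea, not the actual MTMExec.ps1 code; the file name is made up) avoids the dependency on [Environment]::CurrentDirectory:

# [System.IO.File]::Exists resolves relative paths against
# [Environment]::CurrentDirectory, which can be c:\windows\system32 when
# the script is run manually; Test-Path uses the current PowerShell
# location, so it behaves the same in both scenarios
$trxFile = 'testresults.trx'
if (Test-Path $trxFile) {
    Write-Output "Found test results at $((Resolve-Path $trxFile).Path)"
}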

So where to look for this problem? The error says something can’t access the drops location, but what?

A bit of thought as to who is doing what can help here.


When the deployment calls for a test to be run:

  • The Release Management deployment agent pulls the component down to the test VM from the Release Management Server
  • It then runs the PowerShell script
  • The PowerShell script runs TCM.exe to trigger the test run, passing in the credentials to access the TFS server and Test Controller (see the example command after this list)
  • The Test Controller triggers the tests to be run on the Test Agent, providing it with the required DLLs from the TFS drops location – THIS IS THE STEP WHERE THE PROBLEM IS SEEN
  • The Test Agent runs the tests and passes the results back to TFS via the Test Controller
  • After the PowerShell script triggers the test run it loops until the test run is complete.
  • It then uses TCM again to extract the test results, which it parses and passes back to the Release Management server
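For reference, the kind of TCM.exe command the script runs to trigger the run looks something like this; a sketch with made-up plan, suite and config IDs, not the actual values from this system:

tcm run /create /title:"Deployment tests" /planid:1 /suiteid:2 /configid:3 /collection:http://tfsserver01:8080/tfs/defaultcollection /teamproject:"My Project"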

So a good few places to check the logs.

Turns out the error was being reported on the Test Controller.


(QTController.exe, PID 1208, Thread 14) Could not use lab service account to access the build directory. Failure: Network path does not exist or is not accessible using following user: \\store\drops\Sabs.Main.CI\Sabs.Main.CI_2.3.58.11938\ using blackmarble\tfslab. Error Code: 53

The error told me the folder and who couldn’t access it: the domain service account ‘tfslab’ that the Test Agents use to talk back to the Test Controller.

I checked the drops location share and this user has adequate access rights. I even logged on to the Test Controller as this user and confirmed I could open the share.

I then had a thought: this was the account the Test Agents were using to communicate with the Test Controller, but was it the account the controller itself was running as? A check showed it was not; the controller was running as the default ‘Local System’. As soon as I swapped to using the lab service account (or, I think, any domain account with suitable rights) it all started to work.


So why did this problem occur?

All I can think of is that (to address another issue with Windows 8.1 Coded-UI testing) the Test Controller was upgraded to 2013.2 RC, but the Test Agent in this lab environment was still at 2013 RTM. Maybe the mismatch is the issue?

I may revisit and retest with the ‘Local System’ account when 2013.2 RTMs and I upgrade all the controllers and agents, but I doubt it will change anything; I have no issue running the test controller as a domain account.

Setting the LocalSQLServer connection string in web deploy

If you are using Web Deploy you might wish to alter the connection string for LocalSQLServer that is used by the ASP.NET providers for web part personalisation. The default is to use ASPNETDB.mdf in the App_Data folder, but in a production system you could well want to use a ‘real’ SQL server.

If you look in your web.config, assuming you are not using the default ‘not set’ setting, it will look something like

<connectionStrings>
  <clear />
  <add name="LocalSQLServer" connectionString="Data Source=(LocalDB)\projects;Integrated Security=true;AttachDbFileName=|DataDirectory|ASPNETDB.mdf" providerName="System.Data.SqlClient" />
</connectionStrings>

Usually you would expect any connection string in the web.config to appear in the Web Deploy publish wizard, but this one does not. I have no real idea why, but maybe it is something to do with having to use <clear /> to remove the default?


If you use a parameters.xml file to add parameters to the web deploy you would think you could add the block

<parameter name="LocalSQLServer" description="Please enter the ASP.NET DB path" defaultValue="__LocalSQLServer__" tags="">
  <parameterEntry kind="XmlFile" scope="\\web.config$" match="/configuration/connectionStrings/add[@name='LocalSQLServer']/@connectionString" />
</parameter>

However, this does not work; in the setparameters.xml that is generated you find two entries, first yours then the auto-generated one, and the last one wins, so you don’t get the correct connection string.

<setParameter name="LocalSQLServer" value="__LocalSQLServer__" />
<setParameter name="LocalSQLServer-Web.config Connection String" value="Data Source=(LocalDB)\projects; Integrated Security=true ;AttachDbFileName=|DataDirectory|ASPNETDB.mdf" />

The solution I found was to manually add the parameter in the parameters.xml file as

<parameter name="LocalSQLServer-Web.config Connection String" description="LocalSQLServer Connection String used in web.config by the application to access the database." defaultValue="__LocalSQLServer__" tags="SqlConnectionString">
  <parameterEntry kind="XmlFile" scope="\\web.config$" match="/configuration/connectionStrings/add[@name='LocalSQLServer']/@connectionString" />
</parameter>

With this form the connection string was correctly modified, as only one entry appears in the generated file.

Changing WCF bindings for MSDeploy packages when using Release Management

Colin Dembovsky’s excellent post ‘WebDeploy and Release Management – The Proper Way’ explains how to pass parameters from Release Management into MSDeploy to update web.config files. On the system I am working on I also need to do some further web.config transformation; basically the WCF section is different on a Lab or Production build, as it needs to use Kerberos, whereas local debug builds don’t.

In the past I dealt with this, and with editing the AppSettings, using MSDeploy web.config transformation. This worked fine, but it meant I built the product three times, exactly what Colin’s post is trying to avoid. The techniques in the post are fine for the AppSettings and connection strings, but don’t apply so well to large block swapouts, such as I need for the WCF bindings section.

I was considering my options when I realised there was a simple option.

  • My default web.config has the bindings for local operation i.e. no Kerberos
  • The web.debug.config transform hence does nothing
  • Both the web.lab.config and web.release.config transforms have the Kerberos bindings swapped in (see the sketch after this list)
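For completeness, the block swapout in the lab and release transforms is just a straight XDT Replace; a trimmed sketch, with placeholder content rather than our real bindings:

<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <system.serviceModel>
    <!-- Swap the entire bindings block for the Kerberos-enabled version -->
    <bindings xdt:Transform="Replace">
      <!-- Kerberos bindings go here -->
    </bindings>
  </system.serviceModel>
</configuration>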

So all I needed to do was build the Release build (as you would for a production release anyway); this will have the correct bindings in the MSDeploy package for both Lab and Release. You can then use Release Management to set the AppSettings and connection strings as required.

Simple, no extra handling required. I had thought myself into a problem I did not really have.

Release Management components fail to deploy with a timeout if a variable is changed from standard to encrypted

I have been using Release Management to update some of our internal deployment processes. This has included changing the way we roll out MSDeploy packages; I am following Colin Dembovsky’s excellent post on the subject.

I hit an interesting issue today. One of the configuration variable parameters I was passing into a component was a password field. For my initial tests I had just let this be a clear text ‘standard’ string in Release Management. Once I got this all working I thought I had better switch this variable to ‘encrypted’, so I just changed the type on the Configuration Variables tab.


On doing this I was warned that previous deployments would not be re-deployable, but that was OK for me; it was just a trial system and I would not be going back to older versions.

However, when I tried to run this revised release template all the steps up to the edited MSDeploy step were fine, but the MSDeploy step never ran; it just timed out. The component was never deployed to the target machine’s %localappdata%\temp\releasemanagement folder.


In the end, after a few reboots to confirm the comms were OK, I just re-added the component to the release template and entered all the variables again. It then deployed without a problem.

I think this is a case of a misleading error message.

‘Windows Phone 8.1 Update’ update

I have been running Windows Phone 8.1 Update for a couple of weeks now and have to say I like it. I have not suffered the poor battery life others seem to have experienced. Maybe this is a feature of the Nokia 820 not needing as many firmware updates from Nokia (which aren’t available yet), or not having such power-hungry features as the larger phones.

The only issue I have had is that I lost an audio channel when using a headset. Initially I was unsure if it was a mechanical fault on the headphone socket, but I checked the headset was good; it sounded as if the balance was faded to just one side, as you could just hear something faint on the failing side. Anyway, as is often the case in IT, a reboot of the phone fixed the issue.

Where has my picture password sign-in gone on Windows 8?

I have had a Surface 2 for about six months. It is great for watching videos on the train, or a bit of browsing, but I don’t like it for note taking in meetings. This is a shame, as that is what I got it for: a light device with good battery life to take to meetings. What I needed was something I could hand-write on in OneNote, an electronic pad, and the Surface 2 touch screen is just not accurate enough.

After Rik’s glowing review I have just got a Dell Venue 8 Pro and stylus. I set up the Dell with a picture password and all was OK for a while; I could log in via a typed password or a picture as you would expect. However, the picture password sign-in option disappeared from the lock/login screen at some point after running the numerous updates and application installations I needed.

I am not 100% certain, but I think the issue is that when I configured the Windows 8 Mail application to talk to our company Exchange server I was asked to accept some security settings from our domain, and I think these blocked picture passwords for non-domain joined devices. I joined the Dell to our domain (you can do this as it is Atom not ARM based, assuming you are willing to do a reinstall with Windows 8 Pro) and this seems to have fixed my problem. I have installed all the same patches and apps and I still have the picture password option.

So roll on the next meeting to see if I can take reasonable handwritten notes on it, and whether OneNote desktop manages to get them converted to text.

Handling .pubxml files with TFS MSBuild arguments

With Visual Studio 2012 there were changes in the way web publishing worked; the key fact being that the configuration was moved from the .csproj file to a .pubxml file in the Properties folder. This allows the publish settings to be more easily managed under source control by a team. It does have some knock-on effects though, especially when you start to consider automated build and deployment.

Up to now we have not seen issues in this area; most of our active projects that needed web deployment packages were started in the Visual Studio 2010 era, so had all the publish details in the project file, and this is still supported by later versions of Visual Studio. This meant that if we had three configurations (debug, lab and release) there were three different sets of settings stored in different blocks of the project file. So if you used the /p:DeployOnBuild=True MSBuild argument for your TFS build and built all three configurations, you got the settings related to the respective configuration in each drop location.

This seems a good system, until you consider that you have built the assemblies three times; in a world of continuous deployment by binary promotion, is this what you want? Better to build the assemblies once, but have different (or transformed) configuration files for each environment/stage in the release pipeline. This is where the swap to a .pubxml file helps.

You create a .pubxml file by running the wizard in Visual Studio, via a right click on a project and selecting Publish.


To get TFS build to use a .pubxml file you need to pass its name as an MSBuild argument. So in the past we would have used the argument /p:DeployOnBuild=True; now we would use /p:DeployOnBuild=True;PublishProfile=MyProfile, where there is a .pubxml file in the path

[Project]/Properties/PublishProfiles/MyProfile.pubxml

Once this is done your package will be built (assuming that it is a Web Deployment Package and not some other form of deploy) and available in your drops location. The values you may wish to alter are probably in the [your package name].SetParameters.xml file, which you can alter with whichever transform technology you wish to use, e.g. SlowCheetah or Release Management workflows.

One potential gotcha I hit whilst testing with MSBuild from the command line is that the .pubxml file contains a value for the property <DesktopBuildPackageLocation>. This will be the output path you used when you created the publish profile with the wizard in Visual Studio.

If you are testing your arguments with MSBuild.exe from the command line, this is where the output gets built to. If you want the build to behave more like a TFS build (using the obj/bin folders), you can clear this value by passing the MSBuild argument /p:DesktopBuildPackageLocation="".
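Putting that together, a local test of the arguments from a developer command prompt looks something like this (MyWebApp.csproj is a placeholder project name):

msbuild MyWebApp.csproj /p:DeployOnBuild=True /p:PublishProfile=MyProfile /p:Configuration=Release /p:DesktopBuildPackageLocation=""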

You don’t need to worry about this for the TFS build definitions as it seems to be able to work it out and get the correctly packaged files to the drops location.