But it works on my PC!

The random thoughts of Richard Fennell on technology and software development

Where has my picture password sign in gone on Windows 8?

I have had a Surface 2 for about six months. It is great for watching videos on the train, or a bit of browsing, but I don’t like it for note taking in meetings. This is a shame, as this is what I got it for: a light device with good battery life to take to meetings. What I needed was something I could hand write on in OneNote, an electronic pad. The Surface 2 touch screen is just not accurate enough.

After Rik’s glowing review I have just got a Dell Venue 8 Pro and stylus. I set up the Dell with a picture password and all was OK for a while; I could log in via a typed password or a picture as you would expect. However, the picture password sign-in option disappeared from the lock/login screen at some point after running the numerous updates and application installations I needed.

I am not 100% certain, but I think the issue is that when I configured the Windows 8 Mail application to talk to our company Exchange server I was asked to accept some security settings from our domain. I think this blocked picture password for non-domain joined devices. I joined the Dell to our domain (you can do this as it is Atom not ARM based, assuming you’re willing to do a reinstall with Windows 8 Pro) and this seems to have fixed my problem. I have installed all the same patches and apps and I still have the picture password option.

So roll on the next meeting, to see if I can take reasonable hand written notes on it and whether OneNote desktop manages to convert them to text.

Handling .pubxml files with TFS MSBuild arguments

With Visual Studio 2012 there were changes in the way Web Publishing worked, the key fact being that the configuration was moved from the .csproj file to a .pubxml file in the Properties folder. This allows publish profiles to be more easily managed under source control by a team. This does have some knock-on effects though, especially when you start to consider automated build and deployment.

Up to now we have not seen issues in this area; most of our active projects that needed web deployment packages had started in the Visual Studio 2010 era, so had all the publish details in the project file, and this is still supported by later versions of Visual Studio. This meant that if we had three configurations (debug, lab and release) then there were three different sets of settings stored in different blocks of the project file. So if you used the /p:DeployOnBuild=True MSBuild argument for your TFS build, and built all three configurations, you got the settings related to the respective configuration in each drop location.

This seems a good system, until you consider that you have built the assemblies three times; in a world of continuous deployment by binary promotion is this what you want? Better to build the assemblies once, but have different (or transformed) configuration files for each environment/stage in the release pipeline. This is where a swap to a .pubxml file helps.

You create a .pubxml file by running the wizard in Visual Studio, via right clicking on a project and selecting Publish.


To get TFS build to use a .pubxml file you need to pass its name as an MSBuild argument. So where in the past we would have used the argument /p:DeployOnBuild=True, now we would use /p:DeployOnBuild=True;PublishProfile=MyProfile, where there is a .pubxml file in the path

[Project]/properties/PublishProfiles/MyProfile.pubxml
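If you want to test the same arguments locally before wiring them into the TFS build, a minimal sketch of the equivalent command line run from PowerShell is shown below (MyWebApp.csproj and MyProfile are placeholder names; the MSBuild path assumes a default Visual Studio 2013 install).

# Build the web project and create the deployment package using the MyProfile publish profile
& "C:\Program Files (x86)\MSBuild\12.0\Bin\MSBuild.exe" MyWebApp.csproj /p:Configuration=Release /p:DeployOnBuild=True /p:PublishProfile=MyProfile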

Once this is done your package will be built (assuming that this is a Web Deployment Package and not some other form of deploy) and available in your drops location. The values you may wish to alter are probably in the [your package name].SetParameters.xml file, which you can alter with whichever transform technology you wish to use e.g. SlowCheetah or Release Management workflows.
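As a rough illustration of the sort of transform you might do, the PowerShell sketch below rewrites one value in the SetParameters.xml file; this is a hand-rolled example with placeholder file, parameter and connection string names, not the SlowCheetah or Release Management tooling itself.

# Load the generated SetParameters file and overwrite one value for the target environment
$path = "MyWebApp.SetParameters.xml"
[xml]$doc = Get-Content $path

# setParameter entries are named by the publish process; the name below is a placeholder
$param = $doc.parameters.setParameter | Where-Object { $_.name -eq "DefaultConnection-Web.config Connection String" }
$param.value = "Server=LabSql;Database=MyAppDb;Integrated Security=True"

$doc.Save((Resolve-Path $path).Path)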

One potential gotcha I had whilst testing with MSBuild from the command line is that the .pubxml file contains a value for the property <DesktopBuildPackageLocation>. This will be the output path you used when you created the publish profile using the wizard in Visual Studio.

If you are testing your arguments with MSBuild.exe from the command line this is where the output gets built to. If you want the build to behave more like a TFS build (using the obj/bin folders) you can clear this value by passing the MSBuild argument /p:DesktopBuildPackageLocation="".
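For example, from a PowerShell prompt (project and profile names are again placeholders; the argument is single quoted so the empty "" value reaches MSBuild intact):

# Clear the profile's fixed output path so the package is built under the project's obj folder, as in a TFS build
& "C:\Program Files (x86)\MSBuild\12.0\Bin\MSBuild.exe" MyWebApp.csproj /p:DeployOnBuild=True /p:PublishProfile=MyProfile '/p:DesktopBuildPackageLocation=""'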

You don’t need to worry about this for the TFS build definitions as it seems to be able to work it out and get the correctly packaged files to the drops location.

Fix for ‘Web deployment task failed. (Unknown ProviderOption:DefiningProjectFullPath. Known ProviderOptions are:skipInvalid’ errors on TFS 2013.2 build

When working with web applications we tend to use MSDeploy for distribution. Our TFS build box, as well as producing a _PublishedWebsite copy of the site, produces the ZIP packaged version we use to deploy to test and production servers via PowerShell or IIS Manager.

To create this package we add the MSBuild Arguments /p:CreatePackageOnPublish=True /p:DeployOnBuild=true /p:IsPackaging=True 

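If you want to check the same arguments outside of TFS, a rough command line equivalent run from PowerShell (the solution name is a placeholder; the MSBuild path assumes a default Visual Studio 2013 install) is:

# Build the solution and produce the MSDeploy ZIP packages alongside the compiled output
& "C:\Program Files (x86)\MSBuild\12.0\Bin\MSBuild.exe" MySolution.sln /p:Configuration=Release /p:CreatePackageOnPublish=True /p:DeployOnBuild=true /p:IsPackaging=True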

This had been working fine until I upgraded our TFS build system to 2013.2. Any build queued after this upgrade that builds MSDeploy packages gives the error:

C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v12.0\Web\Microsoft.Web.Publishing.targets (3883): Web deployment task failed. (Unknown ProviderOption:DefiningProjectFullPath. Known ProviderOptions are:skipInvalid.)

If I removed the /p:DeployOnBuild=true argument, the build was fine, just no ZIP package was created.

After a bit of thought I realised that I had also upgraded my PC to Visual Studio 2013.2 RC, where the publish options for a web project are more extensive, giving more options for Azure.

So I assumed the issue was a version mismatch in the MSBuild web publishing targets files, the older ones missing these new options. So I replaced the contents of C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v12.0\Web on my build box with the version from my upgraded development PC, and my build started working again.
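For reference, ‘replaced the contents’ was nothing cleverer than a file copy; a sketch using robocopy is below (BUILDBOX is a placeholder for your build server name, and /MIR mirrors the folder, deleting anything extra on the target, so take a backup of the original folder first).

# Mirror the newer web publishing targets from the upgraded dev PC to the build box
robocopy "C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v12.0\Web" "\\BUILDBOX\c$\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v12.0\Web" /MIR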

Seems there are some extra parameters set in the newer version of the build targets. Let’s see if it changes again when Visual Studio 2013.2 RTMs.

Getting started with Release Management with network isolated Lab Management environments

Our testing environments are based on TFS Lab Management, historically we have managed deployment into them manually (or at least via a PowerShell script run manually) or using TFS Build. I thought it time I at least tried to move over to Release Management.

The process to install the components of Release Management is fairly straightforward; there are wizards that ask for little other than which account to run as:

  • Install the deployment server, pointing at a SQL instance
  • Install the management client, pointing at the deployment server
  • Install the deployment agent on each box you wish to deploy to, again pointing it at the deployment server

I hit a problem with the third step. Our lab environments are usually network isolated, hence each can potentially be running its own copy of the same domain. This means the connection from the deployment agent to the deployment server is cross domain. We don’t want to set up cross domain trusts as:

  1. we don’t want cross domain trusts, they are a pain to manage
  2. as we have multiple copies of environments there is more than one copy of some domains – all very confusing for cross domain trusts

So this means you have to use shadow accounts, as detailed in MSDN for Release Management. The key with this process is to make sure you manually add the accounts in Release Management (step 2) – I missed this at first as it differs from what you usually need to do.

To resolve this issue, add the Service User accounts in Release Management. To do this, follow these steps:

    1. Create a Service User account for each deployment agent in Release Management (see the sketch after these steps). For example, create the following:

      Server1\LocalAccount1
      Server2\LocalAccount1
      Server3\LocalAccount1

    2. Create an account in Release Management, and then assign to that account the Service User and Release Manager user rights. For example, create Release_Management_server\LocalAccount1.
    3. Run the deployment agent configuration on each deployment computer.
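For step 1, a minimal sketch of creating the shadow account on one of the deployment target VMs is shown below; the account name and password are placeholders, and I am assuming the account is also made a local administrator so the deployment agent can actually deploy.

# Run on each deployment target VM; every machine gets the same local account name and password
net user LocalAccount1 Passw0rd123 /add
net localgroup Administrators LocalAccount1 /add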

However I still had a problem: I entered the correct details in the deployment agent configuration client, but got an error.


The logs showed

Received Exception : Microsoft.TeamFoundation.Release.CommonConfiguration.ConfigurationException: Failed to validate Release Management Server for Team Foundation Server 2013.
   at Microsoft.TeamFoundation.Release.CommonConfiguration.DeployerConfigurationManager.ValidateServerUrl()
   at Microsoft.TeamFoundation.Release.CommonConfiguration.DeployerConfigurationManager.ValidateAndConfigure(DeployerConfigUpdatePack updatePack, DelegateStatusUpdate statusListener)
   at System.ComponentModel.BackgroundWorker.WorkerThreadStart(Object argument)

A quick look using Wireshark showed it was trying to access http://releasemgmt.mydomain.com:1000/configurationservice.asmx. If I tried to access this URL in a browser I got:

Server Error in '/' Application.
--------------------------------------------------------------------------------

Request format is unrecognized.
Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.
Exception Details: System.Web.HttpException: Request format is unrecognized.

Turns out the issue was that I needed to run the deployment agent configuration tool as the shadow user account, not as any other local administrator.
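The easy way to do this, rather than logging on to the VM as the shadow account, is to spawn a command prompt running as that user and launch the configuration tool from it; the machine and account names below are placeholders for your own shadow account.

# Opens a command prompt running as the shadow account; start the deployment agent configuration tool from this prompt
runas /user:Server1\LocalAccount1 cmd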

Once I did this the configuration worked and the management console could scan for the client. So now I can really start to play…

A better way of using TFS Community Build Extensions StyleCop activity so it can use multiple rulesets

Background

The TFS Community Build Extensions provide many activities to enhance your build. One we use a lot is the one for StyleCop to enforce code consistency in projects as part of our check in & build process.

In most projects you will not want a single set of StyleCop rules to be applied across the whole solution. Most teams will require a higher level of ‘rule adherence’ for production code as opposed to unit test code. By this I don’t mean the test code is ‘lower quality’, just that rules will differ e.g. we don’t require XML documentation blocks on unit test methods as the unit test method names should be documentation enough.

This means each project in a solution may have its own StyleCop settings file. With Visual Studio these are found and used by the StyleCop runner without an issue.

However, on our TFS build boxes what we found was that when we told the build to build a solution, the StyleCop settings file in the same folder as the solution file was used for the whole solution. This meant we saw a lot of false violations, such as unit tests with no documentation headers.

The workaround we have used is not to tell the TFS build to build a solution, but to build each project individually (in the correct order). By doing this the StyleCop settings file in each project folder is picked up. This is an OK solution, but it does mean you need to remember to add new projects and remove old ones as the solution matures. Easy to forget.

Why is it like this?

Because of this we have had a task on our engineering backlog to update the StyleCop activity so it did not just use the settings file from the root solution/project folder (or any single named settings file you specified).

I eventually got around to this, mostly due to new solutions being started that I knew would contain many projects and potentially have a more complex structure than I wanted to manage by hand within the build process.

The issue is that in the activity a StyleCop console application object is created and run. This takes a single settings file and a list of .CS files as parameters. So if you want multiple settings files you need to create multiple StyleCop console application objects.

Not a problem I thought, nothing adding a couple of activity arguments and a foreach loop can’t fix. I even got as far as testing the logic in a unit test harness, far easier than debugging in a TFS build itself.

It was then I realised the real problem: it was the StyleCop build activity documentation, and I only have myself to blame here as I wrote it!

The documentation suggests a way to use the activity:

  1. Find the .sln or .csproj folder
  2. From this folder load a settings.stylecop file
  3. Find all the .CS files under this location
  4. Run StyleCop

It does not have to be this way; you don’t need to edit the StyleCop activity, just put a different workflow in place.

A better workflow?

The key is finding the settings files, not the solution or project files. So if we assume we are building a single solution we can use the following workflow:


  1. Using the base path of the .sln file do a recursive search for all *.stylecop files
  2. Loop on this set of .stylecop files
  3. For each one do a recursive search for .cs files under its location
  4. Run StyleCop for this settings file against the source files below it
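To make the logic concrete, here is a rough PowerShell sketch of the same flow; Invoke-StyleCopScan is a hypothetical stand-in for the call into the StyleCop console application object used by the activity, and $solutionFolder is assumed to hold the folder containing the .sln file.

# Find every StyleCop settings file below the solution folder
$settingsFiles = Get-ChildItem -Path $solutionFolder -Recurse -Filter "*.stylecop"

foreach ($settings in $settingsFiles)
{
    # Gather the .cs files that sit underneath this settings file
    $sourceFiles = Get-ChildItem -Path $settings.DirectoryName -Recurse -Filter "*.cs" |
        Select-Object -ExpandProperty FullName

    # Run a StyleCop scan for this settings file against just those source files
    # (Invoke-StyleCopScan is hypothetical, standing in for the StyleCop console call)
    Invoke-StyleCopScan -SettingsFile $settings.FullName -SourceFiles $sourceFiles
}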

 

This solution seems to work. You might get some files scanned twice if you have nested settings files, but that is not an issue for us as we place a StyleCop settings file with each project. We alter the rules in each of these files as needed, from full rulesets to empty ones if we want StyleCop to skip the project.

So now I have it working internally, it is time to go and update the TFS Community Build Extensions documentation.

High CPU utilisation on the data tier after a TFS 2010 to 2013 upgrade

There have been significant changes in the DB schema between TFS 2010 and 2013. This means that as part of an in-place upgrade process a good deal of data needs to be moved around. Some of this is done as part of the actual upgrade process, but to get you up and running quicker, some is done post upgrade using SQL SPROCs. Depending on how much data there is to move this can take a while, maybe many hours. This is the cause of the high SQL load.

A key factor in how long this takes is the size of your pre-upgrade tbl_attachmentContent table; this is where, amongst other things, test attachments are stored. So if you have a lot of test attachments it will take a while, as these are moved to their new home in tbl_content.
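If you want a rough idea of how much data is sitting in there before you start, something like the sketch below will tell you; it assumes the SQLPS module is available and uses placeholder server and collection database names.

# Report the size of the old attachment table in the collection database
Import-Module SQLPS -DisableNameChecking
Invoke-Sqlcmd -ServerInstance "MyTfsSqlServer" -Database "Tfs_DefaultCollection" -Query "EXEC sp_spaceused 'dbo.tbl_attachmentContent'"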

If you want to minimise the time this takes it can be a good idea to remove any unwanted test attachments prior to doing the upgrade. This is done with the test attachment cleaner from the appropriate version of TFS Power Tools for your TFS server. However beware that if you don’t have a suitably patched SQL server there can be issues with ghost files (see Terje’s post).

If you cannot patch your SQL to a suitable version to avoid this problem then it is best to clean out old test attachments only after the whole TFS migration has completed, i.e. wait until the high SQL CPU utilisation caused by the SPROC based migration has finished. You don’t want to be trying to clean out old test attachments at the same time TFS is trying to migrate them.

Upgraded older Build and Test Controllers to TFS 2013

All has been going well since our upgrade from TFS 2012 to 2013, no nasty surprises.

As I had a bit of time I thought it a good idea to start the updates of our build and lab/test systems. We had only upgraded our TFS 2012.3 server to 2013. We had not touched our build system (one 2012 controller and 7 agents on various VMs) and our Lab Management/test controller. Our plan, after a bit of thought, was to do a slow migration, putting in new 2013 generation build and test controllers in addition to our 2012 ones. We would then decide on an individual build agent VM basis what to do, probably upgrading the build agents and connecting them to the new controller. There seems to be no good reason to rebuild the whole build agent VMs with the specific SDKs and tools they each need.

So we created a new pair of Windows 2012R2 domain joined server VMs; on one we installed a test controller and on the other the build controller and a single build agent.

Note: I always tend to favour a single build agent per VM, usually using a single core VM. I tend to find most builds are IO bound not CPU bound, so having more, smaller VMs is, I think, easier to manage at the VM hosting resource level.

Test Controller

Most of the use of our Test Controller is as part of our TFS Lab Management environments. If you load MTM 2013 you will see that it cannot manage 2012 Test Controllers; they appear as offline. Lab Management is meant to keep the test agents upgraded, so it should upgrade an agent from one point release to another e.g. 2012.3 to 2012.4. However, this upgrade feature does not extend to major release upgrades such as 2012 to 2013. Also I have always found the automated deployment/upgrade of test agents as part of an environment deployment to be problematic at best; you often seem to suffer DNS and timeout issues. Easily the most reliable method is to make sure the correct (or at least compatible) test agents are installed on all the environment VMs prior to their configuration at the deployment/restart stage.

Given this, the system that seems to work for getting an environment’s test agents talking to the new 2013 Test Controller is:

  1. In MTM stop the environment
  2. Open the environment settings and change the controller to the new one
  3. Restart the environment, you will see the VMs show as not ready. The test agents won’t configure.
  4. Connect to each VM
    1. Uninstall the 2012 Test Agent
    2. Install the 2013 Test Agent
  5. Stop and restart the environment and all should work – with luck the VMs will configure properly and show as ready
  6. If they don’t
    1. Try a second restart, sometimes sorts it
    2. You can try a repair, re-entering the various passwords.
    3. Update 5 Feb 2014: I have found I always need to do this. If the problem really persists, run the Test Agent Configuration tool on each VM, press next, next, next etc. and it will try to configure. It will probably fail, but hopefully it will have done enough port opening etc. to allow the next environment restart to work correctly
    4. If it still fails you need to check the logs, but suspect a DNS issue.

Obviously you could move step 4 to the start if you make the fair assumption that it is going to need manual intervention.

Build Controller

Swapping your build over from 2012 to 2013 will have site specific issues. It all depends what build activities you are using. If they are bound to the TFS 2012 API they may not work unless you rebuild them. However, from my first tests I have found my 2012 build process templates seem to work, whether I set my build controller’s ‘custom assemblies path’ to my 2012 DLL versions or their 2013 equivalents. So .NET is managing to resolve usable DLLs to get the build working.

Obviously there is still more to do here, checking all my custom build assemblies, maybe a look at revising the whole build scripts to make use of 2013 features, but that can wait.

What I have now allows me to upgrade our Windows 8.1 build agent VM so it can connect to our 2013 Build Controller, thus allowing us to run fully automated builds and tests of Windows 8.1 applications. Up to now with TFS 2012 we had only been able to get the basic build working, due to having to hack the build process, as you need Visual Studio 2013 generation tools to fully build and test Windows 8.1 applications.

 

So we are going to have 2012 build and test controllers around for a while, but we have proved the migration is not going to be too bad. Maybe it just needs a bit of thought over some custom build assemblies.

How long is my TFS 2010 to 2013 upgrade going to take?

I seem to be involved with a number of TFS 2010 to 2013 upgrades at present. I suppose people are looking at TFS 2013 in the same way as they have historically looked at the first service pack for a product, i.e. the time to upgrade, when most of the main issues are addressed. That said, TFS 2013 is not TFS 2012 SP1!

A common question is how long will the process take to upgrade each Team Project Collection? The answer is that it depends, a good consultant’s answer. Factors include the number of work items, size of the code base, number of changesets, volume of test results and the list goes on. The best I have been able to come up with is to record some timings of previous upgrades and use this data to make an educated guess.

In an upgrade of a TPC from TFS 2010 to 2013 there are 793 steps to be taken. Not all of these take the same length of time; some are very slow, as can be seen in the chart below. I have plotted the points where the upgrade seems to pause the longest. These are mostly towards the start of the process, where I assume the main DB schema changes are being made.

[Chart: time taken to reach the key upgrade steps for each of the sample servers]

To give some more context:

  • Client C was a production quality multi tier setup and took about 3 hours to complete.
  • Client L, though with a similar sized DB to Server A, was much slower to upgrade, around 9 hours. However, it was on a slower single tier test VM and also had a lot of historic test data attachments (70%+ of the DB contents)
  • Demo VM was my demo/test TFS 2010 VM; this had 4 TPCs, and the timings are for the largest, at 600Mb. In reality this server had little ‘real’ data. It is also interesting to note that though there were four TPCs the upgrade did three in parallel and, when the first finished, started the fourth. Worth remembering if you are planning an upgrade of many TPCs.

Given this chart, if you know how long it takes to get to Step 30 of 793 you can get an idea of which of these lines closest matches your system.

I will continue to update this post as I get more sample data. I hope it will be of use to others to gauge how long upgrades may take, but remember your mileage may vary.

Fix for intermittent connection problem in lab management – restart the test controller

Just had a problem with a TFS 2012 Lab Management deployment build. It was working this morning, deploying two web sites via MSDeploy and a DB via a DacPac, then running some CodedUI tests. However, when I tried a new deployment this afternoon it kept failing with the error:

The deployment task was aborted because there was a connection failure between the test controller and the test agent.


If you watched the build deployment via MTM you could see it start OK, then the agent went offline after a few seconds.

Turns out the solution was the old favourite, a reboot of the Test Controller. I would like to know why it was giving this intermittent problem though.

Update 14th Jan: An alternative solution to rebooting is to add a hosts file entry, on the VM running the test agent, for the IP address of the test controller. Seems the problem is name resolution, but I am not sure why it occurs.
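For reference, the hosts file change is just a one-liner run from an administrator PowerShell prompt on the test agent VM; the IP address and name below are placeholders for your own test controller.

# Add a fixed name-to-address mapping for the test controller to the agent VM's hosts file
Add-Content -Path "$env:windir\System32\drivers\etc\hosts" -Value "192.168.10.50`ttestcontroller.lab.local"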