But it works on my PC!

The random thoughts of Richard Fennell on technology and software development

Why is my TFS Lab build picking a really old build to deploy?

I was working on a TFS Lab deployment today and, to speed up testing, I set it to pick the <Latest> build of the TFS build that actually compiles the code, as opposed to queuing a new one each time. It took me ages to remember/realise why it kept trying to deploy some ancient build that had long since been deleted from my test system.


The reason was that <Latest> means the last successful build, not the last partially successful build. I had a problem with my code build that meant a test was failing (a custom build activity on my build agent was the root cause of the issue). Once I fixed my build box so the code build did not fail, the lab build was able to pick up the files I expected and deploy them.

Note that if you tell the lab deploy build to queue its own build, it will attempt to deploy that build even if it is only partially successful.

Fix for ‘Cannot install test agent on these machines because another environment is being created using the same machines’

I recently posted about adding extra VMs to existing environments in Lab Management. Whilst following this process I hit a problem: I could not create my new environment because of an error at the verify stage. It was fine adding the new VMs, but the one I was reusing gave the error ‘Microsoft test manager cannot install test agent on these machines because another environment is being created using the same machines’.


I had seen this issue before, so I tried a variety of things that had sorted it in the past: removing the TFS agents on the VM, manually installing and trying to configure them, reading through the Test Controller logs, all to no effect. I eventually got a solution today with the help of Microsoft.

The answer was to do the following on the VM showing the problem:

  1. Kill the TestAgentInstaller.exe process, if it is running on the failing machine
  2. Delete the “TestAgentInstaller” service using the sc delete testagentinstaller command (gotcha here: use a DOS-style command prompt, not PowerShell, as sc has a different default meaning in PowerShell, where it is an alias for Set-Content; if using PowerShell you need the full path to sc.exe – see the sketch after this list)
  3. Delete the c:\Windows\VSTLM_RES folder
  4. Restart the machine and then try the Lab Environment creation again; all should be OK
  5. As usual, once the environment is created you might need to restart it to get all the test agents to link up to the controller OK
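
For reference, the whole cleanup can be scripted. What follows is a minimal sketch of the steps above, run from an elevated PowerShell prompt on the failing VM; note the full path to sc.exe to avoid the Set-Content alias (the service and folder names are exactly those given in the steps above):

# Kill the stuck installer process if it is running
Stop-Process -Name TestAgentInstaller -Force -ErrorAction SilentlyContinue

# Delete the service; call sc.exe by its full path because plain ‘sc’
# is an alias for Set-Content in PowerShell
& "$env:windir\System32\sc.exe" delete testagentinstaller

# Remove the leftover Lab Management resource folder
Remove-Item -Path "$env:windir\VSTLM_RES" -Recurse -Force

# Restart the machine before retrying the environment creation
Restart-Computer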

So it seems that the removal of the VM from its old environment left some debris that was confusing the verify step. It seems this only happens rarely, but it can be a bit of a show stopper if you can’t get around it.

Making the drops location for a TFS build match the assembly version number

A couple of years ago I wrote about using the TFSVersion build activity to try to sync the assembly and build number. I did not want to see build names/drop locations in the format 'BuildCustomisation_20110927.17’; I wanted to see the version number in the build, something like 'BuildCustomisation 4.5.269.17'. The problem, as I outlined in that post, was that by fiddling with the BuildNumberFormat you could easily cause an error where duplicated drop folder names were generated, such as

TF42064: The build number 'BuildCustomisation_20110927.17 (4.5.269.17)' already exists for build definition '\MSF Agile\BuildCustomisation'.

I had put this problem aside, thinking there was no way around the issue, until I was recently reviewing the new ALM Rangers ‘Test Infrastructure Guidance’. This has a solution to the problem included in the first hands-on lab. The trick is that you need to use the TFSVersion community extension twice in your build.

  • You use it as normal to set the version of your assemblies after you have got the files into the build workspace, just as the wiki documentation shows
  • But you also call it in ‘get mode’ at the start of the build process, prior to calling the ‘Update Build Number’ activity. The core issue is that you cannot call ‘Update Build Number’ more than once, or you tend to see the TF42064 issues. Used in this manner it will set the BuildNumberFormat to the actual version number you want, which will then be used for the drops folder and any assembly versioning.

So what do you need to do?

  1. Open your process template for editing (see the custom build activities documentation if you don’t know how to do this)
  2. Find the sequence ‘Update Build Number for Triggered Builds’ at the top of the process template

    • Add a TFSVersion activity – I called mine ‘Generate Version number for drop’
    • Add an Assign activity – I called mine ‘Set new BuildNumberFormat’
    • Add a WriteBuildMessage activity – this is optional but I do like to see what is generated
  3. Add a string variable GeneratedBuildNumber with the scope of ‘Update Build Number for Triggered Builds’

  4. The properties for the TFSVersion activity should be set as follows

    • The Action is the key setting; this needs to be set to GetVersion, as we only need to generate a version number, not set any file versions
    • You need to set the Major, Minor and StartDate settings to match the other copy of the activity in your build process. A good tip is to just copy and paste from the other instance to create this one, so that the bulk of the properties are correct
    • The Version needs to be set to your variable GeneratedBuildNumber; this is the outputted version value
  5. The properties for the Assign activity are as follows

    • Set To to BuildNumberFormat
    • Set Value to String.Format("$(BuildDefinitionName)_{0}", GeneratedBuildNumber); you can vary this format to meet your own needs [updated 31 Jul 13 – better to use an _ rather than a space as this will be used in the drop path]
  6. I also added a WriteBuildMessage activity that outputs the generated build number, but that is optional

Once all this was done and saved back to TFS it could be used for a build. You now see that the build name and drops location are in the form

[Build name] [Major].[Minor].[Days since start date].[TFS build number]


This is a slight change from what I previously attempted, where the 4th block was the count of builds of a given type on a day; now it is the unique TFS-generated build number, the number shown before the build name is generated. I am happy with that. My key aim is achieved: the drops location contains the product version number, so it is easy to relate a build to a given version without digging into the build reports.
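
To make the scheme concrete, here is a minimal PowerShell sketch of how a version number in this format could be composed. It assumes the third block is the number of days elapsed since the configured StartDate, which is how I understand the TFSVersion activity to work; the Major, Minor and StartDate values are made up for illustration:

# Assumed inputs – in the real build these come from the TFSVersion
# activity properties and from TFS itself
$major      = 4
$minor      = 5
$startDate  = Get-Date "2012-01-01"   # must match the activity’s StartDate
$tfsBuildId = 17                      # the unique build number TFS assigns

# Third block: days elapsed since the start date
$daysSinceStart = (New-TimeSpan -Start $startDate -End (Get-Date)).Days

$generatedBuildNumber = "$major.$minor.$daysSinceStart.$tfsBuildId"

# Equivalent of the Assign activity that sets BuildNumberFormat
$buildNumberFormat = '$(BuildDefinitionName)_' + $generatedBuildNumber
Write-Output $buildNumberFormat       # e.g. $(BuildDefinitionName)_4.5.269.17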

I can never remember the command line to add a user to the TFS Service Accounts group

I keep forgetting that when you use the TFS Integration Platform, the user that the tool runs as (or the service account, if running as a service) has to be in the “Team Foundation Service Accounts” group on the TFS servers involved. If they are not you get a runtime conflict something like

Microsoft.TeamFoundation.Migration.Tfs2010WitAdapter.PermissionException: TFS WIT bypass-rule submission is enabled. However, the migration service account 'Richard Fennell' is not in the Service Accounts Group on server 'http://tfsserver:8080/tfs'.

The easiest way to do this is to use the TFSSecurity command line tool on the TFS server. You will find some older blog posts about setting the user as a TFS admin console user to get the same effect, but this only seems to work on TFS 2010. This command is good for all versions

C:\Program Files\Microsoft Team Foundation Server 12.0\tools> .\TFSSecurity.exe /g+ "Team Foundation Service Accounts" n:mydomain\richard /server:http://localhost:8080/tfs

and expect to see

Microsoft (R) TFSSecurity - Team Foundation Server Security Tool
Copyright (c) Microsoft Corporation.  All rights reserved.

The target Team Foundation Server is http://localhost:8080/tfs.
Resolving identity "Team Foundation Service Accounts"...
s [A] [TEAM FOUNDATION]\Team Foundation Service Accounts
Resolving identity "n:mydomain\richard"...
  [U] mydomain\Richard
Adding Richard to [TEAM FOUNDATION]\Team Foundation Service Accounts...
Verifying...

SID: S-1-9-1551374245-1204400969-2333986413-2179408616-0-0-0-0-2

DN:

Identity type: Team Foundation Server application group
   Group type: ServiceApplicationGroup
Project scope: Server scope
Display name: [TEAM FOUNDATION]\Team Foundation Service Accounts
  Description: Members of this group have service-level permissions for the Team Foundation Application Instance. For service accounts only.

1 member(s):
  [U] mydomain\Richard

Member of 2 group(s):
e [A] [TEAM FOUNDATION]\Team Foundation Valid Users
s [A] [DefaultCollection]\Project Collection Service Accounts

Done.

Once this is done, and the Integration Platform run restarted, all should be OK.
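
As a quick check afterwards, TFSSecurity can also list a group’s membership; if my memory of the tool is right, the /im switch displays an identity and its direct members, so something like this should confirm the user was added:

.\TFSSecurity.exe /im "Team Foundation Service Accounts" /server:http://localhost:8080/tfs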

Adding another VM to a running Lab Management environment

If you are using a network isolated environment in TFS Lab Management there is no way to add another VM without rebuilding and redeploying the environment. However, if you are not network isolated, you can at least avoid the redeploy issues to a degree.

I had an SCVMM-based, non network isolated environment that contained a single non-domain joined server. This was used to host a backend simulation service for a project. In the next phase of the project we needed to test accessing this service via RDP/Terminal Server, so I wanted to add a VM to the environment to act in this role.

So firstly I deleted the environment in MTM; as the VMs in the environment are not network isolated they are not removed. The only change is that the XML metadata is removed from the properties description.

I now needed to create my new VM. I had thought I could create a new environment adding the existing deployed and running VM as well as a new one from the SCVMM library. However, you get the error ‘cannot create an environment consisting of both running and stored VMs’.


So here you have two options:

  1. Store the running VM in the library and redeploy it
  2. Deploy, via SCVMM, a new VM from some template or stored VM

Once this is done you can create the new environment using the running VMs or stored images depending on the option chosen in the previous step.

So not a huge saving in time or effort. I just wish there was a way to edit deployed environments.

Minor issue on TFS 2012.3 upgrade if you are using host headers in bindings

Yesterday I upgraded our production TFS 2012.2 server to update 3. All seemed to go OK and it completed with no errors; it is so much easier now that the update supports the use of SQL 2012 Availability Groups within the update process, with no need to remove the DBs from the availability group prior to the update.

However, though there were no errors, it did report a warning, and on a quick check users could not connect to the upgraded server on our usual HTTPS URL.

On checking the update log I saw

[Warning@09:06:13.578] TF401145: The Team Foundation Server web application was previously configured with one or more bindings that have ports that are currently unavailable.  See the log for detailed information.
[Info   @09:06:13.578]
[Info   @09:06:13.578] +-+-+-+-+-| The following previously configured ports are not currently available... |+-+-+-+-+-
[Info   @09:06:13.584]
[Info   @09:06:13.584] 1          - Protocol          : https
[Info   @09:06:13.584]            - Host              : tfs.blackmarble.co.uk
[Info   @09:06:13.584]            - Port              : 443
[Info   @09:06:13.584] port: 443
[Info   @09:06:13.585] authMode: Windows
[Info   @09:06:13.585] authenticationProvider: Ntlm

The issue appears if you use host headers, as we do for our HTTPS bindings. The TFS configuration tool does not understand these, so it sees more than one binding, in our case on 443 (our TFS server VM also hosts a NuGet server on HTTPS 443; we use host headers to separate the traffic). As the tool does not know what to do with host headers, it just deletes the bindings it does not understand.

Anyway, the fix was to manually reconfigure the HTTPS bindings in IIS and all was OK.
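
If you would rather script the repair than click through IIS Manager, a PowerShell sketch along these lines should recreate a host-header HTTPS binding (the site name and host header here are examples from our setup – adjust them to match yours, and you will still need to assign the SSL certificate to the binding):

Import-Module WebAdministration

# Recreate the HTTPS binding with a host header on the TFS web site
New-WebBinding -Name "Team Foundation Server" -Protocol https -Port 443 -HostHeader "tfs.blackmarble.co.uk"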

On checking with Microsoft, it seems this is a known issue, and it is on their radar to sort out in future.

Setting SkyDrive as a trusted location in Office 2013

We use a VSTO-based Word template to make sure all our documents have the same styling and are suitably reformatted for shipping to clients, e.g. revision comments removed, contents pages up to date etc. Normally we create a new document using this template from our SharePoint server and all is OK. However, sometimes you are on the road when you start a document, so you just create it locally using a locally installed copy of the template. In the past this has not caused me problems; I have my local ‘My Documents’ set in Word as a trusted location and it just works fine.

However, of late, due to some SSD problems, I have taken to using the SkyDrive desktop application. So I now save to C:\Users\[username]\SkyDrive and this syncs up to my SkyDrive space whenever it gets a chance. It has certainly saved me a few times already (see older posts on my SSD failure adventures).

However, the problem is that while I can create a new document OK (VSTO runs) and save it to my local SkyDrive folder, when I come back to open it for editing I get a trust error.


The problem is that my SkyDrive folder is not in my trusted locations list (Word > Options > Trust Center > Trust Center Settings (button lower right) > Trusted Locations).


So I tried adding C:\Users\[username]\SkyDrive – it did not work.

I then noticed that when I load or save from SkyDrive, the dialogs say it is copying to ‘https://d.docs.live.net/[a unique id]’. So I entered https://d.docs.live.net (and its sub folders) as a trusted location and it worked.

Now I don’t really want to trust the whole of SkyDrive, so I needed to find my SkyDrive ID. I am sure there is an easy way to do this, but I don’t know it.

The solution I used was to:

  1. Go to the browser version of SkyDrive
  2. Pick a file
  3. Use the menu option ‘Embed’ to generate the HTML to embed the file
  4. From this URL extract the CID
  5. Add this to the base URL so you get https://d.docs.live.net/12345678/
  6. Add this new URL to the trusted locations (with sub folders) and the VSTO application works

Simple, wasn’t it?
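
As an aside, Word stores trusted locations per user in the registry, so this could also be scripted. A minimal sketch, assuming Word 2013 (the 15.0 key) and picking Location99 as an unused key name – both assumptions, so check your own registry first:

# Trusted locations for Word 2013 live under this key (15.0 = Office 2013)
$key = "HKCU:\Software\Microsoft\Office\15.0\Word\Security\Trusted Locations\Location99"

New-Item -Path $key -Force | Out-Null
Set-ItemProperty -Path $key -Name Path -Value "https://d.docs.live.net/12345678/"
Set-ItemProperty -Path $key -Name Description -Value "SkyDrive documents"
New-ItemProperty -Path $key -Name AllowSubfolders -Value 1 -PropertyType DWord -Force | Out-Null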

Why can’t I find my build settings on a Git based project on TFS Service?

I just wasted a bit of time trying to find the build tab for a TFS Team Project hosted on http://tfs.visualstudio.com using a Git repository. I was looking in Team Explorer expecting to see the full set of TFS features, including builds.


But all I was seeing was the Visual Studio Git Changes option.

It took me ages to realise that the issue was I had cloned the Git repository to my local PC using the Visual Studio Tools for Git. So I was using just the Git tools, not the TFS tools. As far as Visual Studio was concerned this was just some Git repository; it could have been local, GitHub, TFS Service or anything else that hosts Git.

To see the full features of TFS Service you need to connect to the service using Team Explorer, not just as a Git client.


Of course, if you only need Git-based source code management tools, just clone the repository and use the Git tooling, whether inside or outside Visual Studio. The Git repository in TFS is just a standard Git repo, so all tools should work. From the server end TFS does not care what client you use; in fact it will still associate your commits, irrespective of client, with TFS work items if you use the #1234 syntax for work item IDs in your commit comments, as the example below shows.
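
For example, a commit made with any Git client will be associated with work item 1234 if the message mentions it (the work item ID here is made up for illustration):

git commit -m "Corrected the discount calculation, fixes #1234"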

However, if you are using hosted TFS from Visual Studio, it probably makes more sense to use a Team Explorer connection so all the other TFS features light up, such as build. The best bit is that all the Git tools are still there, as Visual Studio knows it is still just a Git repository. Maybe doing this will be less confusing when I next come to use a TFS feature!

Error adding a new widget to our BlogEngine.NET 2.8.0.0 server

Background

If you use Twitter in any web application you will probably have noticed that they have switched off the 1.0 API; you have to use the 1.1 version, which is stricter over OAuth. This meant the Twitter feeds into our blog server stopped working on the 10th of June. The old call of

http://api.twitter.com/1/statuses/user_timeline.rss?screen_name=blackmarble

did not work, and just changing the 1 to 1.1 did not work either.

So I decided to pull down a different widget for BlogEngine.NET to do the job, choosing Recent Tweets.

The Problem

However, when I tried to access our root/parent blog site and go to the customisation page to add the new widget I got

Ooops! An unexpected error has occurred.

This one's down to me! Please accept my apologies for this - I'll see to it that the developer responsible for this happening is given 20 lashes (but only after he or she has fixed this problem).

Error Details:

Url : http://blogs.blackmarble.co.uk/blogs/admin/Extensions/default.cshtml
Raw Url : /blogs/admin/Extensions/default.cshtml
Message : Exception of type 'System.Web.HttpUnhandledException' was thrown.
Source : System.Web.WebPages
StackTrace : at System.Web.WebPages.WebPageHttpHandler.HandleError(Exception e)
at System.Web.WebPages.WebPageHttpHandler.ProcessRequestInternal(HttpContextBase httpContext)
at System.Web.HttpApplication.CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute()
at System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously)
TargetSite : Boolean HandleError(System.Exception)
Message : Item has already been added. Key in dictionary: 'displayname' Key being added: 'displayname'

Looking at the discussion forums it seemed to be some DB issue.

The Fix

I could see nobody with the same problem, so I pulled down the source code from CodePlex and had a look at DBBlogProvider.cs (line 2350), where the error was reported. I think the issue is that when a blog site is set as ‘Is for site aggregation’, as our root site where I needed to install the new widget is, the SQL query that generates the user profile list is not filtered by blog, so it sees duplicates.

I disabled ‘Is for site aggregation’ for our root blog and was then able to load the customisation page and add my widget.

Interestingly, I then switched ‘Is for site aggregation’ back on and all was still OK. I assumed that the act of opening the customisation page once had fixed the problem.

Update: it turns out this is not the case; after a reboot of my client PC the error returned, so it must have been some caching that made it work.

Also worth noting ….

In case you had not seen it (I hadn’t), there is a patch for 2.8.0.0 that fixes a problem where the slug (the URL generated for a post) was not being created correctly, so multiple posts on the same day got grouped as one. This caused search and navigation issues. It is worth installing this if you are likely to write more than one post on a blog in a day.

Using SYSPREP’d VM images as opposed to Templates in a new TFS 2012 Lab Management Environment

An interesting change with Lab Management 2012 and SCVMM 2012 is that templates become a lot less useful. In the SCVMM 2008 versions you had a choice when you stored VMs in the SCVMM library:

  • You could store a fully configured VM
  • or a generalised template.

When you added the template to a new environment you could enter details such as the machine name, domain to join, product key etc. If you try this with SCVMM 2012 you just see the message ‘These properties cannot be edited from Microsoft Test Manager’.


So you are meant to use SCVMM to manage everything about the templates; not great if you want to do everything from MTM. However, is that the only solution?

An alternative is to store a SYSPREP’d VM as a virtual machine in the SCVMM library. This VM can be added as many times as required to an environment (though if added more than once you are asked if you are sure).
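
For reference, generalising the VM before storing it is just a case of running Sysprep inside the guest. A minimal example – the /shutdown switch stops the VM ready for storing in the library:

C:\Windows\System32\Sysprep\sysprep.exe /generalize /oobe /shutdown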


This method does, however, bring problems of its own. When the environment is started, assuming it is network isolated, the second network adaptor is added as expected. However, as there is no agent on the VM it cannot be configured; usually for a template Lab Management would sort all this out, but because the VM is SYSPREP’d it is left sitting at the mini setup ‘Pick your region’ screen.

You need to manually configure the VM. The best process I have found is:

  1. Create the environment with your standard VMs and the SYSPREP’d one
  2. Boot the environment; the standard ready-to-use VMs get configured OK
  3. Manually connect to the SYSPREP’d VM and complete the mini setup. You will now have a PC on a workgroup
  4. The PC will have two network adapters, neither connected to your corporate network; both are connected to the network isolated virtual LAN. You have a choice:
    • Connect the legacy adaptor to your corporate LAN, to get at a network share via SCVMM
    • Mount the TFS Test Agent ISO
  5. Either way, you need to manually install the Test Agent and run its configuration (just select the defaults; it should know where the test controller is). This will configure the network isolated adaptor on the 192.168.23.x network
  6. Now you can manually join the isolated domain
  7. Reboot the VM (or the environment) and all should be OK

All a bit long winded, but it does mean it is easier to build generalised VMs from MTM without having to play around in SCVMM too much.

I think it would all be a good deal easier if the VM had the agents on it before the SYSPREP. I have not tried this yet, but that is true in my opinion of all VMs used for Lab Management: get the agents on as early as you can, it just speeds everything up.