But it works on my PC!

The random thoughts of Richard Fennell on technology and software development

Can I use the HP ALM Synchronizer with TF Service?

I recently tried to get the free HP ALM Synchronizer to link to Microsoft’s TF Service; the summary is that it does not work. However, it took me a while to realise this.

The HP ALM Synchronizer was designed for TFS 2008/2010, so the first issue you hit is that TF Service is today basically TFS 2013 (and a moving goal post, as it is updated so often). This means that when you try to configure the TFS connection in HP ALM Synchronizer it fails because it cannot see any TFS client it supports. This is fairly simple to address: just install Visual Studio Team Explorer 2010 and patch it up to date so that it can connect to TF Service (you could go back to 2008 and achieve the same if you really needed to).

Once you have a suitably old client you can progress to the point where it asks you for your TFS login credentials. HP ALM Synchronizer validates that they are in the form DOMAIN\USER, and this is a problem.

On TF Service you usually log in with a LiveID, which is a non-starter in this case. However, you can configure alternative credentials, but these are in the form USER and PASSWORD. The string pattern verification on the credentials entry form in HP ALM Synchronizer does not accept them; it must have a domain and a slash. I could not find any pattern that satisfies both TF Service and the HP ALM Synchronizer setup tool. So basically you are stuck.
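To illustrate the clash (the regex here is my guess at the kind of pattern the setup tool enforces, not HP’s actual validation code), a DOMAIN\USER check of this shape will always reject the plain USER form that TF Service alternative credentials use:

```python
import re

# Hypothetical DOMAIN\USER pattern of the sort the Synchronizer's
# credential form appears to enforce - not HP's actual code.
DOMAIN_USER = re.compile(r"^[^\\]+\\[^\\]+$")

print(bool(DOMAIN_USER.match(r"MYDOMAIN\richard")))  # True - accepted
print(bool(DOMAIN_USER.match("richard")))            # False - but this is the
                                                     # form TF Service
                                                     # alternative credentials use
```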

So for my client we ended up moving to a TFS 2013 Basic install on premises and it all worked fine; they could sync HP QC defects into TFS using the HP ALM Synchronizer, so they were happy.

However, is there a better solution? One might be to use a commercial product such as Tasktop Sync, which is designed to provide synchronisation services between a whole range of ALM-like products. I need to find out whether it supports TF Service yet.

Problems with Microsoft Fake Stubs and IronPython

I have been changing the mocking framework used on a project I am planning to open source. Previously it had been using Typemock to mock out items in the TFS API. This had been working well, but it used features of the toolset that are only available in the licensed product (not the free version). As I don’t like to publish tests people cannot run, I thought it best to swap to Microsoft Fakes, as there is a better chance any user will have a version of Visual Studio that provides this toolset.

Most of the changes were straightforward, but I hit a problem when I tried to run a test that returned a TFS IBuildDetail object for use inside an IronPython DSL.
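The loadbuild.py script itself is not shown here, but from the assertion in the test below it presumably does something along these lines (a reconstruction; the helper names are guesses). The key point is the dynamic attribute access on the returned build object, which the IronPython engine resolves at run time:

```python
# Hypothetical sketch of a loadbuild.py style DSL script - the real
# script and its helper names are not shown in this post.
class FakeBuild(object):
    """Stand-in for an IBuildDetail, for illustration only."""
    def __init__(self, uri, quality):
        self.Uri = uri
        self.Quality = quality

def load_build(get_build_details, uri):
    build = get_build_details(uri)
    # IronPython resolves build.Uri and build.Quality dynamically here;
    # this is the sort of access that failed against a Fakes stub
    return "Build '%s' has the quality '%s'" % (build.Uri, build.Quality)

print(load_build(lambda uri: FakeBuild(uri, "Test Quality"),
                 "vstfs:///Build/Build/123"))
# -> Build 'vstfs:///Build/Build/123' has the quality 'Test Quality'
```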

My working Typemock-based test was as follows:

[Test]
public void Can_use_Dsl_to_get_build_details()
{
    // arrange
    var consoleOut = Helpers.Logging.RedirectConsoleOut();

    var tfsProvider = Isolate.Fake.Instance<ITfsProvider>();
    var emailProvider = Isolate.Fake.Instance<IEmailProvider>();
    var build = Isolate.Fake.Instance<IBuildDetail>();

    var testUri = new Uri("vstfs:///Build/Build/123");
    Isolate.WhenCalled(() => build.Uri).WillReturn(testUri);
    Isolate.WhenCalled(() => build.Quality).WillReturn("Test Quality");
    Isolate.WhenCalled(() => tfsProvider.GetBuildDetails(null)).WillReturn(build);

    // act
    TFSEventsProcessor.Dsl.DslProcessor.RunScript(@"dsl\tfs\loadbuild.py", tfsProvider, emailProvider);

    // assert
    Assert.AreEqual("Build 'vstfs:///Build/Build/123' has the quality 'Test Quality'" + Environment.NewLine, consoleOut.ToString());
}

Swapping to Fakes, the test became:

[Test]
public void Can_use_Dsl_to_get_build_details()
{
    // arrange
    var consoleOut = Helpers.Logging.RedirectConsoleOut();
    var testUri = new Uri("vstfs:///Build/Build/123");

    var emailProvider = new Providers.Fakes.StubIEmailProvider();
    var build = new StubIBuildDetail()
    {
        UriGet = () => testUri,
        QualityGet = () => "Test Quality",
    };
    var tfsProvider = new Providers.Fakes.StubITfsProvider()
    {
        GetBuildDetailsUri = (uri) => (IBuildDetail)build
    };

    // act
    TFSEventsProcessor.Dsl.DslProcessor.RunScript(@"dsl\tfs\loadbuild.py", tfsProvider, emailProvider);

    // assert
    Assert.AreEqual("Build 'vstfs:///Build/Build/123' has the quality 'Test Quality'" + Environment.NewLine, consoleOut.ToString());
}

But this gave me the error

Test Name:    Can_use_Dsl_to_get_build_details
Test FullName:    TFSEventsProcessor.Tests.Dsl.DslTfsProcessingTests.Can_use_Dsl_to_get_build_details
Test Source:    c:\Projects\tfs2012\TFS\TFSEventsProcessor\Main\Src\WorkItemEventProcessor.Tests\Dsl\DslTfsProcessingTests.cs : line 121
Test Outcome:    Failed
Test Duration:    0:00:01.619

Result Message:    System.MissingMemberException : 'StubIBuildDetail' object has no attribute 'Uri'
Result StackTrace:   
at IronPython.Runtime.Binding.PythonGetMemberBinder.FastErrorGet`1.GetError(CallSite site, TSelfType target, CodeContext context)
at System.Dynamic.UpdateDelegates.UpdateAndExecute2[T0,T1,TRet](CallSite site, T0 arg0, T1 arg1)
at Microsoft.Scripting.Interpreter.DynamicInstruction`3.Run(InterpretedFrame frame)
at Microsoft.Scripting.Interpreter.Interpreter.Run(InterpretedFrame frame)
at Microsoft.Scripting.Interpreter.LightLambda.Run2[T0,T1,TRet](T0 arg0, T1 arg1)
at IronPython.Compiler.PythonScriptCode.RunWorker(CodeContext ctx)
at IronPython.Compiler.PythonScriptCode.Run(Scope scope)
at IronPython.Compiler.RuntimeScriptCode.InvokeTarget(Scope scope)
at IronPython.Compiler.RuntimeScriptCode.Run(Scope scope)
at Microsoft.Scripting.SourceUnit.Execute(Scope scope, ErrorSink errorSink)
at Microsoft.Scripting.SourceUnit.Execute(Scope scope)
at Microsoft.Scripting.Hosting.ScriptSource.Execute(ScriptScope scope)
at TFSEventsProcessor.Dsl.DslProcessor.RunScript(String scriptname, Dictionary`2 args, ITfsProvider iTfsProvider, IEmailProvider iEmailProvider) in c:\Projects\tfs2012\TFS\TFSEventsProcessor\Main\Src\WorkItemEventProcessor\Dsl\DslProcessor.cs:line 78
at TFSEventsProcessor.Dsl.DslProcessor.RunScript(String scriptname, ITfsProvider iTfsProvider, IEmailProvider iEmailProvider) in c:\Projects\tfs2012\TFS\TFSEventsProcessor\Main\Src\WorkItemEventProcessor\Dsl\DslProcessor.cs:line 31
at TFSEventsProcessor.Tests.Dsl.DslTfsProcessingTests.Can_use_Dsl_to_get_build_details() in c:\Projects\tfs2012\TFS\TFSEventsProcessor\Main\Src\WorkItemEventProcessor.Tests\Dsl\DslTfsProcessingTests.cs:line 141

If I altered my test to not use my IronPython DSL but to call the C# DSL library directly, the error went away. So the issue lay in the dynamic IronPython engine – not something I am even going to think of trying to fix.

So I swapped the definition of the mock IBuildDetail to use Moq (I could have used the free version of Typemock or any other framework) instead of a Microsoft Fakes stub, and the problem went away.

So I had

[Test]
public void Can_use_Dsl_to_get_build_details()
{
    // arrange
    var consoleOut = Helpers.Logging.RedirectConsoleOut();
    var testUri = new Uri("vstfs:///Build/Build/123");

    var emailProvider = new Providers.Fakes.StubIEmailProvider();
    var build = new Moq.Mock<IBuildDetail>();
    build.Setup(b => b.Uri).Returns(testUri);
    build.Setup(b => b.Quality).Returns("Test Quality");

    var tfsProvider = new Providers.Fakes.StubITfsProvider()
    {
        GetBuildDetailsUri = (uri) => build.Object
    };

    // act
    TFSEventsProcessor.Dsl.DslProcessor.RunScript(@"dsl\tfs\loadbuild.py", tfsProvider, emailProvider);

    // assert
    Assert.AreEqual("Build 'vstfs:///Build/Build/123' has the quality 'Test Quality'" + Environment.NewLine, consoleOut.ToString());
}

So I have a working solution, but it is a bit of a mess: I am using Fakes stubs and Moq in the same test. There is no good reason not to swap all the mocking of interfaces to Moq. So going forward on this project I only use Microsoft Fakes for shims, to mock out items such as TFS WorkItem objects which have no public constructor.

Get rid of that zombie build

Whilst upgrading a TFS 2010 server to 2012 I had a problem: a build was showing in the queue as active after the upgrade. This build was queued in January, 10 months ago, so should have finished a long, long time ago. It blocked any newly queued builds, but did not appear to be running on any agent – a zombie build.

I tried to stop it, delete it, everything I could think of, all to no effect. It would not go away.

In the end I had to use the brute-force solution of deleting the rows for the build from the TPC’s SQL database. I did this in both the tbl_BuildQueue (use the QueueID) and tbl_Build (use the BuildID) tables.
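For reference, the deletes were along these lines (a sketch from memory; this is entirely unsupported surgery on the collection database, so take a backup first):

```sql
-- Unsupported brute-force removal of a zombie build from the TPC database.
-- Find the QueueID and BuildID of the stuck build first, and back up the DB.
DELETE FROM tbl_BuildQueue WHERE QueueId = @QueueId;
DELETE FROM tbl_Build WHERE BuildId = @BuildId;
```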

Fix for - Could not load file or assembly 'Microsoft.VisualStudio.Shell’ on TFS 2010 Build Controller

I have previously posted about TFS build controllers not starting properly. Well, I saw the same problem today whilst upgrading a TFS 2010 server to TFS 2012.3. The client did not want to immediately upgrade their build processes and decided to keep their 2010 build VMs, just pointing them at the updated server (remember that TFS 2012.2 and later servers can support either 2012 or 2010 build controllers).

The problem was that when we restarted the build service the controller and agents appeared to start, but then we got a burst of errors in the event log; the controller said it was ready, but showed the stopped icon.

On checking the Windows event log we saw the issue was that it could not load an assembly:

Exception Message: Problem with loading custom assemblies: Could not load file or assembly 'Microsoft.VisualStudio.Shell, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a' or one of its dependencies. The system cannot find the file specified. (type Exception)

It turns out this was because StyleCop.VSPackage.dll had been checked into the build controller’s custom assemblies folder (how and why we never found out; also why it had not failed before was unclear, as it was checked in about six weeks ago!). Anyway, as soon as the DLL was removed from the custom assemblies folder, leaving just the StyleCop files from the C:\Program Files (x86)\StyleCop\4.7 folder, all was OK.

Moving from Ubuntu to Mint for my TEE demos

I posted a while ago about the problems with DHCP using a Hyper-V virtual switch with WiFi and an Ubuntu VM; well, I never found a good solution short of hard-coding IP addresses.

I recently tried using Mint 15 and was pleased to see it did not suffer the same problems; it seems happy with DHCP over Hyper-V virtual switches. I think I will use it for any cross-platform TEE demos I need for a while.

Update: I have just noticed that I still get a DHCP problem with Mint when I connect to my dual-band Netgear N600 router via 2.4GHz, but not when I use the same router via 5GHz. I just know I am not at the bottom of this problem yet!

Where did that email go?

We use the TFS Alerts system to signal to our teams what state project builds are at. So when a developer changes a build quality to ‘ready for test’ an email is sent to everyone in the team and we make sure the build retention policy is set to keep. Now this is not the standard behaviour of the TFS build alerts system, so we do all this by calling a SOAP-based web service which in turn uses the TFS API.

This had all been working well until we did some tidying up and patching on our Exchange server. The new behaviour was:

  • Email sent directly via SMTP by the TFS Alerts system worked
  • Email sent via our web service called by the TFS Alerts system disappeared, but no errors were shown

As far as we could see, emails were leaving our web service (which was running as the same domain service account as TFS, but in its own AppPool) and dying inside our email system, we presumed due to some spam filter rule.

After a bit of digging we spotted the real problem.

If you look at the advanced settings of the TFS Alerts email configuration, it points out that if you don’t supply credentials for the SMTP server it passes those of the TFS service process.

image

Historically our internal SMTP server had allowed anonymous posting, so this was not an issue; but after our tidy-up it required authentication, so this setting became important.

We thought this should not be an issue, as the TFS service account was correctly registered in Exchange and it was working for the TFS-generated alert emails. But on checking the code of the web service we noticed a vital missing line: we were not setting the credentials on the message, we were leaving it as anonymous, so the email was being blocked.

using (var msg = new MailMessage())
{
    msg.To.Add(to);
    msg.From = new MailAddress(this.fromAddress);
    msg.Subject = subject;
    msg.IsBodyHtml = true;
    msg.Body = body;
    using (var client = new SmtpClient(this.smptServer))
    {
        // the vital missing line - without it the mail is sent anonymously
        client.Credentials = CredentialCache.DefaultNetworkCredentials;
        client.Send(msg);
    }
}

Once this line was added and the web service redeployed, it worked as expected again.

Why is my TFS Lab build picking a really old build to deploy?

I was working on a TFS lab deployment today and to speed up testing I set it to pick the <Latest> build of the TFS build that actually builds the code, as opposed to queuing a new one each time. It took me ages to remember/realise why it kept trying to deploy some ancient build that had long since been deleted from my test system.

image

The reason was that <Latest> means the last successful build, not the last partially successful build. I had a problem with my code build that meant a test was failing (a custom build activity on my build agent was the root cause of the issue). Once I fixed my build box so the code build did not fail, the lab build was able to pick up the files I expected and deploy them.

Note that if you tell the lab deploy build to queue its own build, it will attempt to deploy this even if it is only partially successful.

Fix for ‘Cannot install test agent on these machines because another environment is being created using the same machines’

I recently posted about adding extra VMs to existing environments in Lab Management. Whilst following this process I hit a problem: I could not create my new environment, as there was a problem at the verify stage. It was fine adding the new VMs, but the one I was reusing gave the error ‘Microsoft test manager cannot install test agent on these machines because another environment is being created using the same machines’.

image

I had seen this issue before, so I tried a variety of things that had sorted it in the past – removing the TFS agent on the VM, manually installing and trying to configure it, reading through the test controller logs – all to no effect. I eventually got a solution today with the help of Microsoft.

The answer was to do the following on the VM showing the problem

  1. Kill the TestAgentInstaller.exe process, if it is running on the failing machine
  2. Delete the “TestAgentInstaller” service using the sc delete testagentinstaller command (gotcha here: use a DOS-style command prompt, not PowerShell, as sc means something different in PowerShell, where it is an alias for Set-Content; if using PowerShell you need the full path to sc.exe)
  3. Delete the c:\Windows\VSTLM_RES folder
  4. Restart the machine and then try the lab environment creation again; all should be OK
  5. As usual, once the environment is created you might need to restart it to get all the test agents to link up to the controller OK

So it seems that the removal of the VM from its old environment left some debris that was confusing the verify step. This seems to happen only rarely, but it can be a bit of a show stopper if you can’t get around it.

Making the drops location for a TFS build match the assembly version number

A couple of years ago I wrote about using the TFSVersion build activity to try to sync the assembly and build numbers. I did not want to see build names/drop locations in the format 'BuildCustomisation_20110927.17’; I wanted to see the version number in the build, something like 'BuildCustomisation 4.5.269.17'. The problem, as I outlined in that post, was that by fiddling with the BuildNumberFormat you could easily cause an error where duplicated drop folder names were generated, such as

TF42064: The build number 'BuildCustomisation_20110927.17 (4.5.269.17)' already exists for build definition '\MSF Agile\BuildCustomisation'.

I had put this problem aside, thinking there was no way around the issue, until I recently reviewed the new ALM Rangers ‘Test Infrastructure Guidance’. This has a solution to the problem included in the first hands-on lab. The trick is that you need to use the TFSVersion community extension twice in your build.

  • You use it as normal to set the version of your assemblies after you have got the files into the build workspace, just as the wiki documentation shows
  • But you also call it in ‘get mode’ at the start of the build process, prior to calling the ‘Update Build Number’ activity. The core issue is that you cannot call ‘Update Build Number’ more than once, else you tend to see the TF42064 issue. By using it in this manner you set the BuildNumberFormat to the actual version number you want, which will be used for the drops folder and any assembly versioning.

So what do you need to do?

  1. Open your process template for editing (see the custom build activities documentation if you don’t know how to do this)
  2. Find the sequence ‘Update Build Number for Triggered Builds’ at the top of the process template

    image
    • Add a TFSVersion activity – I called mine ‘Generate Version number for drop’
    • Add an Assign activity – I called mine ‘Set new BuildNumberFormat’
    • Add a WriteBuildMessage activity – this is optional, but I do like to see what it generated
  3. Add a string variable GeneratedBuildNumber with the scope of ‘Update Build Number for Triggered Builds’

    image
  4. The properties for the TFSVersion activity should be set as shown below

    image
    • The Action is the key setting; this needs to be set to GetVersion, as we only need to generate a version number, not set any file versions
    • You need to set the Major, Minor and StartDate settings to match the other copy of the activity in your build process. A good tip is to just cut and paste from the other instance to create this one, so that the bulk of the properties are correct
    • The Version needs to be set to your variable GeneratedBuildNumber; this is the output version value
  5. The properties for the Assign activity are as follows

    image
    • Set To to BuildNumberFormat
    • Set Value to String.Format("$(BuildDefinitionName)_{0}", GeneratedBuildNumber); you can vary this format to meet your own needs [updated 31 Jul 13 – better to use an _ rather than a space, as this will be used in the drop path]
  6. I also added a WriteBuildMessage activity that outputs the generated build value, but that is optional

Once all this was done and saved back to TFS it could be used for a build. You will now see that the build name and drops location are in the form

[Build name] [Major].[Minor].[Days since start date].[TFS build number]

image

This is a slight change from what I previously attempted: the fourth block used to be the count of builds of a given type on a day; now it is the unique TFS-generated build number, the number shown before the build name is generated. I am happy with that. My key aim is achieved: the drops location contains the product version number, so it is easy to relate a build to a given version without digging into the build reports.
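As a sketch, my understanding of how the TFSVersion activity composes that number (this is my interpretation of the scheme, not the activity’s actual code, and the start date is back-derived from the example above):

```python
from datetime import date

def build_version(major, minor, start_date, tfs_build_number, today):
    """Sketch of the Major.Minor.DaysSinceStart.BuildNumber scheme
    the TFSVersion community activity appears to use."""
    days = (today - start_date).days
    return "%d.%d.%d.%d" % (major, minor, days, tfs_build_number)

# Reproduces the 4.5.269.17 style number from this post, assuming a
# start date of 1 Jan 2011 and TFS build number 17 on 27 Sep 2011
print(build_version(4, 5, date(2011, 1, 1), 17, date(2011, 9, 27)))
# -> 4.5.269.17
```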

I can never remember the command line to add a user to the TFS Service Accounts group

I keep forgetting that, when you use the TFS Integration Platform, the user the tool runs as (or the service account, when run as a service) has to be in the “Team Foundation Service Accounts” group on the TFS servers involved. If they are not, you get a runtime conflict something like

Microsoft.TeamFoundation.Migration.Tfs2010WitAdapter.PermissionException: TFS WIT bypass-rule submission is enabled. However, the migration service account 'Richard Fennell' is not in the Service Accounts Group on server 'http://tfsserver:8080/tfs'.

The easiest way to do this is to use the TFSSecurity command line tool on the TFS server. You will find some older blog posts about setting the user as a TFS admin console user to get the same effect, but this only seems to work on TFS 2010. This command is good for all versions:

C:\Program Files\Microsoft Team Foundation Server 12.0\tools> .\TFSSecurity.exe /g+ "Team Foundation Service Accounts" n:mydomain\richard /server:http://localhost:8080/tfs

and expect to see

Microsoft (R) TFSSecurity - Team Foundation Server Security Tool
Copyright (c) Microsoft Corporation.  All rights reserved.

The target Team Foundation Server is http://localhost:8080/tfs.
Resolving identity "Team Foundation Service Accounts"...
s [A] [TEAM FOUNDATION]\Team Foundation Service Accounts
Resolving identity "n:mydomain\richard"...
  [U] mydomain\Richard
Adding Richard to [TEAM FOUNDATION]\Team Foundation Service Accounts...
Verifying...

SID: S-1-9-1551374245-1204400969-2333986413-2179408616-0-0-0-0-2

DN:

Identity type: Team Foundation Server application group
   Group type: ServiceApplicationGroup
Project scope: Server scope
Display name: [TEAM FOUNDATION]\Team Foundation Service Accounts
  Description: Members of this group have service-level permissions for the Team Foundation Application Instance. For service accounts only.

1 member(s):
  [U] mydomain\Richard

Member of 2 group(s):
e [A] [TEAM FOUNDATION]\Team Foundation Valid Users
s [A] [DefaultCollection]\Project Collection Service Accounts

Done.

Once this is done and the TFS Integration Platform run restarted, all should be OK.