But it works on my PC!

The random thoughts of Richard Fennell on technology and software development

Lessons learnt building a custom activity to run Typemock Isolator in VS2010 Team Build

Updated 25th March 2010 - All the source is now available at the Typemock Add-in site 
Updated 2nd July 2010 - Some usage notes posted
Updated 26th July 2011 - More usage notes
Updated 21st Nov 2011 - Typemock Isolator now has direct support for TFS 2010 Build, see usage notes 

I have previously posted on how you can run Typemock Isolator based tests within a VS2010 team build using the InvokeMethod activity. After this post Gian Maria Ricci, a fellow Team System MVP, suggested it would be better to put this functionality in a custom code activity, and provided the basis of the solution. I have taken this base sample and worked it up into a functional activity, and boy have I learnt a few things doing it.

Getting the custom activity into a team build

Coding up a custom Team Build activity is not easy; there are a good few posts on the subject (Jim Lamb’s is a good place to start). The problem is not writing the code but getting the activity into the VS toolbox. All the documentation gives basically the same complex manual process; there is no way of avoiding it. Hopefully this will be addressed in a future release of Visual Studio, but for now the basic process is this:

  1. Create a Class Library project in your language of choice
  2. Code up your activity inheriting it from the CodeActivity<T> class
  3. Branch the build workflow, that you wish to use for testing, into the folder of the class library project
  4. Add the build workflow’s .XAML file to the class library project, then set its properties: “Build Action” to None and “Copy to Output Directory” to Do not copy
  5. Open the .XAML file (in VS2010); the new activity should appear in the toolbox and can be dropped onto the workflow. Set the properties required.
  6. Check in the edited .XAML file
  7. Merge the .XAML file back to its original location. If you get conflicts, simply tell merge to take the new version and discard the original, effectively overwriting the original version with the one edited in the project.
  8. Check in the merged original .XAML file that now contains the modifications.
  9. Take the .DLL containing the new activity and place it in a folder under source control (usually under the BuildProcessTemplates folder)
  10. Set the Build Controller’s custom assemblies path to point to this folder (so your custom activity can be loaded) 

    image
  11. Run the build and all should be fine

But of course it wasn’t. I kept getting this error when I ran a build:

TF215097: An error occurred while initializing a build for build definition \Typemock Test\BuildTest Branch: Cannot create unknown type '{clr-namespace:TypemockBuildActivity}ExternalTestRunner'.

This was because I had not followed the procedure correctly; I had tried to be clever. Instead of step 6 onwards I had had an idea: I created a new build that referenced the branched copy of the .XAML file in the class library project directly. I thought this would save me a good deal of tedious merging while I was debugging my process. It did do that, but it introduced other issues.

The problem was that when I inspected the .XAML in my trusty copy of Notepad, I saw that there was no namespace declared for my assembly (as the TF215097 error suggested). If I looked at the actual activity call in the file it was declared as <local:ExternalTestRunner  …… />, the local: replacing the namespace reference I would expect. This is obviously down to the way I was editing the .XAML file in the VS2010 IDE.

The fix is easy: using Notepad I added a namespace declaration to the Activity block

<Activity ……    xmlns:t="clr-namespace:TypemockBuildActivity;assembly=TypemockBuildActivity" >

and then edited the references from local: to t: (the alias for my namespace) for any classes called from the custom assembly, e.g.

<t:ExternalTestRunner ResultsFileRoot="{x:Null}" BuildNumber="[BuildDetail.Uri.ToString()]" Flavor="[platformConfiguration.Configuration]" sap:VirtualizedContainerService.HintSize="200,22" MsTestExecutable="C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\MSTest.exe" Platform="[platformConfiguration.Platform]" ProjectCollection="http://typhoon:8080/tfs/DefaultCollection" Result="[ExternalTestRunnerResult]" ResultsFile="ExternalTestRunner.Trx" SearchPathRoot="[outputDirectory]" TeamProjectName="[BuildDetail.TeamProject]" TestAssemblyNames="[testAssemblies.ToArray()]" TestRunnerExecutable="C:\Program Files (x86)\Typemock\Isolator\6.0\TMockRunner.exe" TestSettings="[localTestSettings]" />

Once this was done I could use my custom activity in a Team Build, though I had to make this manual edit every time I edited the branched .XAML file in the VS2010 IDE. So I had swapped repeated merges for repeated editing; you can take your own view as to which is worse.

So what is in my Typemock external test runner custom activity?

The activity is basically the same as the one suggested by Gian Maria: it takes the same parameters as the MSTest team build activity and then executes TMockRunner to wrap MSTest. What I have done is add a couple of parameters that were missing in the original sample, plus some more error traps and logging.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Activities;
using System.IO;
using System.Diagnostics;
using Microsoft.TeamFoundation.Build.Workflow.Activities;
using Microsoft.TeamFoundation.Build.Client;
using System.Text.RegularExpressions;
 
namespace TypemockBuildActivity
{
    public enum ExternalTestRunnerReturnCode { Unknown = 0, NotRun, Passed, Failed };
 
    [BuildExtension(HostEnvironmentOption.Agent)]
    [BuildActivity(HostEnvironmentOption.All)]
    public sealed class ExternalTestRunner : CodeActivity<ExternalTestRunnerReturnCode>
    {
        // Define an activity input argument of type string  
 
        /// <summary>
        /// The name of the wrapper application, usually tmockrunner.exe
        /// </summary>
        public InArgument<string> TestRunnerExecutable { get; set; }
        
        /// <summary>
        /// The name of the application that actually runs the test, defaults to MSTest.exe if not set
        /// </summary>
        public InArgument<string> MsTestExecutable { get; set; }
 
        /// <summary>
        /// The project collection to publish to e.g. http://tfs2010:8080/tfs/DefaultCollection
        /// </summary>
        public InArgument<string> ProjectCollection { get; set; }
 
        /// <summary>
        /// The build ID to publish to e.g. vstfs:///Build/Build/91
        /// </summary>
        public InArgument<string> BuildNumber { get; set; }
 
        /// <summary>
        /// The project name to publish to e.g: "Typemock Test"
        /// </summary>
        public InArgument<string> TeamProjectName { get; set; }
 
        /// <summary>
        /// The platform name to publish to e.g. Any CPU
        /// </summary>
        public InArgument<string> Platform { get; set; }
 
        /// <summary>
        /// The flavour (configuration) to publish to e.g. "Debug"
        /// </summary>
        public InArgument<string> Flavor { get; set; }
 
 
        /// <summary>
        /// Array of assembly names to test
        /// </summary>
        public InArgument<string[]> TestAssemblyNames { get; set; }
        
        /// <summary>
        /// Where to search for assemblies under test
        /// </summary>
        public InArgument<string> SearchPathRoot { get; set; }
        
        /// <summary>
        /// A single name result file
        /// </summary>
        public InArgument<string> ResultsFile { get; set; }
 
        /// <summary>
        /// A directory to store results in (tends not to be used if ResultsFile is set)
        /// </summary>
        public InArgument<string> ResultsFileRoot { get; set; }
 
        /// <summary>
        /// The test settings file that controls how the tests should be run
        /// </summary>
        public InArgument<string> TestSettings { get; set; }
 
 
        // If your activity returns a value, derive from CodeActivity<TResult> 
        // and return the value from the Execute method. 
        protected override ExternalTestRunnerReturnCode Execute(CodeActivityContext context)
        {
            String msTestOutput = string.Empty;
            ExternalTestRunnerReturnCode exitMessage = ExternalTestRunnerReturnCode.NotRun;
 
            if (CheckFileExists(TestRunnerExecutable.Get(context)) == false)
            {
                LogError(context, string.Format("TestRunner not found {0}", TestRunnerExecutable.Get(context)));
            }
            else
            {
                String mstest = MsTestExecutable.Get(context);
                if (CheckFileExists(mstest) == false)
                {
                    mstest = GetDefaultMsTestPath();
                }
 
                String testrunner = TestRunnerExecutable.Get(context);
 
                var arguments = new StringBuilder();
                arguments.Append(string.Format("\"{0}\"", mstest));
                arguments.Append(" /nologo ");
 
                // the files to test
                foreach (string name in TestAssemblyNames.Get(context))
                {
                    arguments.Append(AddParameterIfNotNull("testcontainer", name));
                }
 
                // settings about what to test
                arguments.Append(AddParameterIfNotNull("searchpathroot", SearchPathRoot.Get(context)));
                arguments.Append(AddParameterIfNotNull("testSettings", TestSettings.Get(context)));
                
                // now the publish bits
                if (string.IsNullOrEmpty(ProjectCollection.Get(context)) == false)
                {
                    arguments.Append(AddParameterIfNotNull("publish", ProjectCollection.Get(context)));
                    arguments.Append(AddParameterIfNotNull("publishbuild", BuildNumber.Get(context)));
                    arguments.Append(AddParameterIfNotNull("teamproject", TeamProjectName.Get(context)));
                    arguments.Append(AddParameterIfNotNull("platform", Platform.Get(context)));
                    arguments.Append(AddParameterIfNotNull("flavor", Flavor.Get(context)));
                }
 
                // where do the results go, tend to use one of these not both
                arguments.Append(AddParameterIfNotNull("resultsfile", ResultsFile.Get(context)));
                arguments.Append(AddParameterIfNotNull("resultsfileroot", ResultsFileRoot.Get(context)));
 
                LogMessage(context, string.Format("Call Mstest With Wrapper [{0}] and arguments [{1}]", testrunner, arguments.ToString()), BuildMessageImportance.Normal);
 
                using (System.Diagnostics.Process process = new System.Diagnostics.Process())
                {
                    process.StartInfo.FileName = testrunner;
                    process.StartInfo.WorkingDirectory = SearchPathRoot.Get(context);
                    process.StartInfo.WindowStyle = ProcessWindowStyle.Normal;
                    process.StartInfo.UseShellExecute = false;
                    process.StartInfo.ErrorDialog = false;
                    process.StartInfo.CreateNoWindow = true;
                    process.StartInfo.RedirectStandardOutput = true;
                    process.StartInfo.Arguments = arguments.ToString();
                    try
                    {
                        process.Start();
                        msTestOutput = process.StandardOutput.ReadToEnd();
                        process.WaitForExit();
                        // for TMockRunner and MSTest this always seems to be 1, so it does not help tell if the tests passed or not
                        //  In general you can detect test failures by simply checking whether mstest.exe returned 0 or not.  
                        // I say in general because there is a known bug where on certain OSes mstest.exe sometimes returns 128 whether 
                        // successful or not, so mstest.exe 10.0 added a new command-line option /usestderr which causes it to write 
                        // something to standard error on failure.
 
                        // If (error data received)
                        //    FAIL
                        // Else If (exit code != 0 AND exit code != 128)
                        //    FAIL
                        // Else If (exit code == 128)
                        //    Write Warning about weird error code, but SUCCEED
                        // Else
                        //   SUCCEED
 
                        ///int exitCode = process.ExitCode;
                        LogMessage(context, string.Format("Output of ExternalTestRunner: {0}", msTestOutput), BuildMessageImportance.High);
                    }
                    catch (InvalidOperationException ex)
                    {
                        LogError(context, "ExternalTestRunner InvalidOperationException :" + ex.Message);
                    }
 
                    exitMessage = ParseResultsForSummary(msTestOutput);
                }
            }
            LogMessage(context, string.Format("ExternalTestRunner exiting with message [{0}]", exitMessage), BuildMessageImportance.High);
            return exitMessage;
        }
 
        /// <summary>
        /// Adds a parameter to the MSTest command line; it has been extracted to allow us to do the isEmpty check in one place
        /// </summary>
        /// <param name="parameterName">The name of the parameter</param>
        /// <param name="value">The string value</param>
        /// <returns>If the value is present a formatted block is returned</returns>
        private static string AddParameterIfNotNull(string parameterName, string value)
        {
            var returnValue = string.Empty;
            if (string.IsNullOrEmpty(value) == false)
            {
                returnValue = string.Format(" /{0}:\"{1}\"", parameterName, value);
            }
            return returnValue;
       }
 
        /// <summary>
        /// A handler to check the results for the success or failure message
        /// This is a rough way to do it, but is more reliable than the MSTest exit codes
        /// It returns an enum as opposed to an exit code so that the workflow can branch on the result
        /// Note this will not work if the /usestderr flag is used
        /// </summary>
        /// <param name="output">The output from the test run</param>
        /// <returns>A single line summary</returns>
        private static ExternalTestRunnerReturnCode ParseResultsForSummary(String output)
        {
            ExternalTestRunnerReturnCode exitMessage = ExternalTestRunnerReturnCode.NotRun;
            if (Regex.IsMatch(output, "Test Run Failed"))
            {
                exitMessage = ExternalTestRunnerReturnCode.Failed;
            }
            else if (Regex.IsMatch(output, "Test Run Completed"))
            {
                exitMessage = ExternalTestRunnerReturnCode.Passed;
            }
            else
            {
                exitMessage = ExternalTestRunnerReturnCode.Unknown;
            }
 
            return exitMessage;
        }
 
        /// <summary>
        /// Handles finding MSTest, checking both the 32 and 64 bit paths
        /// </summary>
        /// <returns></returns>
        private static string GetDefaultMsTestPath()
        {
            String mstest = @"C:\Program Files\Microsoft Visual Studio 10.0\Common7\IDE\mstest.exe";
            if (CheckFileExists(mstest) == false)
            {
                mstest = @"C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\MSTest.exe";
                if (CheckFileExists(mstest) == false)
                {
                    throw new System.IO.FileNotFoundException("MsTest file cannot be found");
                }
            }
            return mstest;
        }
 
        /// <summary>
        /// Helper method so we log in both the VS Build and Debugger modes
        /// </summary>
        /// <param name="context">The workflow context</param>
        /// <param name="message">Our message</param>
        /// <param name="logLevel">Team build importance level</param>
        private static void LogMessage(CodeActivityContext context, string message, BuildMessageImportance logLevel)
        {
            TrackingExtensions.TrackBuildMessage(context, message, logLevel);
            Debug.WriteLine(message);
        }
 
        /// <summary>
        /// Helper method so we log in both the VS Build and Debugger modes
        /// </summary>
        /// <param name="context">The workflow context</param>
        /// <param name="message">Our message</param>
        private static void LogError(CodeActivityContext context, string message)
        {
            TrackingExtensions.TrackBuildError(context, message);
            Debug.WriteLine(message);
        }
 
        /// <summary>
        /// Helper to check a file name to make sure it is not null and that the file it names exists
        /// </summary>
        /// <param name="fileName"></param>
        /// <returns></returns>
        private static bool CheckFileExists(string fileName)
        {
            return !string.IsNullOrEmpty(fileName) && File.Exists(fileName);
        }
    }
}

 

This activity does need a good bit of configuring to use in a real build. However, as said previously, the options it takes are basically those needed for the MSTest activity, so just replacing the existing calls to the MSTest activity, as shown in the graph below, should be enough.

image

Note: The version of the ExternalTestRunner activity in this post does not handle tests based on Metadata parameters (blue box above), but should be OK for all other usages (it is just that these parameters have not been wired through yet). The red box shows the new activity in place (this is the path taken if the tests are controlled by a test settings file) and the green box contains an MSTest activity waiting to be swapped out (this is the path taken if no test settings or metadata files are provided).

The parameters on the activity in the red box are as follows; as said before, they are basically the same as the parameters for the standard MSTest activity.

image

The Result parameter (the Execute() method return value) does need to be associated with a variable declared in the workflow, in my case ExternalTestRunnerResult. This is defined at the sequence scope; the scope it is defined at must be such that it can be read by any other steps in the workflow that require the value. It is declared as being of the enum type ExternalTestRunnerReturnCode defined in the custom activity; in the raw .XAML it looks something like the sketch below the screenshot.

image
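
A minimal sketch of that variable declaration in the .XAML (assuming the t: namespace alias added earlier, and that the variable is declared on the enclosing Sequence):

<Sequence.Variables>
  <Variable x:TypeArguments="t:ExternalTestRunnerReturnCode" Name="ExternalTestRunnerResult" />
</Sequence.Variables>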

Further on in the workflow you need to edit the If statement that branches on whether the tests passed or not to use this ExternalTestRunnerResult value; a sketch of the condition follows the screenshot.

image
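
The condition is just a VB.NET expression over the enum value; a sketch of what a “tests failed” condition might look like (treating anything other than Passed as a failure is my assumption, adjust to taste):

ExternalTestRunnerResult <> TypemockBuildActivity.ExternalTestRunnerReturnCode.Passed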

Once all this is done you should have all your MSTests running inside a Typemock’ed wrapper and all the results should be shown correctly in the build summary.

image

And the log of the build should show you all the parameters that got passed through to the MSTest program.

image

Is there a better way to test a custom activity project?

Whilst sorting out the logic for the custom activity I did not want to have to go through the whole process of running the team build to test the activity; it just took too long. To speed up this process I did the following:

  1. In my solution I created a new Console Workflow project
  2. I referenced my custom activity project from this new workflow project
  3. I added my custom activity as the only item in my workflow
  4. For each parameter of the custom activity I created a matching argument for the workflow and wired the two together (see the XAML sketch after this list).

    image
  5. I then created a Test Project that referenced the workflow project and custom activity project.
  6. In this I could write unit tests (well, more integration tests really) that exercise many of the options in the custom activity. To help in this process I created some simple test project assemblies that contained just passing tests, just failing tests, and a mixture of both.
  7. A sample test is shown below
    [TestMethod]
    public void RunTestWithTwoNamedAssembly_OnePassingOneFailingTestsNoPublishNoMstestSpecified_FailMessage()
    {
     
        // make sure we have no results file, MSTest fails if the file is present
        File.Delete(Directory.GetCurrentDirectory() + @"\TestResult.trx");
     
        var wf = new Workflow1();
     
        Dictionary<string, object> wfParams = new Dictionary<string, object>
        {
            { "BuildNumber", string.Empty },
            { "Flavour", "Debug" },
            { "MsTestExecutable", string.Empty },
            { "Platform", "Any CPU" },
            { "ProjectCollection",string.Empty },
            { "TeamProjectName", string.Empty },
            { "TestAssemblyNames", new string[] { 
                Directory.GetCurrentDirectory() + @"\TestProjectWithPassingTest.dll",
                Directory.GetCurrentDirectory() + @"\TestProjectWithfailingTest.dll"
            }},
            { "TestRunnerExecutable", @"C:\Program Files (x86)\Typemock\Isolator\6.0\TMockRunner.exe" },
            { "ResultsFile", "TestResult.trx" }
        };
     
     
        var results = WorkflowInvoker.Invoke(wf, wfParams);
     
        Assert.AreEqual(TypemockBuildActivity.ExternalTestRunnerReturnCode.Failed, results["ResultSummary"]);
    }
  8. The only real limit here is that some of the options (the publishing ones) need a TFS server to be tested. You have to make a choice as to whether this type of publishing test is worth the effort of filling your local TFS server with test runs from the test project, or whether you want to test these features manually in a real build environment, especially given the issues I mentioned in my past post
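
For reference, here is a stripped-down sketch of what the wrapper workflow’s .XAML might look like once the arguments are wired up (the argument names are my choice, made to match the activity’s properties, and only a few of them are shown):

<Activity x:Class="WorkflowConsoleApplication.Workflow1"
    xmlns="http://schemas.microsoft.com/netfx/2009/xaml/activities"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    xmlns:s="clr-namespace:System;assembly=mscorlib"
    xmlns:t="clr-namespace:TypemockBuildActivity;assembly=TypemockBuildActivity">
  <x:Members>
    <!-- one workflow argument per activity parameter -->
    <x:Property Name="TestRunnerExecutable" Type="InArgument(x:String)" />
    <x:Property Name="TestAssemblyNames" Type="InArgument(s:String[])" />
    <x:Property Name="ResultsFile" Type="InArgument(x:String)" />
    <!-- the activity's Result is surfaced as an OutArgument the tests can assert on -->
    <x:Property Name="ResultSummary" Type="OutArgument(t:ExternalTestRunnerReturnCode)" />
  </x:Members>
  <t:ExternalTestRunner
      TestRunnerExecutable="[TestRunnerExecutable]"
      TestAssemblyNames="[TestAssemblyNames]"
      ResultsFile="[ResultsFile]"
      Result="[ResultSummary]" />
</Activity>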


Summary

So I have a working implementation of a custom activity that makes it easy to run Typemock based tests without losing any of the other features of a Team Build. But as I learnt, getting around the deployment issues can be a real pain.

The Teamprise Eclipse plug in for TFS gets a new name

As I am sure you remember, a few months ago Microsoft bought Teamprise and their Java clients for TFS. Well, the team has got out their first Microsoft branded release; details can be found on Martin Woodward’s and Brian Harry’s blogs. This beta provides the first support for TFS2010.

This release is very timely as I will be talking on the Java integration via the Eclipse plug-in at QCon next week and at the Architect Insight Conference at the end of the month. This “Eaglestone” release means I can hopefully do my demos against TFS2010.

The importance of using parameters in VS2010 build workflows

I have been doing some more work integrating Typemock and VS2010 Team Build. I have just wasted a good few hours wondering why my test results were not being published.

If I looked at the build log I saw my tests ran (and passed or failed as expected) and then were published without error.

image

But when I checked the build summary it said there were no tests associated with the build, reporting “No Test Results”.

image

This was strange as it had been working in the past. After much fiddling around I found the problem; it was twofold:

  • The main problem was that in my InvokeMethod call to run Typemock/MSTest I had hard coded the Platform: and Flavor: values. This meant that irrespective of the build I asked for, I published my test results to the Any CPU|Debug configuration. MSTest lets you do this, even if no build of that configuration exists at the time.

My InvokeMethod argument parameter should have been something like

"""C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\MSTest.exe""  /nologo /testcontainer:""" + String.Format("{0}\Binaries\BusinessLogic.Tests.dll", BuildDirectory) + """ /publish:""http://typhoon:8080/tfs/DefaultCollection"" /publishbuild:""" + BuildDetail.Uri.ToString() + """ /teamproject:""" + BuildDetail.TeamProject + """ /platform:""" + platformConfiguration.Platform + """ /flavor:""" + platformConfiguration.Configuration + """ /resultsfile:""" + String.Format("{0}\Binaries\Test.Trx", BuildDirectory) + """  "

  • The second issue was that I had failed, on at least one of my test build definitions, to set the Configurations to Build setting. This meant the build defaulted to Mixed Platforms|Debug (hence not matching my hard coded Any CPU|Debug configuration). It is interesting to note here that the parameters used above (platformConfiguration.Configuration and platformConfiguration.Platform) are both empty if the Configurations to Build setting is not set; it is MSBuild that chooses the defaults, not the workflow. So in effect you must always set these values for your build, or you will need to handle the empty strings in the workflow (as sketched below) if you don’t want MSTest to fail complaining that the platform and flavor parameters are empty. It seems to me that explicitly setting them is good practice anyway.
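
If you do decide to handle the empty strings in the workflow rather than relying on the build definition, VB.NET expressions along these lines would do it (falling back to Any CPU|Debug is just my assumption, pick a default that matches your builds):

If(String.IsNullOrEmpty(platformConfiguration.Platform), "Any CPU", platformConfiguration.Platform)
If(String.IsNullOrEmpty(platformConfiguration.Configuration), "Debug", platformConfiguration.Configuration)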

image

So the technical tip here is to make sure that you correctly use all the parameters associated with a workflow in your activities. You cannot trust an activity to give an error or warning if you pass it strange values.

Do you use a Symbol Server?

I find one of the most often overlooked new features of the 2010 release is the Symbol Server. This is a file share where the .PDB symbol files are stored for any given build (generated by the build server; see Jim Lamb’s post on the setup). If you look on the symbol server share you will see directories for each built assembly with a GUID named subdirectory containing the PDB files for each unique build.

So what is this Symbol Server used for? Well, you can use the Symbol Server to enable debug features such as IntelliTrace, vital if you are using Lab Manager. In effect this means that when viewing an IntelliTrace log, Visual Studio is able to go to the Symbol Server to get the correct .PDB file for the assemblies being run, even if the source is not available, thus allowing you to step through the code. It can also be used for remote debugging of ASP.NET servers.

A bonus is that you can debug release code, as long as you produced .PDB symbols and placed them on the Symbol Server when you built the release (by altering the advanced build properties shown below).

image
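
The settings in the dialog above can also be made by hand in the project file; a sketch of the Release PropertyGroup settings that produce .PDB files (these are standard MSBuild properties, though your project’s exact condition line may differ):

<PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'Release|AnyCPU' ">
  <DebugSymbols>true</DebugSymbols>
  <DebugType>pdbonly</DebugType>
</PropertyGroup>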

Key to remember here is that the client PC that generates the IntelliTrace file does not need access to the PDB files; only the PC handling the debugging process needs to be able to access the symbols. Perfect for release code scenarios.

This ability to debug into code that you don’t have the source for extends to debugging into Microsoft .NET framework code. Microsoft have made public a Symbol Server for just this purpose. To use it you have to enable it via the Tools > Options > Debugging > Symbols dialog.

image

All this should make debugging that hard to track problem just that bit easier.

Speaking at QCon on TFS and Java Integration

Week after next I will be speaking at QCon London with Simon Thurman of Microsoft on “The Interoperable Platform”.

So what does that title mean? Well for me, for this session, it will be about how you can use the ALM features of TFS even when using Eclipse for Java development. So it will be a demo led session on the Teamprise tools for Eclipse and how they can allow you to build a unified development team that works in both .NET and Java.

Should be an interesting event; the list of speakers looks great. Shame I will only be there for a day.

Logging results from InvokeProcess in a VS2010 Team Build

When you use the InvokeProcess activity, as I did in my Typemock post, you really need to set up the logging. This is because by default nothing will be logged other than the command line invoked, which is not usually the best option. There are a couple of gotchas here that initially caused me a problem and I suspect could cause a new user of the 2010 build process a problem too.

The first is that you need to declare the variable names for InvokeProcess to drop the output and errors into. This is done in the workflow designer by putting the variable names in the relevant textboxes (there is no need to declare the variable names anywhere else) as shown below. Use any names you fancy; I used stdOutput and stdError.

image

You then need to add the WriteBuildMessage and WriteBuildError activities by dragging them from the toolbox into the handler areas of the InvokeProcess activity.

The second gotcha is that the WriteBuildMessage takes a logging level parameter. This defaults to Normal, which means the message will not be displayed in the standard build view (unless the build’s detail level is altered). To get around this, as I would normally want to see the output of the process being invoked, I set the Importance of the message to High. Remember you also need to set the Message parameter to the previously declared variable name, in my case stdOutput. This is done in the properties window as shown below.

image

Note that you don’t need to set an importance on the WriteBuildError activity, as its output is always displayed; you just need to set the Message parameter to stdError.
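
Wired up, the relevant fragment of the build workflow’s .XAML ends up looking something like this sketch (the mtbwa: alias for the Microsoft.TeamFoundation.Build.Workflow.Activities namespace is the one the standard templates use; the FileName value and the MSTestArguments variable are illustrative):

<mtbwa:InvokeProcess FileName="C:\Program Files (x86)\Typemock\Isolator\6.0\TMockRunner.exe" Arguments="[MSTestArguments]">
  <mtbwa:InvokeProcess.OutputDataReceived>
    <ActivityAction x:TypeArguments="x:String">
      <ActivityAction.Argument>
        <DelegateInArgument x:TypeArguments="x:String" Name="stdOutput" />
      </ActivityAction.Argument>
      <!-- High importance so the output shows in the default build log view -->
      <mtbwa:WriteBuildMessage Importance="High" Message="[stdOutput]" />
    </ActivityAction>
  </mtbwa:InvokeProcess.OutputDataReceived>
  <mtbwa:InvokeProcess.ErrorDataReceived>
    <ActivityAction x:TypeArguments="x:String">
      <ActivityAction.Argument>
        <DelegateInArgument x:TypeArguments="x:String" Name="stdError" />
      </ActivityAction.Argument>
      <!-- errors are always displayed, so no importance is needed -->
      <mtbwa:WriteBuildError Message="[stdError]" />
    </ActivityAction>
  </mtbwa:InvokeProcess.ErrorDataReceived>
</mtbwa:InvokeProcess>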

Once you make these changes and run the build, you see the output of the invoked process (green) in the build log as well as the command line (red). This should help with debugging the InvokeProcess activities in your build process.

image

MTLM becomes MTM

You may have noticed that Microsoft have had another burst of renaming. The tester’s tool in VS2010 started with the codename Camaro during the CTP phase; this became Microsoft Test & Lab Manager (MTLM) in Beta 1 and 2, and now in the RC it is called Microsoft Test Manager (MTM).

Other than me constantly referring to things by the wrong name, the main effect of this is to make searching the Internet a bit awkward; you have to try all three names to get good coverage. In my small corner of the Internet I will try to help by updating my existing MTLM tag to MTM and updating the description appropriately.

So where have I been all week?

A bit of a double question here. Physically, I have been at the MVP Summit in Redmond, having a great time with my fellow “Team System” MVPs and the Microsoft product group members.

image

But my blog has also been on and off all week, so I guess you could say my online presence has been away. This is because Black Marble has moved office and our blog server has had intermittent connectivity, which hopefully should be resolved soon.

At last, my creature it lives……..

I have at last worked all the way through setting up my portable end to end demo of testing using Microsoft Test and Lab Manager. The last error I had to resolve was the tests not running in the lab environment (though they worked locally on the development PC). My Lab Workflow build was recorded as a partial success, i.e. it built and deployed, but all the tests failed.

I have not found a way to see the detail of why the tests failed in VS2010 Build Explorer. However, if you:

  1. Go into MTLM,
  2. Pick Testing Center
  3. Select the Test Tab
  4. Pick the Analyze Test Results link
  5. Pick the test run you want view
  6. The last item in the summary is the error message; as you can see, in my case it was that the whole run failed, not any of the individual tests themselves

image

So my error was “Build directory of the test run is not specified or does not exist”. This was because the Test Controller (for me, running as Network Service) could not see the contents of the drop directory. The drop directory is where the test automation assemblies are published as part of the build. Once I gave Network Service read rights to access the \\TFS2010\Drops share, my tests, and hence my build, ran to completion.

It has been an interesting journey to get this system up and running. MTLM, when you initially look at it, is very daunting; you have to get a lot of ducks in a row and there are many pitfalls on the way. If any part fails then nothing works; it feels like a bit of a house of cards. However, if you work through it step by step I think you will come to see that the underlying architecture of how it hangs together is not as hard to understand as it initially seems. It is complex and has to be done right, but you can at least see why things need to be done. Much of this perceived complexity, for me as a developer, is that I had to set up a number of ITPro products I am just not that familiar with, such as SCOM and Hyper-V Manager. Maybe the answer is to make your evaluation of this product a joint Dev/ITPro project so you both learn.

I would say that getting the first build going (and hence the underlying infrastructure) seems to be the worst part. I feel that now I have a platform I understand reasonably well, producing different builds will not be too bad. I suspect the next raft of complexity will appear when I need a radically different test VM (or worse still, a network of VMs) to deploy and test against.

So my recommendation to anyone who is interested in this product is to get your hands dirty; you are not going to understand it by reading or watching videos, you need to build one. So find some hardware, lots of hardware!