But it works on my PC!

The random thoughts of Richard Fennell on technology and software development

Running MSDeploy to a remote box from inside a TFS 2010 Build (Part 2)

Another follow-up post, this time to the one on MSDeploy. As I said in that post, a better way to trigger the MSDeploy PowerShell script would be as part of the build workflow, as opposed to a post-build action in the MSBuild phase. Doing it this way means that if the build fails testing after MSBuild completes, you can still choose not to run MSDeploy.

I have implemented this using an InvokeProcess call in my build workflow, which I have placed just before the gated check-in logic at the end of the process template.

[Screenshot: the InvokeProcess activity in the build workflow]

The If statement is there so I only deploy if a deploy location is set and all the tests passed:

BuildDetail.TestStatus = Microsoft.TeamFoundation.Build.Client.BuildPhaseStatus.Succeeded And
String.IsNullOrEmpty(DeployLocation) = False

The InvokeProcess filename property is

BuildDetail.DropLocation & "\_PublishedWebsites\" & WebSiteAssemblyName & "_Package\" & WebSiteAssemblyName & ".deploy.cmd"

Here “WebSiteAssemblyName” is a build argument holding the name of the project that has been published (I have not found a way to detect this automatically), e.g. BlackMarble.MyWebSite. This obviously has to be set as an argument for the build if the deploy is to work.

The arguments property is set to

"/M:http://" & DeployLocation & "/MSDEPLOYAGENTSERVICE /Y”

Again, “DeployLocation” is a build argument holding the name of the server to deploy to, e.g. MyServer.
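
Putting these together, if DeployLocation were set to MyServer, the InvokeProcess activity ends up running a command line like this (the drop location shown is purely illustrative):

    \\tfs\drops\MyBuild\_PublishedWebsites\BlackMarble.MyWebSite_Package\BlackMarble.MyWebSite.deploy.cmd /M:http://MyServer/MSDEPLOYAGENTSERVICE /Y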

The Result property is set to an Integer build variable, so any error code returned can be reported via a WriteBuildError.
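
If you are wondering what InvokeProcess is doing for you here, it is roughly equivalent to the following C#. This is a minimal sketch, not the activity's actual implementation: the file name and arguments are the resolved examples from above, and the real activity also pipes stdout/stderr to the build log.

    using System.Diagnostics;

    class DeployRunner
    {
        // Minimal sketch of what InvokeProcess does for us: start the
        // process, wait for it and hand the exit code back to the workflow.
        static int RunDeploy()
        {
            var startInfo = new ProcessStartInfo
            {
                FileName = @"\\tfs\drops\MyBuild\_PublishedWebsites\BlackMarble.MyWebSite_Package\BlackMarble.MyWebSite.deploy.cmd",
                Arguments = "/M:http://MyServer/MSDEPLOYAGENTSERVICE /Y",
                UseShellExecute = false
            };
            using (var process = Process.Start(startInfo))
            {
                process.WaitForExit();
                return process.ExitCode; // non-zero means the deploy failed,
                                         // so raise a WriteBuildError
            }
        }
    }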

This seems to work for me, and I think it is neater than the previous solution.

How to edit a TFS 2010 build template when it contains custom activities.

I posted a while ago on using my Typemock TMockRunner custom activity for Team Build 2010. I left that post with the problem that if you wished to customise a template after you had added the custom activity, you had to use the somewhat complex branching model to edit the XAML.

If you just followed the process in my post to put the build template in a new team project and tried to edit the XAML, you got the following errors: an import namespace error and the associated inability to render part of the workflow.

[Screenshot: the import namespace error in the workflow designer]

The best answer I have been able to find has been to put the custom activity into the GAC on the PC where you wish to edit the template; just there, nowhere else, as the method in the previous post is fine for build agents. So I strong-named the custom activity assembly, used GACUTIL to put it in my GAC, and was then able to load the template without any other alterations. I was also able to add it to my Visual Studio toolbox so that I could drop new instances of the external test runner onto the workflow.
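
For reference, the signing and GAC steps are just the standard SDK tools run from a Visual Studio command prompt; something like the following (the key and assembly names are illustrative):

    rem generate a key pair, then select it on the project's Signing tab
    sn -k MyKey.snk
    rem install the strong-named custom activity assembly into the GAC
    gacutil /i MyCustomActivities.dll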

Getting code coverage working on Team Build 2010

If you have VS2010 Premium or Ultimate [Professional corrected error in original post] you have code coverage built into the test system. When you look at your test results there is a button to see the code coverage.

[Screenshot: the code coverage button in the test results window]

You would think that there is an easy way to use code coverage in your automated build process using Team Build 2010; well, it can be done, but you have to do a bit of work.

What’s on the build box?

Firstly, if your build PC has only an operating system and the Team Build Agent (with or without the Build Controller service) then stop here. This is enough to build many things, but not to get code coverage. The only way to get code coverage to work is to have VS2010 Premium or Ultimate also installed on the build box.

Now, there is some confusion in blog posts over whether you get code coverage if you install the Visual Studio 2010 Test Agents; the answer, for our purposes, is no. The agents will allow remote code coverage in a Lab Environment via a Test Controller, but they do not provide the bits needed to allow code coverage to be run locally during a build/unit test cycle.

Do I have a .TestSettings file?

Code coverage is managed using your solution’s .TestSettings file. My project did not have one of these, so I had to add one via ‘Add New Item’ on a right-click of the solution items.

The reason I had no .TestSettings file was that I started with an empty solution and added projects to it; if you start with a project, such as a web application, and let the solution be created for you automatically, then a .TestSettings file should be created for you.

In the test settings you need to look at the Data and Diagnostics tab, enable code coverage and then press the Configure button; this is important.

[Screenshot: the Data and Diagnostics tab with code coverage enabled]

On the configuration dialog you will see a list of your projects and assemblies. In my case I initially saw only the first and last rows in the graphic below. I selected the first row, the project containing my production code, and tried a build.

THIS DID NOT WORK – I had to add the actual production assembly as opposed to the web site project (the middle row shown below). I think this was the key step to getting it going.

The error I got before I did this was “Empty results generated: none of the instrumented binary was used. Look at test run details for any instrumentation problems.” So if you see this message in the build report, check which assemblies are flagged for code coverage.

[Screenshot: the code coverage configuration dialog listing the projects and assemblies]
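
Under the covers the Configure dialog is just editing the code coverage data collector entry in the .TestSettings XML. A heavily trimmed sketch is shown below; the element and attribute names are from memory, so treat it as illustrative and use the dialog rather than hand-editing the file (the assembly name is just an example):

    <TestSettings name="Local" xmlns="http://microsoft.com/schemas/VisualStudio/TeamTest/2010">
      <Execution>
        <AgentRule name="LocalMachineDefaultRole">
          <DataCollectors>
            <DataCollector uri="datacollector://microsoft/CodeCoverage/1.0" friendlyName="Code Coverage">
              <Configuration>
                <CodeCoverage>
                  <Regular>
                    <!-- the actual production assembly, not just the web site project -->
                    <CodeCoverageItem binaryFile="BlackMarble.MyWebSite.dll" pdbFile="BlackMarble.MyWebSite.pdb" />
                  </Regular>
                </CodeCoverage>
              </Configuration>
            </DataCollector>
          </DataCollectors>
        </AgentRule>
      </Execution>
    </TestSettings>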

Does my build definition know about the .TestSettings file?

You now need to make sure that the build knows the .TestSettings file exists. Again, this should be done automatically when you create a build (if the file exists), but on my build I had to add it manually as I created the file after the build.

[Screenshot: the build definition pointing at the .TestSettings file]

So when all this is done you get to see a build with test results and code coverage.

[Screenshot: a build report showing test results and code coverage]

Easy wasn’t it!

Next week's Agile Yorkshire meeting: Some things about testing that everyone should know, ...... but were afraid to ask, in case somebody told them.

It is Agile Yorkshire time again. It is a real shame that, due to the move of the meeting from the 2nd Wednesday to the 2nd Tuesday, I really struggle to make the events. This is particularly irritating this month as this one looks really interesting and the speaker, Ralph Williams, on past evidence, is always entertaining. To quote the Agile Yorkshire site, the session will…

“The presentation will focus on the techniques that testers use to identify their tests, whether working from a requirements specification or on agile teams.

Agile testing books mostly focus on the agile aspects or the technology so this area often gets glossed over. The main sections would be:

    • Equivalence Classes and Boundary Conditions
    • Decision Tables
    • Classification Trees
    • User Focused Testing

There will be a group exercise looking at how these techniques can be applied to the testing of a well-known website.

As a group we will go through the process of identifying the testing that is required and in the process explain various test techniques that might be useful to people back in their day jobs.”

For full details see http://www.agileyorkshire.org/event-announcements/10Aug2010

Running SPDisposeCheck as part of a 2010 CI Build

SPDisposeCheck is a great tool for SharePoint developers to make sure that they are disposing of resources correctly. The problem is that it is a bit slow to run, which means developers will tend not to run it as often as they should. A good solution to this problem is to run it as part of the continuous integration process. There are posts on how to do this via unit tests and as an MSBuild task, but I wanted to use a TFS 2010 style build. It turns out this is reasonably straightforward, without the need to write a custom activity.

  • I created a build template based on the Default one.
  • After the compile and before the test step I added an InvokeProcess activity

[Screenshot: the InvokeProcess activity placed between the compile and test steps]

  • I set the InvokeProcess properties as shown below; the edited settings are
    • Arguments: String.Format("""{0}""", outputDirectory) (remember you need the enclosing quotes in case your path has spaces in it)
    • Filename: the location of the SPDisposeCheck.exe file
    • Result: a previously created build variable of type Int32

[Screenshot: the InvokeProcess activity properties]

[Screenshot: checking the InvokeProcess result in the workflow]

  • The returned value is checked with a simple If activity. If any errors are found I write a build error message and set the TestStatus to failed. You might choose to set the build status to failed, or any other flag you wish. The potential problem with my solution is that the TestStatus value could be reset by the tests that follow in the build process, but for a basic example of using the tool this is fine.

So it is easy to add a command line tool to the build. The key reason it is so easy is that SPDisposeCheck returns a number that we can use to see if the check passed or failed; hence we did not need to parse any text or XML results file. I wish more tools did this.
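
If you are writing your own command line checking tool and want it to be this friendly to a build, all it takes is returning the count of problems found from Main. A minimal C# sketch (the RunChecks helper is hypothetical; the real analysis would go inside it):

    using System;

    class CheckerProgram
    {
        // Returning the number of problems found as the process exit code
        // means a build workflow can consume the result with a simple If
        // check; no text or XML parsing is needed.
        static int Main(string[] args)
        {
            int problemsFound = RunChecks(args);
            Console.WriteLine("{0} issue(s) found", problemsFound);
            return problemsFound; // zero means the check passed
        }

        static int RunChecks(string[] args)
        {
            // hypothetical: the real analysis would go here
            return 0;
        }
    }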

IDD – Building a breakfast comment into a development process – now there is a leap

Gil at Typemock has been posting about some ideas we discussed over breakfast at the Typemock partner conference a while ago. I have been a bit slow at commenting, so I thought I had better add to the conversation. Though Typemock is an excellent mocking framework, for me basic mocking is not its biggest win. All the ‘classic auto mocking’ of interfaces to speed up TDD style working is great, but I can do that with any of the .NET mocking frameworks. All they do is mean I don’t have to write my own test stubs and mocks, saving me time, which is good but not the key win I was looking for.

For me there is another way to save much more time, and that is to reduce my ‘build, deploy, use’ cycle. In the land of SharePoint this is a significant time saving, or at least it has been for us. It has meant that I can replace the build, create WSP, deploy WSP, let SharePoint/IIS restart and then view a web part cycle with a build and view in an ASP.NET page that uses Typemock to fake out all the SharePoint calls. This is what Gil has termed Isolation Driven Development (IDD). Now, isn’t a three letter _DD name going a bit far? I am not even sure there is enough in it for me to write a book!

That said, this is a solid technique which can be applied to any complex environment where developers or testers need a means to mock out significant, costly, or just slow components to ease their daily work process, often enabling some manual testing process, thus making them more productive. If you read the TPS (Toyota Production System) books, they mention a lot that workers should optimise their work space to reduce the wasted time they spend moving between machines or roles; this is just such a move.

So if you want to use the technique for SharePoint have a look at my post; I hope it will save you time whether on SP2007 or 2010, or maybe you can apply the same technique to other technologies.

Today’s DDD South West

Thanks to everyone who turned up for my session at DDD South West, and to the organisers for putting the event on so well.

As my session was basically a 1 hour demo of the testing tools in VS2010 there are no slides for me to upload, but if you have any questions ping me an email. I would say that for a good overview of the subject have a look at the book ‘Professional Application Lifecycle Management with Visual Studio 2010: with Team Foundation Server 2010’.

Mocking SharePoint for Testing

In my previous post I talked about using Isolator to mock SharePoint to aid the speed of the development process. I find this a productive way of working, but it does not really help in the realm of automated testing. You need a way to programmatically explore a webpart, preferably outside of SharePoint, to check its correctness.

You could use the methods in my previous post and some form of automated web test, but this does mean you need to spin up a web server of some description (IIS, Cassini etc.) and deploy to it. An alternative is to look at the Typemock add-in Ivonna. This creates a fake web server to load your page and gives you tools to explore it.

I will describe how to use this technique using the same example as my previous post.

Previously I had placed all the code to fake out SharePoint in the Page_Load event of the test harness page. As I am now trying to write a unit/integration test, I think it better to move this into the test itself, so I would delete the code I placed in the Page_Load event other than any property settings on the actual webpart. I would then refactor the lines creating the fake URL context and fake SPSite into some helper methods and call them from my new test. I would then load the page in Ivonna and check its values.
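
The helper methods themselves are not shown in the test below. As a sketch, CreateFakeSPSite might look something like the following with Isolator; the exact faking needed depends on what the webpart touches, so treat this as an assumption based on the approach in my previous post rather than the definitive helper:

    // Sketch only: fake the next SPSite the webpart creates and give it
    // the URL the test below expects to find in the label.
    // Needs references to TypeMock.ArrangeActAssert and Microsoft.SharePoint.
    public static void CreateFakeSPSite()
    {
        SPSite fakeSite = Isolate.Fake.Instance<SPSite>(Members.ReturnRecursiveFakes);
        Isolate.Swap.NextInstance<SPSite>().With(fakeSite);
        Isolate.WhenCalled(() => fakeSite.Url).WillReturn("http://mockedsite.com");
    }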

I have tried to show this below, using a couple of techniques to show how to get to components in the page.

    [TestMethod, Isolated]
    public void LoadWebPage_SpSimpleWebPart_3EntriesInList()
    {
        // Arrange
        TestHelpers.CreateFakeSPSite();
        TestHelpers.CreateFakeURL();

        TestSession session = new TestSession(); // start each test with this
        WebRequest request = new WebRequest("/SpSimpleTest.aspx"); // create a WebRequest object

        // Act
        WebResponse response = session.ProcessRequest(request); // process the request

        // Assert
        // check the page loaded
        Assert.IsNotNull(response.Page);

        // use the Ivonna extension method to find the web part by its ID
        var wp = response.Page.FindRecursive<DemoWebParts.SpSimpleWebPart>("wp1");
        Assert.IsNotNull(wp);

        // the inner controls cannot be found by ID, so we have to dig into
        // the structure knowing the format: webpart/table/row/cell/control
        var label = ((TableRow)wp.Controls[0].Controls[0]).Cells[1].Controls[0] as Label;
        Assert.IsNotNull(label);
        Assert.AreEqual("http://mockedsite.com", label.Text);

        var list = ((TableRow)wp.Controls[0].Controls[1]).Cells[1].Controls[0] as DropDownList;
        Assert.IsNotNull(list);
        Assert.AreEqual(3, list.Items.Count);
    }

Now I have to say I had high hopes for this technique, but it has not been as useful as I had hoped. I suspect that this is due to the rapidly changing UI designs of clients’ webparts making these tests too brittle. We have found the ‘mark 1 eyeball’ more appropriate in many cases, as is so often true for UI testing.

However, I do see this as being a great option for smoke testing in long running projects with fairly stable designs.