But it works on my PC!

The random thoughts of Richard Fennell on technology and software development

Mixed mode assembly is built against version 'v2.0.50727' error using .NET 4 Development Web Server

If your application has a dependency on an assembly built against .NET 2, you will see the error below when you try to run the application after building it against .NET 4.

Mixed mode assembly is built against version 'v2.0.50727' of the runtime and cannot be loaded in the 4.0 runtime without additional configuration information.

This can be important in VS2010 testing, as test projects must be built as .NET 4; there is no option to build with an older runtime. I suffered this problem when trying to do some development where I hosted a webpart that makes calls into SharePoint (faked out with Typemock Isolator) inside an ASP.NET 4.0 test harness.

The answer to this problem is well documented: you need to add the useLegacyV2RuntimeActivationPolicy attribute to a .CONFIG file, but which one? It is not the web.config file as you might suspect, but the C:\Program Files (x86)\Common Files\microsoft shared\DevServer\10.0\WebDev.WebServer40.exe.config file. The revised config file should read as follows (the new bits are the <startup> section):

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <startup useLegacyV2RuntimeActivationPolicy="true">
    <supportedRuntime version="v4.0" />
  </startup>

  <runtime>
    <generatePublisherEvidence enabled="false" />
  </runtime>
</configuration>

Note: Don’t also add <supportedRuntime version="v2.0.50727" />; this causes the web server to crash on start-up.

Post QCon thoughts

Interesting time at QCon yesterday; shame I was only there one day. I do like the events that are not limited to a single vendor or technology. The multi-presenter session I was involved in on Microsoft interoperability seemed to go well; there is talk of repeating it at other events or as a podcast. It is a nice format if you can get the sub-sessions linking nicely, like themed grok talks.

Due to chatting to people (but that is why you go really, isn't it?), I only managed to get to one other session, but it was the one I wanted to see: Roy Osherove’s on using CThru to enable testing of monolithic frameworks such as Silverlight. It made a few things clearer in my mind about using CThru, a tool I have tried to use in the past but without as much success as I had hoped. So I think I will have another go at trying to build a SharePoint workflow testing framework; the problem has rested on the back burner too long. I think I just need to persist longer in digging into the eventing model to see why my workflows under test do not start. Roy’s comment that for this type of problem there is no short cut to avoid an archaeological excavation into the framework under test is, I think, the key here.

Do you use a Symbol Server?

I find one of the most often overlooked new features of TFS 2010 is the Symbol Server. This is a file share where the .PDB symbol files are stored for any given build (generated by the build server; see Jim Lamb’s post on the setup). If you look on the Symbol Server share you will see a directory for each built assembly, with a GUID-named subdirectory containing the PDB files for each unique build.

So what is this Symbol Server used for? Well, you can use the Symbol Server to enable debug features such as IntelliTrace, vital if you are using Lab Manager. In effect this means that when viewing an IntelliTrace log, Visual Studio is able to go to the Symbol Server to get the correct .PDB file for the assemblies being run, even if the source is not available, thus allowing you to step through the code. It can also be used for remote debugging of ASP.NET servers.

A bonus is that you can debug release code, as long as you produced .PDB symbols and placed them on the Symbol Server when you built the release (by altering the advanced build properties shown below).

[Screenshot: the Advanced Build Settings dialog]
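
For those who prefer to edit the project file directly, this is roughly what those dialog settings correspond to in the .csproj (a sketch only; the exact values vary by project and platform):

<!-- Release build that still emits .PDB symbol files (Debug Info: pdb-only) -->
<PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'Release|AnyCPU' ">
  <Optimize>true</Optimize>
  <OutputPath>bin\Release\</OutputPath>
  <DebugType>pdbonly</DebugType>
</PropertyGroup>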

The key thing to remember here is that the client PC that generates the IntelliTrace file does not need access to the .PDB files; only the PC handling the debugging process needs to be able to access the symbols. Perfect for released-code scenarios.

This ability to debug into code that you don’t have the source for extends to debugging into Microsoft .NET Framework code. Microsoft have made a public Symbol Server available for just this purpose (it lives at http://msdl.microsoft.com/download/symbols). To use it you have to enable it via the Tools > Options > Debugging > Symbols dialog.

[Screenshot: the Tools > Options > Debugging > Symbols dialog]

All this should make debugging that hard-to-track problem just that bit easier.

MTLM becomes MTM

You may have noticed that Microsoft have had another burst of renaming. The tester’s tool in VS2010 started with the codename of Camaro during the CTP phase; this became Microsoft Test & Lab Manager (MTLM) in Beta 1 and 2, and now in the RC it is called Microsoft Test Manager (MTM).

Other than me constantly referring to things by the wrong name, the main effect of this is to make searching on the Internet a bit awkward; you have to try all three names to get good coverage. In my small corner of the Internet, I will try to help by updating my existing MTLM tag to MTM and updating the description appropriately.

Is Pex and Moles the answer to SharePoint testing?

I have got round to watching Peli de Halleux’s presentation on testing SharePoint with Moles from the SharePoint Connections 2010 event in Amsterdam; very interesting. This brings a whole new set of tools to the testing of SharePoint. I think it is best to view the subject of this presentation in two parts, Pex and Moles, even though they are from the same stable, Moles having been produced to enable Pex. But rather than me explaining how it all works, just watch the video.

So to my thoughts. The easier part to consider is Pex. If you can express your unit tests in a parameterised manner then this is a great tool for you. The example that Peli gives of an event handler that parses a string is a good one; we all have loads of places where this type of testing is needed, especially in SharePoint. The problem here, as he points out, is that you need to use some form of mocking framework to allow the easy execution of these tests for both developers and automated build servers. I would usually use Typemock Isolator to provide this mocking; the problem is that Pex and Isolator at this time can’t run together. The Pex Explorer does not start the Typemock mocking interceptor, and thus far I can’t find a way to get round the problem.
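
To make this concrete, here is a minimal sketch of a parameterised unit test in the Pex style; the EventParser class and its ParseTitle method are hypothetical stand-ins for the string-parsing event handler Peli demonstrates:

using Microsoft.Pex.Framework;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
[PexClass(typeof(EventParser))] // hypothetical class under test
public partial class EventParserTests
{
    // Pex explores this method, generating concrete values for 'input'
    // that cover the branches of the parsing code, and saves the
    // interesting ones as ordinary unit tests.
    [PexMethod]
    public void ParseTitle(string input)
    {
        PexAssume.IsNotNull(input);                   // constrain the inputs Pex may try
        string title = EventParser.ParseTitle(input); // hypothetical method under test
        PexAssert.IsTrue(title != null);              // must hold for all generated inputs
    }
}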

So enters Moles. This is Microsoft Research’s stubbing framework that can ‘detour any .NET method to user-defined delegates, e.g., replace any call to the SharePoint Object Model by a user-defined delegate’. Now, I find the Moles syntax a bit strange. I suspect it is down to my past experience, but I still find the Typemock Isolator AAA syntax easier to read than Moles’. However, there are some nice wrapper classes provided to make it easier to use the Moles framework with SharePoint.
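
For comparison, a hedged sketch of the Moles style, assuming a Microsoft.SharePoint.moles file has been added to the test project so that the MSPWeb mole type is generated (the title value is invented for illustration):

using Microsoft.SharePoint;
using Microsoft.SharePoint.Moles; // generated moles for the SharePoint assembly
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class WebPartTests
{
    [TestMethod]
    [HostType("Moles")] // tests using moles must run under the Moles host
    public void TitleComesFromTheDetouredWeb()
    {
        // Detour the SPWeb.Title getter to a delegate, so no real SharePoint
        // farm is needed; roughly the equivalent of Isolator's
        // Isolate.WhenCalled(() => web.Title).WillReturn("Faked web").
        MSPWeb moleWeb = new MSPWeb();
        moleWeb.TitleGet = () => "Faked web";

        SPWeb web = moleWeb; // a mole converts implicitly to the moled type
        Assert.AreEqual("Faked web", web.Title);
    }
}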

So where does this leave us? At this time if you want to use Pex (and I certainly would like to) you have to use Moles (if you need stubbing). But you also have to remember that Pex and Moles are research projects. They are available for commercial evaluation, but at this time there seem to be no plans to productise them or roll them into Visual Studio; this means in effect no support. I don’t see either of these points as being a major barrier, as long as you make the choice to accept them knowingly.

However, for ultimate flexibility it would be really nice to see Typemock Isolator become interoperable with Pex, thus allowing me to use the new techniques of Pex against legacy tests already written using Isolator.

Errors Faking SharePoint with Typemock due to assembly versions

I was doing some work today where I needed to fake out an SPWeb object. No problem, you think: I am using Typemock Isolator, so I just use the line

var fakeWeb = Isolate.Fake.Instance<SPWeb>();

But I got the error

Microsoft.SharePoint.SPWeb.SPWebConstructor(SPSite site, String url, Boolean bExactWebUrl, SPRequest request)
Microsoft.SharePoint.SPWeb..ctor(SPSite site, String url)
eo.CreateFakeInstance[T](Members behavior, Constructor constructorFlag, Constructor baseConstructorFlag, Type baseType, Object[] ctorArgs)
eo.Instance[T](Members behavior)
(Points to the SPWeb web line as source of error)
TypeMock.MockManager.a(String A_0, String A_1, Object A_2, Object A_3, Boolean A_4, Object[] A_5)
TypeMock.InternalMockManager.getReturn(Object that, String typeName, String methodNAme, Object methodParameters, Boolean isInjected)
(Points to line 0 of my test class)

This seemed strange; I was doing nothing clever, and it was something I have done many times before. It turns out the issue was the version of the Typemock assemblies I was referencing. I had referenced the 5.4 versions; once I repointed to the new 6.0 ones (or I suspect the older 5.3 ones would also work) all was OK.

At last, my creature it lives……..

I have at last worked all the way through setting up my portable end-to-end demo of testing using Microsoft Test and Lab Manager. The last error I had to resolve was the tests not running in the lab environment (though they worked locally on the development PC). My Lab Workflow build was recorded as a partial success, i.e. it built and deployed, but all the tests failed.

I have not found a way to see the detail of why the tests failed in VS2010 Build Explorer. However, if you:

  1. Go into MTLM
  2. Pick Testing Center
  3. Select the Test tab
  4. Pick the Analyze Test Results link
  5. Pick the test run you want to view
  6. The last item in the summary is the error message; as you can see, in my case it was the whole run that failed, not any of the individual tests themselves

[Screenshot: the test run summary showing the error message]

So my error was “Build directory of the test run is not specified or does not exist”. This was caused by the Test Controller (in my case running as Network Service) not being able to see the contents of the drop directory, which is where the test automation assemblies are published as part of the build. Once I gave Network Service read rights to the \\TFS2010\Drops share my tests, and hence my build, ran to completion.

It has been an interesting journey to get this system up and running. MTLM, when you initially look at it, is very daunting; you have to get a lot of ducks in a row and there are many pitfalls on the way. If any part fails then nothing works; it feels like a bit of a house of cards. However, if you work through it step by step I think you will come to see that the underlying architecture of how it hangs together is not as hard to understand as it initially seems. It is complex and has to be done right, but you can at least see why things need to be done. Much of this perceived complexity, for me as a developer, was that I had to set up a number of ITPro products I am just not that familiar with, such as SCVMM and Hyper-V Manager. Maybe the answer is to make your evaluation of this product a joint Dev/ITPro project so you both learn.

I would say that getting the first build going (and hence the underlying infrastructure) seems to be the worst part. I feel that now I have a platform I understand reasonably well, producing different builds will not be too bad. I suspect the next raft of complexity will appear when I need a radically different test VM (or, worse still, a network of VMs) to deploy and test against.

So my recommendation to anyone who is interested in this product is to get your hands dirty; you are not going to understand it by reading or watching videos, you need to build one. So find some hardware, lots of hardware!

ASPNETCOMPILER: error 1003 with TFS2010 Team build

I have been looking at TFS 2010 Lab Manager recently. One problem I had was that, using the sample code from the Lab Manager blog walkthrough, the build of the Calc ASP.NET web site failed on the build server with the error

ASPNETCOMPILER: error 1003 The directory ‘c:\build\1LabWalkthru\Calculator –Build\Calc’ does not exist.

and the build service was right, it didn’t exist; it should have been ‘c:\build\1LabWalkthru\Calculator –Build\Source\Calc’.

This was due to a problem detailed here. The solution file had the wrong path in the Debug.AspNetCompiler.PhysicalPath property: it was set to “..\Calc” when it should have been “.\Calc”. Once this was altered the build could find the files.
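
For reference, the property lives in the website’s ProjectSection block inside the .sln file; a trimmed sketch (other properties omitted):

ProjectSection(WebsiteProperties) = preProject
    Debug.AspNetCompiler.VirtualPath = "/Calc"
    Debug.AspNetCompiler.PhysicalPath = "..\Calc"   <-- change to ".\Calc"
    ...
EndProjectSection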

So you want to demo VS2010 Lab Manager…….

I recently decided to build a demo system for VS2010 Lab Manager. This was for a number of reasons: not least that I just wanted to have a proper play with it, but also that I was hoping to do a session on Microsoft Test and Lab Manager at DDD8 (as it turns out my session did not get voted for; maybe better luck for DDS, you can still vote for that conference’s sessions).

Anyway, if any of you have looked at the Lab Manager side of MTLM you will know that getting it going is no quick task. Firstly, I cannot recommend highly enough the Lab Management team’s blog posts ‘Getting started with Lab Management’ Parts 1, 2, 3 and 4. This type of walkthrough post is a great way to move into a complex new product such as this; it provides the framework to get you going. It doesn’t fix all your problems, but it gives you a map to follow into the main documentation or other blog posts.

The architecture I was trying to build was as below. My hardware was a Shuttle PC, as this was all I could find in the office that could take 8GB of memory, the bare minimum for this setup. Not as convenient as a laptop for demos, but I was not going to bankrupt myself getting an 8GB laptop!

[Diagram: the demo system architecture]

As I wanted my system to be mobile, it needed to be its own domain (demo.com). This was my main problem during the install: MTLM assumes the host server and all the VMs are in the same domain, but that the domain controller (DC) is on some other device on the domain. I installed the DC on the host server; this meant I had to do the following to get it all to work (I should say I did all of these to get my system running; they may not all be essential, but they are all sensible practice so probably worth doing):

  • Run the VMM host as a user other than the default of Local System (this is an option set during the installation). The default Local System user has reduced rights on a domain controller, and so is not able to do all that it needs to. I created a new domain account (demo\VMMserver) and used this as the service account for the VMM.
  • The ‘Getting Started’ blog posts suggest a basic install of TFS; this just installs source control, work item tracking and build services using a SQL Express instance. This is fine, but this mode defaults to using the Network Service account to run the TFS web services. This has the same potential issues as the Local System account on the DC, so I swapped this to use a domain account (demo\TFSservice) via the TFS Administration console.
  • AND THIS IS THE WEIRD ONE, AND I SUSPECT THE MOST IMPORTANT. As I was using the host system as the DNS and DHCP server, the VMs needed to be connected to the physical LAN of the host machine to make use of these services. However, as I did not want them to pick up my office’s DHCP service, I left the physical server’s Ethernet port unplugged. This meant that when I tried to create a new lab environment I got a TF259115 error. Plugging in a standalone Ethernet hub (connected to nothing else) fixed this problem. I am told this is because part of the LAN stack on the physical host is disabled due to the lack of a physical Ethernet link, even though the DNS and DHCP services were unaffected. The other option would have been to run the DNS, DHCP etc. on Hyper-V VM(s).
  • When configuring the virtual lab in the TFS Administration console, the ‘Network Location’ was blank. If you ignore the missing network location, or manually enter one, you get a TF259210 error when you verify the settings in TFS Administration. This is a known problem in SCVMM and was fixed by overriding the discovered network and entering demo.com.

So I now had a working configuration, but when I tried to import my prepared test VM into Lab Center I got an “Import failed, the specified owner is not a valid Active Directory Domain Services account. Specify a valid Active Directory Domain Services account and try again” error. When I checked the SCVMM job logs (in the SCVMM Admin console) I saw this was an Error 813 in the ‘create hardware setup’ step. However, the account the job was running as was a domain user, as was the service account the host was running under (after I had made the changes detailed above), so I was confused.

This turned out to be a ‘user too stupid’ error; I was logged in as the TFS server’s local administrator (tfs2010\administrator), not the domain one (demo\administrator), nor indeed any domain account with VMM administrator rights. Once I logged in on the TFS server (where I was running MTLM) as a domain account, all was OK. Actually I suspect moving to the VMMserver and TFSservice accounts was not vital, but it did no harm.

I could now create my virtual test environment and actually start to create Team Builds that make use of my test lab environment. Also, I think that having worked through these problems I have a better understanding of how all the various parts underpinning MTLM hang together, a vital piece of knowledge if you intend to make real use of these tools.

Oh, and thanks to everyone who helped me when I got stuck.