But it works on my PC!

The random thoughts of Richard Fennell on technology and software development

More fun with creating TFS 2012 SC-VMM environments

Whilst setting up a new SC-VMM based lab environment I managed to find some new ways to fail, above and beyond the ones I have found before.

We needed to build a new environment for testing a CRM application; it needed its own DC, an IIS server and a CRM server. The aim was to have this as a network isolated environment, but first you have to build the various VMs.

So we did the following:

  • On the Hyper-V hosts managed by our SC-VMM server, created three new VMs connected to our corporate LAN
  • Installed the OS on the three VMs
  • Made one of them a DC for dev.local
  • Joined the others to the DC’s domain dev.local (they are not joined to our corporate domain)
  • On the IIS box, added the web server role
  • On the CRM box, installed Microsoft CRM

So we now have a three box domain that does what we want, but it is not network isolated. We could have used the features of SC-VMM to push these VMs into the library and hence import them into the Lab Management library. However we chose to make sure we could connect to them first as an environment.

So first I tried to create a standard environment, not using SC-VMM. I had to create a local hosts file on the PC running MTM, but once this was done I could verify the environment, so all was OK. I did not actually create it.

Next I tried to create a SC-VMM based environment, and this is where I hit a problem. I was basically trying to do something I have done before with all our pre-Lab Management test VMs: wrap existing VMs in an environment. When I tried to do this the verification failed, saying I could not connect to any of the VMs. First we made sure file sharing was enabled, firewalls were not blocking etc. All to no effect.

To cut a long story short I had a number of issues, mostly down to the reuse of names:

  • The SC-VMM VM names for the VMs (e.g. LabDC) did not match the actual host names (e.g. DevDC). I had to rename the VM in SC-VMM to match the name of the host in the operating system (I am still unsure if this is a red herring and not really that important, but I think it is good practice anyway)
  • We had to have a hosts file on the MTM box with the fully qualified names for the three boxes (not just the server names). Note that this hosts entry (or a DNS entry if you prefer) is only needed until the environment is built
  • 192.168.200.102 devcrm.dev.local
    192.168.200.103 deviis.dev.local
    192.168.200.104 devdc.dev.local

  • The name DevDC had been used on another VM that was running on one of our developers’ Windows 8 Hyper-V setups. This was causing problems when MTM tried to resolve the machine name via SMB (NetBIOS; IP resolution was fine). Switching off this other VM fixed it; we only spotted it by using Wireshark on the PC running MTM (note you have to run the Wireshark installer in Windows 7 compatibility mode to get it to work on Windows 8)
  • When entering the login details for the development domain when creating the new environment in MTM, the user ID had to be entered as administrator@dev.local and not dev\administrator

Once this was all done I could verify my environment and create it. The TFS agent was installed, but did not connect to the test controller. This is exactly as expected, as detailed in my previous post.

I now have a few choices:

  • If I don’t want to network isolate it I can install a Test Controller in the domain
  • I can save each of the three VMs into the SC-VMM library via MTM and create an isolated environment.

So I hope this helps you avoid some of the problems I have seen. I just wish that the MTM environment creation step gave out a better log file so I don’t have to second-guess it or use Wireshark.

Getting a SQL LocalDB to create an ASPNETDB database without aspnet_regsql

I started working today on a solution I had not worked on for a while. It makes use of an ASP.NET web application as a site to host SharePoint webparts (using Typemock to mock out any troublesome calls). The problem I had was that when I opened this VS2010 solution in VS2012 I could not run up this test web site. As the test web pages have WebPartManager controls, the site needs an ASPNETDB in the App_Data folder to persist the settings data. This is usually auto-created when SQL Express is installed; the problem is that with VS2012 you get the newer LocalDB, and I was trying to avoid installing SQL Express.

So the first step was to modify the web.config to point to the right place, by adding:

<connectionStrings>
  <clear/>
  <add name="LocalSQLServer"
       connectionString="Data Source=(LocalDB)\projects;Integrated Security=true;AttachDbFileName=|DataDirectory|ASPNETDB.mdf"
       providerName="System.Data.SqlClient"/>
</connectionStrings>
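
One thing to check here is the LocalDB instance name in the Data Source: mine was called projects, but on many machines the default VS2012 instance is (LocalDB)\v11.0. You can list the instances actually present on a machine with

sqllocaldb info

and adjust the connection string to match.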

I then loaded the web site, but got the error:

An error occurred during the execution of the SQL file 'InstallMembership.sql'. The SQL error number is -2 and the SqlException message is: Timeout expired.  The timeout period elapsed prior to completion of the operation or the server is not responding.

After a retry I saw:

An error occurred during the execution of the SQL file 'InstallCommon.sql'. The SQL error number is 5170 and the SqlException message is: Cannot create file 'C:\PROJECTS\SABS\SOURCE\SABS\SABSWEBSERVICETESTHARNESS\APP_DATA\ASPNETDB_TMP.MDF' because it already exists. Change the file path or the file name, and retry the operation.
CREATE DATABASE failed. Some file names listed could not be created. Check related errors.
Creating the ASPNETDB_5689a053209d438db3622d593ea632fb database...

So I decided to try aspnet_regsql.exe in wizard mode from the .NET 4 framework folder to populate a pre-created DB; this gave the same timeout errors as seen when it was run by the web process.

So finally I tried the following process:

  1. Created a new empty DB in the App_Data folder, attached to my LocalDB instance in SQL Server Object Explorer
  2. From the .NET framework folder, loaded and then ran the following SQL scripts, in the order they were listed in the folder (a sqlcmd sketch follows this list):

     InstallCommon.sql
     InstallMembership.sql
     InstallPersistSqlState.sql
     InstallPersonalization.sql
     InstallProfile.SQL
     InstallRoles.sql

  3. Made sure my test harness targeted .NET 4
  4. My test harness then loaded
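
For reference, a minimal sketch of running those same scripts from the command line with sqlcmd, rather than loading them into SQL Server Object Explorer. This assumes the LocalDB instance is called projects and the empty database was created with the name ASPNETDB, so adjust both to match your setup:

cd %windir%\Microsoft.NET\Framework\v4.0.30319
sqlcmd -S "(LocalDB)\projects" -d ASPNETDB -i InstallCommon.sql
sqlcmd -S "(LocalDB)\projects" -d ASPNETDB -i InstallMembership.sql
rem ...and so on for the remaining scripts, in the order listed above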

Not a great solution, but it got me working, especially as I could find little written about ASPNETDB and LocalDB.

TFS Test Agent cannot connect to Test Controller – Part 2

I posted last week on the problems I had had getting the test agents and controller in a TFS2012 Standard environment talking to each other, and a workaround. Well, after a good few emails with various people at Microsoft and other consultants at Black Marble, I have a whole range of workaround solutions.

First a reminder of my architecture, and note that this could be part of the problem: it is all running on a single Hyper-V host. Remember, this is a demo rig to show the features of Standard Environments. I think it is unlikely that this problem will be seen in a more ‘realistic’ environment, i.e. one running on multiple boxes.

 


 

The problem is that the test agent running on the Server2008 VM should ask the test controller (running on the VSTFS server) to call it back on either its 169.254.x.x address or an address obtained via DHCP from the external virtual switch. However, it is actually requesting a callback on 127.0.0.1, as can be seen in the error log:

Unable to connect to the controller on 'vstfs:6901'. The agent can connect to the controller but the controller cannot connect to the agent because of following reason: No connection could be made because the target machine actively refused it 127.0.0.1:6910. Make sure that the firewall on the test agent machine is not blocking the connection.

The root cause

It turns out the root cause of this problem was that I had edited the c:\windows\system32\drivers\etc\hosts file on the test server VM to add an entry to allow a URL used in CodedUI tests to be resolved to localhost:

127.0.0.1   www.mytestsite.com

Solution 1 – Edit the test agent config to bind to a specific address

The first solution is the one I outlined in my previous post: tell the test agent to bind to a specific IP address. Edit

C:\Program Files (x86)\Microsoft Visual Studio 11.0\Common7\IDE\QTAgentService.exe.config

and add a BindTo line with the correct address for the controller to call back to the agent:

<appSettings>
  <!-- other bits … -->
  <add key="BindTo" value="169.254.1.1"/>
</appSettings>

The problem with this solution is that you need to remember to edit a config file; it all seems a bit complex!

Solution 2 – Don’t resolve the test URL to localhost

Change the hosts file entry used by the CodedUI test to resolve to the actual address of the test VM, e.g.

169.254.1.1   www.mytestsite.com

The downside here is that you need to know the test agent’s IP address, which depending on the system in use could change, and will certainly be different on each test VM in an environment. Again it all seems a bit complex and prone to human error.

Solution 3 – Add an actual loopback entry to the hosts file

The simplest workaround, which Robert Hancock at Black Marble came up with, was to add a second entry to the hosts file, explicitly mapping the name localhost:

127.0.0.1   localhost
127.0.0.1   www.mytestsite.com

Once this was done the test agent could connect. I did not have to edit any agent config files, or know the address the agent needed to bind to. By far the best solution.

 

So thanks to all who helped get to the bottom of this surprisingly complex issue.

TFS Test Agent cannot connect to Test Controller gives ‘No connection could be made because the target machine actively refused it 127.0.0.1:6910’

Updated 1st October – See the Part 2 post which provides more workaround solutions

Whilst setting up a TFS 2012 Standard Lab Environment for an upcoming demo I hit a problem. Initially my environment had worked fine; I could deploy to my server VM in the environment without error. However, after a reboot of the TFS server (which has the build and test controllers on it) and the single server VM in the environment, the test agent on the VM could not connect to the test controller on the TFS server. The VM’s event log showed:

Unable to connect to the controller on 'tfsserver:6901'. The agent can connect to the controller but the controller cannot connect to the agent because of following reason: No connection could be made because the target machine actively refused it 127.0.0.1:6910. Make sure that the firewall on the test agent machine is not blocking the connection.

The key here was that the test controller was being told to call back to the test agent on 127.0.0.1, which is obviously wrong, being the loopback address.

So it seems the test agent was telling the test controller the wrong IP address. I am not sure why it was resolving this address, but I did find a workaround: on the test VM I edited

‘C:\Program Files (x86)\Microsoft Visual Studio 11.0\Common7\IDE\QTAgentService.exe.config’

and added the BindTo line with the correct address for the controller to call back to the agent:

<appSettings>
  <!-- other bits … -->
  <add key="BindTo" value="10.0.0.1"/>
</appSettings>

Once I restarted the test agent it connected to the controller and I could run my builds.

For more details on this config file see http://msdn.microsoft.com/en-us/library/ff934571.aspx

Experiences upgrading an MVC 1 application to MVC 3

I have recently had to do some work on an MVC 1 application and thought it sensible to bring it up to MVC 3; you don’t want to be left behind if you can avoid it. This was a simple data capture application, written in MVC 1 in Visual Studio 2008 and never touched since. A user fills in a form, and the controller then takes the form contents and stores it. The key point to note here is that it was using the Controller.UpdateModel<TModel>(TModel, IValueProvider) method, so most of the controller actions looked like this:

[AcceptVerbs(HttpVerbs.Post)]
public ActionResult PostDataToApplication(FormCollection form)
{
    NewApplication data = new NewApplication();
    try
    {
        UpdateModel(data, form.ToValueProvider());

        // process data object
        bool success = DoSomething(data);

        if (success)
        {
            return RedirectToAction("FormSuccess", "Home");
        }
        else
        {
            return RedirectToAction("FormUnsuccessful", "Home");
        }
    }
    catch (InvalidOperationException)
    {
        return View();
    }
}

This worked fine with MVC 1 on VS2008, but I wanted to move it onto MVC 3 on VS2012 if possible, with as few changes as possible; this is a small web site that changes very rarely, so it is not worth a major investment of time to keep it updated to current frameworks. So these were the steps I took and the gotchas I found.

The upgrade itself

First I opened the VS2008 solution in VS2010 and it automatically upgraded to MVC2, a good start!

I then used the MVC2 to MVC3 tool on CodePlex. This initially failed, and it took me a while to spot that you can only use this tool if your MVC2 application targets .NET 4. Once I changed the MVC2 application to target .NET 4 as opposed to 3.5, the upgrade tool worked fine.

I could now load my MVC3 application in either VS2010 or VS2012.

Using the Web Site

At this point I thought I had better test it, and instantly saw a problem. Pages that did not submit data worked fine, but submitting a data capture form failed with null reference exceptions. It turns out the problem was a change in the default behaviour of the models between MVC releases: in MVC 1 empty fields on the form were passed as empty strings; with MVC 2(?) and later they are passed as nulls.

Luckily the fix was simple. Previously my model had been:

public class SomeModel : IDataErrorInfo
{
    public string AgentNumber { get; set; }
    …..
}

I needed to add a [DisplayFormat(ConvertEmptyStringToNull = false)] attribute (from the System.ComponentModel.DataAnnotations namespace) on each string property to get back to the previous behaviour my controller expected:

public class SomeModel : IDataErrorInfo
{
    [DisplayFormat(ConvertEmptyStringToNull = false)]
    public string AgentNumber { get; set; }
    …..
}

Now my web site ran as I had expected.

Unit Tests

I had previously noticed my unit tests were failing. I had expected the change to the model to fix this too, but it did not. On the web there are a good many posts about how unit testing of MVC 2 and later fails unless you mock out the ControllerContext. You see errors in the form:

Test method Website.Tests.Controllers.HomeControllerTest.DetailsValidation_AlphaInValidAccountID_ErrorMessage threw exception:
System.ArgumentNullException: Value cannot be null.
Parameter name: controllerContext
Result StackTrace:   
at System.Web.Mvc.ModelValidator..ctor(ModelMetadata metadata, ControllerContext controllerContext)
   at System.Web.Mvc.ModelValidator.CompositeModelValidator..ctor(ModelMetadata metadata, ControllerContext controllerContext)
   at System.Web.Mvc.ModelValidator.GetModelValidator(ModelMetadata metadata, ControllerContext context)
   at System.Web.Mvc.DefaultModelBinder.OnModelUpdated(ControllerContext controllerContext, ModelBindingContext bindingContext)
   at System.Web.Mvc.DefaultModelBinder.BindComplexElementalModel(ControllerContext controllerContext, ModelBindingContext bindingContext, Object model)
   at System.Web.Mvc.DefaultModelBinder.BindComplexModel(ControllerContext controllerContext, ModelBindingContext bindingContext)
   at System.Web.Mvc.DefaultModelBinder.BindModel(ControllerContext controllerContext, ModelBindingContext bindingContext)
   at System.Web.Mvc.Controller.TryUpdateModel[TModel](TModel model, String prefix, String[] includeProperties, String[] excludeProperties, IValueProvider valueProvider)
   at System.Web.Mvc.Controller.UpdateModel[TModel](TModel model, String prefix, String[] includeProperties, String[] excludeProperties, IValueProvider valueProvider)
   at System.Web.Mvc.Controller.UpdateModel[TModel](TModel model, IValueProvider valueProvider)
   at Website.Controllers.HomeController.Details(FormCollection form)

The fix is to not just new up a controller in your unit tests like this:

HomeController controller = new HomeController();

but to have a helper method to mock it all out (one is created for you in MVC-associated test projects, so it is easy):

private static HomeController GetHomeController()
{
    IFormsAuthentication formsAuth = new MockFormsAuthenticationService();
    MembershipProvider membershipProvider = new MockMembershipProvider();
    RoleProvider roleProvider = new MockRoleProvider();

    AccountMembershipService membershipService = new AccountMembershipService(membershipProvider, roleProvider);
    HomeController controller = new HomeController(formsAuth, membershipService);
    MockHttpContext mockHttpContext = new MockHttpContext();

    ControllerContext controllerContext = new ControllerContext(mockHttpContext, new RouteData(), controller);
    controller.ControllerContext = controllerContext;
    return controller;
}

However, this problem with a missing context was not my problem; I was already doing this. The error my test runner was showing did not mention the context, but rather binding errors:

Test method CollectorWebsite.Tests.Controllers.HomeControllerTest.CardValidation_AlphaInValidAccountID_ErrorMessage threw exception:
System.NullReferenceException: Object reference not set to an instance of an object.
Result StackTrace:   
at CollectorWebsite.Models.CardRecovery.get_Item(String columnName) 
   at System.Web.Mvc.DataErrorInfoModelValidatorProvider.DataErrorInfoPropertyModelValidator.Validate(Object container)
   at System.Web.Mvc.ModelValidator.CompositeModelValidator.<Validate>d__5.MoveNext()
   at System.Web.Mvc.DefaultModelBinder.OnModelUpdated(ControllerContext controllerContext, ModelBindingContext bindingContext)
   at System.Web.Mvc.DefaultModelBinder.BindComplexElementalModel(ControllerContext controllerContext, ModelBindingContext bindingContext, Object model)
   at System.Web.Mvc.DefaultModelBinder.BindComplexModel(ControllerContext controllerContext, ModelBindingContext bindingContext)
   at System.Web.Mvc.DefaultModelBinder.BindModel(ControllerContext controllerContext, ModelBindingContext bindingContext)
   at System.Web.Mvc.Controller.TryUpdateModel[TModel](TModel model, String prefix, String[] includeProperties, String[] excludeProperties, IValueProvider valueProvider)
   at System.Web.Mvc.Controller.UpdateModel[TModel](TModel model, String prefix, String[] includeProperties, String[] excludeProperties, IValueProvider valueProvider)
   at System.Web.Mvc.Controller.UpdateModel[TModel](TModel model, IValueProvider valueProvider)
   at CollectorWebsite.Controllers.HomeController.CardRecovery(FormCollection form)

I got stuck here for a good while………

Then it occurred to me: if the behaviour had changed such that on the web site I saw nulls where I expected empty strings, I bet the same was happening in the unit tests. The binder was trying to iterate through what was a collection of strings and is now at best a collection of nulls, or just an empty collection. The bind failed as it could not match the form to the data.

The fix was to make sure that in my unit tests I passed in a FormCollection that had all the expected fields (with suitable empty values, e.g. string.Empty). This meant my unit tests changed from

[TestMethod, Isolated]
public void ApplicationValidation_CurrentPostcodeNumbersOnly_ErrorMessage()
{
    // Arrange
    HomeController controller = GetHomeController();

    FormCollection form = new FormCollection();
    form.Add("CurrentPostcode", "12345");

    // Act
    ViewResult result = controller.Application(form) as ViewResult;

    // Assert
    Assert.IsNotNull(result);
    Assert.AreEqual("Please provide a valid postcode", result.ViewData.ModelState["CurrentPostcode"].Errors[0].ErrorMessage);
}

To

[TestMethod, Isolated]
public void ApplicationValidation_CurrentPostcodeNumbersOnly_ErrorMessage()
{
    // Arrange
    HomeController controller = GetHomeController();

    FormCollection form = GetEmptyApplicationFormCollection();
    form.Set("CurrentPostcode", "12345");

    // Act
    ViewResult result = controller.Application(form) as ViewResult;

    // Assert
    Assert.IsNotNull(result);
    Assert.AreEqual("Please provide a valid postcode", result.ViewData.ModelState["CurrentPostcode"].Errors[0].ErrorMessage);
}

where the GetEmptyApplicationFormCollection() helper method just creates a FormCollection with all the form’s fields.
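
For completeness, a minimal sketch of what that helper looks like; the field names here are purely illustrative, yours need to match whatever fields the view actually posts back:

private static FormCollection GetEmptyApplicationFormCollection()
{
    // One entry per field the form posts back, all set to empty strings
    // so the model binder sees the shape the controller expects.
    // Field names are illustrative; use the ones from your own view.
    FormCollection form = new FormCollection();
    form.Add("CurrentPostcode", string.Empty);
    form.Add("AgentNumber", string.Empty);
    // ...and so on for the rest of the form's fields
    return form;
}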

Once this was done my unit test passed.

Summary

So I now have an MVC3 application that works and passes its unit tests. You could argue I should do more work so it does not need these special fixes, but it meets my needs for now.

Type ‘InArgument(mtbwa:BuildSettings)’ of property ‘BuildSettings’ errors in TFS 2012 RTM builds

I posted a while ago that you saw errors when trying to edit TFS 2012 RC build process templates in VS 2012 RC if the Visual Studio class library project you were using to manage the process template editing was targeting .NET 4.5; it needed to target 4.0. Well, with Visual Studio 2012 RTM this is no longer the case; in fact it is the other way around.

I have recently upgraded our TFS 2012 from RC to RTM, and today I came to edit one of our build process templates (using the standard method to edit a process template with custom activities). I got the following error when I tried to open the XAML process template for editing:

System.Xaml.XamlException: 'The type ‘InArgument(mtbwa:BuildSettings)’ of property ‘BuildSettings’ could not be resolved.' Line number '3' and line position '38'.

 


At first I assumed it was my custom activities, so I tried editing the DefaultTemplate.11.1.xaml in the same manner, but got the same problem.

Strangely, I found that if I had no solution open in Visual Studio then I could just double-click on the DefaultTemplate.11.1.xaml file in Source Control Explorer and it opened without error. However, if I had a solution open in the same instance of VS2012 that contained a class library project linking to the same XAML file, I got the error. Unloading the project within the solution allowed me to open the file via Source Control Explorer; reloading the project stopped it loading again.

So it all pointed to something about the containing class library project stopping referenced assemblies from loading. On checking the project properties I saw that it was targeting .NET 4.0 (as required for the RC); as soon as I changed this to .NET 4.5 it was able to load all the required Team Foundation assemblies, and I was able to edit both the default template and my custom build process template.
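
If you prefer to make the change directly in the .csproj rather than through the project properties UI, it is just the target framework element (shown here as a fragment; everything else in the project file is unchanged):

<PropertyGroup>
  <!-- was v4.0 for the RC tooling; needs to be v4.5 for RTM -->
  <TargetFrameworkVersion>v4.5</TargetFrameworkVersion>
</PropertyGroup>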

Experiences upgrading our TFS2012RC to RTM

We have just completed the upgrade of our TFS2012 server from RC to RTM. All went smoothly; just a few comments worth mentioning:

  • The install of TFS2012 (after the removal of the RC) required three reboots, two for C++ components and one for .NET 4.5, so if you see reboots don’t worry too much.
  • When running the upgrade wizard we got a verify warning over port 443 already being in use (we had manually configured, via IIS Manager, for our server to use 8080 and 443). We ignored this warning. However, after the upgrade wizard had completed with no errors, we found that the new web server could not start. It turns out it had been left bound as HTTP to port 443, so it was very confused. We just deleted this binding and re-added HTTP on 8080 and HTTPS on 443 with our wildcard certificate, and it was fine. So in hindsight we should have heeded the warning and removed our custom bindings.

So now off to the long job of upgrading the build box, the test controller and the rest.

Moving podcast subscriptions with Zune

If, like me, you listen to many podcasts, then swapping the PC your Phone7 syncs to for collecting the podcasts is a real pain. The problem is that, as far as I can see, Zune has no podcast subscription export/import, so you are left with a lot of copy-typing to re-enter them.

Whilst rebuilding my PC with Windows 8 today I have at least found a workaround:

  1. On your old PC open your ‘c:\users\[user]\My Podcasts’ folder (shown in Windows Explorer as the ‘Podcast’ folder).
  2. You will see a folder for each podcast you are subscribed to.
  3. Copy the whole folder to the same location on your new PC (I did this via a USB drive, as I was reformatting the disk on the same PC).
  4. Install Zune on the new PC.
  5. Open Zune and look in Collection > Podcasts; you should see all your podcasts – but you are not subscribed yet.
  6. In Zune, highlight and select all the podcasts.
  7. Right click and you should see a Subscribe option; select it.
  8. Zune now sorts itself out, re-subscribing and checking for new programmes.
  9. It gets a bit confused over what you have listened to, so it might pull some programmes down again. Also you might want to alter the subscription settings for specific podcasts, as they will default back to just 3 programmes.
  10. When you are happy with your settings, just drag the podcasts onto your newly resync’d mobile device to finish the job.
  11. You might need to look at the podcasts already on the device, as it seems Zune does not remove ones previously put there (again, it seems unsure of what you have listened to).

Not perfect, but better than having to re-enter a load of site URLs.

TF900546 error on a TFS 2012 build

Whilst moving over to our new TFS 2012 installation I got the following error when a build tried to run tests:

TF900546: An unexpected error occurred while running the RunTests activity: 'Unable to load one or more of the requested types. Retrieve the LoaderExceptions property for more information.'.

This was a new one on me, and there was nothing of much use on the web other than a basic MSDN page.

It turns out the immediate fix is to just restart the build controller. Initially I did this after switching to the default build process template and setting it to NOT load any custom activities, but it seems a simple restart would have been enough, as once I re-enabled all the custom activities it still worked.

As to the root cause, I have no idea; one to keep an eye on, especially as I am currently on the RC. Let’s see what the RTM build does.

Using an internal Nuget server to manage the Typemock assembly references

In my last post I discussed the process I needed to go through to get Typemock Isolator running under TFS 2012. In this process I used the Auto Deploy feature of Isolator. However, this raised the question of how to manage the references within projects. You cannot just assume the Typemock assemblies are in the GAC; they are not on a build box using auto deploy. You could get all projects to reference the auto deployment location in source control. However, if you use build process templates across projects, you may not want production code referencing build tools in the build process area directly.

For most issues of this nature we now use Nuget. At Black Marble we make use of the public Nuget repository for tools such as XUnit, SpecFlow etc., but we also have an internal Nuget repository for our own cross-project code libraries. These include licensing modules, utilities, data loggers etc.

It struck me after writing the last post that the best way to manage my Typemock references was with a Nuget package; obviously not a public one, as that would be for Typemock to produce. So I created one to place on our internal Nuget server that just contained the two DLLs I needed to reference (I could include more, but we usually only need the core and Arrange Act Assert assemblies).

[Update 6th Aug PM] – After playing with this today, it seems I need the following in my Nuget package:

Lib
    Net20
         Configuration.dll
         Typemock.ArrangeActAssert.dll
         TypeMock.dll

If you miss out the Configuration.dll it all works locally on a developer’s PC, but you get a ‘cannot load assembly’ error when trying to run a TFS build with Typemock auto deployment. I can’t see an obvious reason why, but adding the assembly to the package is a quick fix.
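
For anyone building a similar internal package, here is a minimal sketch of the .nuspec; the id, version and author values are illustrative, and the file list matches the layout above:

<?xml version="1.0"?>
<package>
  <metadata>
    <!-- id and version are illustrative; version the package to match
         the Isolator release you auto deploy on the build box -->
    <id>Typemock.Isolator.References</id>
    <version>7.0.0</version>
    <authors>Black Marble</authors>
    <description>Internal reference-only package for the Typemock Isolator assemblies.</description>
  </metadata>
  <files>
    <file src="Lib\Net20\*.dll" target="lib\net20" />
  </files>
</package>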

 


IT IS IMPORTANT TO NOTE that using a Nuget package here in no way alters the Typemock licensing. Your developers still each need a license and need to install Typemock Isolator to be able to run the tests, and your build box needs to use auto deployment. All using Nuget means is that you are now managing references for Typemock in the same way as for any other Nuget-managed set of assemblies. You are internally consistent, which I like.

So in theory, as new versions of Typemock are released I can update my internal Nuget package, allowing projects to use the version they require. It will be interesting to see how well this works in practice.