How to waste time with Lab Management – missing the obvious fact that MTM points to a Team Project

I posted a while ago about common confusion I had seen with Lab Management. I have recently managed to get myself completely confused whilst working with it. It turns out the issue was so obvious I managed to miss it for hours, but as usual I learnt a good deal whilst trying to troubleshoot my stupidity.

I have a Lab Management system linked up to a Team Project Collection (TPC). In this TPC there is a Team Project used for SharePoint development, and on my Lab Management system I have an environment to allow testing of the SharePoint products. I set up a new Team Project in this TPC. In the new project I created a new MVC solution and created an automated build, which all worked fine.

I wanted to deploy this MVC application to a web server in an environment in my Lab Management system. So I created a basic web server environment and deployed it onto a host in my lab. I then tried to create a Lab Management workflow build to deploy to the newly created environment. However, the combo box to select the environment was empty when I ran the wizard.

image

I was confused: I knew I had an environment on the Lab Management system – I had just created it in Microsoft Test Manager (MTM) and could attach to it from MTM or via Remote Desktop.

So I checked again

  • that the TPC was correctly registered with Lab Management
  • the environment was running in MTM
  • that all the configuration for the Team Project looked OK via TFSLABCONFIG.EXE (see the sketch below)
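For reference, the sort of check I mean is sketched below. This is only a rough outline from memory: the exact verbs and parameter names of TFSLABCONFIG.EXE vary between builds, and the collection URL is just a placeholder, so confirm the syntax with the tool's built-in help on your TFS application tier before relying on it.

    REM Run on the TFS application tier, from the TFS Tools folder.
    REM List the available commands first, as the verbs differ between builds.
    TfsLabConfig help

    REM List the Lab Management settings for a Team Project Collection
    REM (the Settings verb, /collection parameter name and URL are assumptions
    REM here - confirm them against the help output).
    TfsLabConfig Settings /collection:http://tfsserver:8080/tfs/DefaultCollection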

All to no avail. Then after far too long I realised I was not looking at the same things in MTM and Visual Studio.

When you are in the Lab Center (green border) pages of MTM there is no obvious indication of the Team Project you are using. I had forgotten this and got it into my head that I was looking at all the environments and libraries for my whole TPC. THIS IS NOT THE CASE. The environments and libraries are Team Project specific, not TPC specific. I had created my new environment in my SharePoint Team Project, not in my new MVC one.

To swap Team Projects I needed to change to the Testing Center (blue border) view and change the Test Plan (top right)

image

to get the dialog to change the Team Project.

image

Once this is done you can go back to the Lab Center and see the environments for the selected Team Project. This was where I needed to create my environment.

At this point you will notice that all the templates and VMs you imported into the other Team Project are not in this one. You have to reimport them from SCVMM and then create the environments in MTM for that Team Project.

So the technical tip here is to remember that the Lab Center in MTM is Team Project specific – NOT Team Project Collection specific – but it does its best not to remind you of this fact, so it is easy to forget.

Common confusion I have seen with Visual Studio 2010 Lab Management

With any new product there can be some confusion over the exact range and scope of features, and this is just as true for VS2010 Lab Management as for any other. In fact, given the number of moving parts (the infrastructure you need in place to get it running), it can be more confusing than average. In this post I will cover the questions I have seen most often.

What does ‘Network Isolation’ really mean?

The biggest confusion I have seen concerns the fact that Lab Management allows you to run a number of copies of a given test environment, each instance of which is ‘network isolated’ from the others. This means that each instance of the environment can have server VMs with the same names without errors being generated. WHAT IT DOES NOT MEAN is that each of these environments is fully isolated from your corporate or test LAN. Think about it: how could this work? I am sad to say there is still no shipment date for Microsoft Magic Pixie Net (MMPN); until it is available we will still need a logical connection to any virtual machine under test, else we cannot control or monitor it.

So what does ‘Network Isolation’ actually mean? Well, it basically means Lab Manager will add a second network card to each VM in your environment (with the exception of domain controllers, which I will come back to). These secondary connections are the way you usually manage the VMs in the environment, so you end up with something like the following

image

Lab Manager creates the 192.168.23.x virtual LAN, to which all the VMs in the environment connect. If you want to change the IP address range it is set in the TFS administration console, but I suspect needing to change it will be rare.

If the PCs in your environment are in a workgroup there is no more to do, but if you have a domain within your environment (i.e. you included a test domain controller VM, as shown above) you also need to tell the Lab Management environment which server is the domain controller. THIS IS VERY IMPORTANT. This is done in the Visual Studio 2010 Test Manager application where you set up the environment.

When all this is done and the environment is deployed, a second LAN card is added to every VM in the environment (with the exception of the domain controller you told it about, if present). These LAN cards are connected to the corporate LAN; an IP address is provided by your corporate LAN DHCP server and each VM is assigned a name of the form LAB[Guid].corpdomain.com (you can alter this domain name to something like LAB[Guid].test.corpdomain.com in the TFS administration console if you wish). This second LAN is a special connection in that the VMs' NetBIOS names are not broadcast over it onto the corporate LAN, thus allowing multiple copies of the ‘network isolated’ environment to be run. Each VM gets a unique name on the corporate LAN, but keeps its original name within the test (192.168.x.x) environment.
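If you want to see this for yourself, log on to one of the test VMs and look at its network adapters. A minimal check (the addresses are just examples of the sort of thing you should expect to see):

    REM Run inside a test VM in a 'network isolated' environment.
    REM You should see two adapters: one with an internal lab address
    REM (e.g. 192.168.23.x) and one with a corporate DHCP address whose
    REM DNS name is of the LAB[Guid].corpdomain.com form described above.
    ipconfig /all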

Other than blocking NetBIOS, this ‘special connection’ is not restricted in any way. So any of the test VMs can use its own connection to the corporate LAN to access corporate (or internet) resources such as patch update servers. The only requirement will be to log in to the corporate domain if authentication is required; remember that on the test environment you will be logged into the test domain or local workgroup.

I mentioned that the test domain controller is not connected to the corporate LAN. This is to make sure corporate users don’t try to authenticate against it by mistake and to stop different copies of the test domain controller trying to sync.

All clear? So ‘network isolated’ does not mean fully isolated, but rather the ability to have multiple copies of the same environment running at the same time, with the magic done behind the scenes by Lab Management. Maybe not the best piece of feature naming in the world!

So how does a tester actually connect to the test VMs from their PC?

Well, obviously they don’t use a magic MMPN connection; there has to be a valid logical connection. There are actually two possible answers here. I suspect the most common will be remote desktop straight to the guest test VMs, via the LAB[Guid].corpdomain.com name. You might be thinking ‘how do I know these names?’ Well, you can get them from the Test Manager application by looking at a VM’s system info in any running environment. Because you can look them up in this way, a tester can either use the Windows RDP application itself or, more probably, just connect to the VMs from within Test Manager, where it will use RDP behind the scenes.
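For the straight remote desktop option the tester only needs that external name; a trivial example, where LAB[Guid].corpdomain.com stands in for whatever name Test Manager reports for the VM:

    REM Connect with the standard Windows Remote Desktop client using
    REM the external (corporate LAN) name of the test VM.
    mstsc /v:LAB[Guid].corpdomain.com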

The other option is to use what is called a host connection. This is when Test Manager connects to the test VMs via the Hyper-V host. For this to work the tester needs suitable Hyper-V rights and the correct tools on their local PC, not just Test Manager. The same could also be achieved using the Hyper-V manager or the SCVMM console. Host mode is the way to connect to a test domain controller that has no direct connection to the corporate LAN.

The choice of connection and tool will depend on what the tester is trying to do. I would expect Test Manager to be the tool of choice in most cases.

Do I need Network Isolation – is there another option?

This all depends on what you want to do; there are good descriptions of the possible architectures in the Lab Management documentation. If you don’t think ‘network isolation’ as described above is right for you, the only other option that provides similar environment separation is not to run the environments ‘network isolated’ but to give each environment a single explicit connection to the corporate LAN via a firewall such as TMG to control that connection.

It goes without saying that this is more complex than using the standard ‘network isolated’ model built into Lab Management, so make sure it is really worth the effort before starting down this route.

What agents do I need to install?

There are a number of agents involved in Lab Management; they handle network isolation management, deployment and testing. The ones you need depend on what you are trying to do. If you want all the features then, not surprisingly, you need them all. If this is what you want to do then use the VMPrep tool, as it makes life easier. If you don’t want them all you can pick and choose (though it might be easier to just install all of them as standard).

If you want to gather test data you need the test agent, and if you want to deploy code you need the lab workflow agent. The less obvious one is that for ‘network isolation’ you need the Lab Agent installed; it is through this agent that the network isolation LAN is configured.
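A quick way to sanity check what is actually installed and running on a test VM is simply to list the started services. The display names vary between agent versions, so treat this as no more than a rough filter:

    REM List running services and pick out anything that looks like one
    REM of the Visual Studio 2010 agents (test, build/lab workflow, lab).
    net start | findstr /i "agent"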

Any other limitations I might have missed?

The most obvious is that many companies will use failover clustering and a SAN to make a resilient Hyper-V cluster. Unfortunately this technology is not currently supported by Lab Management. This is easy to miss, as to my knowledge it is only referred to once in the documentation, in an FAQ section.

The effect of this is that you cannot have shared SAN storage between any Hyper-V hosts or, more importantly, between the VMM library and the Hyper-V hosts. This means that all deployment of environments has to be over the LAN; the faster SAN-to-SAN operations cannot be used as these need clustering.

I suppose the lack of clustering also means you cannot hot migrate environments between Hyper-V hosts, but I don’t see this as much of an issue; these are meant to be lab test environments, not highly resilient live production VMs.

This is a good reason to make sure that you separate your production Hyper-V hosts from your test ones. Make the production servers a failover cluster and the test ones just a host group. Let Lab Manager work out which server in the host group (assuming there is more than one) to place each environment on.


So I hope that helps a bit. I am sure I will find more common questions, and I will post about them as they emerge.

Cannot run Coded UI tests in Lab Management – getting a ‘Build directory of the test run is not specified or does not exist’ error

Interesting ‘user too stupid’ error today whilst adding some Coded UI tests to a Lab Management deployment scenario.

I added the Test Case and associated it with a Coded UI test in Visual Studio

image

I made sure my deployment build had the tests selected

image

I then ran my Lab Deployment build, but got the error

Build directory of the test run is not specified or does not exist.

This normally means the test VM cannot see the share containing the build. I checked that the agent login on the test VM could view the drop location; that was OK, but when I looked, the assembly containing my Coded UI tests was just not there.
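As an aside, a quick way to prove whether a given account can see the drop is to open the share as that account from the test VM. A rough sketch, where the domain, account name and drop path are all placeholders for your own values:

    REM Run on the test VM; prompts for the agent account's password,
    REM then tries to list the drop share under that identity.
    runas /user:CORPDOMAIN\tfs210lab "cmd /k dir \\buildserver\drops"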

Then I remembered……..

The Lab build can take loads of snapshots and do a sub-build of the actual product. This is all very good for production scenarios, but when you are learning about Lab Management or debugging scripts it can be really slow. To speed up the process I had told my Deploy build not to take snapshots and to use the last compile/build drop it could find. I had just forgotten to rebuild my application on the build server after I had added the Coded UI tests. So I rebuilt it and tried again, but I got the same problem.

It turns out that though I was missing the assembly, the error occurred before it was required. The real problem was not which accounts the various agents were running as, but the account the test controller was running as. The key was to check the test run log, which can be accessed from the Test Run results (I seemed to have a blind spot looking for these results).

image

This showed the problem: I had selected the default ‘Network Service’ account for the test controller and had not granted it rights to the drop location.

image

I changed the account to my tfs210lab account as used by the agents and all was OK.

image
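Had I wanted to keep the controller on ‘Network Service’ or some other account instead, the alternative fix would have been to grant that account read access to the drop folder on the machine hosting it. A rough sketch, with the local path and account purely placeholders:

    REM Run on the machine hosting the drop folder; grants read/execute
    REM on the folder and everything beneath it to the controller account.
    icacls D:\Drops /grant CORPDOMAIN\tfs210lab:(OI)(CI)RX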

"Program too big to fit in memory" when installing a TFS 2010 Test Controller

Just spent a while battling a problem whilst installing the TFS 2010 Test Controller. When I launched the setup program off the .ISO I could select the Test Controller installer, but then a command prompt flashed up and exited with no obvious error. If I went into the TestControllers directory on the mounted .ISO and ran the setup from a command prompt I saw the error "program too big to fit in memory".

As the box I was trying to use only had 1GB of memory (below the recommended minimum), I upped it to 2GB and then to 4GB, but still got the same error.

It turns out the problem was a corrupt .ISO; once I had downloaded it again (and dropped my target VM back to 2GB of memory) all was fine.
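One lesson from this: before blaming memory, check the download itself. On reasonably recent versions of Windows you can hash the file and compare it with the value published on the download page (the file name here is just a placeholder):

    REM Compute the SHA1 hash of the downloaded image and compare it
    REM with the checksum published alongside the download.
    certutil -hashfile VS2010.iso SHA1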

Visual Studio 2010 Lab Management release announced

In the VSLive! keynote Microsoft made announcements about Lab Management: it will be RTM’d later this month and, best of all, it will be included as part of the benefits of the Visual Studio 2010 Ultimate with MSDN and Visual Studio Test Professional 2010 with MSDN SKUs. You can read more detail on Brian Keller’s blog.

I think this is a great move on licensing; we had expected it to be a purchasable addition to Visual Studio. With this change it is now consistent with TFS, i.e. if you have the right SKU of Visual Studio and MSDN you get the feature. This greatly lowers the barrier to entry for this technology.

I look forward to having a forthright discussion with our IT manager over Hyper-V cluster resources in the near future.