Fixing a dodgy proximity sensor on my Nokia Lumia 920

I’ve just had a really infuriating half hour trying to figure out why I couldn’t get the keypad to appear during a call on my Lumia 920. When I took the phone away from my ear the screen stayed black. Pushing the power button made the display switch on and then immediately switch off. Power cycling and even resetting the phone made no difference.

A close examination showed that the small round window next to the speaker slot at the top of the display – the proximity sensor – was full of dust. Exactly how this happened I’m not sure; you’d expect that bit to be sealed, wouldn’t you?

Five minutes with a compressed-air canister, blowing into the speaker slot and the headphone jack socket, and I had managed to blow the gunk out of the proximity sensor. As if by magic, the keypad works like it should again.

Fixing Lab Manager environments with brute force

As you’ve probably seen, our Lab Manager/SCVMM 2008 R2 upgrade to SCVMM 2012 SP1 was not the smoothest in the world. The end result was a clean Lab Manager and SCVMM install, but a raft of virtual machines that had previously been part of environments.

In tidying up, Richard and I learned a few things about picking apart VMs that were once part of an environment so that a new environment could be built from the wreckage.

There are two approaches to getting what you need: first, you could simply compose the existing virtual machines into a new environment without storing them in, and deploying from, SCVMM; second, you could pull the VMs back into SCVMM so that you can build and deploy a new environment.

Don’t forget to fix the networks

If you want to use the running VMs you will need to make sure you have recreated any private networks generated by Lab Manager. These are all helpfully listed in the XML configuration file of the VMs; they are normally named Lab_<GUID>_NI, so they are easy to find in the file. On the Hyper-V host, using Hyper-V Manager, you will need to create a new private virtual network with the name you just found. You should then attach the synthetic network adapter of your VMs (not the legacy network adapter) to this private network. If you have a DC, and you told Lab Manager it was a DC, then you are likely to need to hook its legacy adapter to the private network as well.
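
If you’d rather script it than click through Hyper-V Manager, something like this sketch should do the same job. I’m assuming the Windows Server 2012 Hyper-V PowerShell module on the host; the switch GUID and VM names below are placeholders for whatever you find in the XML.

```powershell
$switchName = 'Lab_00000000-0000-0000-0000-000000000000_NI'  # name taken from the VM's XML config
$vmName     = 'LabIIS'                                       # your VM here

# Recreate the private virtual network if it is missing
if (-not (Get-VMSwitch -Name $switchName -ErrorAction SilentlyContinue)) {
    New-VMSwitch -Name $switchName -SwitchType Private | Out-Null
}

# Attach the synthetic adapter (not the legacy one) to the private network
Get-VMNetworkAdapter -VMName $vmName |
    Where-Object { -not $_.IsLegacy } |
    Connect-VMNetworkAdapter -SwitchName $switchName

# For the DC, hook its legacy adapter to the private network as well
Get-VMNetworkAdapter -VMName 'LabDC' |
    Where-Object { $_.IsLegacy } |
    Connect-VMNetworkAdapter -SwitchName $switchName
```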

Scenario 1: Pull existing machines into an environment

The big problem you are likely to find here is that whilst you have imported the VMs onto your Hyper-V server, and SCVMM can see the machines just fine, Lab Manager refuses to show them to you.

The reason for this is that Lab Manager believes the VMs are currently part of an environment, just not one it currently knows about, so it hides the VMs from you. It turns out that this is pretty straightforward to fix. In the notes field of the running VM’s settings you will see a block of XML, which Lab Manager reads to identify the VMs in its environments. Simply delete that XML and the machine will show up in Lab Manager as available to compose into an environment.
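
If you have a lot of VMs to fix, the notes field can be cleared from PowerShell too – a sketch, assuming the Hyper-V module on the host and a made-up VM name:

```powershell
(Get-VM -Name 'LabIIS').Notes   # eyeball the XML first if you're curious
Set-VM -Name 'LabIIS' -Notes '' # blank it out
```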

Scenario 2: Get the VMs back into SCVMM to build a new environment and deploy it

This is a trickier situation and one which needs to follow the steps I talked about in my previous post about building VMs for Lab Manager.

The problem here is not just the XML: Lab Manager has probably mangled the hardware settings of the VM as well. You will need to tidy each VM before storing it in SCVMM ready for Lab Manager (there’s a scripted sketch of this after the list):

  • Remove the XML from the notes field.
  • Remove the legacy network adapter.
  • Configure the network adapter within Windows to take its IP address and DNS settings from DHCP.
  • Delete any snapshots.
  • Make sure you cleanly shut down the VM – don’t save it!
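
Here’s a rough PowerShell sketch of that tidy-up – the VM name is a placeholder, I’m assuming the Hyper-V module on the host, and step three has to run inside the guest rather than on the host:

```powershell
$vmName = 'LabIIS'

Set-VM -Name $vmName -Notes ''                        # 1. remove the XML

Get-VMNetworkAdapter -VMName $vmName |                # 2. remove the legacy adapter
    Where-Object { $_.IsLegacy } |
    Remove-VMNetworkAdapter

# 3. inside the guest (these cmdlets ship with Windows 8 / Server 2012):
#      Set-NetIPInterface -InterfaceAlias 'Ethernet' -Dhcp Enabled
#      Set-DnsClientServerAddress -InterfaceAlias 'Ethernet' -ResetServerAddresses

Get-VMSnapshot -VMName $vmName | Remove-VMSnapshot    # 4. delete any snapshots

Stop-VM -Name $vmName                                 # 5. clean shutdown, not a save
```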

If you follow those steps you can store the VMs back into SCVMM and then build a new environment from the stored VMs. If this still gives you trouble, export the VMs from Hyper-V, reimport them as a copy to get a new unique ID, and then push those into SCVMM.
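
The export/reimport dance can be scripted too. A sketch assuming the Server 2012 Hyper-V module, with placeholder paths; -Copy -GenerateNewId gives the VM a fresh unique ID instead of registering the old identity in place:

```powershell
Export-VM -Name 'LabIIS' -Path 'D:\Exports'   # VM must be shut down first

# Point Import-VM at the exported configuration XML
$config = Get-ChildItem 'D:\Exports\LabIIS\Virtual Machines\*.xml' |
    Select-Object -First 1
Import-VM -Path $config.FullName -Copy -GenerateNewId `
    -VhdDestinationPath 'D:\VMs\LabIIS' -VirtualMachinePath 'D:\VMs\LabIIS'
```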

So far this has worked just fine for us, with Richard working his magic in Lab Manager whilst I fix up VMs in Hyper-V and SCVMM.

Things to remember when building virtual machines for a Lab Manager environment

As you will have read on both my blog and Richard’s, we have recently upgraded our lab environment and it wasn’t the smoothest of processes.

However, as always, it has been a learning experience, and this post is all about building VM environments that can be sucked into Lab Manager and turned into a lab environment that can be pushed out multiple times.

Note:  This article is all about virtual machines running on Windows Server 2012 that may have been built on Windows 8 and are managed by SCVMM 2012 SP1 and Lab Manager/TFS 2012 CU1. Whilst the things I have found in terms of prepping VMs for Lab Manager are likely to be common to older versions, your mileage may vary.

Approaches to building environments

There are a number of approaches to building multi-machine environments that developers can effectively self-serve as required:

  • The ALM Rangers have a VM Factory project on CodePlex which aims to deliver scripted build-from-scratch on demand.
  • SCVMM has templates for machines that are part-built and stored after running sysprep. Orchestrator can then be used to deploy templates and run scripts to wire them together.
  • Lab Manager allows you to take running VMs and group them together into an environment. It stores all the VMs in SCVMM and when requested, generates new VMs by copying the ones from the library.

Trouble at ‘mill

There are also a number of problems in this space that must balance the needs of IT pros with the needs of developers:

  • Developers are an impatient bunch. They will request the environment at the last minute and need it deployed as quickly as possible. This doesn’t necessarily work well with complete bare-metal scripted approaches.
  • Developers would also prefer some consistency – if they have to remember one set of credentials it’s probably too much. Use different accounts, passwords and machine names for all your environments and it can get tricky.
  • Developers love to use the Lab Manager and Test Manager tooling. This delivers great integration with the Team Project in Team Foundation Server.
  • IT Pros need to deal with issues caused by multiple machines with the same identities sharing a network. This is especially true of domain controllers.
  • IT pros would like to keep the number of snapshots (SCVMM checkpoints) to a minimum, especially when memory images are in play as well.
  • IT pros would prefer the environments used by the developers to match the way things are installed in the real world. This is less critical for the actual development environment but really important when it comes to testing. This tends to lead to requirements for additional DNS entries and multiple user accounts. This is especially true if you are building SharePoint farms properly.

How IT pros would do it…

Let’s use one of our environments as an example. We have a four server set:

  1. The Domain Controller is acting as DNS and also runs SQL Server. It doesn’t have to do the latter, but we were trying to avoid an additional machine. Reporting Services and Analysis Services are installed, and Reporting Services listens on a host header with a DNS CNAME entry for it.
  2. An IIS server allows for deployment of custom web apps.
  3. A CRM 2011 server uses the SQL instance on the DC for its database and Reporting Services functions. The CRM system itself is published on another host header.
  4. A SharePoint 2010 server uses the SQL instance as well. It has separate web applications for the intranet and My Sites, each published on a separate host header.

If we were building this without Lab Manager we would give the machines two NICs: one on our network and the other on a private network. On the DC we unbind the nasty Windows protocols from our network. Remote Desktop is enabled on all machines for the devs to access them.

Lab Manager complicates matters, however. It is clever enough to understand that we might need to keep DC traffic away from our network, and it has a mechanism to deliver this called Network Isolation. How it actually goes about that is somewhat problematic.

Basically, Lab Manager wants to control all the networking in the new environment. To do that it adds new network adapters to the VMs and it uses those new adapters to connect to the main network. It expects a single adapter to be in the original VM, which it connects to a new private network that it creates.

Did I mention that IT pros hate GUIDs? Lab Manager loves them. Whilst I can appreciate that it’s the best way to generate unique names for networks and VMs it’s a complete pain to manage.

Anyway, it’s really, really easy to confuse Lab Manager. Sadly, if the IT pro builds what they consider to be a sensible rig, that will confuse Lab Manager right away. The answer is that we need to build our environment the right way and then trim it in readiness for the Lab Manager bit.

Building carefully

I would build my environment on my Windows 8 box. I create a private network and use that as a backbone for the environment. I assign fixed IP addresses to each server on that network, and each server uses the DC as its DNS; that way I can ensure everything works during the build. I also add a second NIC to each box, connected to my main network, and I carefully set the protocols that are bound to that NIC. Both of those network adapters are what Lab Manager calls ‘synthetic’ – the native virtualised adapter Hyper-V uses, not the emulated legacy adapter.
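
In PowerShell terms the build-time networking looks something like this sketch (client Hyper-V on Windows 8 includes the module). The switch name, VM names and addresses are examples, not our real ones:

```powershell
New-VMSwitch -Name 'RigBackbone' -SwitchType Private | Out-Null

foreach ($vm in 'LabDC', 'LabIIS', 'LabCRM', 'LabSP') {
    # Add-VMNetworkAdapter creates a synthetic adapter by default
    Add-VMNetworkAdapter -VMName $vm -SwitchName 'RigBackbone'
}

# Then, inside each guest, fix the backbone NIC's address and point DNS at the DC:
#   New-NetIPAddress -InterfaceAlias 'Ethernet 2' -IPAddress 192.168.50.11 -PrefixLength 24
#   Set-DnsClientServerAddress -InterfaceAlias 'Ethernet 2' -ServerAddresses 192.168.50.10
```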

I carefully make sure that all the DNS entries needed for host headers are created as CNAMEs that point to the host (A) record of the server I need. This is important because all the IP addresses will change when Lab Manager takes over.
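
For example, on the DC, something like this (the DnsServer cmdlets ship with Server 2012; the zone and host names here are invented for illustration):

```powershell
Add-DnsServerResourceRecordCName -ZoneName 'lab.local' -Name 'reports'  -HostNameAlias 'labdc.lab.local'
Add-DnsServerResourceRecordCName -ZoneName 'lab.local' -Name 'crm'      -HostNameAlias 'labcrm.lab.local'
Add-DnsServerResourceRecordCName -ZoneName 'lab.local' -Name 'intranet' -HostNameAlias 'labsp.lab.local'
Add-DnsServerResourceRecordCName -ZoneName 'lab.local' -Name 'mysites'  -HostNameAlias 'labsp.lab.local'
```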

I may make snapshots as I build so I can move back in time if something goes wrong.

When built, I will probably store my working rig so I can come back to it later. I will then change the rig, effectively breaking it, in order to work with Lab Manager.

The Lab Manager readiness checklist

  • Lab Manager will fail if there is more than a single network adapter. It must be a synthetic adapter, not a legacy one. The adapter should be set to use DHCP for all its configuration – address and DNS.
  • Install, but do not configure, the Visual Studio Test Agent before you shut the machines down. We’ve seen Lab Manager fail to install this many times, but if it’s already there it normally configures it just fine.
  • Delete all the snapshots for the virtual machine. Whilst Lab Manager can cope with snapshots, both it and SCVMM get confused when machines are imported with different configurations in the snapshots from the final configuration. It will stop Lab Manager in its tracks.
  • Make sure there is nothing in the notes field of the VM settings. Both Lab Manager and SCVMM shove crap in there to track the VM. If anybody from either team is listening, this is really annoying and gets in the way of putting notes about the rigs in there. Lab Manager shoves XML in there to describe the environment.
  • Make sure there are no saved states. Your machines need to be shut down properly when you finish, before importing into SCVMM. The machines need to boot clean or they will get very confused and Lab Manager may struggle to make the hardware changes.
  • Make sure you export the machines – don’t just copy the folder structure, even though it’s much easier to do. (There’s a quick check script for this list below.)
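
A quick way to sanity-check a VM against that list before you export it – a sketch assuming the Hyper-V module on the host and a placeholder VM name:

```powershell
$vmName = 'LabIIS'
$vm = Get-VM -Name $vmName

$adapters = @(Get-VMNetworkAdapter -VMName $vmName)
if ($adapters.Count -ne 1 -or $adapters[0].IsLegacy) {
    Write-Warning 'Needs exactly one synthetic network adapter'
}
if (Get-VMSnapshot -VMName $vmName) { Write-Warning 'Snapshots still present' }
if ($vm.Notes)                      { Write-Warning 'Notes field is not empty' }
if ($vm.State -ne 'Off')            { Write-Warning 'Not cleanly shut down (saved state?)' }
```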

Next, get it into SCVMM

There is a good reason to export the VMs. It turns out that SCVMM latches on to the unique identifier of the VM (logical, if you think about it). The snag is that you can end up with VMs ‘hiding’: if I copy my set of four VMs to an SCVMM library share I can’t have a copy running as well. Unless you do everything through SCVMM (and for many, many reasons I’m just not going to!) you can end up with confusion. This gets really irritating when you have multiple library shares, because if you have copies of a VM in more than one library, one of them will not appear in the lists in SCVMM. There are good reasons why I might want to store those multiple copies.

Back to the plot. SCVMM won’t let us import a VM. We can construct a new one from a VHD, but I have yet to find a way to import a VM (why on earth not? If I’ve missed something, please tell me!). So we need to import our VMs onto a server managed by SCVMM. We have a small box for just this purpose – it’s not managed by Lab Manager but is managed by our SCVMM, so I can pull machines from it into the library.

Import the VMs onto your host using Hyper-V Manager. Make sure you create sensible folder structures and names for them all. Once they are imported, make sure you close Hyper-V Manager – I have seen SCVMM fail to delete VM folders correctly because Hyper-V Manager seems to hold the VHD open for some reason.

In SCVMM, refresh the host you’ve just imported the VMs to and you should see them in the VM list. I tend to refresh the VMs too, but that’s just me. Start the VMs and let SCVMM gather all the information from them, such as host names. I usually leave them for a few minutes, then shut them down cleanly from the SCVMM console.
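
If you prefer the VMM 2012 SP1 cmdlets to the console, the same dance looks roughly like this – treat it as a sketch. The host and VM names are placeholders, and I’m assuming an open connection to the VMM server:

```powershell
Read-SCVMHost -VMHost (Get-SCVMHost -ComputerName 'staging-host') | Out-Null

$names = 'LabDC', 'LabIIS', 'LabCRM', 'LabSP'
foreach ($name in $names) {
    $vm = Get-SCVirtualMachine -Name $name
    Read-SCVirtualMachine -VM $vm | Out-Null    # refresh the VM record
    Start-SCVirtualMachine -VM $vm | Out-Null   # let SCVMM gather guest details
}

# ...give them a few minutes, then shut down cleanly:
foreach ($name in $names) {
    Stop-SCVirtualMachine -VM (Get-SCVirtualMachine -Name $name) -Shutdown | Out-Null
}
```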

Now we know SCVMM is happy with them, we can store the VMs in the SCVMM library that Lab Manager uses. You should see them wink out of existence on the VM host once the store is complete.
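
Scripted, the store step looks something like this sketch. As I understand it, Store-SCVirtualMachine removes the VM from the host once it lands in the library; the server and share path are placeholders for whatever Lab Manager is pointed at:

```powershell
$libServer = Get-SCLibraryServer -ComputerName 'vmm-library'
foreach ($name in 'LabDC', 'LabIIS', 'LabCRM', 'LabSP') {
    Store-SCVirtualMachine -VM (Get-SCVirtualMachine -Name $name) `
        -LibraryServer $libServer -SharePath '\\vmm-library\MSSCVMMLibrary' | Out-Null
}
```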

Create the Lab environment

At this point the IT guys can hand over to the people managing labs. In our case that’s Richard. He can now compose a new environment within Lab Manager and pull the VMs I have just stored into his lab. He tells the lab that it needs to run with network isolation and identifies the DC.

What Lab Manager will then do is deploy a new VM through SCVMM using the ones I built as a source. It will then modify the hardware configuration of the VMs, adding a legacy network adapter. It also configures the MAC address of the existing synthetic adapter to be static.

A new private virtual network is created on the target VM host. These are really hard to manage through SCVMM, so if Lab Manager ever leaves them hanging around I delete them using Hyper-V Manager. The synthetic adapters in the VMs are connected to the private network while the legacy adapters are connected to the main network.
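
Scripted, that clean-up of orphaned Lab networks looks something like this sketch (Hyper-V module on the host; make sure nothing is still attached before removing them):

```powershell
Get-VMSwitch -SwitchType Private |
    Where-Object { $_.Name -like 'Lab_*_NI' } |
    Remove-VMSwitch -Force
```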

Exactly why it’s done this way I’m not sure. Other than for PXE boot (which isn’t happening here) I can’t see why legacy adapters are needed. I assume the Visual Studio team selected them for a good reason, probably around issuing commands to the VMs, but I don’t know what it is.

When the environment is started, Lab will assign static IP addresses to the NICs attached to the private network. All ours seem to be 192.168.23.x addresses. It will also set the DNS address to be that which has been assigned to the DC in the lab. The legacy adapters will be set to DHCP for all settings. The end result is a DC that is only connected to the private network and all other machines connected to both private and main networks.

Once the environment is up, Lab Manager should configure the test agent and you’re off. The new lab environment can then be stored in such a way as to allow multiple copies to be deployed as required by the devs.