BM-Bloggers

The blogs of Black Marble staff

Remote Mounting an ISO Image from Hyper-V

Building a Hyper-V virtual machine from scratch almost always seems to involve mounting an ISO image at some point during the installation process. I suspect that, like us, many other organisations already have a network location in which they store ISO images. The ability to mount an ISO image from our usual network location saves us having to copy ISO images to the local Hyper-V servers.

The ability to remote mount an ISO image requires that a couple of configuration changes are made. Attempting to remote mount an ISO image without making these changes usually results in an error along the lines of:

Inserting the disk failed

Failed to add device ‘Microsoft Virtual CD/DVD Disk’

The file ‘\\RemoteServer\Share\ISO_Image.iso’ does not have the required security settings. Error: ‘General access denied error’

There are two configuration changes that need to be made. The first is ensuring that the Hyper-V hosts can access the share that contains the ISO images. As a workaround, you can always grant ‘Everyone’ read access to the share. If you want to restrict access to individual servers, you need to grant access to the Active Directory computer object of each Hyper-V host you want to give access to.
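
If you prefer to script the folder permission rather than set it in the GUI, something along the lines of the sketch below should work. It only covers the NTFS permission on the folder behind the share (the share permission itself still needs granting), and the domain, host name HYPERV01 and folder path are all just example values:

     // Sketch: grant the Hyper-V host's computer account read access to the ISO folder.
     // Example names only; note the trailing $ on the computer account.
     using System.IO;
     using System.Security.AccessControl;

     class GrantIsoFolderAccess
     {
         static void Main()
         {
             var isoFolder = new DirectoryInfo(@"D:\ISOs");
             DirectorySecurity acl = isoFolder.GetAccessControl();
             acl.AddAccessRule(new FileSystemAccessRule(
                 @"DOMAIN\HYPERV01$",
                 FileSystemRights.ReadAndExecute,
                 InheritanceFlags.ContainerInherit | InheritanceFlags.ObjectInherit,
                 PropagationFlags.None,
                 AccessControlType.Allow));
             isoFolder.SetAccessControl(acl);
         }
     }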

The second configuration change that is required is to grant Constrained Delegation on the Hyper-V host computer objects in Active Directory (a scripted sketch of the same change follows the list of steps):

  • Log onto a domain controller and open Active Directory Users and Computers from the Administrative Tools menu, or open the remote administration Active Directory tools from another server or client
  • Locate the Hyper-V host computer object
  • Right-click the object and Select Properties from the context menu
  • Select the Delegation tab
  • Select the ‘Trust this computer for delegation to specified services only’ radio button and then the ‘Use any authentication protocol’ radio button
  • Click the ‘Add’ button:
    Active Directory computer object properties
  • The ‘Add Services’ dialog will open:
    Add Services dialog
  • Click the ‘Users or Computers’ button and add the remote server hosting the ISO images. Click OK
  • Select the ‘cifs’ service type from the list shown:
    Add Services dialog, cifs service selected
  • Click OK to close the dialog; the computer object properties should look like the following:
    Active Directory computer object properties, cifs service added
  • Click OK to apply the change
  • Repeat the above steps for any other Hyper-V host servers.
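
For reference, the same delegation change can be scripted against Active Directory. The sketch below only illustrates what the GUI steps set: it adds ‘cifs’ entries for the file server to the host’s msDS-AllowedToDelegateTo attribute and turns on the ‘use any authentication protocol’ flag. The distinguished name and server names are examples, and you need sufficient rights in the domain to run it:

     // Sketch: rough equivalent of the GUI steps above. Example names only.
     using System.DirectoryServices;   // reference System.DirectoryServices.dll

     class EnableCifsDelegation
     {
         static void Main()
         {
             var host = new DirectoryEntry("LDAP://CN=HYPERV01,CN=Computers,DC=example,DC=local");

             // The 'cifs' service on the file server holding the ISO share
             host.Properties["msDS-AllowedToDelegateTo"].Add("cifs/FILESERVER.example.local");
             host.Properties["msDS-AllowedToDelegateTo"].Add("cifs/FILESERVER");

             // 'Use any authentication protocol' = TRUSTED_TO_AUTH_FOR_DELEGATION (0x1000000)
             int uac = (int)host.Properties["userAccountControl"].Value;
             host.Properties["userAccountControl"].Value = uac | 0x1000000;

             host.CommitChanges();
         }
     }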

You should now be able to remote mount ISO images from the server specified.

Moving Environments between TPCs when using TFS Lab Management

Background

One area of TFS Lab Management that I think people find confusing is that all environments are associated with a specific Team Project (TP) within a Team Project Collection (TPC). This is not what you might first expect if you think of Lab Management as just a big Hyper-V server. Once configured, you end up with a number of TPC/TP-related silos, as shown in the diagram below.

[Diagram: each Team Project within each Team Project Collection has its own silo of Lab Management environments]

This becomes a major issue for us as each TP stores its own environment definitions in its own silo; they cannot be shared between TPs and hence TPCs. So it is hard to re-use environments without recreating them.

This problem affects companies like ours, as we have many TPCs because we tend to have one per client, an arrangement that is not uncommon for consultancies.

It is not just in Lab Management that this is an issue for us. The isolated nature of TPCs, a great advantage for client security, has caused us to have an ever-growing number of Build Controllers and Test Controllers, which we are regularly reassigning to whichever are our active TPCs. Luckily, multiple Build Controllers can be run on the same VM (I discussed this unsupported hack here), but unfortunately there is no similar workaround for Test Controllers.

MTM is not your friend when storing environments for use beyond the current TP

What I want to discuss in this post is how, when you have a working environment in one TP, you can get it into another TP with as little fuss as possible.

Naively, you would think that you could just use the Store in Library option within MTM that is available for a stopped environment.


This does store the environment in the SCVMM Library, but it is only available to the TP it was stored from; it ends up in that TP’s own silo in the SCVMM Library. Now you might ask why: the SCVMM Library is just a share, so surely anything in it should be available to all? It turns out it is not just a share. It is true the files are on a UNC share, and you can see the stored environments as a number of Lab_[guid] folders, but there is also a database that stores metadata, and this is the problem: that metadata associates the stored environment with a given TP.

The same is true if you choose to just store a single VM from within MTM whether you choose to store it as a VM or a template.

Why is this important, you might ask? Well, it is all well and good that you can build your environment from VMs and templates in the SCVMM Library, but these will not be fully configured for your needs. You will build the environment, making sure TFS agents are in place, maybe putting extra applications, tools or test data on the systems. It is all work you don’t want to have to repeat for what is in effect the same environment in another TP or TPC. This is a problem we see all the time: we do SharePoint development, so we want a standard environment (a couple of load-balanced servers and a client) that we can use for many client projects in different TPCs (OK, the VM Factory can help, but that is not my point here).

A workaround of sorts

The only way I have found to ease this problem, once I have a fully configured environment, is to clone the key VMs (the servers) into the SCVMM Library using SCVMM, NOT MTM:

  1. Using MTM, stop the environment you wish to work with.
  2. Identify the VM you wish to store; you need its Lab name. This can be found in MTM if you connect to the lab and check the system info for the VM.

  3. Load SCVMM admin console, select Virtual Machines tab and find the correct VM

  4. Right click on the VM and select Clone
  5. Give the VM a new, meaningful name, e.g. ‘Fully configured SP2010 Server’
  6. Accept the hardware configuration (unless you wish to change it for some reason)
  7. IMPORTANT: On the destination tab, select the option to ‘store the virtual machine in the library’. This appears to be the only way to get a VM into the library such that it can be imported into any TPC/TP.

  8. Next select the library share to use
  9. And let the wizard complete.
  10. You should now have a VM in the SCVMM Library that can be imported into new environments.

 

You do, at this point, have to recreate the environment in your new TP, but at least the servers you import into this environment are already configured. If, for example, you have a pair of SP2010 servers, a DC and an NLB, then as long as you drop them into a new isolated environment they should just leap into life as they did before. You should not have to do any extra re-configuration.

The same technique could be used for workstation VMs, but it might be as quick to just use template (sysprep’d) clients. You just need to take a view on this for your environment requirements.

Debugging CodedUI Tests when launching the test as a different user

If you are working with CodedUI tests in Visual Studio, you sometimes get unexpected results, such as the wrong field being selected during replays. When trying to work out what has happened, the logging features are really useful. These are probably already switched on, but you can check by following the details in this post.

Assuming you make no logging level changes from the default, if you look in the

      %Temp%\UITestLogs\LastRun

folder, you should see a log file containing warning-level messages of the form:

Playback - {1} [SUCCESS] SendKeys "^{HOME}" - "[MSAA, VisibleOnly]ControlType='Edit'"

E, 11576, 113, 2011/12/20, 09:55:00.344, 717559875878, QTAgent32.exe, Msaa.GetFocusedElement: could not find accessible object of foreground window

W, 11576, 113, 2011/12/20, 09:55:00.439, 717560081047, QTAgent32.exe, Playback - {2} [SUCCESS] SendKeys "^+{END}" - "[MSAA, VisibleOnly]ControlType='Edit'"

E, 11576, 113, 2011/12/20, 09:55:00.440, 717560081487, QTAgent32.exe, Msaa.GetFocusedElement: could not find accessible object of foreground window

W, 11576, 113, 2011/12/20, 09:55:00.485, 717560179336, QTAgent32.exe, Playback - {3} [SUCCESS] SendKeys "{DELETE}" - "[MSAA, VisibleOnly]ControlType='Edit'"
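
If you find yourself digging through these logs often, a trivial sketch to dump the most recently written file from that folder (the path is the same %Temp%\UITestLogs\LastRun location mentioned above):

     // Sketch: print the newest UITest log file from %Temp%\UITestLogs\LastRun.
     using System;
     using System.IO;
     using System.Linq;

     class ShowLatestUiTestLog
     {
         static void Main()
         {
             string folder = Path.Combine(Path.GetTempPath(), @"UITestLogs\LastRun");
             FileInfo latest = new DirectoryInfo(folder)
                 .GetFiles()
                 .OrderByDescending(f => f.LastWriteTime)
                 .First();   // assumes at least one log file exists
             Console.WriteLine(File.ReadAllText(latest.FullName));
         }
     }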

A common problem with CodedUI tests can be who you are running the test as. It is possible to launch the application under test as a different user using the following call at the start of a test:

     ApplicationUnderTest.Launch(@"c:\my.exe",
                                 @"c:\my.exe",
                                 "",
                                 "username",
                                 securepassword,
                                 "domain");
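
The securepassword argument is a System.Security.SecureString rather than a plain string. As a minimal sketch (the hard-coded password is obviously just an example for a throwaway test account), it can be built like this before the Launch call:

     // Build the SecureString passed to ApplicationUnderTest.Launch.
     // Only sensible for throwaway test accounts; never hard-code real credentials.
     var securepassword = new System.Security.SecureString();
     foreach (char c in "P@ssw0rd")
     {
         securepassword.AppendChar(c);
     }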

 

I have found that this launch mechanism can cause problems with fields not being found in the CodedUI test unless you run Visual Studio as administrator (using the right-click ‘Run as administrator’ option in Windows). This is down to who is allowed to access whose UI thread in Windows when a user is not an administrator.

So if you want to use ApplicationUnderTest.Launch to change the user for a CodedUI test, it is best that the process running the test is an administrator.
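
One way to fail fast, rather than chase missing controls, is to check at the start of the test that the process running it really is elevated. A minimal sketch using standard .NET APIs (nothing CodedUI-specific, and the assert shown is just an example):

     // Sketch: helper to check whether the process running the test is elevated.
     using System.Security.Principal;

     static class ElevationCheck
     {
         public static bool IsElevated()
         {
             WindowsIdentity identity = WindowsIdentity.GetCurrent();
             var principal = new WindowsPrincipal(identity);
             return principal.IsInRole(WindowsBuiltInRole.Administrator);
         }
     }

     // e.g. at the start of the CodedUI test:
     // Assert.IsTrue(ElevationCheck.IsElevated(), "Run Visual Studio / the test agent as administrator");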

Updating the Time Service on Windows Home Server 2011

Those of you who know me or Rik will know that we’re both very keen on Windows Home Server. I’ve seen some time-related issues with my Home Server recently, with messages in the event log telling me that the time of the system (which seems to drift more than I’d like) could not be updated due to a variety of issues.

Running a manual resync of the server time (open a command prompt and type w32tm /resync) gave the following error:

Sending resync command to local computer
The computer did not resync because the required time change was too big.

I was a little confused by this, as the drift was only a minute or so from what the other PCs in the house were showing. Checking the time synchronisation configuration on the server (open a command prompt and type w32tm /query /configuration) didn’t show any particular surprises, except that MaxNegPhaseCorrection and MaxPosPhaseCorrection were both set to 3600 (one hour) rather than the more usual 54000 (15 hours). Checking the time zone of the machine, however, did: the server was configured to use PST! I could have sworn that I’d updated the time zone when I built the server, but obviously not…

Changing the time zone of the server sorted out the correct time, but running a synchronisation from the command prompt still gave an error:

Sending resync command to local computer
The computer did not resync because no time data was available.

There are a number of other switches that can be used with the w32tm command, one of which is /rediscover, which redetects the network configuration and rediscovers network sources. Adding this flag to the command (w32tm /resync /rediscover) gave me a successful time synchronisation:

Sending resync command to local computer
The command completed successfully.

My Home Server is now running on the correct time!

DevOps: are testers best placed to fill this role?

DevOps seems to be the new buzz role in the industry at present: people who can bridge the gap between the worlds of development and IT pros. Given my career history, this could be a description of the path I took. I have done both, and now sit in the middle covering ALM consultancy, where I work with both roles. You can’t avoid a bit of development and a bit of IT pro work when installing and configuring TFS with some automated build and deployment.

The growth of DevOps is an interesting move because of late I have seen the gap between IT pros and developers grow. Many developers seem to have less and less understanding of operational issues as time goes on. I fear this is due to the greater levels of abstraction that new development tools bring. This is only going to get worse as we move into the cloud: why does a developer need to care about Ops issues? AppFabric does that for them, doesn’t it?

In my view this is dangerous; we all need at least a working knowledge of what underpins the technology we use. Maybe this should hint at good subjects for informal in-house training: why not get your developers to give intro training to the IT pros and vice versa? Or encourage people to listen to podcasts on the other role’s subjects, such as Dot Net Rocks (a dev podcast) and Run As Radio (an IT pro podcast). It was always a nice feature of the TechEd conference that it had a dev and an IT pro track, so if the fancy took you, you could hear about technology from the point of view of the other role.

However, these are longer-term solutions; it is all well and good promoting them, but in the short term who is best placed to bridge this gap now?

I think the answer could be testers. I wrote a post a while ago saying that it was great to be a tester because you got to work with a wide range of technologies; isn’t this just an extension of that role? DevOps needs a working understanding of development and operations, as well as a good knowledge of deployment and build technologies. These are all aspects of the tester role, assuming your organisation considers a tester not to be a person who just ticks boxes on a checklist, but a software development engineer working in test.

This is not to say that DevOps and testers are the same, just that there is some commonality, so you may have more skills in house than you thought you did. DevOps is not new; someone was doing the work already, they just did not historically give it that name (or probably any name).

When you try to run a test in MTM you get a dialog ‘Object reference not set to an instance of an object’

When trying to run a newly created manual test in MTM I got the error dialog:

‘You cannot run the selected tests, Object reference not set to an instance of an object’.


On checking the Windows event log I saw:

Detailed Message: TF30065: An unhandled exception occurred.

Web Request Details Url: http://……/TestManagement/v1.0/TestResultsEx.asmx

So not really that much help in diagnosing the problem!

It turns out the problem was that I had been editing the Test Case work item type. Though it had saved/imported without any errors (it is validated during these processes), something was wrong with it. I suspect it was to do with filtering the list of users in the ‘assigned to’ field, as this is what I last remember editing, but I might be wrong; it was on a demo TFS instance I have not used for a while.

The solution was to revert the Test Case work item type back to a known good version and recreate the failing test(s). It seems that once a test was created from the bad definition there was nothing you could do to fix it.

Once this was done MTM ran the tests without any issues.

When I have some time I will do an XML compare of the exported good and bad work item types to see what the problem really was.
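
If anyone wants to do the same comparison, the work item type XML can be pulled out with the TFS client object model. A rough sketch, where the collection URL, project name and output path are placeholders:

     // Sketch: export the Test Case work item type definition to XML for comparison.
     // Requires the TFS client assemblies (Microsoft.TeamFoundation.Client and
     // Microsoft.TeamFoundation.WorkItemTracking.Client).
     using System;
     using Microsoft.TeamFoundation.Client;
     using Microsoft.TeamFoundation.WorkItemTracking.Client;

     class ExportTestCaseDefinition
     {
         static void Main()
         {
             var tpc = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(
                 new Uri("http://tfsserver:8080/tfs/MyCollection"));
             var store = tpc.GetService<WorkItemStore>();

             WorkItemType testCase = store.Projects["MyProject"].WorkItemTypes["Test Case"];
             testCase.Export(false).Save(@"C:\temp\TestCase.xml");   // false = exclude global list definitions
         }
     }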