Whilst moving all our older test Hyper-V VMs into a new TFS 2012 Lab Management instance I have had to address a few problems. I have already posted about the main one, cross domain communications. This post lists the other workarounds I have used.
MTM can’t communicate with the VMs
When setting up an environment that includes existing VMs it is vital that the PC running MTM (Lab Center) can communicate with all the VMs involved. The best indication I have found that you will not have problems is to use a simple Ping. If you are creating an SCVMM environment you need to be able to Ping the fully qualified machine name as it has been picked up by Hyper-V e.g: server1.test.local. If creating a standard environment you only need to be able to Ping the name you specify for the machine e.g: server1 or maybe server.corp.com.
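A quick check from the PC running MTM looks like this (the machine names are the illustrative ones used above):

```
:: SCVMM environment - the fully qualified name must resolve
ping server1.test.local

:: Standard environment - only the name you entered in MTM must resolve
ping server1
```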
If Ping fails then you can be sure that the MTM create environment verify step will also fail. The most likely reasons both fail are:
- There are DNS issues: the VM names are missing, leases have expired, the VMs are not in the domains expected, or the entries are just plain wrong. The best solution I found is to edit the local hosts file on the PC running MTM. Just add the name and fully qualified name along with the correct IP address. You should then be able to Ping the VM (unless there is a firewall issue, see below). The hosts file is only needed on the MTM PC whilst the environment is created; once the environment is set up the hosts file entries are not needed.
- File and print sharing needs to be opened through the firewall on the VM (control panel > firewall > allow applications through firewall)
- Missing/out of date Hyper-V extensions on the VM. This only matters if it is an SCVMM environment being created, as this is how the fully qualified name is found. This is best spotted in MTM as you get an error on the Machine properties tab. The fix is to reinstall the extensions via the Hyper-V Manager (Actions > Insert Integration Services Disk, and maybe run the setup on the VM if it does not start)
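The first two fixes above can also be applied from an elevated command prompt; the IP address and names below are illustrative:

```
:: On the PC running MTM - add the VM to the local hosts file
echo 192.168.10.21  server1 server1.test.local >> %windir%\System32\drivers\etc\hosts

:: On the VM - allow File and Printer Sharing through the firewall
netsh advfirewall firewall set rule group="File and Printer Sharing" new enable=Yes
```

Remember to remove the hosts entry once the environment is verified if you do not want it hanging around.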
Can’t see a running VM in the list of available VMs
When composing an environment from running VMs one problem I had was that though a VM was running it did not appear in the list in MTM. This turned out to be because the VM had metadata associating it with a different environment (in my case one dating back to our TFS 2010 instance).
This is easy to fix: in SCVMM or Hyper-V Manager open the VM settings and make sure the name/note field (red box below) is empty.
Once the settings are saved you will have to wait a little while before SCVMM picks up the changes and lets your copy of MTM know the VM is available.
I don’t know about your systems, but historically we have had VMs running in test domains that are connected to our corporate LAN. This allows our staff and external testers to access them from their development PCs, or through our firewall, after providing suitable test domain credentials. These test setups are great candidates for the new TFS Lab Management 2012 feature, standard environments. It does not matter if they are hosted as physical devices, or on Hyper-V or VMware.
However, the use of separate domains raises issues of cross domain authentication, irrespective of the virtualisation technology. It is always a potentially confusing area. If we want the ability to use the deployment and testing features of Lab Management, what we need to achieve is a Test Agent on each VM that talks to a Test Controller which is registered to a TFS Team Project Collection. Not so easy when spread across multiple domains.
With TFS 2012 the whole process of getting agents to talk to their controller was greatly eased. Lab Management does it for you much of the time if you provide it with a corp\tfslab domain account that is a member of the Project collection test service accounts group in TFS.
A summary of the scenarios is as follows:
| Scenario | How to achieve it |
| --- | --- |
| Your test VMs are in either a SCVMM managed or standard environment but are joined to your corp domain | Lab Management wires it all up automatically using your corp\tfslab account |
| Your test VMs are in either a SCVMM managed or standard environment that is not domain joined i.e: just in a workgroup | Lab Management wires it all up automatically using your corp\tfslab account |
| Your test VMs are in a SCVMM managed network isolated environment | Lab Management wires it all up automatically using your corp\tfslab account |
| Your test VMs are in either a SCVMM managed (not network isolated) or standard environment and are in their own test domain | You have to do some work |
If, like me, you end up with the fourth scenario, the key is to provide a test controller within the test domain. This must be configured to talk back to TFS on the corp domain. This can all be done with local machine accounts on the test controller and TFS server with matching names and passwords, what I think of as shadow accounts.
So for example, we have the following scenario of a corp domain with a DC and various TFS servers and controllers and a test domain containing three servers.
So the process to get the test agents on the test domain talking to TFS on the corp domain is as follows:
- On the TFS server (called tfsserver.corp.com in above graphic)
- Open Control Panel > Computer Management and create a new local user called tfslabshadow. Set the password, untick ‘User must change password at next logon’ and tick ‘Password never expires’
- In the TFS administration console add the new user tfsserver\tfslabshadow to the Project collection test service accounts group
- On a machine (called server.test.local in the above graphic) within the test domain (this can be any VM in the domain running Windows, other than the DC)
- Open Control Panel > Computer Management and create a new local user called tfslabshadow with the same password as the account of the same name on the tfsserver
- Add this user to the local administrators group for that server.
- Login as this user
- Install the Visual Studio 2012 Test controller
- When the installation is complete the configuration tool will launch. Set the service to run as tfslabshadow and register it to connect to the TFS server with this account too.
Note - When you first load the configuration tool you need to browse for the TFS server and enter its URL. If you have your shadow accounts working correctly you should not need to enter any other credentials at this point.
Note - You can enter the local user name in either the .\tfslabshadow or server\tfslabshadow format
- If you have all the settings correct then you should be able to apply the changes without any errors and the new test controller will be registered. Any errors at this point are usually fairly clear when you look in the log; most likely you forgot to place a user in some group somewhere.
- From a PC running Test Manager 2012 (MTM) on the corp domain
- Go into the Lab Center
- Create a new environment (can be SCVMM or Standard) containing the machines in the test domain (or open an existing environment if you have one that was not correctly configured)
- On the Advanced tab you should be able to select the new test controller server that is hosted within the test domain
- You can make any other setting changes you require (remember on the machines tab to enter the test domain login credentials, they will have defaulted to your current ones). When you are done you can select Verify. I had a problem here due to DNS entries: from the PC running MTM I could ping server, but MTM was trying to communicate using the name server.test.local. To get around this I added an entry to my local hosts file. I have also seen VMs that are not registered in DNS at all; again a local hosts file entry fixes the problem. This is only required for the initial verification and deployment/configuration; once this is done the hosts entries can be removed if you want.
- Once verification has passed, save the changes and after a short wait the environment should finish configuring itself showing no errors
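The shadow account steps above can be scripted from an elevated command prompt on both machines; the password shown is a placeholder you must replace with your own:

```
:: On tfsserver.corp.com and on server.test.local (same name and password on both)
net user tfslabshadow MyP@ssw0rd /add /passwordchg:no
wmic useraccount where "name='tfslabshadow'" set PasswordExpires=false

:: On server.test.local only - make the account a local administrator
net localgroup Administrators tfslabshadow /add
```

You still need to add tfsserver\tfslabshadow to the Project collection test service accounts group via the TFS administration console as described above.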
So I hope I have provided a step-by-step guide to help you get around issues with cross domain testing in Lab Management. However, it is still important to remember the exceptions:
- As we are using local machine accounts you cannot have the TFS server or the test controller running on a domain controller (a DC cannot have local machine accounts). If your environment is a single box that is a DC then you either have to set up a two way cross domain trust between test and corp, or rebuild the environment as a workgroup or network isolated environment.
- The shadow account cannot have the same name as the corp\tfslab account i.e: tfslab. If you try to use the same name for the local machine and domain accounts the matching of the two local machine accounts will fail, as at the TFS server end it will not be able to decide whether to use corp\tfslab or tfsserver\tfslab
For more details on this general area see MSDN
When I installed Office 2013 customer preview all seemed good, loads of new metro look Office features. However when I tried to load my previously working Visual Studio 2012RC I got the error
“The procedure entry point _atomic_fetch_sub_4 could not be located in the dynamic link library devenv.exe”.
This is a known issue with the C++ runtime and a patch was released last week; install this and all should be OK
Whilst setting up our new TFS 2012 instance I had a problem getting the build box to connect to the TFS server.
- When I started the build service (on a dedicated VM configured as a controller and single agent, connected to a TPC on the server on another VM) all appeared OK: the controller and agent said they were running and the state icons went green
- About 5 seconds later the state icons went red, but the message said the controller and agent were still running; from past experience I know this means it is all dead
- In the build service section a new ‘details’ link appeared, but if you try to click it you get a 404 error (see below)
In the Windows event log (TFS/Build-Service/Operational section) I got the error
Build machine build2012 lost connectivity to message queue tfsmq://buildservicehost-2/.
Reason: HTTP code 404: Not Found
It is recorded as EventID 206 in the category MessageQueue
I tried reinstalling the build VM and checked the firewalls on the build VM and the TFS Server VM, all to no effect.
The issue turned out to be the TFS URL I had used on the build service VM to connect to the TFS server: it used HTTPS/SSL. As soon as I changed it to an HTTP URL the build service started to work. This was OK for me as the build VM and server VM were in the same machine room, so I did not really need SSL. I had just used it out of habit as this is what our developer PCs use.
However, if you did want to keep using SSL you need to do the following
- Open the following configuration file: C:\Program Files\Microsoft Team Foundation Server 2012\Application Tier\Message Queue\web.config
- Find a section like the bindings section below
- Alter httpTransport to say httpsTransport
<textMessageEncoding messageVersion="Soap12WSAddressing10" />
<httpTransport authenticationScheme="Ntlm" manualAddressing="true" />

so that it reads

<textMessageEncoding messageVersion="Soap12WSAddressing10" />
<httpsTransport authenticationScheme="Ntlm" manualAddressing="true" />
- Save the file
- Recycle the IIS app pool
- Restart the build service on the build VM
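The last two steps can also be done from an elevated prompt; the app pool and service names below are the defaults from my install, so check yours match before running them:

```
:: On the TFS server - recycle the TFS application pool
%windir%\system32\inetsrv\appcmd recycle apppool "Microsoft Team Foundation Server Application Pool"

:: On the build VM - restart the build service
net stop "Visual Studio Team Foundation Build Service Host 2012"
net start "Visual Studio Team Foundation Build Service Host 2012"
```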
Thanks to Patrick on the TFS team for helping me get to the bottom of this.
I have been finding the ‘hit the Windows key and type’ means of launching desktop applications in Windows 8 quite nice. It means I get the same behaviour in Windows or Ubuntu to launch things: no need to remember menu locations, just type a name, all very SlickRun. However, I hit a problem today: I hit the Windows key, typed Visual and expected to see Visual Studio 2012 and 2010, but I only saw Visual Studio 2012
But both were there yesterday!
The issue was I had installed SSDT (SQL Server Data Tools); this is hosted within the Visual Studio 2010 shell and had renamed my Metro desktop Visual Studio 2010 app to Microsoft SQL Server Data Tools. If I typed this, the app was found and it launched Visual Studio 2010. I could then choose whether to use SSDT or Ultimate features as you would expect. This is the same behaviour as on Windows 7; it is just that on Windows 7 you would have two menu items, one for SSDT and one for VS2010, both pointing to the same place.
Now I am a creature of habit, even if it is a newly formed one, and I like to just type Vis, so this is how I got the link back into the Metro desktop; there might be other ways but this is the one that worked for me
- Found the Visual Studio 2010 devenv.exe file in C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE
- Right clicked and created a shortcut on the desktop
- Renamed the new desktop shortcut to ‘Visual Studio 2010’
- Right clicked on the renamed shortcut and selected ‘pin to start’
- Deleted the desktop shortcut; it is no longer needed as it has been copied. Yes, I found this a bit strange too, but I do like a clean desktop so delete it I did. You don’t have to delete it if you want a desktop shortcut.
I can now press the Windows key, type Vis and see both VS 2010 and 2012, and I can still type SQL and get to SSDT
At TechEd USA 2012 in the keynote and the ALM session it was announced that you no longer need an invite code to access the Azure hosted TFSPreview.com, it is now open to all.
There are no final details as yet over the pricing when it goes fully to production but they did say there will be some form of free offering going forward. We will have to wait for more details on that front.
Whilst preparing the demos for my TFS 2012 ALM session on Wednesday (still places available if you are in the Yorkshire area), I got the following error when trying to add a new solution of sample code to TFS
Access to the path 'C:\ProgramData\Microsoft Team Foundation Local Workspaces\043519e4-72cf-4cd0-a711-f4bb8b817f30\TYPHOON;432cfb95-26d6-4a68-af26-b102c950a90d\properties.tf1' is denied.
It seems that the ..\04535… folder had been created when I was running VS2012 using ‘run as administrator’. When I tried to browse it I was prompted for UAC elevated privileges, so if I was not running VS2012 as administrator I could not access the folder. For me the solution was to just delete the folder and let VS/TFS recreate it. I had no checked out files so it did not matter. Once done all was OK
One to keep an eye on
I have been working on one of our build boxes today restructuring our Surface solutions to make better use of NuGet. This involved upgrading the Azure SDK on the build box to the new June release, which needed a reboot halfway through the process. After the reboot, when I tried a new build, I got the error
TF215097: An error occurred while initializing a build for build definition \Surface\RetailApplication.Main CI: Could not load file or assembly 'System.Drawing, Version=188.8.131.52, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a' or one of its dependencies. The located assembly's manifest definition does not match the assembly reference. (Exception from HRESULT: 0x80131040)
Basically the build did not start at all. Various forum posts point to a corrupt build template .XAML or missing assemblies, but as it had been working, and the assembly name is in the core framework, it all seemed a bit strange.
The fix was the old favourite: stop and restart the build service from within the TFS Admin console on the build box. Once this was done all was fine, so I guess some rubbish was cached.
I am really proud to have been involved in the team of ALM Rangers who have SIMultaneous-SHIPped best practice guidance with Visual Studio 11 RC, which became available last night.
I am sure anyone working with Visual Studio and TFS will find the guidance of value, I have certainly learned a lot whilst helping produce the material. It has been a great experience working with a great crowd of people both inside and outside of Microsoft.