But it works on my PC!

The random thoughts of Richard Fennell on technology and software development

Our upgrade to TFS 2012.2 has worked OK

I have mentioned in past posts the issues we had doing our first quarterly update for TFS 2012. Well, today we had scheduled our upgrade to 2012.2 and I am pleased to say it all seems to have worked.

Unlike the last upgrade, this time we were doing nothing complex such as moving DB tier SQL instances; it was a straight upgrade of a dual tier TFS 2012.1 instance with the DBs stored in a SQL 2012 Availability Group (in previous updates you had to remove the DBs from the availability group for the update; with Update 2 this is no longer required).

So we ran the EXE and all the files were copied OK. When we got to the verify stage of the wizard we expected no issues, but the tool reported problems with the server's HTTPS URL. A quick check showed the issue was that the server had the TFS OData service bound to HTTPS on port 443, but using a different IP address to that used by TFS itself. As soon as this web site was stopped the wizard passed verification and the upgrade proceeded without any errors.

So it would seem that the verification does a rather basic check to see if port 443 is used on any IP address on the server, not just the ones being used by TFS as identified via either IP address or host header bindings.
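If you hit the same verification issue, a quick way to see everything listening on port 443, whatever IP address it is bound to, is sketched below. This is only a sketch using standard Windows tooling and assumes the offending site is hosted in IIS.

# List every TCP listener on port 443, regardless of the IP address it is bound to
netstat -ano -p tcp | Select-String ':443\s'

# If the sites are hosted in IIS, show which bindings use port 443
Import-Module WebAdministration
Get-WebBinding | Where-Object { $_.bindingInformation -like '*:443:*' }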

The only other thing we have had to do is upgrade Tiago’s Team Foundation Task Board Enhancer; without the upgrade the previous version of this extension did not work.

So not too bad an experience.

Error TF400129: Verifying that the team project collection has space for new system fields when upgrading TFS to 2012.2

Whilst testing an upgrade of TFS 2010 to TFS 2012.2 I was getting a number of verification errors in the TFS configuration upgrade wizard. They were all TF400129 based, such as

TF400129: Verifying that the team project collection has space for new system fields

but also ones mentioning models and schema.

A quick search threw up this thread on the subject, but on checking the DB tables I could see my problem was altogether more basic. The thread talked of TPCs in incorrect states; in my case I had been provided with an empty DB, so TFS could not find any tables at all. So I suppose the error message was a bit too specific; it should really have been a ‘DB is empty!!!!’ error. Once I got a valid backup restored for the TPC in question all was OK.

A bit more digging showed that I could also see an error if I issued the command

tfsconfig remapdbs /sqlinstances:TFS1 /databaseName:TFS1;Tfs_Configuration

This too reported that it could not find a DB it was expecting.

So the tip is: make sure you really have the DBs restored that you think you have.
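A quick sanity check before running the upgrade wizard is to confirm a restored collection DB actually contains tables. The sketch below assumes the SQL 2012 PowerShell module is installed and uses a made-up instance name and TPC DB name, so substitute your own.

# Load the SQL Server 2012 PowerShell module
Import-Module SQLPS -DisableNameChecking

# 'TFS1' and 'Tfs_DefaultCollection' are placeholders for your SQL instance and TPC DB
Invoke-Sqlcmd -ServerInstance 'TFS1' -Database 'Tfs_DefaultCollection' -Query 'SELECT COUNT(*) AS TableCount FROM sys.tables'

# An empty restore (like the one behind my TF400129 errors) returns a TableCount of 0;
# a genuine TPC DB contains a great many tables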

What machine name is being used when you compose an environment from running VMs in Lab Management?

This is a follow-up to my older post on a similar subject.

When composing a new Lab Environment from running VMs, the PC you are running MTM on needs to be able to connect to the running VMs. It does this over IP, so at the most basic level you need to be able to resolve the name of the VM to an IP address.

If your VM is connected to the same LAN as your PC but is not in the same domain, the chances are that DNS name resolution will not work. I find the best option is to put a temporary entry in your local hosts file, keeping it only as long as the creation process takes.

But what should this entry be? Should it be the name of the VM as it appears in the MTM new environment wizard?

It turns out the answer is no; it needs to be the name as it appears in the SC-VMM console.

[screenshot: VM names as shown in the SC-VMM console]

So the hosts file needs to contain the correct entries for the FQDNs (watch out for typos here, a mistyped IP address only adds to the confusion) e.g.

10.10.10.100 wyfrswin7.wyfrs.local
10.10.10.45 shamrockbay.wyfrs.local
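A minimal sketch for adding (and later removing) these temporary entries from an elevated PowerShell prompt on the PC running MTM; the names and addresses are the examples above, so substitute your own.

$hostsFile = "$env:windir\System32\drivers\etc\hosts"

# Keep a copy so the original file can be put back once the environment is created
Copy-Item $hostsFile "$hostsFile.bak"

# Temporary entries for the VMs being composed into the environment
Add-Content -Path $hostsFile -Value "10.10.10.100 wyfrswin7.wyfrs.local"
Add-Content -Path $hostsFile -Value "10.10.10.45 shamrockbay.wyfrs.local"

# Check the names now resolve to the addresses you expect
[System.Net.Dns]::GetHostAddresses("wyfrswin7.wyfrs.local")

# When the environment has been created, restore the original hosts file:
# Copy-Item "$hostsFile.bak" $hostsFile -Force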

Once all this is set, just follow the process in my older post to enable the connection so the new environment wizard can verify OK.

Remember that the firewall on the VMs may also be an issue. Just for the period of the environment creation I often disable it.
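If you take that approach, the commands below (run inside each VM from an elevated prompt) turn the Windows firewall off for all profiles and back on again afterwards; treat it strictly as a temporary measure.

# Disable the firewall for all profiles just for the duration of the environment creation...
netsh advfirewall set allprofiles state off

# ...then re-enable it as soon as the new environment has verified
netsh advfirewall set allprofiles state on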

Also, Wireshark is your friend; it will show whether the machine that is responding is the one you really want.

Lab Management with SCVMM 2012 and /labenvironmentplacementpolicy:aggressive

I did a post a year or so ago about setting up TFS Labs and mentioned the command

C:\Program Files\Microsoft Team Foundation Server 2010\Tools>tfsconfig lab /hostgroup /collectionName:myTpc /labenvironmentplacementpolicy:aggressive /edit /Name:"My hosts group"

This can be used to tell TFS Lab Management to place VMs using any memory that is assigned to stopped environments, allowing a degree of over-commitment of resources.

As I discovered today, this command only works for SCVMM 2010 based systems. If you try it, you just get a message saying it is not supported on SCVMM 2012. There appears to be no equivalent for 2012.

However, you can use features such as dynamic memory within SCVMM 2012, so all is not lost.
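For example, something along these lines enables dynamic memory on an individual VM. It is only a sketch: it assumes Set-SCVirtualMachine in your VMM 2012 installation exposes the dynamic memory parameters shown (check Get-Help Set-SCVirtualMachine) and uses a made-up VM name and memory values.

Import-Module virtualmachinemanager

# 'MyLabVM' and the memory figures below are placeholders
$vm = Get-SCVirtualMachine -Name "MyLabVM"

# The VM must be stopped before its memory configuration can be changed
Stop-SCVirtualMachine -VM $vm

# Assumed parameter names - verify against your VMM version
Set-SCVirtualMachine -VM $vm -DynamicMemoryEnabled $true -MemoryMB 512 -DynamicMemoryMaximumMB 2048

Start-SCVirtualMachine -VM $vm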

TF900548 when using my Typemock 2012 TFS custom build activity

Using the Typemock TFS 2012 build activity I created, I had started seeing the error

TF900548: An error occurred publishing the Visual Studio test results. Details: 'The following id must have a positive value: testRunId.'

I thought it might be down to having patched our build boxes to TFS 2012 Update 1; maybe the activity needed to be rebuilt due to some dependency? However, on trying the build activity on my development TFS server I found it ran fine.

I made sure I had the same custom assemblies, Typemock autorun folder and build definition on both systems; I did, so it was not that.

Next I tried running the build but targeting an agent not on the same VM as the build controller. This worked, so it seemed I had a build controller issue. I ran Windows Update to make sure the OS was patched up to date; it applied a few patches and rebooted, and all was OK, my tests ran again.

It does seem that for many build issues the standard ‘switch it off and back on again’ does the job.

TFS TPC Databases and SQL 2012 availability groups

It is worth noting that when you create a new TPC in TFS 2012, even when the TFS configuration DB and other TPC DBs are in a SQL 2012 availability group, the new TPC DB is not placed in this or any other availability group. You have to add it manually, and historically you also had to remove it when servicing TFS. However, the need to remove it for servicing changes with TFS 2012.2, which allows servicing of high availability DBs.
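A sketch of adding a newly created TPC DB to an existing availability group using the SQL 2012 PowerShell cmdlets. The server, availability group and database names are made up, and each secondary replica still needs the usual backup/restore preparation before the database can be joined there.

Import-Module SQLPS -DisableNameChecking

# Placeholders: primary replica SQLPRIMARY (default instance), availability group 'TfsAg',
# new TPC DB 'Tfs_NewCollection'. A full backup of the new DB must exist before it can be added.
Add-SqlAvailabilityDatabase -Path "SQLSERVER:\SQL\SQLPRIMARY\DEFAULT\AvailabilityGroups\TfsAg" -Database "Tfs_NewCollection"

# On each secondary replica, restore the DB (and log) WITH NORECOVERY and then join it:
# Add-SqlAvailabilityDatabase -Path "SQLSERVER:\SQL\SQLSECONDARY\DEFAULT\AvailabilityGroups\TfsAg" -Database "Tfs_NewCollection"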

Recovering network isolated lab management environments if you have to recreate your SC-VMM server’s DB

Whilst upgrading our Lab Management system we lost the SC-VMM DB. This meant we needed to recreate environments that were already running on Hyper-V hosts but were unknown to TFS. If they are not network isolated this is straightforward: just recompose the environment (after clearing out the XML in the VMs’ description fields). However, if they are network isolated and running, then you have to play around a bit.

This is the simplest method I have found thus far. I am interested to hear if you have a better way.

  • In SC-VMM (or via PowerShell) find all the VMs in your environment. They are going to have names in the form Lab_[GUID]. If you look at the properties of the VMs, in the description field you can see the XML that defines the lab they belong to.

[screenshot: VM properties showing the Lab Management XML in the description field]

If you are not sure which VMs you need, you can of course cross-reference the internal machine names with the AD within the network isolated environment; remember this environment is running, so you can log in to it.
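A sketch of this first step using the VMM 2012 PowerShell module, run on the SC-VMM server or a machine with the VMM console installed; the VMM server name is a placeholder.

Import-Module virtualmachinemanager

# Connect to the VMM management server ('MyVmmServer' is a placeholder)
Get-SCVMMServer -ComputerName "MyVmmServer" | Out-Null

# List the Lab Management owned VMs and the environment XML held in their description fields
Get-SCVirtualMachine | Where-Object { $_.Name -like "Lab_*" } | Select-Object Name, Description | Format-List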

  • Via SC-VMM, shut down each VM
  • Via SC-VMM, store each VM in the library
  • Wait a while…
  • When all the VMs have been stored, navigate to them in SC-VMM. For each one in turn, open the properties and:
    • CHECK THE DESCRIPTION XML TO MAKE SURE YOU HAVE THE RIGHT VM AND KNOW ITS ROLE
    • Change the name to something sensible (not essential if you like GUIDs in environment member names, but I think it helps), e.g. change Lab_[GUID] to ‘My Test DC’
    • Delete all the XML in the Description field
    • In the hardware configuration, delete the ‘legacy network’ and connect the ‘Network adaptor’ to your main network – this will all be recreated when you create the new lab (a PowerShell sketch of the rename and description clean-up follows the note below)

[screenshot: VM hardware configuration showing the network adaptor settings]

Note that a DC will not have any connections to your main network as it is network isolated. For the purpose of this migration it DOES need to be reconnected. Again, this will be restored by the tooling when you create the new environment.
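Once the VMs have been stored in the library, the rename and description clean-up can also be scripted. The sketch below assumes Set-SCVirtualMachine in VMM 2012 accepts the -Name and -Description parameters shown and uses a made-up VM name, so treat it as a starting point rather than a recipe; the network adaptor changes are easiest done in the SC-VMM console as described above.

# 'Lab_11111111-...' is a placeholder for the GUID-based name of one of the stored VMs
$vm = Get-SCVirtualMachine -Name "Lab_11111111-2222-3333-4444-555555555555"

# CHECK THE DESCRIPTION XML FIRST so you know which role this VM plays
$vm.Description

# Give the stored VM a sensible name and remove the old Lab Management XML
# (assumed parameters - verify with Get-Help Set-SCVirtualMachine)
Set-SCVirtualMachine -VM $vm -Name "My Test DC" -Description ""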

  • When all have been updated in SC-VMM, open MTM and import the stored VMs into the team project
  • You can now create a new environment using these stored VMs. It should deploy OK, but I have found you might need to restart it before all the test agents connect correctly
  • And that should be it; the environment is known to TFS Lab Management and is running network isolated

You might want to delete the stored VMs once you have the environment running, but this will be down to your policies; they are not needed, as you can store the environment as a whole to archive it or duplicate it with network isolation.