But it works on my PC!

The random thoughts of Richard Fennell on technology and software development

Cannot run Microsoft Fakes based test if Typemock Isolator enabled

With Microsoft Fakes moving to the Premium SKU of Visual Studio in 2012.2 (CTP4 is now available), more people will be looking at using it.

I have just installed CTP4 and have seen a behaviour I don’t think I saw in the previous version of Visual Studio (I need to check, because as well as CTP4 I have recently installed the new version of Typemock Isolator, 7.3.0, which addresses issues with Windows 8 and Visual Studio 2012).

Anyway, the error you see when you run a Fakes-based test is ‘UnitTestIsolation instrumentation failed to initialize. Please restart Visual Studio and rerun this test’


The solution is to disable Typemock Isolator (menu Typemock > Suspend Mocking). When this is done, without a reboot, the Fakes-based tests run.

This does mean you can’t have a solution using both Fakes and Isolator, but why would you want to?

TFS TPC Databases and SQL 2012 availability groups

Worth noting: when you create a new team project collection (TPC) in TFS 2012, and the TFS configuration DB and other TPC DBs are in SQL 2012 availability groups, the new TPC DB is not placed in this or any other availability group. You have to add it manually, and historically you also had to remove it again when servicing TFS. The need to remove it for servicing changes with TFS 2012.2, which allows servicing of high-availability DBs.
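For reference, adding the new TPC database to an existing availability group can be done with T-SQL along these lines. This is only a sketch: the availability group and database names below are hypothetical, and the database must first be fully backed up on the primary and restored WITH NORECOVERY on each secondary.

```sql
-- On the primary replica: join the new TPC database to the availability group
-- (assumes a full backup and log backup have been taken, and restored
-- WITH NORECOVERY on each secondary replica)
ALTER AVAILABILITY GROUP [TfsAvailabilityGroup]
ADD DATABASE [Tfs_MyCollection];

-- On each secondary replica: start synchronising the restored copy
ALTER DATABASE [Tfs_MyCollection]
SET HADR AVAILABILITY GROUP = [TfsAvailabilityGroup];
```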

Recovering network isolated lab management environments if you have to recreate your SC-VMM server’s DB

Whilst upgrading our Lab Management system we lost the SC-VMM DB. This meant we needed to recreate environments we already had running on Hyper-V hosts but which were unknown to TFS. If they are not network isolated this is straightforward: just recompose the environment (after clearing out the XML in the VM description fields). However, if they are network isolated and running, then you have to play around a bit.

This is the simplest method I have found thus far. I am interested to hear if you have a better way.

  • In SC-VMM (or via PowerShell) find all the VMs in your environment. They are going to have names of the form Lab_[GUID]. If you look at the properties of the VMs, in the description field you can see the XML that defines the lab they belong to.


If you are not sure which VMs you need, you can of course cross-reference the internal machine names with the AD within the network isolated environment. Remember, this environment is running, so you can log in to it.
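The discovery step can also be scripted. A minimal sketch using the SC-VMM 2012 PowerShell cmdlets (the server name here is hypothetical, and I am assuming the VMM console and its PowerShell module are installed):

```powershell
# Connect to the VMM server, then list the lab VMs (names of the form
# Lab_[GUID]) along with the environment XML held in each description field
Get-SCVMMServer -ComputerName "myvmmserver" | Out-Null
Get-SCVirtualMachine |
    Where-Object { $_.Name -like "Lab_*" } |
    Select-Object Name, Description
```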

  • Via SC-VMM, shut down each VM
  • Via SC-VMM, store each VM in the library
  • Wait a while…….
  • When all the VMs have been stored, navigate to them in SC-VMM. For each one in turn, open the properties and:
    • CHECK THE DESCRIPTION XML TO MAKE SURE YOU HAVE THE RIGHT VM AND KNOW ITS ROLE
    • Change the name to something sensible (not essential if you like GUIDs in environment member names, but I think it helps) e.g. change Lab_[guid] to ‘My Test DC’
    • Delete all the XML in the Description field
    • In the hardware configuration, delete the ‘legacy network’ and connect the ‘Network adaptor’ to your main network – this will all be recreated when you create the new lab


Note that a DC will not have any connections to your main network, as it is network isolated. For the purpose of this migration it DOES need to be reconnected. Again, this will be sorted out by the tooling when you create the new environment.

  • When all the VMs have been updated in SC-VMM, open MTM and import the stored VMs into the team project
  • You can now create a new environment using these stored VMs. It should deploy OK, but I have found you might need to restart it before all the test agents connect correctly
  • And that should be it: the environment is known to TFS Lab Management and is running network isolated

You might want to delete the stored VMs once you have the environment running, but this will be down to your policies. They are not needed, as you can store the environment as a whole to archive it or duplicate it with network isolation.

Fix for ‘Cannot install test agent on these machines because another environment is being created using the same machines’

We have recently been upgrading our TFS 2012 QU1 Lab Management system from SCVMM 2008 to SCVMM 2012 SP1. This has not been the nicest experience; we are preparing at least one joint blog post from Rik, Rob and myself on the joys. Yes, it did take all of us, a lovely cross-skill problem.

We are now in the process of sorting out the state of running environments transferred between the systems. I thought I would start with an easy one, a single non-domain joined VM.

So after the upgrade this VM appeared as a running VM on one of the Hyper-V hosts, and I could add it to a new environment, which I did. All appeared OK; the environment was created, but Lab Management could not upgrade the test agents. If you tried the ‘reinstall agents’ option in Test Manager you got the error ‘Cannot install test agent on these machines because another environment is being created using the same machines’


All very confusing and a hunt on the web found nothing on this error.

The fix I found (there might be another way) was to:

  1. Connect to the VM via MTM
  2. Uninstall the TFS 2012 RTM Test Agent.
  3. Install the TFS 2012 QU1 test agent from the Test Agents ISO (which I had unpacked to a network share so I could run the vstf_testagent.exe easily)
  4. When the install was complete the configuration wizard ran; I set the agent to run as the local administrator and pointed it at one of our test controllers


Once the Test Agent configuration completed and the agent restarted it connected to the test controller and the environment reported itself as being ready.

So one (simple) environment down, plenty more to go

Running an external command line tool as part of a Wix install

I have recently been battling running a command line tool within a Wix 3.6 installer. I eventually got it going but learnt a few things. Here is a code fragment that outlines the solution.

<Product ………>
……… loads of other Wix bits

<!-- The command line we wish to run is set via a property. Usually you would set this with a <Property /> block, but in this case it has to be done
     via a CustomAction, as we want to build the command from other Wix properties that can only be evaluated at runtime. So we set the whole
     command line, including the command line arguments, in a CustomAction that runs immediately i.e. in the first phase of the MSIExec process
     while the install script is being built.

     Note that the documentation says the command line should go in a property called QtExecCmdLine, but this is only true if the CustomAction
     is to be run immediately. Here the CustomAction that sets the property is immediate, but the CustomAction that executes the command line is
     deferred, so we have to set the property to the name of the executing CustomAction and not QtExecCmdLine -->

<CustomAction Id='PropertyAssign' Property='SilentLaunch' Value='&quot;[INSTALLDIR]mycopier.exe&quot; &quot;[ProgramFilesFolder]Java&quot; &quot;[INSTALLDIR]my.jar&quot;' Execute='immediate' />

  <!-- Next we define the actual CustomAction that does the work. This needs to be deferred (until after the files are installed) and set to not be
       impersonated, so it runs as the same elevated account as the rest of the MSIExec actions (assuming your command line tool needs admin rights) -->
  <CustomAction Id="SilentLaunch" BinaryKey="WixCA"  DllEntry="CAQuietExec" Execute="deferred" Return="check" Impersonate="no" />
 
  <!-- Finally we set where in the install sequence the CustomActions run, and that they are only called on a new install.
       Note that we don't tidy up the actions of this command line tool on a de-install -->
  <InstallExecuteSequence>
   <Custom Action="PropertyAssign" Before="SilentLaunch">NOT Installed </Custom>
   <Custom Action="SilentLaunch" After="InstallFiles">NOT Installed </Custom>
  </InstallExecuteSequence>

 
</Product>

So the usual set of non-obvious Wix steps, but we got there in the end
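One thing worth remembering: CAQuietExec lives in the WiX util extension, so the build has to reference that extension or the WixCA binary will not be found. Something along these lines (the file names here are hypothetical):

```shell
candle.exe Product.wxs -ext WixUtilExtension
light.exe Product.wixobj -ext WixUtilExtension -out MySetup.msi
```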

More on rights being stripped from the [team project]\Contributors group in TFS 2012 when QU1 is applied, and how to sort it

I recently wrote a post that discussed how contributor rights had been stripped from areas on a TFS 2012 server when QU1 was applied; this included details of the patches to apply and the manual steps to resolve the problem.

Well, today I found that it is not just in area security that you can see this problem. We found it too in the main source code repository. Again, the [Team project]\Contributors group was completely missing and I had to re-add it manually. Once this was done all was OK for the users.


FYI: You might ask how I missed this before. Most of the users on this project had higher levels of rights granted by being members of other groups; it was not until someone was reassigned between teams that we noticed.

Getting Windows Phone 7.8 on my Lumia 800

Microsoft have released Windows Phone 7.8 in the last few days. As usual the rollout appears to be phased, I think based on the serial number of your phone. As with previous versions you can force the update, so jumping the phased rollout queue. The process is:

  1. Put the phone in flight mode (so there is no data connection)
  2. Connect it to your PC running Zune; it will look to see if there is an OS update. If it finds one, great, let it do the upgrade
  3. If it does not find it, select the settings menu (top right in Zune)
  4. Select the update option in the left-hand menu
  5. Zune will check for an update; about a second or two after it starts this process, disconnect the PC from the Internet. This should allow Zune to get the list of updates, but not the filter list of serial numbers, so it assumes the update is for you
  6. You should get the update available message; reconnect the Internet (it needs to download the file) and continue to do the upgrade

You will probably have to repeat step 5 a few times to get the timing correct

I also had to repeat the whole process three times, for three different firmware and OS updates, before I ended up with 7.8.


But now I have multi-size tiles and the new lock screen.

Or if you don’t fancy all the hassle you could just wait a few days

Fixing area permission issues when creating new teams in TFS 2012 after QU1 has been installed

[Updated 4 Feb 2013: See http://blogs.msdn.com/b/bharry/archive/2013/02/01/hotfixes-for-tfs-2012-update-1-tfs-2012-1.aspx for the latest on this]

One of the side effects of the problems we had with TFS 2012 QU1 was that when we created a new team within a team project, contributors had no rights to the team’s default area. The workaround was to add these rights manually; remembering to do this is, as you would expect, something you forget all the time, so it would be nice to fix the default.

The solution, it turns out, is straightforward: any new team inherits the area rights from the default team/root of the team project. To set these rights:

  1. Open the TFS web based control panel 
  2. Select the Team Project Collection
  3. Select the Team Project
  4. Select ‘Areas’
  5. Select the root node (it has the same name as the Team Project)
  6. Using the drop-down menu to the left of the checkbox, select Security
  7. Add the Contributor TFS Group and grant it the following rights


These settings will be used as the template for any new teams created within the Team Project.

My session today at Modern Jago

Thanks to everyone who came along to the Microsoft event today at Modern Jago. I hope you all found it useful. I got feedback from a few people that my tip on not trusting company WiFi when trying to do remote debugging of Windows RT devices (or any other type of device, for that matter) was useful.

I have seen too many corporate-level WiFi implementations, and a surprising number of home ADSL/WiFi routers, doing isolation between WiFi clients. So each client can see the Internet fine, but not any other WiFi devices. My usual solution is, as I did today, to use a MiFi or a phone as a basic WiFi hub; they are both too dumb to try anything as complex as client isolation. Or look on your WiFi hub to check if you can disable client isolation.

More on HDD2 boot problems with my Crucial M4-mSATA

I have been battling my Crucial M4 mSATA 256GB SSD for a while now. The drive seems OK most of the time, but if for any reason my PC crashes (i.e. a blue screen, which I have found is luckily rare on Windows 8) the PC will not start up, giving a ‘HDD2 cannot be found’ error during POST.

I had not had this problem for a few months, so thought it was fixed; then BANG, yesterday Windows crashed out of the blue (I was writing a document in Word whilst listening to music, not exactly a huge load for a Core i7) and I hit the start-up problem. Of course I had been working on the document all afternoon and was relying on auto-save, not doing a real Ctrl+S save to a remote network drive, so I expected to have lost everything.

A few attempts at a reboot, using tricks that had worked in the past, got me nowhere. After a bit more digging in forums I found this new process suggested as a ‘fix’ from Crucial:

  1. Plug the system into the mains, then start it. You will get the disk not found error; go into the BIOS settings
  2. Leave the PC running, but doing nothing, for 20 minutes. As you are in the BIOS there will be no activity for the SSD; this gives it a chance to do a self-test and sort itself out
  3. Switch off the system, unplug it from the mains and pull the battery out for 30 seconds
  4. Plug the system back in and hopefully it will restart without error
  5. If not, repeat steps 1–4 until you have had enough

Well, this process got me going, and it does sort of fit with the procedures I had tried before; they all gave the SSD time to self-test after a crash. However, I really needed a better fix; this is my main PC and it needs to be reliable. So I checked to see if there were any new firmware releases from Crucial, and it seems there is: I had 04MF and now there is 04MH. Version 04MH includes the following changes:

  • Improved robustness in the event of an unexpected power loss. Significantly reduces the incidence of long reboot times after an unexpected power loss.
  • Corrected minor status reporting error during SMART Drive Self Test execution (does not affect SMART attribute data).
  • Streamlined firmware update command for smoother operation in Windows 8.
  • Improved wear leveling algorithms to improve data throughput when foreground wear leveling is required.

So well worth a try it would seem. The only issue is that my SSD is BitLockered; was this going to be a problem? It takes ages to remove and reapply BitLocker.

Well, I thought I would risk the update without changing BitLocker (as I had now got the important data off the SSD). So I:

  1. Downloaded the Windows 8 firmware tool and the current release from Crucial
  2. Ran it; it warned about backups and BIOS encryption (which had me a bit worried, but what the hell!)
  3. Accepted the license
  4. Selected my SSD and told it to upgrade
  5. And waited……..
  6. And waited…….. The issue is the tool does not really give you much indication that you actually hit the update button, and disk activity is also very patchy. Basically the PC looks to have hung
  7. However, after about 5 minutes the application came back, tried to run again (as I had pressed update twice) and promptly crashed. However, it had done the upgrade
  8. I re-ran the tool and it told me the drive was now at 04MH

I rebooted the PC and all seemed OK, but only time will tell.