But it works on my PC!

The random thoughts of Richard Fennell on technology and software development

Running an external command line tool as part of a Wix install

I have recently been battling running a command line tool within a Wix 3.6 installer. I eventually got it going but learnt a few things. Here is a code fragment that outlines the solution.

<Product ………>
……… loads of other Wix bits

<!-- The command line we wish to run is set via a property. Usually you would set this with a <Property /> block, but in this case it has to be done via a
     CustomAction, as we want to build the command from other Wix properties that can only be evaluated at runtime. So we set the
     whole command line, including the command line arguments, in a CustomAction that is run immediately, i.e. in the first phase of the MSIExec process.

     Note that the documentation says the command line should go in a property called QtExecCmdLine, but this is only true if the CustomAction
     is to be run immediately. Here the CustomAction that sets the property is immediate, but the CustomAction that executes the command line is deferred, so we
     have to set a property with the same name as the execution CustomAction, not QtExecCmdLine -->

<CustomAction Id='PropertyAssign' Property='SilentLaunch' Value='&quot;[INSTALLDIR]mycopier.exe&quot; &quot;[ProgramFilesFolder]Java&quot; &quot;[INSTALLDIR]my.jar&quot;' Execute='immediate' />

  <!-- Next we define the actual CustomAction that does the work. This needs to be deferred (until after the files are installed) and set to not be impersonated,
       so it runs as the same elevated account as the rest of the MSIExec actions (assuming your command line tool needs admin rights) -->
  <CustomAction Id="SilentLaunch" BinaryKey="WixCA"  DllEntry="CAQuietExec" Execute="deferred" Return="check" Impersonate="no" />
 
  <!-- Finally we set where in the install sequence the CustomActions run, and that they are only called on a new install.
       Note that we don't tidy up the actions of this command line tool on an uninstall -->
  <InstallExecuteSequence>
   <Custom Action="PropertyAssign" Before="SilentLaunch">NOT Installed </Custom>
   <Custom Action="SilentLaunch" After="InstallFiles">NOT Installed </Custom>
  </InstallExecuteSequence>

 
</Product>

So the usual set of non-obvious Wix steps, but we got there in the end.

More on rights being stripped from the [team project]\contributors group in TFS 2012 when QU1 is applied, and how to sort it.

I recently wrote a post that discussed how contributor rights had been stripped from areas on our TFS 2012 server when QU1 was applied; it included details of the patches to apply and the manual steps to resolve the problem.

Well, today I found that it is not just in area security that you see this problem. We found it in the main source code repository too; again the [Team project]\contributors group was completely missing and I had to re-add it manually. Once this was done all was OK for the users.

image

FYI: You might ask how I missed this before. Most of the users on this project had higher levels of rights granted by being members of other groups, so it was not until someone was re-assigned between teams that we noticed.

Getting Windows Phone 7.8 on my Lumia 800

Microsoft have released Windows Phone 7.8 in the last few days. As usual the rollout appears to be phased, I think based on the serial number of your phone. As with previous versions you can force the update, so jumping the phased rollout queue. The process is:

  1. Put the phone in flight mode (so no data connection)
  2. Connect it to your PC running Zune; it will look to see if there is an OS update. If it finds one, great, let it do the upgrade
  3. If it does not find it, select the settings menu (top right in Zune)
  4. You need to select the update menu option on the left menu
  5. Zune will check for an update. About a second or two after it starts this process, disconnect the PC from the Internet. This should allow Zune to get the list of updates, but not the filter list of serial numbers, so it assumes the update is for you.
  6. You should get the update available message, reconnect the internet (it needs to download the file) and continue to do the upgrade

You will probably have to repeat step 5 a few times to get the timing correct.

I also had to repeat the whole process three times, for three different firmware and OS updates, before I ended up with 7.8.

image

But now I have multi size tiles and the new lock screen.

Or if you don’t fancy all the hassle you could just wait a few days.

Fixing area permission issues when creating new teams in TFS 2012 after QU1 has been installed

[Updated 4 Feb 2013: See http://blogs.msdn.com/b/bharry/archive/2013/02/01/hotfixes-for-tfs-2012-update-1-tfs-2012-1.aspx for the latest on this]

One of the side effects of the problems we had with TFS 2012 QU1 was that when we created a new team within a team project, contributors had no rights to the team’s default Area. The workaround was to add these rights manually; remembering to do this every time is, as you would expect, easy to forget, so it would be nice to fix the default.

The solution, it turns out, is straightforward: any new team gets its area rights inherited from the default team/root of the team project.

  1. Open the TFS web based control panel 
  2. Select the Team Project Collection
  3. Select the Team Project
  4. Select the ‘Areas’
  5. Select the root node (has the same name as the Team Project)
  6. Using the drop down menu to the left of the checkbox, select security
  7. Add the Contributor TFS Group and grant it the following rights

image

These settings will be used as the template for any new teams created within the Team Project.
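If you have a lot of team projects to fix up, the same kind of grant can in principle be scripted with the TFSSecurity.exe tool that ships with TFS, rather than clicking through the web UI. This is only a sketch: the collection URL, project name and area node GUID below are placeholders, and you should run TFSSecurity /a first to confirm the exact action names available in the CSS (areas) namespace on your server before trying anything like this.

```shell
REM Sketch only - grant the Contributors group work item read/write on an
REM area node. The GUID, group name and collection URL are placeholders;
REM look up the real area node GUID on your server first.
cd "C:\Program Files\Microsoft Team Foundation Server 11.0\Tools"
TFSSecurity /a+ CSS "vstfs:///Classification/Node/00000000-0000-0000-0000-000000000000" WORK_ITEM_READ n:"[MyProject]\Contributors" ALLOW /collection:http://tfs2012:8080/tfs/DefaultCollection
TFSSecurity /a+ CSS "vstfs:///Classification/Node/00000000-0000-0000-0000-000000000000" WORK_ITEM_WRITE n:"[MyProject]\Contributors" ALLOW /collection:http://tfs2012:8080/tfs/DefaultCollection
```

As the web UI sets the defaults used as the template for new teams, the manual route above is usually all you need; the scripted form only really pays off across many projects.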

My session today at Modern Jago

Thanks to everyone who came along to the Microsoft event today at Modern Jago. I hope you all found it useful. I got feedback from a few people that my tip on not trusting company WiFi when trying to do remote debugging of Windows RT devices (or any other type of device for that matter) was useful.

I have seen too many corporate level WiFi implementations, and a surprising number of home ADSL/WiFi routers, doing isolation between WiFi clients. So each client can see the internet fine, but not any other WiFi devices. My usual solution is, as I did today, to use a MiFi or phone as a basic WiFi hub; they are both too dumb to try anything as complex as client isolation. Or look on your WiFi hub to check if you can disable client isolation.

More on HDD2 boot problems with my Crucial M4-mSATA

I have been battling my Crucial M4-mSATA 256GB SSD for a while now. The drive seems OK most of the time, but if for any reason my PC crashes (i.e. a blue screen, which I have found is luckily rare on Windows 8) the PC will not start up, giving a ‘HDD2 cannot be found’ error during POST.

I had not had this problem for a few months, so thought it was fixed, then BANG, yesterday Windows crashed out of the blue (I was writing a document in Word whilst listening to music, not exactly a huge load for a Core i7) and I hit the start-up problem. Of course I had been working on the document all afternoon and was relying on auto-save, not doing a real Ctrl+S save to a remote network drive, so I expected to have lost everything.

A few attempts at a reboot, using tricks that had worked in the past, got me nowhere. After a bit more digging in forums I found this new process suggested as a ‘fix’ from Crucial:

  1. Plug the system into the mains, then start it. You will get the disk not found error; go into the BIOS settings
  2. Leave the PC running, but doing nothing, for 20 minutes. As you are in the BIOS there will be no activity for the SSD; this gives it a chance to do a self test and sort itself out.
  3. Switch off the system, unplug it from the mains and pull the battery out for 30 seconds
  4. Plug the system back in and hopefully it will restart without error
  5. If not, repeat steps 1–4 until you have had enough.

Well this process got me going, and it does sort of fit with the procedures I had tried before; they all gave the SSD time to self test after a crash. However, I really needed a better fix; this is my main PC and it needs to be reliable. So I checked to see if there were any new firmware releases from Crucial, and it seems there is. I had 04MF and now there is 04MH. Version 04MH includes the following changes:

  • Improved robustness in the event of an unexpected power loss. Significantly reduces the incidence of long reboot times after an unexpected power loss.
  • Corrected minor status reporting error during SMART Drive Self Test execution (does not affect SMART attribute data).
  • Streamlined firmware update command for smoother operation in Windows 8.
  • Improved wear leveling algorithms to improve data throughput when foreground wear leveling is required.

So well worth a try it would seem. The only issue is my SSD is BitLockered; was this going to be a problem? It takes ages to remove BitLocker and reapply it.

Well, I thought I would risk the update without changing BitLocker (as I had now got the important data off the SSD). So I:

  1. Downloaded the Windows 8 firmware tool and current release from Crucial.
  2. Ran it, it warned about backups, and BIOS encryption (which had me a bit worried, but what the hell!)
  3. Accepted the license
  5. Selected my SSD and told it to upgrade
  6. And waited……..
  7. And waited…….. The issue is that the tool does not really give you much indication that you actually hit the update button, and disk activity is also very patchy. Basically the PC looks to have hung.
  8. However, after about 5 minutes the application came back, tried to run again (as I had pressed update twice) and promptly crashed. It had, however, done the upgrade.
  9. I re-ran the tool and it told me the drive was now at 04MH

I rebooted the PC and all seemed OK, but only time will tell.

TF237111 errors when trying to add work items to the backlog after TFS 2012 QU1 is applied

[Updated 4 Feb 2013 See http://blogs.msdn.com/b/bharry/archive/2013/02/01/hotfixes-for-tfs-2012-update-1-tfs-2012-1.aspx for the latest on this ]

I posted earlier in the week about my experiences with the post TFS 2012 QU1 hotfix. When I posted I thought we had all our problems sorted; we did for new team projects, but it seems we still had an issue for teams on team projects that were created prior to the upgrade from RTM to QU1. As I said in the past post, we got into this position by trying to upgrade a TPC from RTM to QU1 by detaching it from the 2012 RTM server and attaching it to a 2012 QU1 server; this is not the recommended route and caused us to suffer the problem the KB2795609 patch addresses.

The problem we still had was as follows:

  • I have two users in a Team Project called ‘BM’ who are in a team called ‘Bad TP’
    • Richard (the Team project creator and administrator)
    • Fred (a Team Project contributor)
  • All is fine for Richard, he can see the team’s product backlog and add items to it.
  • Fred can get to the team backlog page in the web client, but cannot see any work items and gets a TF237111 error if he tries to add a new work item

image

  • The quick fix was to make Fred a team project administrator, but that was not a long term solution
  • We checked the following rights
    • Richard was a member of basically all the groups on the ‘BM’ team project (he was the creator so that was expected); the important ones were [BM]\Project Administrators, [BM]\Contributors and the ‘Bad TP’ team
    • Fred was a member of the [BM]\Contributors group and the ‘Bad TP’ team

clip_image001[6]

    • The ‘Bad TP’ team had the following permissions

clip_image001

So all these permissions looked OK, as you would expect. What I had forgotten was that the team model in TFS 2012 is built around the Areas hierarchy, which has security permissions too. To check this:

  • Go to the Admin page for ‘Bad TP’
  • Click the “Areas” tab
  • Right click the “default area” for the team and select “security”
  • We had expected to see something like this

image

  • However, there was no entry at all for the Contributors group.
  • I added this in and had to explicitly set the four ‘inherited allow’ permissions to ‘allow’, and everything started to work.

So the problem was that during the problematic upgrade we had managed to strip all the contributor group entries from the areas in the existing Team Project. The clue was actually in the TF237111 error, as this does mention permissions on the area path.

So now we know how to fix the issue. It should be noted that any new teams created in the team project do not seem to get this right applied, so we have to remember to add it when we create a new team.

Incorrect IIS IP Bindings and TFS Server Url

By default the TFS server uses http://localhost:8080/tfs as its Server URL; this is the URL used for internal communication, whereas the Notification URL is the one TFS tells clients to communicate with it via. Both these URLs can be changed via the Team Foundation Server Console, but I find you do not usually need to change the Server URL, only the notification one.

image

I hit a problem recently on a site where if you tried to edit the Team Project Collection Group Membership (via the web or TFS admin console) you got a dialog popping up saying ‘HTTP 400 error’. Now this, you have to say, looks like a URL/binding issue: the tools cannot find an endpoint.

Turns out the issue was that there had been an IP addressing schema change on the network. The different services on the network had been assigned their own IP addresses (as well as the host having its own IP address), e.g. on our TFS server we might have:

  • 10.0.0.1 – physicalservername.domain.com
  • 10.0.1.1 – tfs2012.domain.com
  • 10.0.1.2 – sharepoint.domain.com

This is all well and good, but a mistake had been made in the IIS bindings during the reconfiguration.

image

The HTTPS binding was correct: the hostname matched the IP address, which has to be the case else SSL does not work. However, the HTTP port 8080 binding should have been bound to all IP addresses (i.e. no hostname and the * IP address, as above). On this site, HTTP was bound to a specific IP address. This was fine if a client connected to http://tfs2012.domain.com:8080/tfs (which resolved to the correct address), but failed for http://localhost:8080/tfs as the binding did not match.

Once the edit was made to remove the hostname all was OK (the other option would have been to alter the Server URL to match).
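For reference, the same binding change can be made from the command line with the IIS appcmd tool rather than the IIS Manager UI. This is a sketch only: the site name "Team Foundation Server", the IP address and the hostname below are assumptions based on the examples in this post, and since appcmd's /bindings switch replaces the whole binding list, list the site first and carry your real HTTPS binding over exactly.

```shell
REM List the sites so you can see the real site names and current bindings
%windir%\system32\inetsrv\appcmd list site

REM Rebind HTTP 8080 to all IP addresses (no hostname, * address) while
REM keeping the host-specific HTTPS binding. The site name, IP address
REM and hostname here are examples, not values to copy blindly.
%windir%\system32\inetsrv\appcmd set site "Team Foundation Server" /bindings:"http/*:8080:,https/10.0.1.1:443:tfs2012.domain.com"
```

After a change like this, check both http://localhost:8080/tfs and the public URL respond before letting users back on.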

So problem fixed. The strangest thing is that this issue only appeared to affect setting TPC group membership; everything else was fine.

Experiences applying TFS 2012 QU1 and its subsequent hotfix

Brian Harry posted last week about a hotfix for TFS 2012 QU1 (KB2795609). This should not be needed by most people but, as his post points out, does fix issues for a few customers. Well, we were one of those customers. When upgrading from 2012 RTM to 2012 QU1 we had attempted what, with hindsight, was an over-ambitious hardware migration too. This involved swapping our data tier from a SQL 2012 instance to a new SQL 2012 availability group and merging team project collections from different servers, as well as applying QU1. Our migration plan contained some team project collection detach/attach steps, hence getting into the area this hotfix addresses.

The end result was a QU1 upgraded server, but we could only get users connected if we made them team project administrators; a valid short term solution, but something we needed to fix.

We therefore applied the new KB2795609 patch, but hit a gotcha that you should be aware of:

  • We ran the patch EXE on our TFS server that was showing the problem.
  • This ran without error, taking about 5 minutes
  • We tried to connect to the patched TFS server via the web client and VS2012; we could make a connection to TFS but could not open any TPCs
  • On checking the TFS admin console we saw the TPC was offline and reporting that the servicing had failed (but this had not been reported back via the patch tool)
  • We reran the servicing job (via the TFS admin console) but it failed in the core step; we saw this in the logs:

[Error] TF400744: An error occurred while executing the following script: TurnOnRCSI.sql. Failed batch starts on the line 1. Statement line: 1. Script line: 1. Error: 5069 ALTER DATABASE statement failed.

  • Our TFS DBs are now stored within a SQL 2012 availability group, and during the upgrade to QU1 we had seen problems applying the upgrade unless we removed the DBs from the availability group. So we removed tfs_configuration and tfs_[mytpc] from the availability group, re-applied the servicing job and all was OK
  • Once the servicing of the TPC was completed it went online as expected.
  • We then put the DBs back into the availability group
  • We could then remove the users from the team project administrators group as their previous rights were working again.

So we now had a patched and working TFS 2012 QU1 server. Let’s hope that QU2 is a little smoother and we don’t need the direct help of the product group, who I must say have been great in getting this problem addressed. I really like the openness we see in Brian’s blog of both the good and the bad.

Why can’t I create an environment using a running VM on my Lab Management system?

With TFS Lab Management you can build environments from VMs and VM templates stored in an SCVMM library, or from VMs running on a Hyper-V host within your lab infrastructure. This second form is what used to be called composing an environment in TFS 2010. Recently, when I tried to compose an environment, I had a problem: after selecting the running VM inside the new environment wizard I got the red star that shows an error in the machine properties.

image

Now I would only expect to see this when creating an environment from a VM template, as a red star usually means the OS profile is not set, e.g. you have missed a product key or passwords don’t match. However, this was a running VM so there were no settings I could make, and no obvious way to diagnose the problem. After a few emails with the Microsoft Lab Management team we got to the bottom of the problem; it was all down to the Hyper-V hosts’ network connections. But that is rushing ahead; first let’s see why it was a confusing problem.

First the red herring

We now know the issue was the Hyper-V host network, but at first it looked like I could compose some guest VMs but not others. I wrongly assumed the issue was some bad meta-data or corrupt settings within the VMs. This problem all started after a server crash, so we were fearing corruption, which clouded our thoughts.

The actual reason some VMs could be composed and some could not was which Hyper-V host they were running on, not the VMs themselves.

The diagnostic steps

To get to the root of this issue a few commands and tools were used. Don’t think for a second there was not a lot of random jumping about and trial and error; in this post I am just going to point out what was helpful.

Firstly you need to use the TFSConfig command on your TFS server to find out your network location setting. So run:

C:\Program Files\Microsoft Team Foundation Server 11.0\Tools>tfsconfig lab /settings /list
SCVMM Server Name: vmm.blackmarble.co.uk
Network Location: VSLM Network Location
IP Block: 192.168.23.0/24
DNS Suffix: blackmarble.co.uk

Next you need to see which, if any, of your Hyper-V hosts are connected to this location. You can do this in a few graphical ways in SCVMM (and I am sure via PowerShell too).

If you select a Hyper-V host in SCVMM, right click and select View networking. On a healthy host you see the VSLM network location connected to the external network adapter the VMs are using.

image

On my failing Hyper-V host the VSLM network was connected to an empty network port.

image

You can also see this via SCVMM > host (right click) > properties. If you look on the networking tab, for the main virtual network you should see the VSLM network as the location. On the failing Hyper-V host this location was empty.

image
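The same check can also be scripted rather than clicked through per host. As a rough PowerShell sketch (the cmdlet and property names below are my assumptions based on the SCVMM 2012 "virtualmachinemanager" module, so verify them against your module version before relying on this):

```shell
# Sketch only: list the network location SCVMM has recorded for each
# Hyper-V host's network adapters, so a host with an empty location
# stands out. Cmdlet/property names are assumptions to verify.
Import-Module virtualmachinemanager
Get-SCVMMServer -ComputerName vmm.blackmarble.co.uk | Out-Null
foreach ($vmhost in Get-SCVMHost) {
    foreach ($nic in Get-SCVMHostNetworkAdapter -VMHost $vmhost) {
        "{0} : {1} : {2}" -f $vmhost.Name, $nic.Name, $nic.NetworkLocation
    }
}
```

A host whose adapter reports an empty network location, rather than the VSLM Network Location shown by tfsconfig, is the one to fix.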

The solution

You would naively think that selecting the edit option on the screenshot above would allow you to enter the VSLM Network as the location, but no, not on that tab. You need to select the hardware tab.

image

You can then select the correct network adapter and override the discovered network location to point to the VSLM Network Location. Once this was done I could compose environments as I would expect.

I have said it before, but Lab Management has a lot of moving parts, and they all must be set up right else nothing works. A small configuration error can seriously ruin your day.