But it works on my PC!

The random thoughts of Richard Fennell on technology and software development

Why am I getting ‘cannot access outlook.ost’ issues with Office 365 Lync?

We use O365 to provide Lync messaging, so when I rebuilt my PC I thought I needed to re-install the client; I logged into the O365 web site and selected the install option. This turned out to be a mistake: I had Office 2013 installed, so I already had the client, I just had not noticed.

If you do install the O365 Lync client as well as the Office 2013 one, you get file access errors reported against your outlook.ost file. If this occurs, just un-install the O365 client and use the one in Office 2013; the errors go away.

Errors running tests via TCM as part of a Release Management pipeline

Whilst getting integration tests running as part of a Release Management pipeline within Lab Management I hit a problem: TCM-triggered tests failed because the tool claimed it could not access the TFS build drops location, and no .TRX (test results) files were being produced. This was strange as it used to work (the RM system had worked when it was 2013.2; the issue seems to have started with 2013.3 and 2013.4, but this might be a coincidence).

The issue was twofold.

Permissions/Path Problems accessing the build drops location

The build drops location is passed into the component using the argument $(PackageLocation). This is pulled from the component properties; it is the TFS-provided build drop with a \ appended on the end.

[Image: the pipeline component properties, showing the \ in the build drop location textbox]

Note that the \ in the text box is there because the textbox cannot be empty; it tells the component to use the root of the drops location. This is the issue: when you are in a network isolated environment and have had to use NET USE to authenticate with the TFS drops share, the trailing \ causes a permissions error (it might occur in other scenarios too, I have not tested it).

Removing the slash, or adding a . (period) after the \, fixes the path issue, so:

  • \\server\Drops\Services.Release\Services.Release_1.0.227.19779 - works
  • \\server\Drops\Services.Release\Services.Release_1.0.227.19779\ - fails
  • \\server\Drops\Services.Release\Services.Release_1.0.227.19779\. - works

So the answer is either to add a . (period) in the pipeline workflow component, so the build location is $(PackageLocation). as opposed to $(PackageLocation), or to edit the .PS1 file that is run so it strips out any trailing characters. I chose the latter, making the following edit:

if ([string]::IsNullOrEmpty($BuildDirectory))
{
    $buildDirectoryParameter = [string]::Empty
}
else
{
    # make sure we remove any trailing slashes as they cause permission issues
    $BuildDirectory = $BuildDirectory.Trim()
    while ($BuildDirectory.EndsWith("\"))
    {
        $BuildDirectory = $BuildDirectory.Substring(0, $BuildDirectory.Length - 1)
    }
    $buildDirectoryParameter = "/builddir:""$BuildDirectory"""
}
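(If you prefer a one-liner, the same trimming could be done with $BuildDirectory = $BuildDirectory.Trim().TrimEnd('\'); the explicit loop just makes the intent obvious.)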
   

Cannot find the TRX file even though it is present

Once the tests were running I still had an issue: even though TCM had run the tests, produced a .TRX file and published its contents back to TFS, the script claimed the file did not exist and so could not pass the test results back to Release Management.

The issue was the call being used to check for the file's existence.

[System.IO.File]::Exists($testRunResultsTrxFileName)

As soon as I swapped to the recommended PowerShell way to check for files

Test-Path($testRunResultsTrxFileName)

it all worked.
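
So, as a minimal sketch (the file path here is purely illustrative; in the real script the value comes from the TCM run), the corrected check ends up looking something like this:

$testRunResultsTrxFileName = "C:\TestResults\IntegrationRun.trx" # illustrative path
if (Test-Path $testRunResultsTrxFileName)
{
    Write-Host "Found test results file $testRunResultsTrxFileName"
    # parse the .TRX contents and pass the results back to Release Management here
}
else
{
    Write-Error "Cannot find test results file $testRunResultsTrxFileName"
    exit 1
}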

‘Test run must be created with at least one test case’ error when using TCM

I have been setting up some integration tests as part of a release pipeline. I am using TCM.EXE to trigger the tests from the command line, something along the lines of:

TCM.exe run /create /title:"EventTests" /collection:"http://myserver:8080/tfs" /teamproject:myteamproject /testenvironment:"Integration" /builddir:"\\server\Drops\Build_1.0.226.1975" /include /planid:26989 /suiteid:27190 /configid:1

I kept getting the error

‘A test run must be created with at least one test case’

The strange thing was that my test suite did contain a number of tests, and they were marked as active.

The issue was actually the configid: it was wrong, and there is no easy way to check these from the UI. Use the following command to get a list of valid IDs:

TCM.exe configs /list /collection:"http://myserver:8080/tfs" /teamproject:myteamproject

Id        Name
--------- ----------------------------------------------------------------
35        Windows 8.1 ARM
36        Windows 8.1 64bit
37        Windows 8.1 ATOM
38        Default configuration created @ 11/03/2014 12:58:15
39        Windows Phone 8.1

You can now use the correct ID, rather than one you had to guess.
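
With a valid ID from that list (36 here, purely as an illustration; the right one depends on which configuration your test plan actually uses) the original run command can be re-issued:

TCM.exe run /create /title:"EventTests" /collection:"http://myserver:8080/tfs" /teamproject:myteamproject /testenvironment:"Integration" /builddir:"\\server\Drops\Build_1.0.226.1975" /include /planid:26989 /suiteid:27190 /configid:36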

Linking VSO to your Azure Subscription and Azure Active Directory

I have a few old Visual Studio Online (VSO) accounts (dating back to TFSPreview.com days) that we use to collaborate with third parties, and it was long overdue that I tidied them up. A problem historically has been that all access to VSO has been via Microsoft Accounts (LiveID, MSA); these are hard to police, especially if users mix personal and business ones.

The solution is to link your VSO instance to an Azure Active Directory (AAD). This means that only users listed in the AAD can connect to the VSO instance. As this AAD can be federated to an on-prem company AD it means that the VSO users can be either

  • Company domain users
  • MSA accounts specifically added to AAD

Either way it gives the AAD administrator an easy way to manage access to VSO; a user with an MSA, even if an administrator in VSO, cannot add any unknown users to VSO. For details see MSDN. All straightforward, you would think, but I had a few issues.

The problem was I had set up my VSO accounts using an MSA in the form user@mycompany.co.uk, which was also linked to my MSDN subscription. As part of the VSO/AAD linking process I needed to add the MSA user@mycompany.co.uk to our AAD, but I could not. The AAD was set up for federation of accounts in the mycompany.com domain, so you would have thought I would be OK, but back in our on-prem AD (the one it was federated to) I had user@mycompany.co.uk as an email alias for user@mycompany.com. This blocked the adding of the user to AAD, and hence I could not link VSO to Azure.

The answer was to

  1. Add another MSA account to the VSO instance, one unknown to our AD even as an alias, e.g. user@live.co.uk
  2. Make this user the owner of the VSO instance.
  3. Add the user@live.co.uk MSA to the AAD directory.
  4. Make them an Azure Subscription administrator.
  5. Log in to the Azure portal as this MSA; once this was done the VSO instance could be linked to the AAD directory.
  6. I could then make an AAD user (user@mycompany.com) a VSO user and then the VSO owner.
  7. The user@live.co.uk MSA could then be deleted from VSO and AAD.
  8. I could then log in to VSO as my user@mycompany.com AAD account, as opposed to the old user@mycompany.co.uk MSA account.

Simple wasn’t it!

We still had one problem: user@mycompany.com was showing as a basic user in VSO, and if you tried to set it to MSDN eligible it flipped back to basic.

The problem here was we had not associated the AAD account user@mycompany.com with the MSA account user@mycompany.co.uk in the MSDN portal (see MSDN).

Once this was done it all worked as expected, VSO picking up that my AAD account had a full MSDN subscription.

Video card issues during install of Windows 8.1 cause very strange problems

Whilst repaving my Lenovo W520 I had some issues with video cards. During the initial setup of Windows the PC hung. I rebooted, re-enabled the problematic video card in the BIOS, and thought all was OK; the installation appeared to pick up where it left off. However, I started to get some very strange problems.

  • My LiveID settings did not sync from my other Windows 8.1 devices
  • I could not change my profile picture
  • I could not change my desktop background
  • I could not change my screen saver
  • And most importantly Windows Update would not run

I found a few posts saying all of these problems could be seen when Windows was not activated, but that was not the issue for me: it showed as being activated, and changing the product key had no effect.

In the end I re-paved my PC again, making sure my video cards were correctly enabled so there was no hanging, and this time I seem to have a good Windows installation.

Issues repaving the Lenovo W520 with Windows 8.1 - again

Updated 19th Nov 2014

Every few months I find a PC needs to be re-paved – just too much beta code has accumulated. I reached this point again recently on my main, four-year-old Lenovo W520. Yes, it is getting on a bit in computer years, but it does the job; the keyboard is far nicer than the W530s or W540s we have, and until an ultrabook ships with 16GB of memory (I need local VMs, as too many places I go to don't allow me to get to VMs on Azure) I am keeping it.

I have posted in the past about the issue with the W520 (or any laptop that uses the Nvidia Optimus system); well, that struck again, with a slight twist to confuse me.

Our IT team have moved to System Center to give a self-provisioning system to our staff, so I…

  • Connected my PC (that had Windows 8.1 on it) to the LAN with Ethernet
  • Booted using PXE boot (pressed the blue ThinkVantage button, then F12 to pick the boot device)
  • As the PC was registered with our System Center it found a boot image, reformatted my disk and loaded our standard Windows 8.1 image
  • It rebooted and then it hung….

It was the old video card issue. The W520 has an Intel GPU on the i7 CPU and also a separate Nvidia Quadro GPU. Previously I had the Intel GPU disabled in BIOS, as I have found that having both enabled makes it very hard to connect to a projector when presenting (remember you do need both GPUs enabled if you wish to use two external monitors and the laptop display, but I don't do this). However, you do need the Intel GPU to install Windows. The problem is that Windows setup gets confused if it just sees the Nvidia for some reason; you would expect it to treat it as basic VGA until it gets drivers, but it just locks.

  • So I rebooted the PC, enabled the Intel GPU in BIOS (leaving the Nvidia enabled too), and Windows setup picked up where it left off; I thought I had rebuilt my PC.

Even with the problems this was a very quick way to get a domain-joined PC. I then started to install the applications using a mixture of System Center Software Center and Chocolatey.

However I knew I would hit the same problem with projectors, so I went back into BIOS and disabled the Intel GPU. The PC booted fine, worked for a minute or two, then hung. This was strange, as this same configuration had been working with Windows 8.1 before the re-format!

So I re-enabled the Intel GPU, and all seemed OK, until I tried to use Visual Studio 2013. This loaded OK, but crashed within a few seconds. The error log showed

Faulting application name: devenv.exe, version: 12.0.30723.0, time stamp: 0x53cf6f00
Faulting module name: igdumd32.dll_unloaded, version: 9.17.10.3517, time stamp: 0x532b0b5b

The igdumd32.dll is an Intel driver, so I disabled the Intel adaptor, this time via Admin Tools > Computer Management > Device Manager. Visual Studio now loaded OK. I found I could re-enable the Intel GPU after Visual Studio loaded without issue, so the problem was something to do with the extended load process.

So I had a usable system, but still had problems when using a projector.

The solution in the end was simple – remove the Intel Drivers

  • In Admin Tools > Computer Management > Device Manager, delete the Intel GPU – select the option to delete the drivers too
  • Reboot the PC and in BIOS disable the integrated Intel GPU
  • When the PC reboots it will just use the Nvidia GPU

The key here is to delete the Intel drivers; the simple fact of their presence, whether running or not, causes problems for either the operating system or Visual Studio, depending on your BIOS settings.

Updated 19th Nov 2014

It turns out my repave had other issues: the problem with the Intel drivers during initial Windows setup had corrupted the OS, and I had to start again. On this second attempt I got different results: I found I DID NOT have to remove the Intel drivers, I just needed to disable the Intel GPU in the BIOS to get my laptop working OK with projectors.

Cannot see a TFS drops location from inside a network isolated environment for Release Management

I have posted before about using Release Management with Lab Management network isolation. The key is that you must issue a NET USE command at the start of the pipeline to allow the VMs in the isolated environment to see the TFS drops location.

I hit a problem today that a build failed even though I had issued the NET USE.

I got the error

Package location '\\store\drops\Sabs.Main.CI\Sabs.Main.CI_2.5.178.19130\\' does not exist or Deployer user does not have access.

It turns out the problem was that I had issued the NET USE as \\store.blackmarble.co.uk\drops, not \\store\drops; it is vital the names match up. Once I changed my NET USE all was OK.
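
So the NET USE issued at the start of the pipeline must reference exactly the same server name as the drops path RM will use; something along these lines (the account and password are placeholders):

net use \\store\drops SomePassword /user:MYDOMAIN\LabDeployAccount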

Ordering rows that use the format 1.2.3 in a SQL query

Whilst working on an SSRS-based report I hit a sort order problem. Entries in the main report tablix needed to be sorted by their OrderNumber, but this was in the form 1.2.3, so neither a numeric nor an alpha sort gave the correct order, i.e. a numeric sort fails because 1.2.3 is not a number, and an alpha sort worked but gave the incorrect order 1.3, 1.3.1, 1.3.10, 1.3.11, 1.3.12, 1.3.2.

When I checked the underlying SQL it turned out the OrderNumber was being generated; it was not a table column. The raw data was in a single table that contained all the nodes in the hierarchy, and the returned data was built by a stored procedure using a recursive CTE.

The solution was to calculate a SortOrder as well as the OrderNumber. I did this by applying a Power function to each block of the OrderNumber and adding the results together. In the code shown below we can have any number of entries in the first block and up to 999 entries in the second or third blocks; you could have more by altering the Power function parameters.
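For example, taking 1.3.2 from the results below, the SortOrder works out as 1 × 10^6 + 3 × 10^3 + 2 × 10^0 = 1003002, which sorts numerically into the required position between 1.3.1 (1003001) and 1.3.3 (1003003).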

declare @EntryID as nvarchar(50) = 'ABC1234';

WITH SummaryList AS
    (
        SELECT
        NI.ItemID,
        NI.CreationDate,
        'P' as ParentOrChild,
        cast(ROW_NUMBER() OVER(ORDER BY NI.CreationDate) as nvarchar(100)) as OrderNumber,
        cast(ROW_NUMBER() OVER(ORDER BY NI.CreationDate) *  Power(10,6) as int) as SortOrder
        FROM dbo.NotebookItem AS NI
        WHERE NI.ParentID IS NULL AND NI.EntryID = @EntryID

        UNION ALL

        SELECT
        NI.ItemID,
        NI.CreationDate,
        'L' as ParentOrChild,
        cast(SL.OrderNumber + '.' + cast(ROW_NUMBER() OVER(ORDER BY NI.CreationDate) as nvarchar(100)) as nvarchar(100)) as OrderNumber,
        SL.SortOrder + (cast(ROW_NUMBER() OVER(ORDER BY NI.CreationDate) as int) * power(10 ,6 - (3* LEN(REPLACE(sl.OrderNumber, '.', ''))))) as SortOrder
        FROM dbo.NotebookItem AS NI
        INNER JOIN SummaryList as SL
            ON NI.ParentID = SL.ItemID
    )

    SELECT
        SL.ItemID,
        SL.ParentOrChild,
        SL.CreationDate,
        SL.OrderNumber,
        SL.SortOrder
    FROM SummaryList AS SL
    ORDER BY SL.SortOrder

This query returns the following with all the data correctly ordered

ItemID ParentOrChild CreationDate OrderNumber SortOrder
22F72F9E-E34C-45AB-A4D9-C7D9B742CD2C P 29 October 2014 1 1000000
E0B74D61-4B69-46B0-B0A9-F08BE2886675 L 29 October 2014 1.1 1001000
CB90233C-4940-4312-81D1-A26CB540DF2A L 29 October 2014 1.2 1002000
35CCC2A1-E00F-43C6-9CB3-732342EE18DA L 29 October 2014 1.3 1003000
7A920ABE-A2E2-4CF1-B36E-DE177A7B8681 L 29 October 2014 1.3.1 1003001
C5E863A1-5A92-4F64-81C6-6946146F2ABA L 29 October 2014 1.3.2 1003002
23D89CFF-C9A3-405E-A7EE-7CAACCA58CC2 L 29 October 2014 1.3.3 1003003
CE4F9F6B-3A58-4F78-9C1F-4780883F6995 L 29 October 2014 1.3.4 1003004
8B2A137F-C311-419A-8812-76E87D8CFA40 L 29 October 2014 1.3.5 1003005
F8487463-302E-4225-8A06-7C8CDCC23B45 L 29 October 2014 1.3.6 1003006
D365A402-D3CC-4242-B1B9-356FB41AABC1 L 29 October 2014 1.3.7 1003007
DFD4D688-080C-4FF0-B1D0-EBE63B6F99FD L 29 October 2014 1.3.8 1003008
272A46C6-E326-47E8-AEE4-952AF746A866 L 29 October 2014 1.3.9 1003009
F073AFFA-F9A1-46ED-AC4B-E92A7160EB21 L 29 October 2014 1.3.10 1003010
140744E7-4950-43F8-BA0C-0E541550F14B L 29 October 2014 1.3.11 1003011
93AA3C05-E95A-4201-AE03-190DDBF31B47 L 29 October 2014 1.3.12 1003012
5CED791D-4695-440F-ABC4-9127F1EE2A55 L 29 October 2014 1.4 1004000
FBC38F00-E2E8-4724-A716-AE419907A681 L 29 October 2014 1.5 1005000
2862FED9-8916-4139-9577-C858F75C768A P 29 October 2014 2 2000000
8265A2BE-D2DD-4825-AE0A-7930671D4641 P 29 October 2014 3 3000000

Visual Studio crashes when trying to add an item to a TFS build workflow

There has for a long time been an issue where, when you try to add a new activity to the toolbox while editing a TFS build workflow, Visual Studio can crash. I have seen it many times and never got to the bottom of it. It seems to be machine-specific, as one machine will work while another, supposedly identical, will fail, but I could never track down the cause.

Today I was on a machine that was failing, but I found a workaround in a really old forum post: load the IDE from the command line with the /safemode flag

C:\Program Files (x86)\Microsoft Visual Studio 12.0\Common7\IDE\devenv.exe /safemode

Once you do this you can edit the contents of your toolbox without crashes, and also your template if you wish. The best part is that once you exit the IDE and reload it as normal your new toolbox contents are still there.

Not perfect, but a good workaround

Getting Release Management to fail a release when using a custom PowerShell component

If you have a custom PowerShell script you wish to run, you can create a tool in Release Management (Inventory > Tools) for the script which deploys the .PS1, .PSM1 files etc. and defines the command line to run it.

The problem we hit was that our script failed, but it did not fail the release step because the PowerShell.exe running the script exited without error. The script had thrown an exception, which was in the output log file, but the step was marked as completed.

The solution was to use a try/catch in the .PS1 script that, as well as writing a message with Write-Error, also sets the exit code to something other than 0 (zero). So you end up with something like the following in your .PS1 file:

param
(
    [string]$Param1,
    [string]$Param2
)

try
{
    # some logic here
}
catch
{
    Write-Error $_.Exception.Message
    exit 1 # to get an error flagged so it can be seen by RM
}

Once this change was made an exception in the PowerShell caused the release step to fail as required. The output from the script appeared as the Command Output.
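
If you want to sanity check the behaviour before wiring the script into Release Management (the script name and parameters here are purely illustrative), you can run it via PowerShell.exe and inspect the exit code it returns:

powershell.exe -File .\MyDeploymentScript.ps1 -Param1 "a" -Param2 "b"
$LASTEXITCODE   # should be 1 if the catch block was hit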