But it works on my PC!

The random thoughts of Richard Fennell on technology and software development

Getting the domain\user when using versionControl.GetPermissions() in the TFS API


If you are using the TFS API to get a list of users who have rights in a given version control folder, you need to be careful, as you don’t get back the domain\user name you might expect from the GetPermissions(..) call. You actually get the display name. That might be fine for you, but I needed the domain\user format as I was trying to populate a people picker control.

The answer is that you need to make a second call to the TFS IIdentityManagementService to get the name in the form you want.

This might not be the best code, but it shows the steps required:

private List<string> GetUserWithAccessToFolder(IIdentityManagementService ims, VersionControlServer versionControl, string path)
{
    var users = new List<string>();

    // The permission entries returned here only contain display names
    var perms = versionControl.GetPermissions(new string[] { path }, RecursionType.None);
    foreach (var perm in perms)
    {
        foreach (var entry in perm.Entries)
        {
            // Second call: look the identity up by display name to get the domain\user form
            var userIdentity = ims.ReadIdentity(IdentitySearchFactor.DisplayName,
                                                entry.IdentityName,
                                                MembershipQuery.None,
                                                ReadIdentityOptions.IncludeReadFromSource);

            users.Add(userIdentity.UniqueName);
        }
    }

    return users;
}
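
For completeness, this is roughly how you might obtain the two services the method needs; a minimal sketch, assuming a TFS 2012/2013 collection (the server URL and folder path below are illustrative):

var tpc = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(
    new Uri("http://myserver:8080/tfs/DefaultCollection")); // illustrative URL
var versionControl = tpc.GetService<VersionControlServer>();
var ims = tpc.GetService<IIdentityManagementService>();

var users = GetUserWithAccessToFolder(ims, versionControl, "$/MyTeamProject/Main"); // illustrative path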

A hair in the gate

My Arc mouse started behaving strangely today, very jumpy; it felt like the cursor was being pulled left. It turns out the problem was a tiny hair caught in the LED sensor slot.

[Image: the hair caught in the mouse's LED sensor slot]

You could tell there was a problem because the LED was flashing a lot; it is normally solidly lit when you turn the mouse over and look into the slot.

Once I got it out, all was fine again.

Fix for 0xc00d36b4 error when play MP4 videos on a Surface 2

Whilst in the USA last week I bought a Surface 2 tablet. Upon boot it ran around 20 updates, as you would expect, but unfortunately one of these seemed to remove its ability to play MP4 videos, giving a 0xc00d36b4 error whenever you tried. A bit of a pain, as one of the main reasons I wanted a tablet was for watching training videos and PluralSight on the move.

After some fiddling and hunting on the web I found I was not alone, so I added my voice to the thread, and eventually an answer appeared. It seems the Nvidia Audio Enhancements were the problem. I guess they got updated within the first wave of updates.

So the fix, according to the thread, is as follows:

  1. Go to the desktop view on your Surface
  2. Tap and hold the volume icon. 
  3. Select sounds from the pop-up menu - I only had to go this far, as a dialog appeared asking if I wished to disable audio enhancements (maybe it found it was corrupt)
  4. Go to the playback tab
  5. Highlight the speakers option
  6. Select properties
  7. Go to the enhancements tab
  8. Check the "Disable all enhancements" box
  9. Tap OK.

And videos should now play.

Updated 2 Dec 2013: It seems you have to make this change for each audio device, meaning speakers AND headphones.

Fixing a WCF "The authentication schemes configured on the host ('IntegratedWindowsAuthentication') do not allow those configured on the binding 'BasicHttpBinding' ('Anonymous')" error

Whilst testing a WCF web service I got the error:

The authentication schemes configured on the host ('IntegratedWindowsAuthentication') do not allow those configured on the binding 'BasicHttpBinding' ('Anonymous'). Please ensure that the SecurityMode is set to Transport or TransportCredentialOnly. Additionally, this may be resolved by changing the authentication schemes for this application through the IIS management tool, through the ServiceHost.Authentication.AuthenticationSchemes property, in the application configuration file at the <serviceAuthenticationManager> element, by updating the ClientCredentialType property on the binding, or by adjusting the AuthenticationScheme property on the HttpTransportBindingElement.

Now this sort of made sense, as the web service was meant to be secured using Windows Authentication, so the IIS setting was correct: anonymous authentication was off.

[Image: IIS Authentication settings with Anonymous Authentication disabled and Windows Authentication enabled]

It turns out the issue was, as you might expect, an incorrect web.config entry:

  <system.serviceModel>
    <bindings>
      <basicHttpBinding>
        <binding name="windowsSecured"> <!-- this was the problem -->
          <security mode="TransportCredentialOnly">
            <transport clientCredentialType="Windows" />
          </security>
        </binding>
      </basicHttpBinding>
    </bindings>
    <services>
      <service behaviorConfiguration="CTAppBox.WebService.Service1Behavior" name="CTAppBox.WebService.TfsService">
        <endpoint address="" binding="basicHttpBinding"  contract="CTAppBox.WebService.ITfsService">
          <identity>
            <dns value="localhost"/>
          </identity>
        </endpoint>
        <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange"/>
      </service>
    </services>
    <behaviors>
      <serviceBehaviors>
        <behavior name="CTAppBox.WebService.Service1Behavior">
          <!-- To avoid disclosing metadata information, set the value below to false before deployment -->
          <serviceMetadata httpGetEnabled="true"/>
          <!-- To receive exception details in faults for debugging purposes, set the value below to true.  Set to false before deployment to avoid disclosing exception information -->
          <serviceDebug includeExceptionDetailInFaults="true"/>
        </behavior>
      </serviceBehaviors>
    </behaviors>
  </system.serviceModel>

The problem was that the basicHttpBinding had a named binding, windowsSecured, and no unnamed default. When the service endpoint was bound to basicHttpBinding it did not use the named binding, just the defaults (which are not shown in the config file).

The solution was to remove the name="windowsSecured" entry; alternatively, we could have referenced the named binding from the service endpoint.
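
Had we gone the second route, the endpoint would reference the named binding via its bindingConfiguration attribute; a minimal sketch of just the changed endpoint element (the contract name is taken from the config above):

<endpoint address="" binding="basicHttpBinding"
          bindingConfiguration="windowsSecured"
          contract="CTAppBox.WebService.ITfsService">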

When your TFS Lab test agents can’t start check the DNS

Lab Management has a lot of moving parts, especially if you are using SCVMM based environments. All the parts have to communicate if the system is to work.

One of the most common problems I have seen is DNS issues. A slowly propagating DNS can cause chaos, as the test controller will not be able to resolve the names of the dynamically registered lab VMs.

The best fix is to sort out your DNS issues, but that is not always possible (some things just take the time they take, especially on large WANs).

An immediate fix is to use the local hosts file on the test controller to define the IP addresses for the lab[guid].corp.domain names created when using network isolation, as in the sketch below. Once this is done the handshake between the controller and agent is usually possible.
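
For example, a hosts file entry on the test controller might look like this (the IP address is illustrative, and [guid] stands for the actual generated name):

# C:\Windows\System32\drivers\etc\hosts on the test controller
192.168.1.50    lab[guid].corp.domain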

If the handshake still isn’t possible, then you are back to all the usual diagnostic tools.

More on TF215106: Access denied from the TFS API after upgrade from 2012 to 2013

In my previous post I thought I had fixed my problems with TF215106 errors:

"TF215106: Access denied. TYPHOONTFS\\TFSService needs Update build information permissions for build definition ClassLibrary1.Main.Manual in team project Scrum to perform the action. For more information, contact the Team Foundation Server administrator."}

It turns out I had not; actually, I have no idea why it worked for a while! There could well be an API version issue, but I had also missed that I needed to do exactly what the error message said!

If you check MSDN, it tells you how to check the permissions for a given build; on checking I saw that the 'Update build information' permission was not set for the build in question.

[Image: the build definition's security settings showing the 'Update build information' permission]

Once I set it for the domain account my service was running as, everything worked as expected.

All I can assume is that there is a change from TFS 2012 to 2013 in how this permission is defaulted, as I have not needed to set it explicitly in the past.

Changes in SkyDrive access on Windows 8.1

After upgrading to Windows 8.1 on my Media Center PC I have noticed a change in SkyDrive. The ‘upgrade’ process from 8 to 8.1 is really a reinstall of the OS and a reapplication of Windows 8 applications; some Windows desktop applications are removed. In the case of my Media Center PC the only desktop app installed was Windows Desktop SkyDrive, which I used to sync photos from my Media Center PC to the cloud. This is no longer needed, as Windows 8.1 exposes the SkyDrive files linked to the logged-in LiveID as folders under the c:\user\[userid]\documents folder, just as the Windows Desktop client used to do.

This means that though the old desktop SkyDrive client has been removed, my existing timer-based jobs that back up files to the cloud by copying from a RAID5 box to the local SkyDrive folder still work.

A word of warning here though: don’t rely on this model as your only backup. There is a lot of ransomware around at the moment, and if you aren't careful an infected PC can infect your automated cloud backup too. Make sure your cloud backup is versioned so you can roll back to a pre-infection version of a file, and/or have a more traditional offline backup too.

Can I use the HP ALM Synchronizer with TF Service?

I recently tried to get the free HP ALM Synchronizer to link to Microsoft’s TF Service; the summary is that it does not work. However, it took me a while to realise this.

The HP ALM Synchronizer was designed for TFS 2008/2010, so the first issue you hit is that TF Service is today basically TFS 2013 (and a moving goal post, as it is updated so often). This means that when you try to configure the TFS connection in HP ALM Synchronizer it fails because it cannot see any TFS client it supports. This is fairly simple to address: just install Visual Studio Team Explorer 2010 and patch it up to date so that it can connect to TF Service (you could go back to 2008 and achieve the same if you really needed to).

Once you have a suitably old client you can progress to the point where it asks you for your TFS login credentials. HP ALM Synchronizer validates that they are in the form DOMAIN\USER, and this is a problem.

On TF Service you usually log in with a LiveID, which is a non-starter in this case. However, you can configure alternate credentials, but these are in the form USER and PASSWORD. The string pattern verification on the credentials entry form in HP ALM Synchronizer does not accept them; it must have a domain and a slash. I could not find any pattern that satisfies both TF Service and the HP ALM Synchronizer setup tool. So basically you are stuck.

So for my client we ended up moving to a TFS 2013 Basic install on premises, and it all worked fine; they could sync HP QC defects into TFS using the HP ALM Synchronizer, so they were happy.

However, is there a better solution? One might be to use a commercial product such as Tasktop Sync, which is designed to provide synchronisation services between a whole range of ALM-like products. I need to find out whether it supports TF Service yet.

Problems with Microsoft Fakes stubs and IronPython

I have been changing the mocking framework used on a project I am planning to open source. Previously it had been using Typemock to mock out items in the TFS API. This had been working well, but it used features of the toolset that are only available in the licensed product (not the free version). As I don’t like to publish tests people cannot run, I thought it best to swap to Microsoft Fakes, as there is a better chance any user will have a version of Visual Studio that provides this toolset.

Most of the changes were straightforward, but I hit a problem when I tried to run a test that returned a TFS IBuildDetail object for use inside an IronPython DSL.

My working Typemock-based test was as follows:

[Test]
public void Can_use_Dsl_to_get_build_details()
{
    // arrange
    var consoleOut = Helpers.Logging.RedirectConsoleOut();

    var tfsProvider = Isolate.Fake.Instance<ITfsProvider>();
    var emailProvider = Isolate.Fake.Instance<IEmailProvider>();
    var build = Isolate.Fake.Instance<IBuildDetail>();

    var testUri = new Uri("vstfs:///Build/Build/123");
    Isolate.WhenCalled(() => build.Uri).WillReturn(testUri);
    Isolate.WhenCalled(() => build.Quality).WillReturn("Test Quality");
    Isolate.WhenCalled(() => tfsProvider.GetBuildDetails(null)).WillReturn(build);

    // act
    TFSEventsProcessor.Dsl.DslProcessor.RunScript(@"dsl\tfs\loadbuild.py", tfsProvider, emailProvider);

    // assert
    Assert.AreEqual("Build 'vstfs:///Build/Build/123' has the quality 'Test Quality'" + Environment.NewLine, consoleOut.ToString());
}

Swapping to Fakes I got:

[Test]
public void Can_use_Dsl_to_get_build_details()
{
    // arrange
    var consoleOut = Helpers.Logging.RedirectConsoleOut();
    var testUri = new Uri("vstfs:///Build/Build/123");

    var emailProvider = new Providers.Fakes.StubIEmailProvider();
    var build = new StubIBuildDetail()
    {
        UriGet = () => testUri,
        QualityGet = () => "Test Quality",
    };
    var tfsProvider = new Providers.Fakes.StubITfsProvider()
    {
        GetBuildDetailsUri = (uri) => (IBuildDetail)build
    };

    // act
    TFSEventsProcessor.Dsl.DslProcessor.RunScript(@"dsl\tfs\loadbuild.py", tfsProvider, emailProvider);

    // assert
    Assert.AreEqual("Build 'vstfs:///Build/Build/123' has the quality 'Test Quality'" + Environment.NewLine, consoleOut.ToString());
}

But this gave me the error:

Test Name:    Can_use_Dsl_to_get_build_details
Test FullName:    TFSEventsProcessor.Tests.Dsl.DslTfsProcessingTests.Can_use_Dsl_to_get_build_details
Test Source:    c:\Projects\tfs2012\TFS\TFSEventsProcessor\Main\Src\WorkItemEventProcessor.Tests\Dsl\DslTfsProcessingTests.cs : line 121
Test Outcome:    Failed
Test Duration:    0:00:01.619

Result Message:    System.MissingMemberException : 'StubIBuildDetail' object has no attribute 'Uri'
Result StackTrace:   
at IronPython.Runtime.Binding.PythonGetMemberBinder.FastErrorGet`1.GetError(CallSite site, TSelfType target, CodeContext context)
at System.Dynamic.UpdateDelegates.UpdateAndExecute2[T0,T1,TRet](CallSite site, T0 arg0, T1 arg1)
at Microsoft.Scripting.Interpreter.DynamicInstruction`3.Run(InterpretedFrame frame)
at Microsoft.Scripting.Interpreter.Interpreter.Run(InterpretedFrame frame)
at Microsoft.Scripting.Interpreter.LightLambda.Run2[T0,T1,TRet](T0 arg0, T1 arg1)
at IronPython.Compiler.PythonScriptCode.RunWorker(CodeContext ctx)
at IronPython.Compiler.PythonScriptCode.Run(Scope scope)
at IronPython.Compiler.RuntimeScriptCode.InvokeTarget(Scope scope)
at IronPython.Compiler.RuntimeScriptCode.Run(Scope scope)
at Microsoft.Scripting.SourceUnit.Execute(Scope scope, ErrorSink errorSink)
at Microsoft.Scripting.SourceUnit.Execute(Scope scope)
at Microsoft.Scripting.Hosting.ScriptSource.Execute(ScriptScope scope)
at TFSEventsProcessor.Dsl.DslProcessor.RunScript(String scriptname, Dictionary`2 args, ITfsProvider iTfsProvider, IEmailProvider iEmailProvider) in c:\Projects\tfs2012\TFS\TFSEventsProcessor\Main\Src\WorkItemEventProcessor\Dsl\DslProcessor.cs:line 78
at TFSEventsProcessor.Dsl.DslProcessor.RunScript(String scriptname, ITfsProvider iTfsProvider, IEmailProvider iEmailProvider) in c:\Projects\tfs2012\TFS\TFSEventsProcessor\Main\Src\WorkItemEventProcessor\Dsl\DslProcessor.cs:line 31
at TFSEventsProcessor.Tests.Dsl.DslTfsProcessingTests.Can_use_Dsl_to_get_build_details() in c:\Projects\tfs2012\TFS\TFSEventsProcessor\Main\Src\WorkItemEventProcessor.Tests\Dsl\DslTfsProcessingTests.cs:line 141

If I altered my test to not use my IronPython DSL but instead call the C# DSL library directly, the error went away. So the issue lay in the dynamic IronPython engine, presumably because IronPython's dynamic member lookup cannot see the generated stub's interface members in the way normal C# code can; not something I am even going to think of trying to fix.

So I swapped the definition of the mock IBuildDetail to use Moq (I could have used the free version of Typemock or any framework) instead of a Microsoft Fakes stub, and the problem went away.

So I had:

[Test]
public void Can_use_Dsl_to_get_build_details()
{
    // arrange
    var consoleOut = Helpers.Logging.RedirectConsoleOut();
    var testUri = new Uri("vstfs:///Build/Build/123");

    var emailProvider = new Providers.Fakes.StubIEmailProvider();
    var build = new Moq.Mock<IBuildDetail>();
    build.Setup(b => b.Uri).Returns(testUri);
    build.Setup(b => b.Quality).Returns("Test Quality");

    var tfsProvider = new Providers.Fakes.StubITfsProvider()
    {
        GetBuildDetailsUri = (uri) => build.Object
    };

    // act
    TFSEventsProcessor.Dsl.DslProcessor.RunScript(@"dsl\tfs\loadbuild.py", tfsProvider, emailProvider);

    // assert
    Assert.AreEqual("Build 'vstfs:///Build/Build/123' has the quality 'Test Quality'" + Environment.NewLine, consoleOut.ToString());
}

So I have a working solution, but it is a bit of a mess; I am using Fakes stubs and Moq in the same test. There is no good reason not to swap all the mocking of interfaces to Moq. So going forward on this project I will only use Microsoft Fakes for shims, to mock out items such as TFS WorkItem objects which have no public constructor.
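
For reference, such a shim looks roughly like this; a minimal sketch, assuming a Fakes assembly has been generated for the work item tracking client DLL (ShimWorkItem follows the standard Fakes naming convention; the property values are illustrative, and DoSomethingWithWorkItem is a hypothetical method under test):

// Shims need an active ShimsContext to apply their detours
using (ShimsContext.Create())
{
    var workItem = new Microsoft.TeamFoundation.WorkItemTracking.Client.Fakes.ShimWorkItem()
    {
        IdGet = () => 99,                      // illustrative values
        TitleGet = () => "A shimmed work item"
    };

    // workItem.Instance is a real WorkItem backed by the shim's delegates
    DoSomethingWithWorkItem(workItem.Instance);
}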

Get rid of that zombie build

Whilst upgrading a TFS 2010 server to 2012 I had a problem: a build was showing in the queue as active after the upgrade. This build was queued in January, 10 months ago, so should have finished a long, long time ago. It blocked any newly queued builds, but did not appear to be running on any agent – a zombie build.

I tried to stop it, delete it, everything I could think of, all to no effect. It would not go away.

In the end I had to use the brute-force solution of deleting the rows for the build in the TPC’s SQL DB. I did this in both the tbl_BuildQueue (use the QueueID) and tbl_Build (use the BuildID) tables.
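
A minimal sketch of that cleanup, assuming the key columns are named QueueId and BuildId as described above (the ID values are illustrative; this is unsupported surgery on the collection database, so take a backup first):

-- Run against the team project collection (TPC) database
DELETE FROM tbl_BuildQueue WHERE QueueId = 42;  -- illustrative QueueID
DELETE FROM tbl_Build WHERE BuildId = 1234;     -- illustrative BuildID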