The blogs of Black Marble staff

Introducing NFC and Proximity with Windows Phone

Near Field Communication (NFC) is usually demonstrated in two ways. Firstly, scanning an NFC-enabled card or tag, which can be programmed to perform a number of different actions such as taking your device to a specific URL, transferring contact details, starting an app or taking you to the store to install an app. Secondly, you may have seen people touching devices together to do things. These actions are either done via NFC communications, in the same way that a tag works, or by using the proximity APIs to create a personal network between the two devices to allow richer sharing of data. This post will cover NFC communication and will show you how you can use NFC to communicate data to devices, both in a static way and programmatically as part of an app.

The simplest way to use NFC to exchange data is via a physical NFC tag. These tags can be programmed with a range of information, such as redirecting your mobile browser to a web page, sharing contact details, or starting or installing an app. Each of these can be programmed using an app like the Nokia NFC Writer app documented here.

An NFC-enabled device will automatically try to resolve the NFC tag data, but it is also possible to have custom messages within a tag. Each tag has different capabilities and data sizes, which can be seen here.

The data that can be written to a tag can also be seen here.

The common tag protocols are: WindowsUri, WindowsMime and NDEF.

NDEF is the NFC Data Exchange Format and is the raw message format for NFC. This can be used to create custom messages for exchanging other types of data. Information about NDEF can be found here.

What is more interesting, though, is that your phone is not only a reader of NFC data; it can also be a sender of data, so the tap-and-send scenarios can easily be enabled within apps, provided that your device supports NFC.

The code samples below utilise the .NET Proximity APIs in C# and will show you how to add basic send and receive capability to an application.

Firstly you will need to determine whether the device your app is running on supports proximity. This is done by getting hold of the default proximity device using:

ProximityDevice device = ProximityDevice.GetDefault();

If the device returned is null then either NFC is not enabled or your device does not have NFC capabilities. You can check to see if you have NFC on your phone in Settings; there should be a section on NFC with a slider to turn NFC on or off. Make sure that it is turned on. You will also need to ensure that the capabilities of your app allow proximity, which is done in the app manifest file (ID_CAP_PROXIMITY).

Once you have your proximity device there are a number of events you can wire up which will be fired when another NFC device arrives, whether that’s a static tag or an NFC-enabled phone or tablet. These are DeviceArrived and DeviceDeparted, and they fire, as you would expect, when the NFC device is tapped and when it is removed. These two events give you a notification about the arrival or departure but not the actual data. For this you will need to subscribe for messages and set up callbacks to receive and process the NFC data. Similarly you can use the DeviceArrived event to allow you to send data to the other device.

To receive data you do not need to handle the DeviceArrived or DeviceDeparted events but you will need to subscribe for the set of messages you wish to receive including custom messages. You can create different handlers for each type of message you wish to receive.

long subscriptionID = device.SubscribeForMessage("Windows.BlackMarbleMessage", messageReceived);

Here I am subscribing to a custom message of type Windows.BlackMarbleMessage which my messageReceived method knows how to process:

private void messageReceived(ProximityDevice sender, ProximityMessage message)
{
    Debug.WriteLine("Received from {0}: '{1}'", sender.DeviceId, message.DataAsString);
}

My messageReceived method just takes the content of the message and displays it on the screen, but in a real-life scenario the data could be JSON or XML, for example, and you would process it accordingly.
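The subscribe-by-message-type pattern used here is not specific to NFC. As a language-agnostic illustration, here is a minimal Python sketch of a broker that dispatches incoming messages only to the handlers registered for that message type. The `MessageBroker` class and all its names are hypothetical, invented for this example; they are not part of the Proximity API.

```python
# Minimal sketch of per-message-type subscription, mirroring the idea
# behind SubscribeForMessage. All names here are hypothetical.
class MessageBroker:
    def __init__(self):
        self._handlers = {}   # message type -> list of (id, callback)
        self._next_id = 0

    def subscribe(self, message_type, callback):
        """Register a callback for one message type; returns a subscription id."""
        self._next_id += 1
        self._handlers.setdefault(message_type, []).append((self._next_id, callback))
        return self._next_id

    def deliver(self, message_type, payload):
        """Invoke every handler subscribed to this message type."""
        for _, callback in self._handlers.get(message_type, []):
            callback(payload)

broker = MessageBroker()
received = []
broker.subscribe("Windows.BlackMarbleMessage", received.append)
broker.deliver("Windows.BlackMarbleMessage", "Hello World")
broker.deliver("Windows.SomeOtherMessage", "ignored")  # no handler registered
print(received)  # → ['Hello World']
```

The key point the sketch shows is that the string identifier is the contract: a message whose type has no subscriber is silently dropped, which is exactly why the sender's and receiver's message type strings must match.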

The standard set of tag protocols is documented here. If you want to access the raw NDEF protocol then the CodePlex project here can be used.

An application cannot determine whether the device that arrived is an active device or just a passive tag, so if you want to send messages out you need to understand that there may not be a response; make sure that your app does not require a response in order to proceed.

Sending data can be achieved by sending a message to the proximity device when you know that a device has arrived, so for this you will need to wire up the DeviceArrived event.

device.DeviceArrived += device_DeviceArrived;


void device_DeviceArrived(ProximityDevice sender)
{
    if (_bSendingMessage)
    {
        _messageToSend = "Hello World";
        sender.PublishMessage("Windows.BlackMarbleMessage", _messageToSend);
    }
}

This example is part of a phone app in which you need to check a check box and enter some data in a text box in order to send data. Sending data is done through the PublishMessage, PublishBinaryMessage or PublishUriMessage methods. If you use PublishMessage or PublishBinaryMessage then you will need to add a string identifier for the message type, which needs to match the message type that the receiver has subscribed to. In my example this is "Windows.BlackMarbleMessage".

Let's assume that we've installed this app onto two Windows Phone 8 devices; one of the phones has the send message check box checked and has some data in the data text box. The two phones are tapped together. Both apps will receive the DeviceArrived event and the device_DeviceArrived method is called in both apps. The phone with the send message check box checked will then send a Windows.BlackMarbleMessage message containing the data in the data text box to the other phone. The other phone then receives the message in its messageReceived method and displays it on the screen.

This is a basic example but it shows how you can send simple message data and add tap functionality to your apps. This should work across devices and operating systems as the communication is through NFC (although you will need an app that understands your messages if you are sending custom messages).

NFC and proximity can be used to set up a longer running connection between two devices with or without tapping to initiate. There will be a follow on post that covers this shortly.

What's new in TFS from TechEd 2014?

If you use TFS then it is well worth a look at Brian Harry’s TechEd 2014 session ‘Modern Application Lifecycle Management’. It goes through changes and new features in TFS, both on-premises and in the cloud.

Not all these features are in 2013.2 (which was released during the conference). However, in the session they said the Visual Studio 2013.3 CTP is going to be available next week, so not long to wait if you want a look at the latest features.

New release of TFS Alerts DSL that allows work item state rollup

A very common question I am asked at clients is

“Is it possible for a parent TFS work item to automatically be set to ‘done’ when all the child work items are ‘done’?”

The answer is that it is not possible out of the box; there is no work item state rollup in TFS.

However it is possible via the API. I have modified my TFS Alerts DSL CodePlex project to expose this functionality. I have added a couple of methods that allow you to find the parent and child of a work item, and hence create your own rollup script.

To make use of this all you need to do is create a TFS Alert that calls a SOAP end point where the Alerts DSL is installed. This end point should be called whenever a work item changes state. It will in turn run a Python script similar to the following to perform the rollup:

import sys
# Expect 2 args: the event type and the unique ID of the work item
if sys.argv[0] == "WorkItemEvent":
    wi = GetWorkItem(int(sys.argv[1]))
    parentwi = GetParentWorkItem(wi)
    if parentwi == None:
        LogInfoMessage("Work item '" + str(wi.Id) + "' has no parent")
    else:
        LogInfoMessage("Work item '" + str(wi.Id) + "' has parent '" + str(parentwi.Id) + "'")

        results = [c for c in GetChildWorkItems(parentwi) if c.State != "Done"]
        if len(results) == 0:
            LogInfoMessage("All child work items are 'Done'")
            parentwi.State = "Done"
            msg = "Work item '" + str(parentwi.Id) + "' has been set as 'Done' as all its child work items are done"
            SendEmail("richard@typhoontfs", "Work item '" + str(parentwi.Id) + "' has been updated", msg)
        else:
            LogInfoMessage("Not all child work items are 'Done'")
else:
    LogErrorMessage("Was not expecting to get here")

So now there is a fairly easy way to create your own rollups, based on your own rules.
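The rollup rule itself can be isolated from the TFS API and tested on its own. Below is a minimal Python sketch of that rule, using a hypothetical `WorkItem` stand-in rather than real TFS objects (the class, its fields and the `roll_up` function are invented for this illustration):

```python
# Sketch of the state rollup rule: a parent becomes "Done" only when
# every child is "Done". WorkItem is a hypothetical stand-in for the
# objects the TFS API returns, not a real TFS type.
class WorkItem:
    def __init__(self, id, state, children=None):
        self.Id = id
        self.State = state
        self.children = children or []

def roll_up(parent):
    """Set the parent to 'Done' if, and only if, all children are 'Done'."""
    not_done = [c for c in parent.children if c.State != "Done"]
    if len(not_done) == 0:
        parent.State = "Done"
    return parent.State

parent = WorkItem(1, "In Progress", [WorkItem(2, "Done"), WorkItem(3, "Done")])
print(roll_up(parent))  # → Done
```

The list comprehension is the same filter the DSL script uses: collect the children that are not yet 'Done' and only update the parent when that list is empty.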

Getting ‘The build directory of the test run either does not exist or access permission is required’ error when trying to run tests as part of the Release Management deployment

Whilst running tests as part of a Release Management deployment I started seeing the error ‘The build directory of the test run either does not exist or access permission is required’, and hence all my tests failed. It seems that there are a number of issues that can cause this problem, as mentioned in the comments on Martin Hinshelwood’s post on running tests in deployment; specifically, spaces in the build name can cause this problem, but this was not the case for me.

The strangest point was that it used to work; what had I changed?

To debug the problem I logged into the test VM as the account the deployment service was running as (a shadow account, as the environment was network isolated). I got the command line that the component was trying to run by looking at the messages in the deployment log.


I then went to the deployment folder on the test VM

%appdata%\local\temp\releasemanagement\[the release management component name]\[release number]

and ran the same command line. The strange thing was that this worked! All the tests ran and passed OK, TFS was updated, and everything was good.

It seemed I only had an issue when triggering the tests via a Release Management deployment, very strange!

A side note here: when I say the script ran OK, it did report an error and did not export and unpack the test results from the TRX file to pass back to the console/Release Management log. It turns out this is because the MTMExec.ps1 script uses the command [System.IO.File]::Exists(..) to check if the .TRX file has been produced. This fails when the script is run manually because it relies on [Environment]::CurrentDirectory, which is not set the same way when the script is run manually as when it is called by the deployment service. When run manually it seems to default to C:\Windows\System32, not the current folder.

If you are editing this script, and want it to work in both scenarios, then it is probably best to use the PowerShell Test-Path(..) cmdlet as opposed to [System.IO.File]::Exists(..).
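The underlying gotcha, a relative path resolving against whatever the process's current directory happens to be, is easy to demonstrate in isolation. This small Python sketch (the results.trx file name is made up for the example) shows the same relative check succeeding or failing purely depending on the working directory:

```python
import os
import tempfile

# Create a results file in one temporary folder, then check for it by
# its relative name from two different working directories.
workdir = tempfile.mkdtemp()
otherdir = tempfile.mkdtemp()
open(os.path.join(workdir, "results.trx"), "w").close()

original = os.getcwd()
try:
    os.chdir(workdir)
    in_workdir = os.path.isfile("results.trx")    # relative path resolves here
    os.chdir(otherdir)
    in_otherdir = os.path.isfile("results.trx")   # same check, different cwd
    absolute = os.path.isfile(os.path.join(workdir, "results.trx"))
finally:
    os.chdir(original)

print(in_workdir, in_otherdir, absolute)  # → True False True
```

This is exactly the failure mode above: the file exists, but a relative check from the wrong current directory reports that it does not, while an absolute path works from anywhere.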

So where to look for this problem? The error says something can’t access the drops location, but what?

A bit of thought as to who is doing what can help here.


When the deployment calls for a test to be run

  • The Release Management deployment agent pulls the component down to the test VM from the Release Management Server
  • It then runs the Powershell Script
  • The PowerShell script runs TCM.exe to trigger the test run, passing in the credentials to access the TFS server and Test Controller
  • The Test Controller triggers the tests to be run on the Test Agent, providing it with the required DLLs from the TFS drops location – THIS IS THE STEP WHERE THE PROBLEM IS SEEN
  • The Test Agent runs the tests and passes the results back to TFS via the Test Controller
  • After the PowerShell script triggers the test run it loops until the test run is complete.
  • It then uses TCM again to extract the test results, which it parses and passes back to the Release Management server

So a good few places to check the logs.

Turns out the error was being reported on the Test Controller.


(QTController.exe, PID 1208, Thread 14) Could not use lab service account to access the build directory. Failure: Network path does not exist or is not accessible using following user: \\store\drops\Sabs.Main.CI\Sabs.Main.CI_2.3.58.11938\ using blackmarble\tfslab. Error Code: 53

The error told me the folder and who couldn’t access it: the domain service account ‘tfslab’ that the Test Agents use to talk back to the Test Controller.

I checked the drops location share and this user had adequate access rights. I even logged on to the Test Controller as this user and confirmed I could open the share.

I then had a thought: this was the account the Test Agents were using to communicate with the Test Controller, but was it the account the controller was running as? A check showed it was not; the controller was running as the default ‘Local System’. As soon as I swapped to using the lab service account (or, I think, any domain account with suitable rights) it all started to work.


So why did this problem occur?

All I can think of was that (to address another issue with Windows 8.1 Coded UI testing) the Test Controller was upgraded to 2013.2 RC, but the Test Agent in this lab environment was still at 2013 RTM. Maybe the mismatch is the issue?

I may revisit and retest with the ‘Local System’ account when 2013.2 RTMs and I upgrade all the controllers and agents, but I doubt it. I have no issue running the test controller as a domain account.

Setting the LocalSQLServer connection string in web deploy

If you are using Web Deploy you might wish to alter the LocalSQLServer connection string that is used by the ASP.NET provider for web part personalisation. The default is to use ASPNETDB.mdf in the App_Data folder, but in a production system you could well want to use a ‘real’ SQL server.

If you look in your web.config, assuming you are not using the default ‘not set’ setting, it will look something like:

<connectionStrings>
  <clear />
  <add name="LocalSQLServer" connectionString="Data Source=(LocalDB)\projects; Integrated Security=true ;AttachDbFileName=|DataDirectory|ASPNETDB.mdf" providerName="System.Data.SqlClient" />
</connectionStrings>

Usually you expect any connection strings in the web.config to appear in the Web Deploy publish wizard, but it does not. I have no real idea why, but maybe it is something to do with having to use <clear /> to remove the default?


If you use a parameters.xml file to add parameters to the Web Deploy package you would think you could add the block:

<parameter name="LocalSQLServer" description="Please enter the ASP.NET DB path" defaultValue="__LocalSQLServer__" tags="">
  <parameterEntry kind="XmlFile" scope="\\web.config$" match="/configuration/connectionStrings/add[@name='LocalSQLServer']/@connectionString" />
</parameter>

However, this does not work; in the setparameters.xml that is generated you find two entries, first yours and then the auto-generated one, and the last one wins, so you don’t get the correct connection string.

<setParameter name="LocalSQLServer" value="__LocalSQLServer__" />
<setParameter name="LocalSQLServer-Web.config Connection String" value="Data Source=(LocalDB)\projects; Integrated Security=true ;AttachDbFileName=|DataDirectory|ASPNETDB.mdf" />

The solution I found was to manually add the parameter in the parameters.xml file as:

<parameter name="LocalSQLServer-Web.config Connection String" description="LocalSQLServer Connection String used in web.config by the application to access the database." defaultValue="__LocalSQLServer__" tags="SqlConnectionString">
  <parameterEntry kind="XmlFile" scope="\\web.config$" match="/configuration/connectionStrings/add[@name='LocalSQLServer']/@connectionString" />
</parameter>

With this form the connection string was correctly modified, as only one entry appears in the generated file.

Changing WCF bindings for MSDeploy packages when using Release Management

Colin Dembovsky’s excellent post ‘WebDeploy and Release Management – The Proper Way’ explains how to pass parameters from Release Management into MSDeploy to update Web.config files. On the system I am working on I also need to do some further web.config translation; basically the WCF section is different in a Lab or Production build as it needs to use Kerberos, whereas local debug builds don’t.

In the past I dealt with this, and with editing the AppSettings, using MSDeploy web.config translation. This worked fine, but it meant I built the product three times, exactly what Colin’s post is trying to avoid. The techniques in the post for the AppSettings and connection strings are fine, but don’t apply so well for large block swapouts, as I need for the WCF bindings section.

I was considering my options when I realised there was a simpler option.

  • My default web.config has the bindings for local operation i.e. no Kerberos
  • The web.debug.config translation hence does nothing
  • Both web.lab.config and web.release.config translations have Kerberos bindings swapped out

So all I needed to do was build the Release build (as you would for a production release anyway); this will have the correct bindings in the MSDeploy package for both Lab and Release. You can then use Release Management to set the AppSettings and connection strings as required.

Simple, no extra handling required. I had thought myself into a problem I did not really have.

Release Management components fail to deploy with a timeout if a variable is changed from standard to encrypted

I have been using Release Management to update some of our internal deployment processes. This has included changing the way we roll out MSDeploy packages; I am following Colin Dembovsky’s excellent post on the subject.

I hit an interesting issue today. One of the configuration variable parameters I was passing into a component was a password field. For my initial tests I had just left this as a clear text ‘standard’ string in Release Management. Once I got this all working I thought I had better switch this variable to ‘encrypted’, so I just changed the type on the Configuration Variables tab.


On doing this I was warned that previous deployments would not be re-deployable, but that was OK for me; it was just a trial system and I would not be going back to older versions.

However, when I tried to run this revised release template all the steps up to the edited MSDeploy step were fine, but the MSDeploy step never ran; it just timed out. The component was never deployed to the target machine’s %appdata%\local\temp\releasemanagement folder.


In the end, after a few reboots to confirm the comms were OK, I just re-added the component to the release template and entered all the variables again. It then deployed without a problem.

I think this is a case of a misleading error message.