A model for chat and messaging bot discoverability in the enterprise

Chat and messaging bot design is still in its infancy, but a widely adopted principle is that bots should have a very focused application.

Make it quick and easy to accomplish a task, and limit the opportunities for users to get lost or reach conversational dead ends.

This is generally a good principle to follow; however, it presents some challenges to organisations that have a range of applications they want to provide a bot interface to. For instance, how do users discover bots? How can information be shared between bots to minimise repetitive data capture?

One way to address this is through a ‘god-bot’ such as Google Assistant, Alexa, Siri or Cortana.

God-bots have a broad range of capabilities and are typically considered Channels in their own right. They may provide a way for third parties to extend this capability set using custom Skills. At present these extensions only support the limited command-and-control pattern. This is great for checking the weather or finding somewhere to eat, but any application that requires storing conversational state will be limited.

Monolithic or discrete composition?

So, if it is agreed that chatbots should be focused applications, does that mean we have to accept the limitations of these focused applications? Or does it mean that there needs to be a shift to god-bots, accepting the limitations of command-and-control?

Take the example of a local council: an organisation that offers, and needs to support, a wide range of services. How can these wide-ranging services be supported while still being convenient to users?

One of the ideas we are currently exploring is a discovery bot, which is responsible for directing users to the application of their choice. The first time a user engages with the council with an enquiry, for example a ‘Blue Badge’ enquiry, the discovery bot can use menus or Natural Language Processing (NLP) to understand the user’s intent, before redirecting the user to the appropriate target bot.

Users are aware of which service they are talking with at all points. If the chat needs to be continued or restarted, the user will appreciate being able to go straight to that chatbot without having to navigate and rediscover it.

How does this work? The following diagram shows the principle of discovering a single bot, but the model scales to any number of bots.


Support for this model is channel dependent. Basically, if a channel supports navigating to a chat bot through a URL, then the model can be implemented on that channel.

Facebook Messenger’s m.me links are a great example of this. They make it even easier by letting you share context using parameterised m.me links. In a later post I will share the technical detail explaining how this can be done.
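For example, an m.me link points straight at a Page’s Messenger conversation, and the `ref` parameter carries an arbitrary payload that is delivered to the target bot as a referral (the page name and payload below are placeholders):

```
https://m.me/CouncilServicesBot?ref=blue-badge-enquiry
```

The target bot can read the payload from the referral event and skip straight to the right conversation, which is what makes the discovery-bot handoff possible without repeating data capture.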

Thanks for reading!

Debug your Bot with Visual Studio debugger

I previously posted about using ngrok to debug your Node/C# bot, and mentioned that you can also use the Visual Studio debugger under certain circumstances.

These circumstances are:

  • You are using the .NET Bot Builder
  • You are hosting in Azure, which supports remote debugging
  • You have a DLL that contains debugging information – typically this means a Debug build

If all these are true then you can download the Azure SDK and attach your local Visual Studio environment to the remote process.

See how in this short video:

Use NuGet with Azure Functions

Azure Functions are “serverless” pieces of functionality. You can take your existing C# or JavaScript code and it becomes a single unit of maintenance, upgrade, scaling and so on.

One of the key differences is the way that code is authored out of the box: although you can use an IDE like Visual Studio, you can also use the browser as your IDE.

There are a few nuances that you need to be aware of, such as adding NuGet packages. It’s easy once you know how, though! See how you can do this in this short video.

The configuration I pasted into my project.json is here:

{
  "frameworks": {
    "net46": {
      "dependencies": {
        "Newtonsoft.Json": "9.0.1"
      }
    }
  }
}

Debug your Bot with ngrok

Once you have deployed your Bot to Azure, what do you do if you need to debug or diagnose any issues with the Bot code?

If you are using the .NET Bot Builder you can use the Visual Studio remote debugger and attach your local debugger in Visual Studio to the remote process. Azure supports this but other hosting providers do not; and of course your Bot needs to be .NET, and you need to have debugging symbols available.

What do you do if:

  • you are using Node?
  • you do not have debugging symbols?
  • you are hosting your Bot application with a provider that doesn’t support the Visual Studio remote debugger?

Well, you are in luck, because you can use ngrok to deal with all these constraints.

Ngrok provides a secure tunnel to your local machine via a publicly accessible endpoint.

In this short video see how you can use it to debug your Bot application locally. ngrok will work with any language/any technology using common transport protocols (like HTTP).

So, to summarise:

  1. Download ngrok from https://ngrok.com/
  2. Extract the zip file to a folder of your choosing.
  3. Open a command prompt in the above folder, and run
    ngrok http 80

    (change 80 to the local port you want to expose)

  4. Change your client application to point to the ngrok.io endpoint
  5. Test and debug!

Azure Web Apps-Deploying Java Servlets to Azure App Service Web Apps

If you are considering moving your websites to Azure but either have a lot of legacy applications written in Java or your organisation is Java-focused, then Azure App Service provides the option to host Java code (Java Servlets, JSPs etc.) in the same way that it can host .NET code (ASP.NET Web API, Forms, MVC etc.)

To test this, I took a pre-built WAR file containing a single Java Servlet to see how much effort was required to host it in an Azure Web App.

The approach to hosting Java is as follows:

1. Create the Web App.

2. Go into the Web App, enable the Java runtime and select your application server (Tomcat and Jetty are available).


3. Upload your WAR file to the Web App. I chose FTP, but there are a number of options for publishing. To reiterate, the process of publishing a Java Web App is exactly the same as publishing a .NET Web App (except that you don’t have the option of using Visual Studio to publish). Note: put your WAR file in the “site\wwwroot\webapps” folder. This isn’t immediately obvious, and it can be one of two places depending on how the web app was provisioned. See this article for more information.


4. Confirm it is running.

That’s all there is to it.

Granted, this is a simple scenario, but Azure Web Apps have the capability to reach into your on-premises network using things like site-to-site VPN, ExpressRoute or Hybrid Connections, giving you access to resources like databases and line-of-business systems on your network.

Azure Logic Apps-The template language function ‘json’ parameter is not valid.

This is a follow up from my original blog post Azure Logic Apps–Parsing JSON message from service bus

If you see the following error from a Logic App using Service Bus trigger:

{"code":"InvalidTemplate","message":"Unable to process template language expressions in action 'Next_action' inputs at line '1' and column '11': 'The template language function 'json' parameter is not valid. The provided value 'eydlbWFpbCcgOiAndGVzdEB0ZXN0LmNvbSd9' cannot be parsed: 'Unexpected character encountered while parsing value: e. Path '', line 0, position 0.'. Please see https://aka.ms/logicexpressions#json for usage details.'."}

This is because messages from Service Bus topic triggers are now presented as base64-encoded messages rather than plain text. Fortunately the fix is straightforward: Logic Apps gives you a decodeBase64 function that you can use in the Logic App.
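You can see what is going on by decoding the payload from the error message yourself, for example with Node (the base64 value below is the sample from the error above):

```javascript
// Decode the base64-encoded body that the Service Bus trigger now delivers.
// The sample value is taken from the error message above.
const raw = 'eydlbWFpbCcgOiAndGVzdEB0ZXN0LmNvbSd9';
const decoded = Buffer.from(raw, 'base64').toString('utf8');

console.log(decoded); // → {'email' : 'test@test.com'}
```

Note that the json() function was being handed the base64 text rather than the decoded message body, which is exactly what the “Unexpected character encountered while parsing value: e” part of the error is complaining about.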

You need to switch the Logic App to code view and apply the decodeBase64 function before parsing the JSON. So an expression along the lines of @json(triggerBody()) becomes @json(decodeBase64(triggerBody())) (note the call to decodeBase64 before the call to json):


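In code view, the fix looks something like this (a sketch using the action name from the error message above, not a complete action definition):

```json
"Next_action": {
  "inputs": "@json(decodeBase64(triggerBody()))"
}
```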
UWP-Using WebAccountManager API to connect your Windows 10 to Microsoft Account

Prior to Windows 10 UWP, the recommendation for federated authentication in Windows was (and still is) to use ADAL.NET.

If you have a Windows 10 UWP application you have a new platform capability available to you called the WebAccountManager. This is the recommended approach going forward from Windows 10.

The Windows 10 UWP samples available on GitHub at https://github.com/Microsoft/Windows-universal-samples contain a WebAccountManagement sample, which shows you how to integrate your app with Azure AD, Live Connect, local accounts and other identity providers.

I’ve been through the sample and distilled the key points for Microsoft Account integration.

  1. Provide a handler for AccountsSettingsPane.GetForCurrentView().AccountCommandsRequested.  This is the method that Windows will execute when the ‘login’ dialog is shown in your app.  At this stage you add the identity providers you want users to be able to log in with.
  2. Per provider, provide a handler for WebAccountProviderCommand. This is the method that Windows will execute when the user has selected an identity provider from the list defined at stage 1.  At this stage you will physically issue a login request using the WebAuthenticationCoreManager.  At the end of this handler you should have a login result (success/fail) and a token which you can use in downstream processing.
  3. From your client (UWP app), show the account settings pane using Windows.UI.ApplicationSettings.AccountsSettingsPane.Show().  This will start the login process, and accordingly trigger the above handlers as users interact with the account settings pane.

Below is the code to implement steps 1 and 2.  Note in this example I am using the MVVM Light messenger (Messenger.Default.Send) to deliver an event-based message to subscribers.  You can use an event aggregator or your flavour of pub/sub framework to achieve the same result:

public class Authenticator
{
    public Authenticator()
    {
        AccountsSettingsPane.GetForCurrentView().AccountCommandsRequested += Authenticator_AccountCommandsRequested;
    }

    private async void Authenticator_AccountCommandsRequested(AccountsSettingsPane sender, AccountsSettingsPaneCommandsRequestedEventArgs e)
    {
        AccountsSettingsPaneEventDeferral deferral = e.GetDeferral();

        var provider =
            await WebAuthenticationCoreManager.FindAccountProviderAsync("https://login.microsoft.com", "consumers");

        WebAccountProviderCommand providerCommand = new WebAccountProviderCommand(provider, WebAccountProviderCommandInvoked);

        // Register the provider command so it appears in the accounts settings pane.
        e.WebAccountProviderCommands.Add(providerCommand);

        e.HeaderText = "Please select an account to log in with";

        // Complete the deferral once the providers are registered.
        deferral.Complete();
    }

    private async void WebAccountProviderCommandInvoked(WebAccountProviderCommand command)
    {
        WebTokenRequest webTokenRequest = new WebTokenRequest(command.WebAccountProvider, "wl.basic", "none");

        WebTokenRequestResult webTokenRequestResult = await WebAuthenticationCoreManager.RequestTokenAsync(webTokenRequest);

        var token = webTokenRequestResult.ResponseData[0].Token;

        // Sends a message with the MVVM Light messenger. This is solution specific; typically you want to use the token to
        // query a service (for example Live Connect).
        Messenger.Default.Send(new AuthenticatedMessage() { Token = token });
    }
}

And below is the code to implement step 3. It is a single call from your client code:

    Windows.UI.ApplicationSettings.AccountsSettingsPane.Show();

 Further reading

I’d encourage you to have a good look at the WebAccountManagement sample in https://github.com/Microsoft/Windows-universal-samples.  This sample contains more detail and different usage scenarios for reference.

Also have a look at Vittorio Bertocci’s post at https://blogs.technet.microsoft.com/ad/2015/08/03/develop-windows-universal-apps-with-azure-ad-and-the-windows-10-identity-api/ which gives a bit of background, explains when you should use ADAL.NET vs WebAccountManager, and has an example of integrating with Azure AD.

LUIS.ai-First steps with Language Understanding Intelligent Service (beta)

LUIS is part of the suite of Microsoft Cognitive Services announced at Build.  It allows you to process unstructured language queries and use them to interface with your applications.  It has particular relevance to any business process which benefits from human interaction patterns (support, customer service etc.).  Having looked at LUIS from the point of view of an absolute novice, I’ve put together a few points which may be useful to others.

I’ve broken this down into ‘things to do’ when getting started. To illustrate the steps, I’ll use the very simple example of ordering a sandwich.  A sandwich has a number of toppings and can also have sauce on it. 

The role of LUIS is to take plain English and parse it into a structure that my app can process.  When I say “can I order a bacon sandwich with brown sauce” I want LUIS to tell me that a) an order has been placed, b) the topping is bacon, c) the sauce is brown.  Once LUIS has provided that data then my app can act accordingly.

So, for LUIS to understand the sandwich-ordering language, you need to define entities, define intents, and then train and test the model before using it.  Read on to understand what I mean by these high-level statements.

1. Define your ‘entities’

These are the ‘things’ you are talking about.  For me, my entities are ‘sauce’, and ‘topping’.  I also have ‘extra’ which is a generalisation of any other unstructured information the customer might want to provide – so things like Gluten free, no butter etc.

2. Define your ‘intents’

These are the contexts in which the entities are used.  For me, I have only one intent defined, which is ‘order’.
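Taken together, the entities and intent for this example boil down to something like the following sketch (the portal manages this definition for you, and the real export format carries more detail):

```json
{
  "intents": [
    { "name": "order" }
  ],
  "entities": [
    { "name": "topping" },
    { "name": "sauce" },
    { "name": "extra" }
  ]
}
```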

3. Test/train the model – use ‘utterances’

After entities and intents have been defined you can begin to test the model. 

Initially, the untrained LUIS will be really bad at understanding you.  That’s the nature of machine learning. But as it is trained with more language patterns and told what these mean it will become increasingly accurate.

To begin the process LUIS needs to be interacted with.  LUIS calls interactions ‘utterances’.  An utterance is an unstructured sentence that hasn’t been processed in any way.

In the portal you can enter an utterance to train or test the model. 

Here, I am adding the utterance “can I order a sausage sandwich with tomato sauce”.  I’ll select the entities that are part of that utterance (sausage and tomato sauce) and tell LUIS what they are.


You can repeat this process with as many variations of language as possible, for example “give me a bacon sandwich” and “I want a sausage sandwich with brown sauce”. It’s worth trying this exercise with different people, as different people will say the same thing with their own speech patterns. The more trained variations the better, basically.  You can, and will, come back to train it later, so don’t feel it has to be 100% accurate at this stage.

Once you go live with the model, LUIS will come across patterns that it cannot fully process; this is where the feedback loop for training becomes very important.  LUIS will log all the interactions it has had, and you can access them using the publish button.


These logs are important as they give you insight into your customers language.  You should use this data to train LUIS and improve its future accuracy.

4. Use the model

Finally, and I guess most importantly, you need to use the model. If you look in the screenshot above there is a box where you can type in a query, aka an utterance. The query string will look something like this:

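At the time of writing, the beta endpoint takes roughly this form (the app id and subscription key below are placeholders):

```
https://api.projectoxford.ai/luis/v1/application?id=<app-id>&subscription-key=<key>&q=can%20I%20order%20a%20bacon%20sandwich%20with%20brown%20sauce
```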

This basically issues an HTTP GET against the LUIS API and returns the processed result as a JSON object.


I’ve annotated the diagram so you can see:

A) the query supplied to LUIS.

B) the topping entity that it picked out.

C) the sauce entity that it picked out.

In addition to this, you will see other things such as the recognised intent, the confidence of the results, etc.  I encourage you to explore the structure of this data.  You can use it from any application that can issue an HTTP GET, and process the results accordingly.
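For reference, the JSON that comes back has roughly this shape — an illustrative sketch based on the entities and intent defined earlier, not a verbatim response:

```json
{
  "query": "can I order a bacon sandwich with brown sauce",
  "intents": [
    { "intent": "order", "score": 0.92 }
  ],
  "entities": [
    { "entity": "bacon", "type": "topping" },
    { "entity": "brown sauce", "type": "sauce" }
  ]
}
```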

I’ll write later on the Bot Framework, which has a built-in forms engine to interface with LUIS models, enhancing their language capabilities with structured processing.

This is a simple example but hopefully it shows the potential use cases for this, and gives you some pointers to get started.

Azure Logic Apps-Service Bus connector not automatically triggering?

By default, Logic Apps are set to trigger every 60 minutes which, if you are not aware of it, may lead you to think that your logic app isn’t working at all!

As Logic Apps are in preview, there are some features that are not available through the designer yet, but you can do a lot through the Code view.

In this instance you can set the frequency to Second, Minute, Hour, Day, Week, Month or Year.  A frequency of every minute requires a Standard service plan or better; if your service plan doesn’t allow the frequency, you will get an error as soon as you try to save the logic app.  Here’s what I set to have it run every minute.

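For reference, the recurrence section of the trigger in Code view looks roughly like this (property names per the preview schema; treat this as a sketch):

```json
"recurrence": {
  "frequency": "Minute",
  "interval": 1
}
```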

More information can be found at https://msdn.microsoft.com/en-us/library/azure/dn948511.aspx.