BM-Bloggers

The blogs of Black Marble staff

LUIS.ai – First steps with Language Understanding Intelligent Service (beta)

LUIS is part of the suite of Microsoft Cognitive Services announced at Build.  It allows you to process unstructured language queries and use them to interface with your applications.  It has particular relevance to any business process that benefits from human interaction patterns (support, customer service, etc.).  Having looked at LUIS from the point of view of an absolute novice, I’ve put together a few points which may be useful to others.

I’ve broken this down into ‘things to do’ when getting started. To illustrate the steps, I’ll use the very simple example of ordering a sandwich.  A sandwich has a number of toppings and can also have sauce on it. 

The role of LUIS is to take plain English and parse it into a structure that my app can process.  When I say “can I order a bacon sandwich with brown sauce” I want LUIS to tell me that a) an order has been placed, b) the topping is bacon, c) the sauce is brown.  Once LUIS has provided that data then my app can act accordingly.

So, for LUIS to understand the sandwich ordering language, you need to define entities, define intents, and then train and test the model before using it.  Read on to understand what I mean by these high-level statements. 

1. Define your ‘entities’

These are the ‘things’ you are talking about.  For me, my entities are ‘sauce’ and ‘topping’.  I also have ‘extra’, which is a generalisation of any other unstructured information the customer might want to provide – things like gluten free, no butter, etc.

2. Define your ‘intents’

These are the contexts in which the entities are used.  For me, I only have one intent defined, which is ‘order’.

3. Test/train the model – use ‘utterances’

After entities and intents have been defined you can begin to test the model. 

Initially, the untrained LUIS will be really bad at understanding you.  That’s the nature of machine learning. But as it is trained with more language patterns and told what these mean it will become increasingly accurate.

To begin the process you need to interact with LUIS.  LUIS calls these interactions ‘utterances’.  An utterance is an unstructured sentence that hasn’t been processed in any way.

In the portal you can enter an utterance to train or test the model. 

Here, I am adding the utterance “can I order a sausage sandwich with tomato sauce”.  I’ll select the entities that are part of that utterance (sausage and tomato sauce) and tell LUIS what they are.


You can repeat this process with as many variations of language as possible, for example “give me a bacon sandwich” and “I want a sausage sandwich with brown sauce”.  It’s worth trying this exercise with different people, as each person will phrase the same thing in their own way.  The more trained variations the better, basically.  You can, and will, come back and train it further later, so don’t feel it has to be 100% accurate at this stage.

Once you go live with the model, LUIS will come across patterns that it cannot fully process, which is why the feedback loop for training is so important.  LUIS logs all the interactions it has had, and you can access them using the publish button.

[Screenshots: the publish dialog, showing the logged utterances and a box to type a test query]

These logs are important as they give you insight into your customers’ language.  You should use this data to train LUIS and improve its future accuracy.

4. Use the model

Finally, and I guess most importantly, you need to use the model. If you look in the screenshot above there is a box where you can type in a query, aka an utterance. The query string will look something like this:

https://api.projectoxford.ai/luis/v1/application?id=<appid>&subscription-key=<subscriptionkey>&q=can%20i%20order%20a%20bacon%20sandwich%20with%20brown%20sauce

This issues an HTTP GET against the LUIS API and returns the processed result as a JSON object. 

[Screenshot: the JSON response from LUIS, annotated A, B and C as below]

I’ve annotated the diagram so you can see:

A) the query supplied to LUIS.

B) the topping entity that it picked out.

C) the sauce entity that it picked out.

In addition to this, you will see other things such as the recognised intent, the confidence of the results, etc.  I encourage you to explore the structure of this data.  You can use this data in any application that can issue an HTTP GET and process the results accordingly. 
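For instance, a quick PowerShell sketch of calling the endpoint might look like this (the app ID and subscription key are the same placeholders as in the query string above, and the exact property names in the response are worth checking against the raw JSON for your API version):

$appId  = "<appid>"
$subKey = "<subscriptionkey>"
$query  = [uri]::EscapeDataString("can I order a bacon sandwich with brown sauce")
$uri    = "https://api.projectoxford.ai/luis/v1/application?id=$appId&subscription-key=$subKey&q=$query"

# Invoke-RestMethod issues the HTTP GET and converts the JSON response into an object
$result = Invoke-RestMethod -Uri $uri -Method Get

# Inspect the recognised intents and the entities LUIS picked out
$result.intents
$result.entities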

I’ll write later about the Bot Framework, which has a built-in forms engine to interface with LUIS models, enhancing the language capabilities with structured processing.

This is a simple example but hopefully it shows the potential use cases for this, and gives you some pointers to get started.

Notes from the field: Using Hyper-V Nat Switch in Windows 10

The new NAT virtual switch that can be created on Windows 10 for Hyper-V virtual machines is a wonderful thing if you're an on-the-go evangelist like myself. For more information on how to create one, see Thomas Maurer's post on the subject.

This post is not about creating a new NAT switch. It is, however, about recreating one and the pitfalls that occur, and how I now run my virtual environment with some hacky PowerShell and a useful DHCP server utility.

Problems Creating NAT Switch? Check Assigned IP Addresses

I spent a frustrating amount of time this week trying to recreate a NAT switch after deleting it. Try as I might, every time I executed the command to create the new switch it would die. After trial and error I found that the issue was down to the address range I was using. If I created a new switch with a new address range everything worked, but only that one time: if I deleted the switch and tried again, any address range that I'd used before would fail.

This got me digging.

I created a new switch with a new address range. The first thing I noticed was that I had a very long routing table; Get-NetRoute showed routes for all the address ranges I had previously created. That led me to look at the network adapter created by the virtual switch. When you create a new NAT switch the resulting adapter gets the first usable IP address in the range bound to it (so 192.168.1.0/24 results in an IP of 192.168.1.1). My adapter had an IP address for every single address range I'd created and then deleted.

Obviously, when the switch is removed the IP configuration is being stored by Windows somewhere. When a new switch is created, all that old binding information is reapplied to the new switch. I'm not certain whether this is related to the interface index, the name or something else, since when I remove and re-add the switch on my machine it always seems to get the same interface index.

A quick bit of PowerShell allowed me to rip all the IP addresses from the adapter at once. The commands below are straightforward. The first lets me find the adapter by name (as shown in the Network Connections section of Control Panel) - replace the relevant text with the name of your adapter. From that I can find the interface index, and the second command gets all the IPv4 addresses (only IPv4 seems to have the problem here) and removes them from the interface - again, swap in your own interface index. I can then use PowerShell to remove the VMSwitch and associated NetNat object.

# Find the adapter created by the NAT switch (name as shown in Network Connections)
Get-NetAdapter -Name "vEthernet (NATSwitch)"
# Remove every IPv4 address bound to that interface (swap in your own interface index)
Get-NetIPAddress -InterfaceIndex 13 -AddressFamily IPv4 | Remove-NetIPAddress

Once that's done I can happily create new virtual switches using NAT and an address range I've previously had.
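For reference, the remove-and-recreate steps look roughly like this on recent Windows 10 builds (a sketch, not gospel - the switch name, NAT name and address range are examples from my setup, and Thomas Maurer's post covers creation in detail):

# Remove the old switch and its associated NAT object (names are examples)
Remove-VMSwitch -Name "NATSwitch" -Force
Remove-NetNat -Name "NATNetwork" -Confirm:$false

# Recreate: an internal switch, the first usable address in the range on the
# host adapter, and a NAT object for the prefix
New-VMSwitch -Name "NATSwitch" -SwitchType Internal
New-NetIPAddress -IPAddress 192.168.1.1 -PrefixLength 24 -InterfaceAlias "vEthernet (NATSwitch)"
New-NetNat -Name "NATNetwork" -InternalIPInterfaceAddressPrefix 192.168.1.0/24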

Using DHCP on a NAT switch for ease

My next quest was for a solution to the conundrum we all have when running VMs: IP addressing. I could assign each VM a static address, but then I have to keep track of them. I also have a number of VMs in different environments that I want to run, and I need external DNS to work. DHCP is the answer, but Windows 10 doesn't have a DHCP server and I don't want to build a VM just to do that.

I was really pleased to find that somebody has already written what I need: DHCP Server for Windows. This is a great utility that can run as a service or as a tray app. It uses an ini file for configuration, and by editing the ini file you can manage things like address reservations. Importantly, you can choose which interface the service binds to, which means it can be run only against the virtual network and not cause issues elsewhere.

There's only one thing missing: DNS. Whilst the DHCP server can run its own DNS if you like, it still has a static configuration for the forwarder address. In a perfect world I'd like to be able to tell it to hand my PC's primary DNS address to clients requesting an IP.

Enter PowerShell, stage left...

Using my best Google-fu I tracked down a great post by Lee Holmes from a long time ago about using PowerShell to edit ini files through the old faithful Windows API calls for PrivateProfileString. I much prefer letting Windows deal with my config file than writing some complex PowerShell parser.

I took Lee's code and created a single PowerShell module with three functions as per his post, which I called Update-IniFiles.psm1. I then wrote another script that uses those functions to edit the ini file for DHCP Server.
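In case you don't want to dig the API call out yourself, the shape of such a function is roughly this (a minimal sketch of the idea, not Lee's actual module):

# Sketch only: wrap the Win32 WritePrivateProfileString API so Windows, not
# hand-rolled PowerShell, does the ini file editing
Add-Type -Namespace Win32 -Name ProfileApi -MemberDefinition @'
[DllImport("kernel32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
public static extern bool WritePrivateProfileString(
    string section, string key, string value, string filePath);
'@

function Set-PrivateProfileString($Path, $Section, $Key, $Value)
{
    # Returns $true if the value was written successfully
    [Win32.ProfileApi]::WritePrivateProfileString($Section, $Key, $Value, $Path)
}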

It's dirty and not tested on anything but my machine, but here it is:

Import-Module C:\src\Update-IniFiles.psm1

# DNS server for the interface that carries the default IPv4 route
$dnsaddr = (Get-DnsClientServerAddress -InterfaceIndex (Get-NetRoute -DestinationPrefix 0.0.0.0/0)[0].ifIndex -AddressFamily IPv4).ServerAddresses[0]

if ($dnsaddr.Length -gt 0)
{
    Set-PrivateProfileString "C:\Program Files\DHCPSrv\dhcpsrv.ini" GENERAL DNS_0 $dnsaddr
}
else
{
    # Failsafe: fall back to Google's public DNS server
    Set-PrivateProfileString "C:\Program Files\DHCPSrv\dhcpsrv.ini" GENERAL DNS_0 8.8.8.8
}

The line that sets $dnsaddr is the one that may catch you out. It gets the DNS server information for the interface that is linked to the default IPv4 route. On my machine there are multiple entries returned by the Get-NetRoute command, so I grab the first one from the array. Similarly, there are multiple DNS servers returned and I only want the first of those, too. I should really expand the code and check what's returned, but this is only for my PC - edit as you need!

Just in case I get nothing back I have a failsafe which is to set the value to the Google public DNS server on 8.8.8.8.

Now I run that script first, then start my DHCP server and all my VMs get valid IP information and can talk on whatever network I am connected to, be it physical or wireless.


Azure Logic Apps-Service Bus connector not automatically triggering?

By default Logic Apps will be set to trigger every 60 minutes which, if you are not aware, may lead you to think that your logic app isn’t working at all!

As Logic Apps are in preview, there are some features that are not available through the designer yet, but you can do a lot through the Code view.

In this instance you can set the frequency to Second, Minute, Hour, Day, Week, Month or Year.  A frequency of every minute requires a Standard service plan or better; if your service plan doesn’t allow the frequency you will get an error as soon as you try to save the logic app.  Here’s what I set to have it run every minute.

[Screenshot: the recurrence settings in Code view]
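In the Code view JSON this boils down to a recurrence block on the trigger, shaped something like the snippet below (a sketch only; the surrounding trigger definition is omitted and the exact trigger name will match your own connector):

"recurrence": {
    "frequency": "Minute",
    "interval": 1
}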

More information can be found at https://msdn.microsoft.com/en-us/library/azure/dn948511.aspx.

Build 2016 - Day 0

So, Day 0 of Build. A day to settle in, relax and register. My mother, my sister and I found ourselves wandering through San Francisco and found our way back to the Microsoft Store.

We then headed off to our first interview, with Sam Guckenheimer. I'll be writing a separate blog post talking about how that went, so please go have a look.

It was a quick dash across San Francisco to make it to the Moscone Center for registration, where I got my Build badge!! And adorned it with the iconic Black Marble.

Our final stop for the day was a student panel close to the Moscone Center. Myself and four other MSPs from around the world sat on this panel in front of Gartner, Forrester and a number of other large analyst firms and reporters. I was joined by two US Imagine Cup finalists and two other MSPs from across Europe.

We were asked questions such as what we find the most exciting thing in technology. Many answers were given, such as the hardware, or the reward of writing bug-free software. I personally believe that the most exciting thing in technology is seeing what a difference it is making to the lives around us. As a small example, in the UK the BBC are pushing an initiative where they want all children to be able to code. To do this they have launched a microcomputer called the Microbit. Every child aged 11 will receive one and there are lots of resources to help both students and teachers get the most out of it. Things like that I personally find exciting in computing, and they make me so happy to be in the field myself.

Other questions included what we thought of the gender divide in computing and whether we found it was prevalent in our own places of work and universities. On our panel of five it so happened that three of us were female, but I would say that gives a skewed perspective. In my year, out of close to 200 students, fewer than ten are female, and the other two MSPs there agreed with me that there is a similar spread for them also.

Over the hour we were asked many questions, and though at first it seemed like one of the most terrifying things I had ever done, I actually enjoyed it lots. Though the last question, of what we want to do after being students, scared most of us really. It was really interesting to hear the thoughts and opinions of other like-minded and similarly aged people from around the world. A huge thank you to Jennifer Ritzinger for being so lovely on the day as well :)

Due to hunger and the adrenalin wearing off, we found ourselves heading back for food once more, this time to The Cheesecake Factory, where the food portions are as big as your head... Unfortunately, because they don't take bookings, we didn't finish eating until close to ten. Definitely time for bed!!

 

Build 2016 - Getting there

Many would say starting a nearly 24 hour journey with a storm is far from ideal, but I would say it added some mild fun to the whole affair. Sadly storms mean planes cannot take off, which turned a one hour flight into a one hour forty wait and then an hour flight. At least it meant plenty of time for snoozes.

Unfortunately our delay in Manchester meant that we had only half an hour to get off our plane, get through security in Paris and then run across the terminal to get to our gate. We luckily made it with seconds to spare; the gates were closed as soon as we walked through. As always with a long-haul flight, I found my options of what to do limited to either napping or watching whatever inflight entertainment was on offer. There was only one option...


After a long 11 hour flight we landed in joyous, sunny San Francisco. Moments after leaving the airport I found myself removing my jacket and shedding layers due to the warm temperatures and bright sunshine. After arriving safely at our hotel there was the obvious question of where to stop first. There was a unanimous decision to stop at the Microsoft Store.

As ever, all the latest tech was out, from phones to Surface Books, and I fawned over them all. Who doesn't like shiny new tech...

Sadly the phones were secured down (sigh), maybe next time. Considering that for us it was getting close to 4pm (midnight in the UK), it was decided that it would be best to have an early meal and then an early bedtime.

Luckily there are some amazing restaurants nearby, including a Mexican one. After eating what I can only describe as my body weight in guacamole, it was time to call it a night and prepare for Build Day 0.

Azure Logic Apps–Parsing JSON message from service bus

What I want: When the logic app trigger receives a JSON formatted message from an Azure Service Bus topic, I want to send a notification to the address in the “email” field.  My sample message structure looks like this:

[Screenshot: the sample JSON message, including an “email” field]

What happens: Because a message received on Service Bus doesn’t have a predefined format – it could be JSON, XML, or anything else – Logic Apps doesn’t know the structure of the message.  So in the designer, it looks like:

[Screenshot: the designer, using the raw Content output]

Which is great, but it just dumps out the entire object, and not the email field that I need.

How to fix it: Fortunately the fix is pretty easy; basically you need to:

1) Select the Content output (above); you are going to edit this value.

2) Switch over to ‘Code view’ and manually type the expression (below).

If you haven't used it before, code view can be found in the toolbar:

[Screenshot: the Code view button in the toolbar]

Once you are in the code view, scroll down to the connector you are interested in. You will see the expression for the trigger body. This is the entire message received from the trigger, basically.

[Screenshot: the trigger body expression in Code view]

You need to modify this to parse the entire message using the ‘json’ function; then you can access its typed fields.

If you have ever used JSON.parse (or any object deserialization in pretty much any language for that matter) this concept should be familiar to you.  When I was done I ended up with:

[Screenshot: the finished expression, annotated a and b]

I’ve broken the entire segment into two parts, a) parses the content and b) accesses the ‘email’ field of the parsed JSON object.
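For the shape of it, the finished expression ends up looking something like the line below (a sketch only; ‘ContentData’ is my assumption for the property that holds the raw message body on the trigger, so check your own trigger’s output for the exact name):

@json(triggerBody()['ContentData'])['email']

The json(...) call is part (a) and the ['email'] indexer is part (b).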

Hope this helps someone!

 

Update: if you are seeing an error when trying to parse see my new blog post Azure Logic Apps-The template language function 'json' parameter is not valid.

In place upgrade times from TFS 2013 to 2015

There is no easy way to work out how long a TFS in-place upgrade will take; there are just too many factors to make any calculation reasonable:

  • Start and end TFS version
  • Quality/Speed of hardware
  • Volume of source code
  • Volume of work items
  • Volume of work item attachments
  • The list goes on….

The best option I have found is to graph the various upgrades I have done and try to make an estimate based on the shape of the curve. I did this for 2010 > 2013 upgrades, and now I think I have enough data from upgrades of sizable TFS instances to do the same for 2013 to 2015.

[Graph: timings from several TFS 2013 to 2015 upgrades]

 

Note: I extracted this data from the TFS logs using the script in this blog post; it is also in my Git repo.

So, as a rule of thumb: time the pause that occurs at around step 100 (the exact step number varies depending on your starting 2013.x release), and expect the upgrade to complete in about 10x that period.

It is not 100% accurate, but it is close enough so you know whether to go for a coffee, a meal, the pub or bed for the night.
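If you want to put a number on it, the arithmetic is trivial; here is a throwaway sketch (the 20 minute pause is purely an example figure):

# Time the pause at around step 100, then multiply by ten for a rough total
$pauseMinutes = 20        # example: how long the upgrade sat at ~step 100
$estimatedTotalMinutes = $pauseMinutes * 10
"Expect the upgrade to take roughly $estimatedTotalMinutes minutes in total"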

Announcing release of my vNext build tasks as extensions in the VSTS/TFS Marketplace

In the past I have posted about the vNext TFS build tasks I have made available via my GitHub repo. Over the past few weeks I have been making an effort to repackage these as extensions in the new VSTS/TFS Marketplace, making them easier to consume in VSTS, or in TFS 2015.2 using the new extensions support.

This is an ongoing effort, but I am pleased to announce the release of the first set of extensions.

  • Generate Release Notes – generates a markdown release notes file based on work items associated with a build
  • Pester Test Runner – allows Pester based tests to be run in a build
  • StyleCop Runner – allows a StyleCop analysis to be made of files in a build
  • Typemock TMockRunner – uses TMockRunner to wrap MSTest, allowing Typemock tests to be run on a private build agent

To try to avoid people going down the wrong path, I intend to go back through my older blog posts on these tasks and update them to point at the new resources.

Hope you find these tasks useful. If you find any problems, please log an issue on GitHub.

Unblocking a stuck Lab Manager Environment (the hard way)

This is a post so I don’t forget how I fixed access to one of our environments yesterday, and hopefully it will be useful to some of you.

We have a good many pretty complex environments deployed to our lab Hyper-V servers, controlled by Lab Manager. Operations such as starting, stopping or repairing those environments can take a long, long time, but this time we had one that was quite definitely stuck. The lab view showed the many servers in the lab with green progress bars about halfway across, but after many hours we saw no progress. The trouble is, at this point you can’t issue any other commands to the environment from within the Lab Manager console – it’s impossible to cancel the operation and regain access to the environment.

Normally in these situations, stepping from Lab Manager to the SCVMM console can help. Stopping and restarting the VMs through SCVMM can often give Lab Manager the kick it needs to wake up. However, this time that had no effect. We then tried restarting the TFS servers to see if they’d got stuck, but that didn’t help either.

At this point we had no choice but to roll up our sleeves and look in the TFS database. You’d be surprised (or perhaps not) at how often we need to do that…

First of all we looked in the LabEnvironment table. That showed us our environment, and the State column contained a value of Repairing.

Next up, we looked in the LabOperation table. Searching for rows where the DataspaceId column value matched that of our environment in the LabEnvironment table showed a RepairVirtualEnvironment operation.

In the tbl_JobSchedule table we found an entry where the JobId column matched the JobGuid column from the LabOperation table. The interval on that was set to 15, from which we inferred that the repair job was being retried every fifteen minutes by the system. We found another entry for the same JobId in the tbl_JobDefinition table.

Starting to join the dots up, we finally looked in the LabObject table. Searching for all the rows with the same DataspaceId as earlier returned all the lab hosts, environments and machines that were associated with the Team Project containing the lab. In this table, our environment row had a PendingOperationId which matched that of the row in the LabOperation table we found earlier.

We took the decision to attempt to revive our stuck environment by removing the stuck job. That would mean carefully working through all the tables we’d explored and deleting the rows, hopefully in the correct order. As the first part of that, we decided to change the value of the State column in the LabEnvironment table to Started, hoping to avoid crashing TFS should it try to parse all the information about the repair job we were about to slowly remove.
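For reference, the poking around described above amounts to queries along these lines (a sketch using Invoke-Sqlcmd; the server name, collection database name and the filter for your own environment row are placeholders, the table and column names are simply those we found above, and editing the TFS databases like this is unsupported, so take a backup first):

$collectionDb = "Tfs_DefaultCollection"   # your team project collection database

# What state does Lab Manager think the environment is in?
Invoke-Sqlcmd -ServerInstance "TFSSQL" -Database $collectionDb -Query "SELECT * FROM LabEnvironment WHERE State = 'Repairing'"

# The pending operation and its scheduled job
Invoke-Sqlcmd -ServerInstance "TFSSQL" -Database $collectionDb -Query "SELECT * FROM LabOperation WHERE DataspaceId = <your DataspaceId>"

# The one change that unblocked us: mark the environment as Started
Invoke-Sqlcmd -ServerInstance "TFSSQL" -Database $collectionDb -Query "UPDATE LabEnvironment SET State = 'Started' WHERE <filter identifying your environment row>"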

Imagine our surprise, then, when having made that one change, TFS itself cleaned up the database, removed all the table entries referring to the repair environment job and we were immediately able to issue commands to the environment again!