BM-Bloggers

The blogs of Black Marble staff

Windows Azure and SignalR with Gadgeteer

I’ve been playing with Gadgeteer (http://www.netmf.com/gadgeteer/) for a while now and I am a big fan of the simple way we can build embedded hardware applications with high functionality. We have a proof of concept device that includes a colour touch screen, an RFID reader and an Ethernet connection. This device connects to a web api REST service hosted in Windows Azure, which we use to retrieve data depending upon the RFID code that is read. This works well, but there are times when we would like to notify the device that something has changed. SignalR seems to be the right technology for this, as it removes the need to write polling code in the application.

Gadgeteer uses the .NET Micro Framework, a cut-down .NET Framework which doesn’t support the ASP.NET SignalR libraries. As we can already call web api services from the Micro Framework using the WebRequest classes, I wondered what would be involved in getting SignalR working on my Gadgeteer device.

The first problem was working out the protocol used by SignalR. After a short while trawling the web for details I gave up and got my old friend Fiddler out to see what was really happening.

After creating a SignalR service, I connected my working example to the SignalR hub running on my local IIS.
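
For reference, the hub itself needs nothing special. Below is a minimal sketch of the sort of hub I was testing against – the class and method names are illustrative, and the HubName attribute pins the ‘chathub’ name that appears in the URLs later:

using Microsoft.AspNet.SignalR;
using Microsoft.AspNet.SignalR.Hubs;

[HubName("chathub")]
public class ChatHub : Hub
{
    public void Send(string message)
    {
        // Broadcast to every connected client, including the Gadgeteer device
        Clients.All.addMessage(message);
    }
}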

The first thing that pleased me was that the protocol looked fairly simple. It starts with a negotiate call, which returns a token needed for the actual connection.

GET /signalr/negotiate?_=1369908593886

Which returns some JSON:

{"Url":"/signalr","ConnectionToken":"xyxljdMWO9CZbAfoGRLxNu54GLHm7YBaSe5Ctv6RseIJpQPRJIquHQKF4heV4B_C2PbVab7OA2_8KA-AoowOEeWCqKljKr4pNSxuyxI0tLIZXqTFpeO7OrZJ4KSx12a30","ConnectionId":"9dbc33c2-0d5e-458f-9ca6-68e3f8ff423e","KeepAliveTimeout":20.0,"DisconnectTimeout":30.0,"TryWebSockets":true,"WebSocketServerUrl":null,"ProtocolVersion":"1.2"}

I used this JSON to pull out the connection id and connection token. This was the first tricky part with the .NET Micro Framework: it doesn’t have the JSON serialisation support of the full framework, and the string functions are limited too. So I fell back on basic Substring and IndexOf calls, wrapped in a small helper:

private static string ExtractJsonValue(string negJson, string token)
{
    string stringToExtract = null;

    // Look for "token":" - the 4 extra characters are the quotes and the colon
    int index = negJson.IndexOf("\"" + token + "\":\"");
    if (index != -1)
    {
        // Extracts the exact JSON value for the name represented by token
        int startindex = index + token.Length + 4;
        int endindex = negJson.IndexOf("\"", startindex);
        if (endindex != -1)
        {
            int length = endindex - startindex;
            stringToExtract = negJson.Substring(startindex, length);
        }
    }

    return stringToExtract;
}

With the token retrieved, Fiddler led me to the actual SignalR connection:

GET /signalr/connect?transport=webSockets&connectionToken=yourtoken&connectionData=%5B%7B%22name%22%3A%22chathub%22%7D%5D&tid=2 HTTP/1.1

Looking at this I could see that I needed to pass in the token retrieved from negotiate, the transport type, and the name of the hub I wanted to connect to. After a bit of investigation I settled on the longPolling transport, as the Micro Framework has no WebSocket support.

Now that I thought I understood the protocol, I tried to implement a client. The first issue was what to send with the negotiate call. I figured the ‘_’ value was some sort of id for the client that is trying to connect, so I decided to use the current tick count. This seemed to work, and I guess that as long as my devices don’t connect at exactly the same time SignalR will be happy. I’ve had no problems so far with this.
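
Putting that together, the negotiate step on the device ends up something like this sketch (System.Net and System.IO; the server address is a placeholder, ExtractJsonValue is the helper shown earlier, and the Micro Framework flavours of these classes behave close enough to their full-framework namesakes):

private string connectionToken;

private void Negotiate()
{
    // The "_" value is the client-chosen id discussed above; ticks are unique enough
    string url = "http://myserver/signalr/negotiate?_=" + DateTime.Now.Ticks.ToString();

    HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
    using (WebResponse response = request.GetResponse())
    using (StreamReader reader = new StreamReader(response.GetResponseStream()))
    {
        string negJson = reader.ReadToEnd();
        connectionToken = ExtractJsonValue(negJson, "ConnectionToken");
    }
}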

Upon connecting to the hub I needed to create a separate thread to handle SignalR, so that the main device program wouldn’t stop running whilst the connection to the hub was waiting for a response. When a response arrives it contains a block of JSON data appropriate to the SignalR message being received; this needs to be decoded and passed on to the application, and you then need to reconnect to the hub. The period between receiving data and reconnecting needs to be small: whilst a message is being processed the client cannot receive any more messages and may miss some data. So I retrieve the response stream and pass its processing to a separate thread, letting me reconnect to the hub as fast as possible.
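
The receive loop therefore looks roughly like this sketch (run on its own thread, using System.Threading; the URL is a placeholder, the token really needs URL-encoding, and ProcessMessage is a hypothetical handler that decodes the JSON for the application):

private void ReceiveLoop()
{
    // connectionData is the URL-encoded JSON [{"name":"chathub"}]
    string url = "http://myserver/signalr/connect?transport=longPolling" +
        "&connectionToken=" + connectionToken +
        "&connectionData=%5B%7B%22name%22%3A%22chathub%22%7D%5D";

    while (true)
    {
        HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
        using (WebResponse response = request.GetResponse())
        using (StreamReader reader = new StreamReader(response.GetResponseStream()))
        {
            string json = reader.ReadToEnd();

            // Hand processing off to another thread so we can reconnect
            // to the hub as quickly as possible and not miss messages
            new Thread(() => ProcessMessage(json)).Start();
        }
    }
}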

This is not a full implementation of SignalR on the .NET Micro Framework, but it is a simple working client and can be used fairly successfully on the Gadgeteer device. I still need to do a little more work to speed up the reconnection, as it is still possible to miss some data.

The SignalR hub is hosted on a Windows Azure website alongside the web api service, which allows web, Windows 8 and Gadgeteer applications to work side by side.

Gadgeteer has opened up another avenue for development and helps us provide a greater variety of devices in a solution.

My session on TFS at the ‘Building Applications for the Future’ event

Thanks to everyone who attended my session on ‘TFS for Developers’ at Grey Matter’s ‘Building Applications for the Future’ event today. As you will have noticed my session was basically slide free, so there is not much to share here.

As I said at the end of my session, to find out more have a look at:

Also, a couple of people asked me about TFS and Eclipse, which I only mentioned briefly at the end. For more on Team Explorer Everywhere, have a look at the video I did last year on that very subject.

Video on NuGet for C++ on Channel 9

I have been out to a number of sites recently where there are C++ developers. We often get talking about package management and general best practices for shared libraries. The common refrain is ‘I wish we had something like NuGet for C++’.

Well, that support arrived in NuGet 2.5, and there is a video on Channel 9 all about it.

Webinar on PreEmptive Analytics tools on the 28th of May

A key requirement for any DevOps strategy is the reporting on how your solution is behaving in the wild. PreEmptive Analytics™ for Team Foundation Server (TFS) can provide a great insight in this area, and there is a good chance you are already licensed for it as part of MSDN.

So why not have a look on the UK MSDN site for more details of this free Microsoft-hosted event.

MSDN Webinar: Improve Software Quality, User Experience and Developer Productivity with Real Time Analytics
Tuesday, May 28 2013: 4:00 – 5:00 pm (UK Time)

Also why not sign up for Black Marble’s webinar event in June on DevOps process and tools in the Microsoft space.

TFS Build – bug with Queued Builds, MaxConcurrentBuilds and Tags

On a recent TFS consultancy job, I was asked to monitor how long some builds spent waiting in the build queue before starting.

My plan was to use the TFS API to query all builds with a status of ‘Queued’ and monitor the wait times, as sketched below.
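
The core of the monitoring code was along these lines – a sketch against the TFS 2012 client API, with the collection URL and team project name as placeholders:

using System;
using Microsoft.TeamFoundation.Build.Client;
using Microsoft.TeamFoundation.Client;

class BuildQueueMonitor
{
    static void Main()
    {
        var tfs = new TfsTeamProjectCollection(
            new Uri("http://myserver:8080/tfs/DefaultCollection"));
        var buildServer = tfs.GetService<IBuildServer>();

        // Ask for everything currently sat in the queue for the team project
        var spec = buildServer.CreateBuildQueueSpec("MyTeamProject");
        spec.Status = QueueStatus.Queued;

        foreach (IQueuedBuild build in buildServer.QueryQueuedBuilds(spec).QueuedBuilds)
        {
            // QueueTime is when the build was submitted, so this is the wait so far
            TimeSpan waiting = DateTime.Now - build.QueueTime;
            Console.WriteLine("{0} queued for {1}", build.BuildDefinition.Name, waiting);
        }
    }
}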

I wrote the code and everything seemed to work fine. However, after capturing a number of wait times and comparing them to the overall build times, I noticed that the figures did not add up.

In fact, a build that was estimated to complete in 2 hours took more than 6 hours, yet apparently spent no time ‘Queued’.

How?

The build controller distributes builds across multiple agents and will start one build per build agent (assuming you haven’t changed the default MaxConcurrentBuilds setting). For example, if you have 3 build agents and you start 3 builds, the controller will set all 3 builds to ‘InProgress’. If you start a 4th build, it will be ‘Queued’.

This works fine given that any build agent can run any build definition.

Unfortunately it does not take into account ‘Tags’ that may force certain builds onto specific agents.

Given the same conditions of 3 build agents: if you tag a build agent so only certain builds can use it, and then start 3 builds that can only run on that tagged agent, you would expect only one build to be set ‘InProgress’ and the other 2 builds to remain ‘Queued’ until the build agent finished the first build.

However, the actual behaviour is that all 3 builds change to ‘InProgress’ at the same time (as allowed by the MaxConcurrentBuilds setting on the build controller), but only the first build is actually doing anything. The other two builds are stuck waiting to be allocated an agent.

[screenshot]

Side Effects

You look at your dashboard and see a list of builds ‘In Progress’ that are actually blocked waiting for a build agent.

[screenshot]

In the above screenshot, only 213 is actually running, on Build Agent 1; 214 and 215 are blocked waiting for agent 1 to become available.

Worse still is 217, which can run on any build agent, yet is blocked in a ‘Queued’ state while there are 2 idle build agents that could be running it. It cannot start because the MaxConcurrentBuilds value of 3 has been reached.

Conclusion

Be very careful with the use of tags. In future I will try to avoid tags where they could introduce the above bottleneck.

Additionally, when using the TFS API to capture metrics on wait times, you cannot rely on the ‘Queued’ status alone. Instead I’ll query all builds assigned to a controller and then filter out those that have actually been assigned a build agent; this gives an accurate list of builds still pending, as sketched below.
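
As a sketch, that filtering could look something like the following. I am assuming here that an agent reservation shows up as an ‘AgentScopeActivityTracking’ node in the build information; treat that node type as an assumption to verify against your own server:

// Ask for both queued and (nominally) in-progress builds, then keep only
// those with no agent recorded against them - these are really still waiting
var spec = buildServer.CreateBuildQueueSpec("MyTeamProject");
spec.Status = QueueStatus.Queued | QueueStatus.InProgress;

foreach (IQueuedBuild qb in buildServer.QueryQueuedBuilds(spec).QueuedBuilds)
{
    bool hasAgent = qb.Build != null &&
        qb.Build.Information.GetNodesByType("AgentScopeActivityTracking").Count > 0;

    if (!hasAgent)
    {
        Console.WriteLine("{0} is still waiting for an agent", qb.BuildDefinition.Name);
    }
}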

Getting Wix 3.6 MajorUpgrade working

Why is everything so complex to get going with Wix, then so easy in the end when you get the syntax correct?

If you want to allow your MSI installer to upgrade a previous version then there are some things you have to have correct if you don’t want the ‘Another version of this product is already installed’ dialog appearing.

  • The Product Id should be set to * so that a new Guid is generated each time the product is rebuilt
  • The Product UpgradeCode should be set to a fixed Guid for all releases
  • The Product Version should increment one of the first three numbers; by default the fourth (build) number is ignored for upgrade checking
  • The Package block should not have an Id set – this allows it to be auto-generated
  • You need to add the MajorUpgrade block to your installer

So you end up with:

<Wix xmlns="http://schemas.microsoft.com/wix/2006/wi" xmlns:netfx="http://schemas.microsoft.com/wix/NetFxExtension" xmlns:util="http://schemas.microsoft.com/wix/UtilExtension" xmlns:iis="http://schemas.microsoft.com/wix/IIsExtension">
  <Product Id="*" Name="My App v!(bind.FileVersion.MyExe)" Language="1033" Version="!(bind.FileVersion.MyExe)" Manufacturer="My Company" UpgradeCode="6842fffa-603c-40e9-bedd-91f6990c43ed">
    <Package InstallerVersion="405" Compressed="yes" InstallScope="perMachine" InstallPrivileges="elevated"  />

    <MajorUpgrade DowngradeErrorMessage="A later version of [ProductName] is already installed. Setup will now exit." />

……

So it is simpler than pre-Wix 3.5, but there are still places to trip up.

Upgrading DotNetNuke from V5 to V7

I recently needed to upgrade a DNN V5 site to V7 (yes, I know I had neglected it, but I was forced to consider a change due to an ISP change). Now this is a documented process, but I had a few problems; there are some subtleties the release notes miss out. This is what I found I had to do to test the process on a replica web site…

Setup the Site

  • I restored a backup of the site SQL DB onto a SQL 2008R2 instance running on Windows 2008R2.
  • On the same box I created a new IIS 7 web site, copied in a backup of the site structure and set up the virtual application required for my DNN configuration.
  • I made sure the AppPool associated with the site and application was set to .NET 2.0 in classic mode.

[screenshot]

  • I fixed the connection strings to the restored DB in the web.config (remember there are two DB entries, one in connectionStrings and one in appSettings – see the sketch below).
  • At this point I thought it was a good idea to test the DNN 5 site.
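
For reference, the two entries look something like this (placeholder values – DNN keeps a legacy copy of the connection string in appSettings that must match the connectionStrings entry):

<connectionStrings>
  <add name="SiteSqlServer"
       connectionString="Data Source=MYSQLSERVER;Initial Catalog=dnn;Integrated Security=True"
       providerName="System.Data.SqlClient" />
</connectionStrings>
<appSettings>
  <add key="SiteSqlServer"
       value="Data Source=MYSQLSERVER;Initial Catalog=dnn;Integrated Security=True" />
</appSettings>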

The Upgrade

  • I copied over the contents of the DNN V7 upgrade package
  • I changed the AppPool from .NET 2.0 Classic mode to .NET 4.0 in Integrated pipeline mode
  • I tried to load the website – at this point I got an ASP.NET 500 Internal error

An aside – if I used a Windows 8 PC (using IIS 7.5), all I got was the 500 internal error message with no more detail. I am sure you can reconfigure IIS 7.5 to give more detailed messages; however, I chose to use Windows 2008R2 and IIS 7, which gave me a nice set of 500.19 web.config errors.

  • The issue was that I needed to edit the web.config to remove the duplicate entries it found; they are all of the form shown below. Just remove the offending line, save the file and refresh the site.

[screenshot]

<section name="scriptResourceHandler" type="System.Web.Configuration.ScriptingScriptResourceHandlerSection, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" requirePermission="false" allowDefinition="MachineToApplication" />

  • I then got an unhelpful DotNetNuke error. It turned out this was down to the wrong version of the Telerik DLLs; they are not shipped in the DNN 7 upgrade package, so I just copied the bin folder from the DNN 7 install package, which contains the right versions.

[screenshot]

  • The DNN upgrade wizard should now load. I entered my login details and let it run; it took about a minute.

[screenshot]

The site might still not load (showing the error below). This is because the DB stores the sites by their full domain names, so trying to load the site using an address like http://localhost/dnn may not work (unless you configured that as the address). I had to edit the hosts file on my PC so that my full domain name, e.g. http://www.mydomain.com/dnn, resolved to 127.0.0.1. The alternative (if you can connect) is to go to Host > Site Management, edit the site, open Site Aliases and enable ‘auto add site alias’; once this is done you can connect with any address.
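
The hosts file entry (in %SystemRoot%\System32\drivers\etc\hosts) is just a one-liner, e.g. for my example domain:

127.0.0.1    www.mydomain.com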

[screenshot]

503 Problems

Now that should have been the whole story, but I still had problems. I kept seeing 503 errors:

[screenshot]

On checking, I found the AppPool kept stopping, with the event log showing:

Application: w3wp.exe
Framework Version: v4.0.30319
Description: The process was terminated due to an internal error in the .NET Runtime at IP 000007FEF8E21550 (000007FEF8E20000) with exit code 80131506.

and

Application pool 'DDN' is being automatically disabled due to a series of failures in the process(es) serving that application pool.

I tried a clean install of DNN 7 to its own DB (on the same server) using the same AppPool. This all worked fine. So I had to assume the problem lay with one of:

  1. My web.config
  2. My upgraded DB
  3. Some assembly in my installation

Reading around hinted that AppPools stopping could be due to having a mix of .NET 2/3.5 and 4.0 assemblies, so I was favouring option 1 or 3.

In the end I chose to use the web.config from the DNN V7 installation package. I just copied this over the upgraded one and edited the connection strings. I also had to replace the machine key entry.

<machineKey validationKey="..." decryptionKey="..." decryption="3DES" validation="SHA1" />

This has to be swapped over (keeping the original site’s values), as it is used to decrypt data such as passwords from the DB; if you don’t do this you can’t log in.

Once this new web.config was in place, the site loaded without any errors. I never tracked down the actual line in the upgraded web.config that had caused the problem.

DNN Log Errors

I repeated this upgrade process a few times before I got it right. On one test I saw errors in ./DNN/Portals/_Default/Logs of the form:

2013-05-13 21:46:50,505 [TyphoonTFS][Thread:14][ERROR] DotNetNuke.Services.Exceptions.Exceptions - System.Data.SqlClient.SqlException (0x80131904): The INSERT statement conflicted with the FOREIGN KEY constraint "FK_ScheduleHistory_Schedule". The conflict occurred in database "dnn", table "dbo.Schedule", column 'ScheduleID'.

I fixed this by deleting the contents of the dbo.ScheduleHistory table, as shown below. However, I think this was a red herring, as once I got the rest of the process right this error was not shown.
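
The clean-up itself was a one-liner against the DNN database (take a backup first):

DELETE FROM dbo.ScheduleHistory;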

Content Update

Finally I could upgrade the skin being used by the site to make it look like a DNN 7 based site.

  • Most importantly for me was getting the new admin look by changing the way the DNN admin menus are shown. This is done by setting Host Settings > Other Settings > Control Panel to CONTROLBAR, which gets you the new menu banner at the top of the page (this took me ages to find!).

[screenshot]

  • I updated my extension modules (Host > Extensions). This page shows which modules have updates. Click on the green update link to download the package, then use the ‘Install Extension Wizard’ button at the top of the page to install them.

[screenshot]

  • Finally I started to change from a V5 skin to one of the DNN V7 ones. I had been using a customised version of the old default, MinimalExtropy, and swapped to a new one based on the new default, Gravity.

So, as with any CMS solution, this is still a work in progress. Now I need to repeat the process for real by moving my installation from the old ISP to the new one.

Why do all my TFS labels appear to be associated with the same changeset?

If you look at the labels tab in the source control history in Visual Studio 2012 you could be confused by the changeset numbers. How can all the labels added by my different builds, done over many days, be associated with the same changeset?

[screenshot]

If you look at the same view in VS 2010 the problem is not so obvious, but that is basically because the changeset column is not shown.

[screenshot]

The answer is that the value shown in the first screenshot is for the root element associated with the label. If you drill into the label you can see all the labelled folders and files, with the changeset values they had when the label was created.

[screenshot]

So it is just that the initial screen is confusing; drilling in makes it all clearer.
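
If you want the same per-item detail programmatically, here is a small sketch using the version control client API (Microsoft.TeamFoundation.VersionControl.Client; names are placeholders and tfs is an already-connected TfsTeamProjectCollection):

var vcs = tfs.GetService<VersionControlServer>();

// includeItems = true pulls back every file and folder captured by the label
var labels = vcs.QueryLabels("MyBuildLabel", "$/MyTeamProject", null, true);

foreach (var label in labels)
{
    foreach (var item in label.Items)
    {
        // Each labelled item carries its own changeset, not the root's
        Console.WriteLine("{0} @ C{1}", item.ServerItem, item.ChangesetId);
    }
}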

Problem with CollectionView.CurrentChanged event not being fired in a WPF application

We had an interesting issue in one of our WPF applications that uses MVVM Light.

[screenshot]

This application is a front end to upload and download folder structures to TFS. On my development PC all was working fine, i.e. when we uploaded a new folder structure to the TFS backend, the various combos on the download tab were also updated. However, on another test system they were not.

After a bit of tracing we could see that in both cases the RefreshData method was being called OK, and the CollectionViews were recreated and bound without errors:

private void RefreshData()
{
    this.dataService.GetData(
        (item, error) =>
        {
            if (error != null)
            {
                logger.Error("MainViewModel: Cannot find dataservice");
                return;
            }

            this.ServerUrl = item.ServerUrl.ToString();
            this.TeamProjects = new CollectionView(item.TeamProjects);
            this.Projects = new CollectionView(item.Projects);
        });

    this.TeamProjects.CurrentChanged += new EventHandler(this.TeamProjects_CurrentChanged);

    this.TeamProjects.MoveCurrentToFirst();
}

So what was the problem? To me it was not obvious.

It turned out that on my development system the TeamProjects CollectionView contained 2 items, but on the test system only 1.

This means that even though we had recreated the CollectionView and rebound the data and events, calling MoveCurrentToFirst (or any other MoveCurrentTo… method for that matter) had no effect, as there was nowhere to move to in a collection of one item. Hence the CurrentChanged event never fired, and this in turn stopped the calling of the methods that repopulate the other combos on the download tab.

The solution was to add the following line at the end of the method, and all was OK:

    this.TeamProjects.Refresh();
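
To illustrate the behaviour, here is a minimal hypothetical test (System.Windows.Data): with a single item the first item is already current, so MoveCurrentToFirst has nothing to do, but Refresh re-evaluates the view and raises the event, which is what the fix above relies on:

var view = new CollectionView(new[] { "only item" });
view.CurrentChanged += (s, e) => Console.WriteLine("CurrentChanged fired");

view.MoveCurrentToFirst();   // currency is already on the first item - no event
view.Refresh();              // forces the view to refresh - CurrentChanged fires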

Tech Update for Public Sector

Right now I am putting the finishing touches to my deck for an event Black Marble are running at Cardinal Place next week. As many of you will know, for the past ten years we have run the annual Tech Update covering moves and changes across the entire Microsoft spectrum of products. Until now that has only taken place in Leeds but for the first time we are taking that show on the road.

On Monday 20th May Robert and I will present our Tech Update for Public Sector. Everything you need to know about the Microsoft family for executive planning, as current as we can be. I always enjoy presenting at Cardinal Place and I’m looking forward to it.

As luck would have it, the following day is the Microsoft Management Summit recap, also at Cardinal Place. I’m attending rather than presenting for a change, so if you’re on the cloud track say hi!