But it works on my PC!

The random thoughts of Richard Fennell on technology and software development

Skydrive pushes me over my broadband usage

I got back from a trip away to find an unexpected bill for broadband through the letterbox. I have paid about the same each quarter for broadband for a good while now; I see little variation as I rarely use my home phone, but then again who does?

This bill was nearly double, why?

I think it was mostly due to setting up Skydrive to mirror my family photos and videos as a backup, though this can’t explain it all; then again, my son has found Roblox. In each month I went over my allowance it cost me £5 per 5GB block. It adds up fast.

On calling BT I found I could upgrade my package to a larger allowance, and it worked out at less than £1 more. The most irritating thing was that they had been emailing me about my usage at my BT-provided email address, an address I have never used.

So the top tip is: make sure your usage notifications go to an address you actually read.

Thoughts on my Channel9 post

After hearing my TEE video on Channel9 mentioned on Radio TFS I thought I should watch it through; I had only found time for a quick look previously. This is all part of my ongoing self-review process, a form of self-torture.

It seems the issues I mentioned last time are still there; I still have too many ‘errs’. The thing that stood out the most was that I looked like a very shifty newsreader: my movement behind the table and loss of eye contact with the camera were too noticeable to me.

That said, I am happy with how it came out. It was great working with a professional crew, and you can see how good they can make a video look with proper lighting, cameras and editing.

On the whole I am very happy with it; I just need to ‘love the camera’ a bit more.

Thoughts on the new Skydrive

I have swapped to the new version of Microsoft Skydrive, replacing my old Mesh setup. It is a nice slick experience, allowing easy viewing of files on Skydrive from Windows and WP7. However, I do have a couple of issues:

  1. I used Mesh to sync photos from my Windows 7 Media Center up to cloud storage as a backup; I don’t want to lose all the family photos to a disk failure. This was simple with Mesh: just set up a sync. It is not so easy with the new Skydrive, which appears only as a folder in your user area. The only solution I can spot is to copy my photos into this folder e.g. xcopy d:\photos c:\users\richard\skydrive\photos. Once the copy is done the files are synced up to the cloud. With Mesh, if I added a file to my PC it synced without me doing anything; now I need to remember the xcopy (or whatever copy tool I am using), or have the copy run on a regular basis via a scheduled task.
  2. Letting Skydrive start automatically on a laptop is dangerous. I was on site today using my Mifi and in about 10 minutes used a whole day’s credit. So I would recommend changing your tool tray settings so the Skydrive icon is visible all the time; that way you have a chance to see when it is syncing and can stop it when on a connection that costs you money.


So any comments, or better ways to do the sync?
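For what it is worth, the regular copy can be scripted rather than remembered. A rough sketch of a one-way ‘copy anything new or changed’ mirror (purely illustrative; the paths and the script are not what I actually run):

```python
import shutil
from pathlib import Path

def mirror_new_files(source, target):
    """One-way sync in the style of xcopy /d: copy any file that is
    missing (or newer) in the target; never delete anything, so it
    is safe for backups."""
    copied = []
    src, dst = Path(source), Path(target)
    for file in src.rglob("*"):
        if not file.is_file():
            continue
        destination = dst / file.relative_to(src)
        if (not destination.exists()
                or file.stat().st_mtime > destination.stat().st_mtime):
            destination.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(file, destination)  # copy2 keeps timestamps
            copied.append(str(file.relative_to(src)))
    return copied
```

Run on a schedule (Task Scheduler on Windows, cron elsewhere) this gives back some of the fire-and-forget behaviour Mesh used to provide.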

More thoughts on Typemock Isolator, Microsoft Fakes and Sharepoint

I posted yesterday on using Typemock and Microsoft Fakes with SharePoint. After a bit more thought I realised that the key thing I found easier with Typemock was the construction of my SPListItem dataset. Typemock allowed me to fake SPListItems and put them in a generic List&lt;SPListItem&gt;, then just make this the return value for the Items collection using the magic .WillReturnCollectionValuesOf() method, which converts my List to the required collection type. With Microsoft Fakes I had to think about a delegate that constructed my test data at runtime. This is not a problem, just a different way of working.

A side effect of using the Typemock .WillReturnCollectionValuesOf() method is that if I check the number of SPListItems in the returned SPListItemCollection, I have a real collection, so I can use the collection’s own Count; I don’t have to fake it out. With Microsoft Fakes no real collection is returned, so I must fake the Count value too.
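The difference between the two styles is easier to see away from the .NET libraries. Purely as an illustration (this is not Typemock or Fakes code, just the shape of the two approaches): an Isolator-style setup hands back a real, pre-built collection, while a shim-style fake supplies delegates invoked on demand, so the count has to be faked explicitly.

```python
# Shim-style faking: behaviour is supplied as delegates (plain functions)
# that are invoked when the test asks for something; nothing is pre-built.
class ShimListItemCollection:
    def __init__(self, count, item_factory):
        self._count = count            # faked, as there is no real list
        self._factory = item_factory   # builds an item on demand

    def __len__(self):
        return self._count

    def __getitem__(self, index):
        return self._factory(index)

# Isolator-style setup: a real list is built up front, so len() just works.
real_items = [{"Title": "The Title {0}".format(i)} for i in range(3)]

shimmed = ShimListItemCollection(3, lambda i: {"Title": "The Title {0}".format(i)})

assert len(real_items) == len(shimmed) == 3
assert shimmed[1]["Title"] == "The Title 1"
```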

This is a trend common across Typemock Isolator: it does much of the work for you. Microsoft Fakes, like Moles, requires you to do the work. In Moles this was addressed by behaviour packs that got you started with the standard items you need in SharePoint.

I would say again that there may be other ways of using the Microsoft Fakes library, so there may be ways to address these initial comments of mine; I am keen to see if this is the case.

Now that VS11 has a fake library do I still need Typemock Isolator to fake out SharePoint?

Updated 5 May 2012: also see my follow-up post, which corrects some of the information on faking SharePoint.

I have written posts in the past about how you can use Typemock Isolator to fake out SharePoint to speed design and testing. The reason you need special tooling, beyond standard mocking frameworks like Rhino Mocks or Moq, is that SharePoint has many sealed classes with no public constructors. So in the past you had only two options: Typemock Isolator and Moles from Microsoft Research.

With the release of the Visual Studio 11 beta we now have a means to fake out ‘non-mockable’ classes (shims) shipped in the box. This tooling, I understand, has its roots in Moles, but is all new. So with the advent of Fakes in VS11 you have to ask: ‘do I still need Typemock Isolator?’

To answer this question I have tried to perform the same basic mocking exercise as in my previous posts: create a fake SharePoint list and make sure I can access the faked content in test asserts.

In Typemock Isolator

To perform the test in Isolator (assuming Isolator is installed on your PC), add references to Typemock.dll and Typemock.ArrangeActAssert.dll and use the following code:

[TestMethod]
public void FakeSharePointWithIsolator()
{
    // arrange
    // build the dataset
    var fakeItemList = new List&lt;SPListItem&gt;();
    for (int i = 0; i < 3; i++)
    {
        var fakeItem = Isolate.Fake.Instance&lt;SPListItem&gt;();
        Isolate.WhenCalled(() => fakeItem.Title).WillReturn(String.Format("The Title {0}", i));
        Isolate.WhenCalled(() => fakeItem["Email"]).WillReturn(String.Format("email{0}@fake.url", i));
        fakeItemList.Add(fakeItem);
    }

    // fake the SPWeb and attach the data
    var fakeWeb = Isolate.Fake.Instance&lt;SPWeb&gt;();
    Isolate.WhenCalled(() => fakeWeb.Url).WillReturn("http://fake.url");
    Isolate.WhenCalled(() => fakeWeb.Lists["fakelistname"].Items).WillReturnCollectionValuesOf(fakeItemList);

    // act
    // not actually doing an operation

    // assert
    Assert.AreEqual("http://fake.url", fakeWeb.Url);
    Assert.AreEqual(3, fakeWeb.Lists["fakelistname"].Items.Count);
    Assert.AreEqual("The Title 0", fakeWeb.Lists["fakelistname"].Items[0].Title);
    Assert.AreEqual("email0@fake.url", fakeWeb.Lists["fakelistname"].Items[0]["Email"]);
    Assert.AreEqual("The Title 1", fakeWeb.Lists["fakelistname"].Items[1].Title);
    Assert.AreEqual("email1@fake.url", fakeWeb.Lists["fakelistname"].Items[1]["Email"]);
}

Using Microsoft Fakes

Adding the Fake

The process to add a fake in VS11 is to right-click on an assembly reference (in our case Microsoft.SharePoint) and select the ‘Add Fakes Assembly’ option.


You should see a Fakes reference created and an entry in the Fakes folder.


Gotcha warning: ‘Can’t generate the fake reference’. When I tried to generate a fake for the Microsoft.SharePoint assembly in a class library project that referenced only Microsoft.SharePoint (plus the default references added to any class library project), the entry was made in the Fakes folder but no .Fakes assembly was created. After a delay (30 seconds?) you see an error message in the Visual Studio output window telling you a reference cannot be resolved. If you delete the entry in the Fakes folder, add the missing reference listed in the output window and repeat the process, you get the same problem but with another assembly named as missing; add this and repeat. Eventually the .Fakes assembly is created.

In the case of this SharePoint sample I had to manually add Microsoft.SharePoint.Dsp, Microsoft.SharePoint.Library, Microsoft.SharePoint.Search, System.Web and System.Web.ApplicationServices. Remember these references are ONLY required to allow the fake creation/registration; they are not needed for your assembly to work in production, or for Typemock. [5 May 2012: see my follow-up post]

You should now have a generated assembly that you can use to create your shims.

Writing the fakes logic

The logic to create the fake behaviour is as follows. There might be easier ways to do this, but it works and is reasonably readable:

[TestMethod]
public void FakeSharePointWithShims()
{
    using (ShimsContext.Create()) // required to tidy up the shim system
    {
        // arrange
        var fakeWebShim = new Microsoft.SharePoint.Fakes.ShimSPWeb()
        {
            UrlGet = () => "http://fake.url",
            ListsGet = () =>
                new Microsoft.SharePoint.Fakes.ShimSPListCollection()
                {
                    ItemGetString = (listname) =>
                        new Microsoft.SharePoint.Fakes.ShimSPList()
                        {
                            ItemsGet = () =>
                                new Microsoft.SharePoint.Fakes.ShimSPListItemCollection()
                                {
                                    // we have to fake the count, as we are not returning a real SPListItemCollection
                                    CountGet = () => 3,
                                    ItemGetInt32 = (index) =>
                                        new Microsoft.SharePoint.Fakes.ShimSPListItem()
                                        {
                                            TitleGet = () => string.Format("The Title {0}", index),
                                            // note we don't check the field name
                                            ItemGetString = (fieldname) => string.Format("email{0}@fake.url", index)
                                        }
                                }
                        }
                }
        };

        // act
        // not actually doing an operation

        // assert
        var fakeWeb = fakeWebShim.Instance;
        Assert.AreEqual("http://fake.url", fakeWeb.Url);
        Assert.AreEqual(3, fakeWeb.Lists["fakelistname"].Items.Count);
        Assert.AreEqual("The Title 0", fakeWeb.Lists["fakelistname"].Items[0].Title);
        Assert.AreEqual("email0@fake.url", fakeWeb.Lists["fakelistname"].Items[0]["Email"]);
        Assert.AreEqual("The Title 1", fakeWeb.Lists["fakelistname"].Items[1].Title);
        Assert.AreEqual("email1@fake.url", fakeWeb.Lists["fakelistname"].Items[1]["Email"]);
    }
}



Which do you find the most readable?

I guess it is down to familiarity really. You can see I end up using the same test asserts, so the logic I am testing is the same.

I do think Typemock remains the easier to use. There is the gotcha I found with having to add extra references to allow the fakes to be created in the VS11 faking system, and the simple fact of having to manually create the fakes in Visual Studio, as opposed to it all being handled behind the scenes by Typemock. There is also the issue of having to refer to the ShimSPWeb class and use the .Instance property, as opposed to just using SPWeb with Typemock.

The downside of Typemock is the cost and the need to have it installed on all development and build machines; neither is an issue for the Microsoft faking system, the tests being standard .NET code that can be wrapped in any unit testing framework (and remember VS11’s Unit Test Explorer and TFS11 build allow you to use any testing framework, not just MSTest).

So which tool am I going to use? I think for now I will be staying with Typemock, but the VS11 faking system is well worth keeping an eye on.

[Updated 24 March 2012 - more comments added in a second post]

Changed my phone to a Nokia

I swapped to a Nokia Lumia 800 yesterday from my LG E900, all very quick and easy after my experience last month.

My first impressions:

  1. The on/off/volume buttons were better placed for a left-hander on the LG, but I expect I will get used to that.
  2. The poor reception in my house was not the LG’s fault; it is just a bad reception area.
  3. The Nokia does seem faster.

DevOps: are testers best placed to fill this role?

DevOps seems to be the new buzz role in the industry at present: people who can bridge the gap between the worlds of development and IT pros. Given my career history this could be a description of the path I took. I have done both, and now sit in the middle doing ALM consultancy where I work with both roles. You can’t avoid a bit of development and a bit of IT pro work when installing and configuring TFS with some automated build and deployment.

The growth of DevOps is an interesting move because of late I have seen the gap between IT pros and developers grow. Many developers seem to have less and less understanding of operational issues as time goes on. I fear this is due to the greater levels of abstraction that new development tools bring. This is only going to get worse as we move into the cloud; why does a developer need to care about Ops issues, AppFabric does that for them, doesn’t it?

In my view this is dangerous; we all need at least a working knowledge of what underpins the technology we use. Maybe this should hint at good subjects for informal in-house training: why not get your developers to give intro training to the IT pros and vice versa? Or encourage people to listen to podcasts on the other role’s subjects, such as Dot Net Rocks (a dev podcast) and Run As Radio (an IT pro podcast). It was always a nice feature of the TechEd conference that it had both dev and IT pro tracks, so if the fancy took you, you could hear about technology from the viewpoint of the other role.

However, these are longer-term solutions; it is all well and good promoting them, but in the short term who is best placed to bridge this gap now?

I think the answer could be testers. I wrote a post a while ago saying it was great to be a tester as you get to work with a wide range of technologies; isn’t this just an extension of that role? DevOps needs a working understanding of development and operations, as well as a good knowledge of deployment and build technologies. These are all aspects of the tester role, assuming your organisation considers a tester not to be a person who just ticks boxes on a checklist, but a software development engineer working in test.

This is not to say that DevOps and testers are the same, just that there is some commonality, so you may have more skills in house than you thought you did. DevOps is not new; someone was doing the work already, they just did not historically give it that name (or probably any name).

My experiences moving to BlogEngine.NET


I have recently moved this blog server from using Community Server 2007 (CS2007) to BlogEngine.NET.

We started blogging in 2004 using .Text, moving through the free early versions of Community Server, then purchased Community Server Small Business edition in 2007. This cost a few hundred pounds. We recently decided that we had to bring this service up to date, if for no other reason than to patch the underlying ASP.NET system. We checked how much it would cost to bring Community Server up to the current version and were shocked by the cost: many thousands of dollars. Telligent, the developers, have moved to servicing only enterprise customers; they have no small business offering. So we needed to find a new platform.

Being a SharePoint house, we considered SharePoint as the blog host. However, we have always had a policy that systems allowing external content creation (i.e. where anyone can post a comment) should not be on our primary business servers. As we did not want to install a dedicated SharePoint farm just for the blogs, we decided to use another platform, remembering we needed one that could support multiple blogs that we could aggregate to provide the BM-Bloggers shared service.

We looked at what appears to be the market leader, WordPress, but hosting it needs a MySQL database, which we did not want to install; we don’t need another DB technology to support on our LAN. So we settled on BlogEngine.NET, the open source .NET 4 blogging platform that can use many different storage technologies; we chose SQL Server 2008 to use our existing SQL Server investment.


So we did a default install of BlogEngine.NET. We did it manually, as I knew we were going to use a custom build of the code, but we could have used the Web Platform Installer.

We customised a blog as a template and then used this to create all the child blogs we needed. If we were not bringing over old content we would have been finished here. It really would have been quick and simple.

Content Migration

To migrate our data we used BlogML. This allowed us to export CS2007 content as XML files which we then imported into BlogEngine.NET.
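As an aside, a BlogML export is a single XML document with the posts (and their comments) nested inside it. A quick sketch of pulling the post titles out of one, if you want to sanity-check an export before importing it (the element names and namespace are the BlogML 2.0 ones as I remember them; verify against your own file):

```python
import xml.etree.ElementTree as ET

# Namespace used by BlogML 2.0 exports; check your own file's root element.
BLOGML_NS = {"b": "http://www.blogml.com/2006/09/BlogML"}

def list_post_titles(blogml_xml):
    """Return the title of every post in a BlogML export document."""
    root = ET.fromstring(blogml_xml)
    return [t.text for t in root.findall("b:posts/b:post/b:title", BLOGML_NS)]

# A minimal, invented export to show the shape of the document:
sample = """<blog xmlns="http://www.blogml.com/2006/09/BlogML">
  <posts>
    <post><title>Moving to BlogEngine.NET</title></post>
    <post><title>Fixing image paths</title></post>
  </posts>
</blog>"""

print(list_post_titles(sample))  # ['Moving to BlogEngine.NET', 'Fixing image paths']
```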

BlogEngine.NET provides support for BlogML out of the box, but we had to install a plug-in for CS2007.

This was all fairly straightforward: we exported each blog and imported it into the new platform, but as you would expect we did find a few issues.

Fixing Image Paths (do this prior to import)

The images within blog posts are hard-coded as URLs in the export file. If you copy the image files (which are stored on the blog platform) from the old platform to the new server at matching URLs, there should be no problems.

However, I decided I wanted the images in the location they are meant to be in, i.e. the [blog]\files folder, using BlogEngine.NET’s image.axd handler to load them. It was easiest to fix the URLs in the BlogML XML file prior to importing it, the basic edit being to change each hard-coded image URL on the old server into a reference via image.axd on the new one.

I did these edits with a simple find and replace in a text editor, but you could use regular expressions.
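As a sketch of the regular expression approach (the URLs here are invented examples, not the actual paths from our server, and the image.axd query string should be checked against your own BlogEngine.NET install):

```python
import re

def rewrite_image_urls(blogml_text, old_base):
    """Rewrite hard-coded image URLs under old_base into BlogEngine.NET's
    image.axd form; everything after old_base becomes the picture value."""
    pattern = re.escape(old_base) + r"([^\"'\s<]+)"
    return re.sub(pattern, r"/image.axd?picture=\1", blogml_text)

# An invented before/after pair:
before = '<img src="http://old.server/blogs/rfennell/pic.png" />'
after = rewrite_image_urls(before, "http://old.server/blogs/rfennell/")
print(after)  # <img src="/image.axd?picture=pic.png" />
```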

Remember also that the images need to be copied from the old server (…\blogs\rfennell\image_file.png) to the new server (…\App_Data\blogs\rfennell\files\image_file.png).

We also had posts written with older versions of LiveWriter. This placed images in a folder structure (e.g. …\blogs\rfennell\livewriter\postsname\image_file.png). We also needed to move these to the new platform and fix the paths appropriately.

Post Ownership

All the imported posts showed an owner ID rather than the author’s name, e.g. 2103 as opposed to Richard. The simplest fix for this was a SQL update after import, e.g.

update [BlogEngine].[dbo].[be_Posts] set [Author] = 'Richard' where [Author]='2103'

The name set should match the name of a user account created on the blog.

Comment Ownership

Due to the issues over spam we had forced all users to register on CS2007 to post a comment. These external accounts were not pulled over in the export; however, BlogEngine.NET did not seem that bothered by this.

However, no icons for these users were shown.


These icons should be rendered using websnapr.com as an image of the commenter’s homepage, but this was failing. This, it turned out, was due to their recent API changes; you now need to pass a key. As an immediate solution I just removed the code that calls websnapr so the default noavatar.jpg image is shown. I intend to revisit this when the next release of BlogEngine.NET appears, as I am sure it will have a solution to the websnapr API change.

There was also a problem with many of the comment author hyperlinks: they all seemed to be just http://. To fix the worst of this I ran a SQL query.

update be_PostComment set author = 'Anon' where Author = 'http://'

I am sure I could have done a better job with a bit more SQL, but our blog has few comments so I felt I could get away with this basic fix.


Categories and Tags

CS2007 displays tag clouds based on categories. BlogEngine.NET does the more obvious thing and uses categories as categories and tags as tags.

To allow BlogEngine.NET to show tag clouds, the following SQL can be used to duplicate categories as tags:

insert into be_PostTag (BlogID, PostID, Tag)
select be_PostCategory.BlogID, PostID, CategoryName
from be_PostCategory, be_Categories
where be_PostCategory.CategoryID = be_Categories.CategoryID
and be_PostCategory.BlogID = '[a guid from be_blogs table]'

A workaround for what could not be exported

Where we had a major problem was with the posts made to the original .Text site that was upgraded to Community Server, these being posts from 2004 to 2007.

Unlike all the other blogs, these posts would not export via the CS BlogML exporter; we just got a zero-byte XML file. I suspect some flag/property was missing on these posts, so the CS2007 internal API was having problems, throwing an internal exception and stopping.

To get around this I had to use the BlogML SDK and some raw SQL queries against the CS2007 database. There was a good bit of trial and error here, but by looking at the source of the BlogML CS2007 exporter and swapping the API calls for my best guess at the SQL, I got the posts and comments out. It was a bit rough, but am I really that worried about posts more than five years old?

Blog Relationships

Parent/Child Relationship

When a child blog is created, an existing blog is copied as a template. This includes all its pages, posts and users. For this reason it is a really good idea to keep a ‘clean’ template that has as many of the settings correct as possible, so when a new child blog is created you basically only have to create new user accounts and set its name/template.

Remember, no user accounts are shared between blogs, so the admin on the parent is not the admin on a child; each blog has its own users.

Content Aggregation

A major problem for Black Marble was the lack of aggregation of child blogs. At present BlogEngine.NET allows child blogs, but has no built-in way to roll their content up to the parent. This is a feature I understand the developers plan to add in a future release.

To get around this problem, I looked to see if it was easy to modify the FillPosts method to return all posts irrespective of the blog. This would, in my opinion, have taken too much hacking/editing due to the reliance on the current context to identify the current blog, so I decided on a more simplistic fix:

  1. I created a custom template for the parent site that removes all the page/post lists and menu options
  2. Replaced the link to the existing syndication.axd with a hand-crafted syndication.ashx
  3. Added the Rssdotnet.com open source project to the solution and used it to aggregate the RSS feeds of each child blog in the syndication.ashx page
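The aggregation idea itself is simple enough to sketch without the RSS.NET library: read each child blog’s RSS feed, merge the items and sort them newest first. A minimal illustration (the feed content here is invented):

```python
import xml.etree.ElementTree as ET
from email.utils import parsedate_to_datetime

def merge_feeds(feed_documents):
    """Merge the <item> elements of several RSS 2.0 feeds, newest first.
    Returns the titles here for brevity; a real handler would re-serialise
    the merged items into one parent feed."""
    items = []
    for xml_text in feed_documents:
        items.extend(ET.fromstring(xml_text).find("channel").findall("item"))
    items.sort(key=lambda i: parsedate_to_datetime(i.findtext("pubDate")),
               reverse=True)
    return [i.findtext("title") for i in items]

child_a = """<rss version="2.0"><channel>
  <item><title>Post A</title><pubDate>Mon, 05 Mar 2012 10:00:00 GMT</pubDate></item>
</channel></rss>"""
child_b = """<rss version="2.0"><channel>
  <item><title>Post B</title><pubDate>Tue, 06 Mar 2012 10:00:00 GMT</pubDate></item>
</channel></rss>"""

print(merge_feeds([child_a, child_b]))  # ['Post B', 'Post A']
```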

This solution will be reviewed on each new release of BlogEngine.NET in case it is no longer required.


So how was the process? Not as bad as I expected; frankly, other than our pre-2007 content, it all moved without any major issues.

It is a good feeling to now be on a platform we can modify as we need, but which has the backing of an active community.

We need to teach children computer science

The Today programme on BBC Radio 4 this morning had a section on children not being taught computer science, or so say a variety of employers who cannot find the staff they need. It seems entry to computer science degrees has been dropping over the years, and the current ICT courses at school are to blame.

I was at school in the first generation to get access to computers. I did Computer Science O level (1982, I think; it is a while ago) and we learnt about flowcharts, CPUs and memory, and programmed in BASIC on a teletype, sending 5-hole punched paper tape off to a local polytechnic for processing; a week or so later we got back a printout saying ‘error on line 10’ (and my staff today complain about their slow PCs!). During the course we did at least move on to a Tandy TRS-80, so it did get a bit more immediate.

My sister is 4 years younger than me and was in one of the first groups to do GCSE. She also did computer science, but in those few short years the course had already moved towards using a computer as opposed to programming and how it works; though she did have access to a BBC Micro, I don’t remember her ever writing code beyond making a Turtle robot draw a box. This trend towards consumption instead of creation seems to have continued. My son, who is 9, now uses a computer for many lessons, but only, it seems, to look things up.

On listening to the radio article, I agreed it is good to teach the ‘how it works’ of computing, just as it is reasonable to know roughly how my car works, though I have no real intention of trying to fix it. A basic understanding of any tool allows you to use it to its best advantage. However, the biggest advantage of computer science in schools, which they failed to mention, is that it teaches logical thinking and fault finding in an unforgiving world. When coding, if you miss that semicolon off, nothing is going to work.

This sort of skill is vital in most modern jobs. At Black Marble we have been involved in this area for years, for example as the corporate sponsor/advisors of the 2007 UK winners of the Microsoft Imagine Cup with ‘My First Programming Language’. This aimed to teach junior school children how to solve logical problems and program. It also addressed one of the issues mentioned in the radio article, the lack of teachers with programming skills (you can’t rely on a keen maths teacher who has built their own computers, as my school did in the 80s). It incorporated some AI technology to do a first diagnostic pass, trying to fix the code the student had written before interrupting the teacher; think of a compiler that can fix the code and explain to the student why their code failed. This is important when the teacher has 20+ children to help, so can only give each one a few minutes.

During this project, and other research work we have done, we found that many people can benefit from knowing just a little about how to program, not just school children. A good example is the office worker who, by writing an Excel macro, can save themselves 20 minutes a day. That adds up to around 60 hours a year, and probably reduces the number of errors in their calculations too.

I agreed with a recent point on a Herding Code podcast that many people of my age got into computing by typing in games from the front of magazines in the 80s. With a bit of application you could see how your favourite arcade games worked. This is less true of today’s popular games; an Xbox title is more like a 3D movie when all you can create is a holiday snap, it is just too big to comprehend. However, the games on phones and PDAs are more accessible; a kid in a garage can see how they work and create their own games and applications with relative ease. It is like being back in the late 80s or early 90s.

So I do hope there is a move back to a real Computer Science in schools. It is not as if there are no jobs in this sector.