But it works on my PC!

The random thoughts of Richard Fennell on technology and software development

Fun with WCF, SharePoint and Kerberos – well it looks like fun with hindsight

[Updated 10 Nov 2010: Also see [More] Fun with WCF, SharePoint and Kerberos]

I have been battling some WCF authentication problems for a while now; I have been migrating our internal support desk call tracking system so that it runs as a webpart hosted inside SharePoint 2010 and uses WCF to access the backend services, all using AD authentication. This means both our staff and customers can use a single sign-on for all SharePoint and support desk operations. This replaces our older architecture of forms authentication and a complex mix of WCF and ASMX web services that had grown up over time; this call tracking system started as an Access DB with a VB6 front end well over 10 years ago!

As with most of our SharePoint development I try not to work inside a SharePoint environment when developing. For this project that was easy, as the webpart is hosted in SharePoint but makes no calls to any SharePoint artefacts. This meant I could host the webpart within a test .ASPX web page for my development without the need to mock out SharePoint. This I did, refactoring my old collection of web services to the new WCF AD-secured architecture.

So at the end of this refactoring I thought I had a working webpart, but when I deployed it to our SharePoint 2010 farm it did not work. When I checked my logs I saw WCF authentication errors. The webpart programmatically created its WCF bindings, worked in my test harness, but failed in production.

A bit of reading soon showed the problem lay in the Kerberos double hop issue, and this is where the fun began. In this post I have tried to detail the solution, not all the dead ends I went down to get there. The problem with this type of issue is that there is one valid solution, millions of incorrect ones, and the diagnostic options are few and far between.

So you may be asking what is the Kerberos double hop issue? Well, a look at my test setup shows the problem.

[It is worth at this point getting an understanding of Kerberos; the TechEd session ‘Kerberos with Mark Minasi’ is a good primer]

[Image: original test setup – browser and web server on the same development box, calling the WCF host]

The problem with this test setup is that the browser and the web server that hosts the test web page (and hence the webpart) are on the same box and running under the same account. The web server therefore has full access to the user’s credentials and can pass them on to the WCF host, so there is no double hop.

However when we look at the production SharePoint architecture

[Image: production architecture – PC (browser) to SharePoint farm to WCF host]

We see that we do have a double hop. The PC (browser) passes credentials to the SharePoint server. The SharePoint server needs to be able to pass them on to the WCF hosted services, so they can be used to access data for the original client account (the one logged into the PC), but by default this is not allowed. This is a classic Kerberos double hop. The SharePoint server must be set up so that it is allowed to delegate the Kerberos tickets to the next host, and the WCF host must be set up to accept the Kerberos ticket.

Frankly we fiddled for ages trying to sort this in SharePoint, but were getting nowhere. The key step for me was to modify my test harness so I could reproduce the same issues outside SharePoint. As with most technical problems, the answer is usually to create a simpler model that exhibits the same problem. The main features of this change were that I had to have three boxes, and that the web pages needed to run inside a web server where I could control the account they ran as, i.e. not Visual Studio’s default Cassini development web server.

So I built this system

[Image: revised test setup – client PC, development PC hosting IIS, and a separate WCF host]

Using this model I could get the same errors inside and outside of SharePoint. I could then build up to a solution step by step. It is worth noting that I found the best debugging option was to run DebugView on the middle development PC hosting the IIS server. This showed all the logging information from my webpart; I saw no errors on the WCF host as the failure was at the WCF authentication level, well before my code was reached.
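For what it is worth, the logging I rely on in the webpart is nothing more sophisticated than Debug.WriteLine output that DebugView picks up. A minimal sketch of the kind of identity logging that proved useful (the helper class, and where I call it from, are my own illustrative choices):

using System.Diagnostics;
using System.Security.Principal;

// Hypothetical helper: called at the start of the webpart's service calls so
// DebugView on the web server shows which account the code is really running as.
public static class IdentityLogger
{
    public static void LogCurrentIdentity(string where)
    {
        WindowsIdentity id = WindowsIdentity.GetCurrent();
        Debug.WriteLine(string.Format(
            "{0}: running as '{1}', authentication type '{2}', impersonation level '{3}'",
            where,
            id.Name,
            id.AuthenticationType,
            id.ImpersonationLevel));
    }
}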

Next I started from the WCF Kerberos sample on Marbie’s blog. I modified the programmatic binding in the webpart to match this sample:

var callServiceBinding = new WSHttpBinding();
callServiceBinding.Security.Mode = SecurityMode.Message;
callServiceBinding.Security.Message.ClientCredentialType = MessageCredentialType.Windows;
callServiceBinding.Security.Message.NegotiateServiceCredential = false;
callServiceBinding.Security.Message.EstablishSecurityContext = false;
 
callServiceBinding.MaxReceivedMessageSize = 2000000;
 
this.callServiceClient = new BlackMarble.Sabs.WcfWebParts.CallService.CallsServiceClient(
    callServiceBinding,
    new EndpointAddress(new Uri("http://mywcfbox:8080/CallsService")));
 
this.callServiceClient.ClientCredentials.Windows.AllowedImpersonationLevel = TokenImpersonationLevel.Impersonation;
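With the binding and client created like this, the webpart then calls the service in the normal way. Something like the following, purely as a sketch; GetOpenCalls is a made-up operation name (my real ICallsService contract has its own operations), and the exception handling simply feeds my DebugView logging:

using System.Diagnostics;
using System.ServiceModel.Security;

try
{
    // GetOpenCalls is hypothetical; substitute a real operation on the service contract.
    var openCalls = this.callServiceClient.GetOpenCalls();
    Debug.WriteLine("Webpart: call to WCF service succeeded");
}
catch (SecurityNegotiationException ex)
{
    // This is the sort of WCF authentication failure that shows up in the logs
    // when the Kerberos ticket cannot be passed on (the double hop problem).
    Debug.WriteLine("Webpart: WCF authentication failed - " + ex.Message);
}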
 

I then created a new console application wrapper for my web service. This again used the programmatic binding from the sample.

static void Main(string[] args)
{
    // create the service host
    ServiceHost myServiceHost = new ServiceHost(typeof(CallsService));
 
    // create the binding
    var binding = new WSHttpBinding();
 
    binding.Security.Mode = SecurityMode.Message;
    binding.Security.Message.ClientCredentialType = MessageCredentialType.Windows;
 
    // disable credential negotiation and establishment of the security context
    binding.Security.Message.NegotiateServiceCredential = false;
    binding.Security.Message.EstablishSecurityContext = false;
 
    // Create a URI for the endpoint address
    Uri httpUri = new Uri("http://mywcfbox:8080/CallsService");
 
    // Create the Endpoint Address with the SPN for the Identity
    EndpointAddress ea = new EndpointAddress(httpUri,
                      EndpointIdentity.CreateSpnIdentity("HOST/mywcfbox.blackmarble.co.uk:8080"));
 
    // Get the contract from the interface
    ContractDescription contract = ContractDescription.GetContract(typeof(ICallsService));
 
    // Create a new Service Endpoint
    ServiceEndpoint se = new ServiceEndpoint(contract, binding, ea);
 
    // Add the Service Endpoint to the service
    myServiceHost.Description.Endpoints.Add(se);
 
    // Open the service
    myServiceHost.Open();
    Console.WriteLine("Listening... " + myServiceHost.Description.Endpoints[0].ListenUri.ToString());
    Console.ReadLine();
 
    // Close the service
    myServiceHost.Close();
}

I then needed to run the console server application on the WCF host. I had made sure that the console server was using the same ports as I had been using in IIS. Next I needed to run the server as a service account. I copied this server application to the WCF server on which I had been running my services within IIS; obviously I stopped the IIS hosted site first to free up the IP port for my endpoint.

As Marbie’s blog stated, I needed to run my server console application as a service account (Network Service or Local System). To do this I used the at command to schedule it starting, because you cannot log in as either of these accounts and cannot use runas as they have no passwords. So my start command was as below, where the time was a minute or two in the future.

at 15:50 cmd /c c:\tmp\WCFServer.exe

To check the server was running I used Task Manager and netstat –a to make sure something was listening under the expected account and port, in my case Local System and 8080. To stop the server I also used Task Manager.

I next needed to register the SPN of the WCF endpoint. This was done with the command

setspn -a HOST/mywcfbox.blackmarble.co.uk:8080 mywcfbox

Note that the final parameter was mywcfbox (the server name). In effect I was saying that my service would run as a system service account (Network Service or Local System), which for me was fine. So what had this command done? It had put an entry in Active Directory to say that this host and this account are running an approved service.

Note: Do make sure you only declare a given SPN once; if you duplicate an SPN neither works, which is a by-design security feature. You can check the SPNs defined using

setspn –l mywcfbox

I then tried to load my test web page, but it still did not work. This was because the DevelopmentPC, hosting the web server, was not set to allow delegation. This is again set in AD. To set it I did the following (a quick code check of the resulting AD flag is sketched after the list):

  1. connected to the Domain Server
  2. selected ‘Manage users and computers in Active Directory’.
  3. browsed to the computer name (DevelopmentPC) in the ‘Computers’ tree
  4. right clicked and selected ‘Properties’
  5. selected the ‘Delegation’ tab.
  6. and set ‘Trust this computer for delegation to any service’.
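If, like me, you want a quick way to double check that the delegation setting actually took, a rough check from code is possible. This is just my own verification snippet using System.DirectoryServices, not part of the solution; the TRUSTED_FOR_DELEGATION bit of userAccountControl is what the ‘Trust this computer for delegation to any service’ option sets:

using System;
using System.DirectoryServices;   // add a reference to System.DirectoryServices.dll

class CheckDelegation
{
    static void Main()
    {
        // Rough verification: does the DevelopmentPC computer account have the
        // 'trusted for delegation' flag set in Active Directory?
        const int TrustedForDelegation = 0x80000;   // ADS_UF_TRUSTED_FOR_DELEGATION

        using (var searcher = new DirectorySearcher("(&(objectClass=computer)(cn=DevelopmentPC))"))
        {
            SearchResult result = searcher.FindOne();
            if (result != null)
            {
                int userAccountControl = (int)result.Properties["userAccountControl"][0];
                Console.WriteLine("Trusted for delegation: {0}",
                    (userAccountControl & TrustedForDelegation) != 0);
            }
        }
    }
}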

I also made sure that the IIS settings on the DevelopmentPC were set as follows, to make sure the credentials were captured and passed on.

[Image: IIS authentication settings on the DevelopmentPC]

Once all this was done it all leapt into life. I could load and use my test web page from a browser on either the DevelopmentPC itself or the other PC.

The next step was to move the programmatically declared WCF bindings into the IIS web server’s web.config, as I still wanted to host my web service in IIS. This gave me a web.config serviceModel section of

<system.serviceModel>
   <bindings>
     <wsHttpBinding>
       <binding name="SabsBinding">
         <security mode="Message">
            <message clientCredentialType="Windows" negotiateServiceCredential="false" establishSecurityContext="false" />
         </security>
       </binding>
     </wsHttpBinding>
   </bindings>
 
   <services>
     <service behaviorConfiguration="BlackMarble.Sabs.WcfService.CallsServiceBehavior" name="BlackMarble.Sabs.WcfService.CallsService">
       <endpoint address="" binding="wsHttpBinding" contract="BlackMarble.Sabs.WcfService.ICallsService" bindingConfiguration="SabsBinding">
       </endpoint>
       <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange" />
     </service>
   </services>
 
   <behaviors>
     <serviceBehaviors>
       <behavior name="BlackMarble.Sabs.WcfService.CallsServiceBehavior">
         <serviceMetadata httpGetEnabled="true" />
         <serviceDebug includeExceptionDetailInFaults="true" />
         <serviceAuthorization impersonateCallerForAllOperations="true" />
       </behavior>
     </serviceBehaviors>
   </behaviors>
 </system.serviceModel>

I then stopped the EXE based server, made sure I had the current service code on my IIS hosted version and restarted IIS, so my WCF web service was running as Network Service under IIS7 and .NET 4. It still worked, so I now had an end to end solution using Kerberos. I knew both my server and client had valid configurations, in the format I wanted.
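As a final sanity check that it really was the browser user’s identity arriving at the service (and not just the IIS application pool account), a quick bit of logging inside any service operation is enough. A sketch, using nothing beyond the standard WCF security context:

using System.Diagnostics;
using System.ServiceModel;

// Inside any operation of CallsService: log who WCF thinks the caller is.
// With Kerberos delegation working this is the user logged into the browser,
// not the web server's application pool account.
ServiceSecurityContext context = ServiceSecurityContext.Current;
if (context != null && context.WindowsIdentity != null)
{
    Debug.WriteLine("CallsService: caller is " + context.WindowsIdentity.Name
        + ", impersonation level " + context.WindowsIdentity.ImpersonationLevel);
}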

Next I upgraded my SharePoint solution so that it included the revised webpart code and tested again, and guess what, it did not work. So it was time to think about what was different between my test harness and SharePoint.

The basic SharePoint logical stack is as follows

[Image: SharePoint logical stack – browser, SharePoint front end servers, webpart running under a domain service account, WCF host]

The key was the account which the webpart was running under. On my test box the IIS server was running as Network Service, hence it was correct to set in AD that delegation was allowed for the computer DevelopmentPC. On our SharePoint farm we had allowed similar delegation for SharepointServer1 and SharepointServer2 (hence Network Service on these servers). However our webpart was not running under a Network Service account, but under a named domain account. It was this account, blackmarble\spapp, that needed to be granted delegation rights in AD.

Still this was not the end of it; all these changes needed to be synchronised out to the various boxes, but after a repadmin on the domain controller and an IISreset on both SharePoint front end servers it all started working.

So I have the solution I was after; I can start to shut off all the old systems I was using and, more importantly, I have a simpler, stable model for future development. But what have I learnt? Well, Kerberos is not as mind bending as it first appears, but you do need a good basic understanding of what is going on. There are great tools like Klist to help look at Kerberos tickets, but for problems like this the issue is more a complete lack of a ticket. The only solution is to build up your system step by step. Trust me, there is no quick fix, and you learn far more from the failures than from the successes.

PDC 2010 thoughts - the next morning

I sat in the office yesterday with a beer in my hand watching the PDC2010 keynote. I have to say I preferred this to the option of a flight, jet lag and a less than comfortable seat in a usually overly cooled conference hall. With the Silverlight streaming the experience was excellent, especially as we connected an Acer 1420P to our projector/audio via a single HDMI cable and it just worked.

So what do you lose by not flying out? The obvious thing is the ‘free’ Windows Phone 7 the attendees got; too many people IMHO get hooked up on the swag at conferences, but you go for knowledge, not toys. They also forget they (or their company) paid for the item anyway in their conference fee. More seriously you miss out on the chats between the sessions and, as the conference is on campus, the easier access to Microsoft staff. Also the act of travelling to a conference isolates you from the day to day interruptions of the office; the online experience does not, and you will have to stay up late to view sessions live due to timezones. The whole travelling experience still cannot be replaced by the online experience, no matter how good the streaming.

However, even though I don’t get the ‘conference corridor experience’, it does not mean I cannot check out the sessions; it is great to see they are all available free and live, or as immediately available recordings if I don’t want to stay up.

The keynote was pretty much as I had expected. There were new announcements, but nothing ground breaking, just good vNext steps. I thought the best place to start for me was the session “Lessons learned from moving Team Foundation Server to the cloud”. This was on TFS, an obvious area of interest for me, but more importantly it was real world experience of moving a complex application to Azure. This is something that is going to affect all of us if Microsoft’s bet on the cloud is correct. It seems that, though there are many gotchas, the process was not as bad as you would expect. For me the most interesting point was that the port to Azure caused changes to the codebase that actually improved the original implementation, either in manageability or performance. Also that many of the major stumbling blocks were business/charging models, not technology. This is going to affect us all as we move to service platforms like Azure or even internally hosted equivalents like AppFabric.

So one session watched, what to watch next?

Common confusion I have seen with Visual Studio 2010 Lab Management

With any new product there can be some confusion over the exact range and scope of features; this is just as true for VS2010 Lab Management as any other. In fact, given the number of moving parts (the infrastructure you need in place to get it running), it can be more confusing than average. In this post I will cover the questions I have seen most often.

What does ‘Network Isolation’ really mean?

The biggest confusion I have seen is over the fact that Lab Management allows you to run a number of copies of a given test environment, each instance of which is ‘network isolated’ from the others. This means that each instance of the environment can have server VMs named the same without errors being generated. WHAT IT DOES NOT MEAN is that each of these environments is fully isolated from your corporate or test LAN. Think about it, how could that work? I am sad to say there is still no shipment date for Microsoft Magic Pixie Net (MMPN); until this is available we will still need a logical connection to any virtual machine under test, else we cannot control/monitor it.

So what does ‘Network Isolation’ actually mean? Well it basically means Lab Manager will add a second network card to each VM in your environment (with the exception of domain controllers, I will come back to that). These secondary connections are the way you usually manage the VMs in the environment, so you end up with something like the following

[Image: network isolated environment – VMs on a private 192.168.23.x LAN, each with a second connection to the corporate LAN]

Lab Manager creates the 192.168.23.x virtual LAN which all the VMs in the environment connect to. If you want to change the IP address range this is set in the TFS administration console, but I suspect needing to change this will be rare.

If the PCs in your environment are in a workgroup there is no more to do, but if you have a domain within your environment (i.e. you included a test domain controller VM in your environment, as shown above) you also need to tell the Lab Management environment which server is the domain controller. THIS IS VERY IMPORTANT. This is done in the Visual Studio 2010 Test Manager application where you set up the environment.

When all this is done and the environment is deployed, a second LAN card is added to all VMs in the environment (with the exception of the domain controller you told it about, if present). These LAN cards are connected to the corporate LAN; an IP address is provided by your corporate LAN DHCP server and a name is assigned in the form LAB[Guid].corpdomain.com (you can alter this domain name to something like LAB[Guid].test.corpdomain.com in the TFS administration console if you want). This second LAN connection is special in that the VMs’ NETBIOS names are not broadcast over it onto the corporate LAN, thus allowing multiple copies of the ‘network isolated’ environment to be run. Each VM has a unique name on the corporate LAN, but keeps its original name within the test (192.168.x.x) environment.

Other than blocking NETBIOS, this ‘special connection’ is in no other way restricted. So any of the test VMs can use its own connection to the corporate LAN to access corporate (or internet) resources such as patch update servers. The only requirement will be to log in to the corporate domain if authentication is required; remember that in the test environment you will be logged into the test domain or local workgroup.

I mentioned that the test domain controller is not connected to the corporate LAN. This is to make sure corporate users don’t try to authenticate against it by mistake and to stop different copies of the test domain controller trying to sync.

All clear? So ‘network isolated’ does not mean fully isolated, but rather the ability to have multiple copies of the same environment running at the same time, with the magic done behind the scenes by Lab Management. Maybe not the best piece of feature naming in the world!

So how does a tester actually connect to the test VMs from their PC?

Well obviously they don’t use a magic MMPN connection; there has to be a valid logical connection. There are actually two possible answers here. I suspect the most common will be via remote desktop straight to the guest test VMs, using the LAB[Guid].corpdomain.com name. You might be thinking how do I know these names? Well, you can get them from the Test Manager application by looking at a VM’s system info in any running environment. Because you can look them up in this way, a tester can either use the Windows RDP application itself or, more probably, just connect to the VMs from within Test Manager, which uses RDP behind the scenes.

The other option is to use what is called a host connection. This is when Test Manager connects to the test VMs via the Hyper-V host. For this to work the tester needs suitable Hyper-V rights and the correct tools on their local PC, not just Test Manager. This could also be achieved using the Hyper-V manager or the SCVMM console. Host mode is the way you would connect to a test domain controller that has no direct connection to the corporate LAN.

The choice of connection and tool will depend on what the tester is trying to do. I would expect Test Manager to be the tool of choice in most cases.

Do I need Network Isolation – is there another option?

This all depends on what you want to do; there are good descriptions of the possible architectures in the Lab Management documentation. If you don’t think ‘network isolation’ as described above is right for you, the only other option that can provide similar environment separation is to not run the environments ‘network isolated’, but to provide each environment with a single explicit connection to the corporate LAN via a firewall such as TMG to allow that connection.

It goes without saying that this is more complex than using the standard ‘network isolated’ model built into Lab Management, so make sure it is really worth the effort before starting down this route.

What agents do I need to install?

There are a number of agents involved in Lab Management; these allow network isolation management, deployment and testing. The ones you need depend on what you are trying to do. If you want all the features, unsurprisingly you need them all. If this is what you want then use the VMPrep tool, it makes life easier. If you don’t want them all (though it might be easier to just install all of them as standard) you can choose.

If you want to gather test data you need the test agent, and if you want to deploy code you need the lab workflow agent. The less obvious one is that for ‘network isolation’ you need the lab agent installed; it is through this agent that the network isolation LAN is configured.

Any other limitations I might have missed?

The most obvious is that many companies will use failover clustering and a SAN to make a resilient Hyper-V cluster. Unfortunately this technology is not currently supported by Lab Management. This is easy to miss, as to my knowledge it is only referred to once in the documentation, in an FAQ section.

The effect of this is that you cannot share SAN storage between Hyper-V hosts or, more importantly, between the VMM library and the Hyper-V hosts. This means that all deployment of environments has to be over the LAN; the faster SAN to SAN operations cannot be used, as these need clustering.

I suppose there is also the limitation, following from the lack of clustering, that you cannot hot migrate environments between Hyper-V hosts, but I don’t see this as much of an issue; these are meant to be lab test environments, not live production high resilience VMs.

This is a good reason to make sure that you separate your production Hyper-V hosts from your test ones. Make the production servers a failover cluster and the test ones just a host group. Let Lab Manager work out which server in the host group (assuming there is more than one) to place the environment on.

 

 

So I hope that helps a bit. I am sure I will find more common questions; I will post about them as they emerge.

First look at Postsharp AOP framework for .NET

At the Software Craftsmanship 2010 conference I met Gael Fraiteur of Sharpcrafters; he had given a talk on Aspect Oriented Programming (AOP). Since the conference I have had a chance to look at his Postsharp AOP product for .NET.

I decided to do a quick spike project for a tender I have been working on; the requirement is to add a security model to an existing .NET assembly. Usually this would have entailed adding some security logic at the start of each public method to implement the security model. Using AOP I hoped to get the same effect by adding an attribute to the classes/methods, hopefully making the changes easier to read and quicker to develop.

So I have the following business logic to which I wish to add security. All I did was add the [Security] attribute to the business logic method.

public class BusinessLogic
{
    IDataProvider data;
 
    public BusinessLogic(IDataProvider data)
    {
        this.data = data;
    }
 
    [Security]
    public DataRecord GetItem(int customerId)
    {
        Debug.WriteLine("BusinessLogic.GetItem");
        return this.data.GetItemFromDB(customerId);
    }
}

So what is in the AOP attribute? Basically I use the AOP framework to intercept the method call, and before the method is invoked I make a call to a factory method to get an instance of the security provider and check if I have the rights to run the method.

[Serializable]
 public class SecurityAttribute :MethodInterceptionAspect
 {
     public override void OnInvoke(MethodInterceptionArgs args)
     {
         Debug.WriteLine("SecurityAttribute.OnInvoke");
 
         // this assumes we know the type and position of the argument
         if (MembershipProviderFactory.GetProvider().CanCurrentUserViewThisItem((int)args.Arguments[0]) == true)
         {
             Debug.WriteLine("SecurityAttribute.OnInvoke: We have rights to view");
             base.OnInvoke(args);
         }
         else
         {
             Debug.WriteLine("SecurityAttribute.OnInvoke: We dont have rights to view");
         }
     }
 }

As it was a spike project I did not bother to write the security provider (or the DB provider for that matter). I used Typemock Isolator to fake it all, so my tests were as shown below. I found this way of working much quicker for my purposes.

/// <summary>
  /// test for both the success and failure paths of the attribute
  /// </summary>
  [TestClass]
  public class Tests
  {
      [Isolated]
      [TestMethod]
      public void When_the_membership_provider_gives_access_the_data_is_returned()
      {
          // arrange
 
          // create a fake objects
          var fakeIMembershipProvider = Isolate.Fake.Instance<IMembershipProvider>();
          var fakeISqlProvider = Isolate.Fake.Instance<ISqlProvider>();
 
          // create real objects
          var fakeData = new DataRecord();
          var bl = new BusinessLogic(fakeISqlProvider);
 
          // Set that when we call the factory method we get the fake membership system
          Isolate.WhenCalled(() => MembershipProviderFactory.GetProvider()).WillReturn(fakeIMembershipProvider);
                      // Set when we call the DB layer we get the fake object
          Isolate.WhenCalled(() => fakeISqlProvider.GetItemFromDB(0)).WillReturn(fakeData);
          // Set that we are allowed to see the item
          Isolate.WhenCalled(() => fakeIMembershipProvider.CanCurrentUserViewThisItem(0)).WillReturn(true);
 
          // act
          var actual = bl.GetItem(1);
 
          // assert
          Assert.AreEqual(fakeData, actual);
          Isolate.Verify.WasCalledWithExactArguments(() => fakeISqlProvider.GetItemFromDB(1));
      }
 
      [Isolated]
      [TestMethod]
       public void When_the_membership_provider_does_not_give_access_the_data_is_not_returned()
      {
          // arrange
 
          // create a fake objects
          var fakeIMembershipProvider = Isolate.Fake.Instance<IMembershipProvider>();
          var fakeISqlProvider = Isolate.Fake.Instance<ISqlProvider>();
 
          // create real objects
          var fakeData = new DataRecord();
          var bl = new BusinessLogic(fakeISqlProvider);
 
          // Set that when we call the factory method we get the fake membership system
          Isolate.WhenCalled(() => MembershipProviderFactory.GetProvider()).WillReturn(fakeIMembershipProvider);
          // Set when we call the DB layer we get the fake object
          Isolate.WhenCalled(() => fakeISqlProvider.GetItemFromDB(0)).WillReturn(fakeData);
          // Set that we are not allowed to see the item
          Isolate.WhenCalled(() => fakeIMembershipProvider.CanCurrentUserViewThisItem(0)).WillReturn(false);
 
          // act
          var actual = bl.GetItem(1);
 
          // assert
          Assert.AreEqual(null, actual);
          Isolate.Verify.WasNotCalled(() => fakeISqlProvider.GetItemFromDB(1));
      }
 
  }
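For reference, the rough shapes the spike assumes for the faked types are below. These are my inference from the calls in the code above; since Isolator fakes them all, the real members could differ.

using System;

// Inferred interface shapes only - never implemented for real in the spike.
public interface IMembershipProvider
{
    bool CanCurrentUserViewThisItem(int itemId);
}

public interface IDataProvider
{
    DataRecord GetItemFromDB(int customerId);
}

// The concrete data provider faked in the tests; I assume it implements
// IDataProvider so it can be passed to the BusinessLogic constructor.
public interface ISqlProvider : IDataProvider
{
}

public static class MembershipProviderFactory
{
    // Returns the configured membership provider; in the spike this is always faked.
    public static IMembershipProvider GetProvider()
    {
        throw new NotImplementedException();
    }
}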

This all worked beautifully and I have to say it was nice and straightforward to code. The code looks clean, and looking at the generated code in Reflector it seems OK too.

My only worries are

  1. Performance; but after looking at the generated code I can’t see that the AOP framework’s code is a great deal less efficient than me adding security method calls in all the business methods. Using Postsharp would certainly require much less repetitive coding. In my spike the security factory strikes me as the bottleneck, but this is my problem, not the framework’s, and can be addressed with a better design pattern to make sure the provider is not created on every method call.
  2. I can see complexity appearing in handling the parameters passed between the attribute and the method being invoked. In my spike I needed to know the order of the parameters so I could pass the correct one to my security methods. However, I don’t see this as a major stumbling block; the framework may well provide something I am unaware of, or I may just need to write a few forms of the security aspect constructor (see the sketch after this list).
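On that second point, one possible approach (a sketch only; I have not needed it yet, and I am working from memory of the Postsharp API, so treat the member usage as an assumption) is to find the argument by parameter name from the intercepted method’s metadata rather than by position:

using System;
using System.Diagnostics;
using PostSharp.Aspects;

[Serializable]
public class SecurityByNameAttribute : MethodInterceptionAspect
{
    public override void OnInvoke(MethodInterceptionArgs args)
    {
        // args.Method describes the intercepted method, so its parameters can be
        // inspected and matched by name instead of assuming position 0.
        var parameters = args.Method.GetParameters();
        for (int i = 0; i < parameters.Length; i++)
        {
            if (parameters[i].Name == "customerId")   // assumed parameter name
            {
                int customerId = (int)args.Arguments[i];
                Debug.WriteLine("SecurityAttribute: checking access for customer " + customerId);
                // the security provider check would go here, as in the original aspect
                break;
            }
        }

        base.OnInvoke(args);
    }
}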

So will I be using Postsharp? I suppose immediately it depends if I win this tender, but I have to say I like what I saw from this first usage.

I am speaking at NEBytes in November on Mocking with Typemock Isolator

On the 17th of November I will be speaking in Newcastle at the NEBytes user group on the subject ‘Using Typemock Isolator to enable testing of both well designed code and nasty legacy systems’.

NEBytes meetings have an interesting format of a developer and an IT Pro talk at each meeting. The IT Pro session in November is to be given by another member of Black Marble staff, Rik Hepworth, and is on Sharepoint I think.

Hope to see you there.

Should I buy a Kindle?

I have always read a lot of novels, and I like to have a book with me for those unexpected moments when I get a chance to read. Of late this has meant using Microsoft Reader on my phone. It is not too bad an experience; using Project Gutenberg I can get a book, fiddle a bit in Word, and export it to the Reader format. However I would like a slicker experience and to be able to read new releases, so the Kindle seems just the job.

As a device it seems perfect: about the size and weight of a paperback, excellent battery life (as power is only used to turn/display the page, not to view it), excellent in natural light, and now the price has dropped to the point that if you did lose it you would be pissed off, but not bankrupt. Oh, and dropping it in the bath, though it might ruin the device, will not electrocute you!

My problem is the price of new books; take William Gibson’s Zero History as an example. On Amazon this is £12.29 in hardback but £9.99 for the Kindle edition. So from this we assume the production costs, shipping, warehousing etc. for the physical copy total £2.30, which seems a bit low to me! How is the £9.99 justified? There will be the writer’s royalty, the file production costs and the marketing and other publishing overheads, but £9.99 seems steep, especially given the royalty rate that I know friends who are writers get for their novels. Someone is making a tidy profit, and it is not the writer.

If we look at one of Gibson’s older books, Spook Country, now in paperback for £2.99, we see the Kindle price is £2.84. So it seems the Kindle price is set at (very roughly) 10% below the lowest physical edition price.

So I am being asked to buy an eBook at almost the same cost as a paper copy, when the publisher/supplier chain does not have to make the physical copy or ship it. I get the convenience that I can carry around 3500 books at a time, but I can only read one! Also I cannot lend a book to a friend, which I admit reduces the potential royalties of a writer, but also removes any viral marketing opportunities.

So should I buy a Kindle? At this price for the eBooks I think not. I will stick to buying my new books on paper and keep a selection of out of copyright classics on my phone. I will wait until the publishing industry reviews its sales model for these editions, maybe increasing the writers’ royalties to reflect that it is their efforts that are being purchased, not examples of the bookbinder’s art!

Do you work with C++? Typemock Isolator++ webinar

I don’t work with C++, but if you do you might be interested in Typemock’s free webinar this Thursday (21st October) on Isolator++. It will cover:

  • The basic API design principles of Isolator++ (from AAA to recursive fakes, loose fakes, sticky behaviour etc..)
  • What can Isolator++ do that others can’t?
  • Code examples using Google Test framework and Isolator++, to test nasty untestable C++ code
  • And as it is being given by Roy Osherove, possibly a short song, suitable for the occasion, performed live.

Also, as an added incentive and in honor of the first release of Isolator++, Typemock are giving away a free Isolator++ license to all webinar attendees.

To attend the webinar, you can register here.

All our futures behind us?

I had a strangely thought provoking weekend. I took my son to do the tour of Concorde at Manchester Airport, and whilst in the area popped into Jodrell Bank to look at the radio telescope and the arboretum. Two great technological achievements, well worth a visit, but I felt both seemed to be in our past. I remember Concorde, I remember Apollo (just) and I remember sitting in a room at school to watch the first Shuttle launch, but where is the equivalent today? I started to feel that this ‘thrusting to the future’ style of project no longer exists; there seem to be few children saying ‘I want to be an engineer’ or ‘an astronaut’. I fear they are too often now saying ‘I just want to be famous’.


But then I thought a bit more, and I think these projects are still there; we had the LHC switched on last year, and just last week BBC News covered the breakthrough of the Gotthard Rail Tunnel. Big science/technology is still a news story, but I have to say more usually not in the positive sense; too many stories are presented in the ‘science gone mad’ category. We (or should I say the media) have lost the awe for big science and replaced it with fear, or at least mistrust.

Maybe I am just looking at the past with rose tinted spectacles; Jodrell Bank was over budget by about 10x, and people complained ‘why send men to the moon when people are starving on earth’, so maybe the coverage was much the same. The current mainstream tone of reporting could just be a factor of living in a less deferential age. For me there is no question it is good to question the value of science, but this has to be done from an informed position; you have to at least start to understand the question to give a reasonable opinion (or even ask a reasonable question in the first place).

What I worry is that this move, this lack of awe and excitement in science, will drive children away from wanting to be involved in science and technology. At least we are seeing a return to accessible science on the BBC (worth every penny of the license fee) in Bang Goes the Theory, the World of Invention and the new archive of The Great Egg Race (proper 1970’s mad scientists; I doubt you would get a 30 minute programme today with people fiddling with bits of string and rubber bands whilst wearing wing collar nylon shirts, think of the health and safety static risk alone!).

So I guess my initial fear is unfounded; the sense of wonder is out there, maybe we just have to make more of an effort to go and find it.