Positively Impacting your Organisation with Collaborative Working

Collaboration has been, and will continue to be, one of the key business advantages that the Cloud can deliver to an organisation. Collaboration is not just about connecting people to one another and improving their day-to-day working practices; it is also about enabling and encouraging collaboration between people and data.

Black Marble can support your move to the cloud for collaboration services, both between people and between people and data. We can help your organisation realise the full potential of people-to-people collaboration using services such as SharePoint and, in particular for this white paper, Microsoft Teams. Our approach will help you identify how Microsoft’s collaboration solutions can improve your ways of working, whilst helping you visualise your end goal.

Delivering an Enterprise Cloud Operating Model

There have been some major paradigm shifts in the history of computing, with some of the most notable marked not only by changes in technology, but by changes in the staffing of that technology. When the computing standard shifted from mainframe to client/server, the staffing model moved from computer operator to system administrator.

The same is true with a move to the cloud.

The cloud fundamentally changes how businesses procure and use technology resources. Traditionally, businesses owned and were responsible for all aspects of technology, from infrastructure to software; the cloud allows them to provision and consume resources only as needed. Moving to the cloud can bring increased business agility and significant cost benefits.

However, the journey to the cloud needs to be managed carefully at each stage, not just for delivery but for expectations and ROI. Even more significantly, the cloud opens up access to a range of on-demand services that were unavailable just 10 years ago, including hyper-scale computing power and AI services; consuming these on a short-term basis can provide significant benefits.

All these services combined provide business capabilities that only the cloud can offer.

Transforming your business into a cloud-business is more than simply moving your systems and infrastructure into the cloud – your organisation needs a Cloud Operating Model (COM) to adopt a cloud-first mentality. It is important to guide your people away from traditional IT thinking, to ensure they realise business benefits and harness the true potential of the cloud, where adoption drives innovation. This white paper will cover how this can be achieved with the assistance of Black Marble.

For more information on Delivering an Enterprise Cloud Operating Model, get in touch for a copy of the white paper I put together with our CCO, Rik Hepworth.

Delivering an Enterprise Cloud Operating Model White Paper, 2nd Edition.

Moore’s Law – Deciphering Falsehoods and Prophecies

Several times over the last few months I have encountered people misquoting Moore’s law to justify a solution to a problem they are avoiding or, more often, failing to comprehend at any real level. At best it is used as a reason not to do anything; at worst it is used to justify not changing poor or degrading solutions.

After far too many of these encounters, I thought it was about time I posted my view, debunked some of the many falsehoods I have seen Moore’s law used to justify, and gave a view of what the change in the computing industry over time actually means.


So what is the law?

Well, first of all, it is not a law; it is in fact an observation and prediction which, in general terms, has stayed the course for nearly 50 years.

Gordon E. Moore predicted that, in general terms, the processing power of a computer will double every two years. Moore was initially talking about the number of transistors in a processor, but later spoke in more general terms.
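The arithmetic behind that prediction is easy to sketch. The snippet below assumes a clean two-year doubling, which is a simplification of Moore’s actual observation, purely to show the scale of compounding involved:

```python
# Illustrative arithmetic only: assumes a clean two-year doubling,
# a simplification of Moore's observation about transistor counts.

def growth_factor(years, doubling_period_years=2):
    """How many times capacity multiplies after `years` of doubling."""
    return 2 ** (years / doubling_period_years)

# Ten doublings in 20 years is roughly a thousand-fold increase.
print(round(growth_factor(20)))    # 1024

# Over the ~50 years the observation has held, the factor is astronomical.
print(f"{growth_factor(50):.0f}")  # 33554432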

A common trait of all of us is to read into a statement what we want to hear; more accurately, we hear what we want to hear when we don’t fundamentally understand what was said.

But the main problem is that Moore was talking about raw processing power, whereas what we use as consumers of computing power is a managed level of computing, offering functionality well above and beyond the raw power. As time goes on, computing systems offer more and more features and layers of tooling which make the lives of users, developers and IT pros easier and more productive, but this comes at a cost in raw power consumed by the operating system and the applications that run on top.


If you look at Moore’s law as a more general concept, then look at our industry, where a lot of very bright people are working hard to improve computer technology, the outcome is likely to be a doubling of overall power every two to three years, and the manifestation could be more transistors, more cores, or better threading. Then, I think, you are closer to understanding what people generally mean by the term.


With the continual growth of the cloud, and specifically Azure bringing great Platform as a Service and elastic cloud delivery, the general concept of Moore’s law will have some time yet to run.


So why do people get it so wrong?


It would be inconceivable that, in a modern enterprise, applications would be built without these successive improvements and changes to the core of computing which we take for granted.

Microsoft .NET is a great example of this. It performs a large amount of “heavy lifting” with regard to day-to-day coding chores such as memory management and interoperability, and it provides a large and comprehensive library set which, to be frank, in the ’80s and ’90s would have cost a fortune to buy or develop, or simply wasn’t available. .NET is free, and people forget the sheer power it puts at a developer’s fingertips.

What this means in real terms is that we can deliver new software with huge amounts of functionality in a relatively small timeframe, at the minor cost of a higher level of computing power.

But expecting that your poorly written, under-performing application will be just fine in a few years is not the answer to anything apart from self-delusion.


Ok but what does this mean in real terms?

In my opinion, software systems increase in base value and functionality by an order of magnitude every 4.5 years. The real problem is that users’ expectations of software increase by at least double that.
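To put that claim into numbers (the 4.5-year figure is my own estimate, not a measured constant, and “double that” is read here as twice the growth factor over the same period):

```python
import math

# My estimate from the post: software value grows 10x every 4.5 years.
# These are illustrative figures, not measured constants.
VALUE_PERIOD_YEARS = 4.5

def doubling_time(period_years, factor=10):
    """Years for a quantity to double, given that it multiplies by
    `factor` every `period_years`."""
    return period_years * math.log(2) / math.log(factor)

# A tenfold increase every 4.5 years is a doubling roughly every 16 months.
print(f"{doubling_time(VALUE_PERIOD_YEARS):.2f}")  # 1.35 (years)

# If expectations grow at double that rate (20x every 4.5 years), the gap
# between expectation and delivered value after 9 years:
years = 9
value = 10 ** (years / VALUE_PERIOD_YEARS)        # 100x
expectation = 20 ** (years / VALUE_PERIOD_YEARS)  # 400x
print(f"gap factor: {expectation / value:.0f}")   # gap factor: 4
```

Whatever the exact figures, the point is that the expectation curve compounds away from the delivery curve.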


So what to do?

Don’t bet on more power in the future; build for the now, and allow the future to increase your performance if it arrives. There are great tools to help with performance, and great guidance on architecture and design.



StyleCop 4.7 now with VS 11 support

The great guys on the StyleCop project have released StyleCop 4.7, which has VS11 preview support.

It also adds new templates which, unlike the Visual Studio-supplied templates, are StyleCop compliant, and it supports the Async CTP.

Get it here


Enterprise Library now with added Silverlight goodness

A series of updates are available for Enterprise Library, Unity, and a few other bits of general interest.

There is an update for Enterprise Library 5.0, which is only required if you are using the Silverlight Integration Pack and need WCF RIA Services integration or configuration tool support. Get it here.

However, the most interesting development is a version of the Enterprise Library for Silverlight developers (get it here). It contains the Caching, Exception Handling, Logging, Policy Injection, Validation and Unity Application Blocks, allowing you to develop to best practice effectively.

Just in case you don’t have the Enterprise Library, you can get it here.

Microsoft Unity 2.1 is a dependency injection container. This is a minor update to Unity; get it here. I also recommend you visit the Patterns and Practices page for Unity here.
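Unity itself is a .NET library, but the core idea it implements is small enough to sketch. The following is an illustrative toy container in Python, not Unity’s actual API: you register a factory against an abstraction, then resolve concrete instances, and their dependencies, on demand:

```python
# A toy dependency-injection container illustrating the pattern Unity
# implements (register a mapping from an abstraction to a factory, then
# resolve instances on demand). Not Unity's actual API.

class Container:
    def __init__(self):
        self._registrations = {}

    def register(self, abstraction, factory):
        # Map an abstract key to a factory that builds the concrete type.
        self._registrations[abstraction] = factory

    def resolve(self, abstraction):
        # Look up the factory and build the instance, passing the container
        # in so the factory can resolve its own dependencies.
        return self._registrations[abstraction](self)


class ConsoleLogger:
    def log(self, message):
        print(f"LOG: {message}")


class OrderService:
    # OrderService depends on a logger, but never constructs one itself.
    def __init__(self, logger):
        self.logger = logger

    def place_order(self, item):
        self.logger.log(f"order placed: {item}")
        return True


container = Container()
container.register("logger", lambda c: ConsoleLogger())
container.register("orders", lambda c: OrderService(c.resolve("logger")))

service = container.resolve("orders")
service.place_order("widget")  # prints "LOG: order placed: widget"
```

The value of the pattern is that `OrderService` never constructs its own logger, so swapping the logging implementation is a one-line change at registration time.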

There is also a port of Unity 2.1 for Silverlight here

If you need to learn how to use these great libraries, there is of course a Hands-On Lab (HOL) for Enterprise Library 5.0 here.


July update

I looked back and realised I had missed a few updates to post (I blame Microsoft for bringing out lots of great things all at once).

LightSwitch Beta 2 is out; get it here. The site now suggests that a July 26th launch date is on the cards.

StyleCop 4.5 has now been released; get it here. If you haven’t tried StyleCop, its main aim is to help deliver consistent code style and standards across an organisation, but it is in fact a perfect pair to FxCop (see the Windows 7.1 / .NET 4.0 SDK): FxCop analyses binaries, while StyleCop analyses source code.

If you haven’t already, scoot over to Richard’s blog for updates from the ALM Rangers on some cool TFS work.


SDL Verification Tool

Everybody knows I like a verification tool, as I think they provide a solid basis for quality. The argument against is that they don’t catch every case and so, the argument goes, we should settle for Mk1 human eyeball standards. I am a firm believer in using both, and so I was made up when Microsoft released the BinScope Binary Analyzer, which analyses binaries to check that they comply with Microsoft’s Security Development Lifecycle (SDL) requirements and recommendations.

Get it here.


Top Architects, Top Conference, Top Speakers

The 4th annual Architect Insight Conference has just been announced and this year I am happy that Black Marble is helping make it happen.

I will be presenting on Microsoft’s vision of modelling using “M”. I think M has a huge future in architecture and, while it is only available at a base level, it is important to understand its potential.

I look forward to seeing everyone down in London.


Microsoft Architect Conference
May 8th, Microsoft Offices Cardinal Place, London


More Bad Apples

Over the last week I have received many comments on my recent post about bad apples; it seems to have resonated with a lot of people in the industry (many of whom already read the great Coding Horror blog). The two common questions have been: i) how do you spot it? and ii) what do you do?

The first is easier than the second. In general, people will go dark, and that becomes noticeable in their communication skills, a lack of documentation, and a general mood of self-despair. The less common variant is the supreme antagonist, where the person will constantly pick fights which, ironically, they lose, as their arguments are borne out of frustration with themselves, not reality.

The second (what to do) is the hard one. I would hope that in most cases a sit-down with the individual will sort out most problems; in fact, most problems between people occur due to a lack of communication. Obviously, there are politics and egos to contend with.


Bad Apples

One of the interesting pieces of work we get involved in is rescue projects. Rescue projects can be thought of as projects that aren’t delivering, or won’t deliver, to timescales, feature requirements or quality.

In a rescue project there are many areas that normally need addressing: project management, documentation, process and quality. The one common theme in rescue projects is people; when we are brought in to help on a project, people start to worry about losing their jobs but, more often than not, they are unaccepting of the situation they are in.

The reasons for projects failing are numerous, but people are the main cause. Jim McCarthy, in his 1995 book Dynamics of Software Development (ISBN 1-55615-823-8), discussed “flipping the bozo bit”, where people fundamentally just lose the plot and need to be refocused.

A recent post on the problem of negatively focused staff (Rotten Apples) by Jeff Atwood (Coding Horror) has sparked some thoughts. Rather than copy sections out, I urge you to read the posts; below I have added some of my experiences on the same matters.

Dealing with Bad Apples (read this post)

Looking at projects where we have seen individual issues, the signs that strike home are refusal to have code reviewed, increasing amounts of secrecy (keeping lists on paper rather than electronically, discussions with third parties outside the project or company), and consistent grumbling but supplication when confronted. But I think the most common sign is complaining about others to divert attention away from the real problem. I can’t say how much of this is conscious and malicious, but either way it must be dealt with.

The striking mark is people on their high horses who stand absolute in their correctness. My advice: shoot the horse, then deal with the problem.

The Bad Apple the Group Poison (read this post)

I have run through nearly all the projects that have had problems, and this post contains the key –

The worst team member is the best predictor of how any team performs

and by “worst”, it is more attitude than technical ability. We have seen projects with what should have been a dream team fail, but this fits the pattern: not technology, but attitude.

It is strange that the people who are the problem are normally the ones who should have the most potential, but they have flipped the bozo bit and refuse help. It is rare that the people I encounter lack the ability (they may need training and advice); rather, they are missing the point, and sometimes languishing in politics seems an easier ride, but they always fail. Only once do I think someone was out of their depth, and in that case I feel they were making the best of a bad situation.

In any project, people need to stand up if there is a problem and fight their corner, rather than just sit and whinge. They then need to work through getting it resolved in a short space of time, in a reasonable manner, and accept the outcome.

In summary, whilst it is generally best to retain all project staff, there is a point at which management must make a hard decision for the good of the group rather than the individual.

I’m interested in others’ experiences on this topic.