Moore’s Law – Deciphering Falsehoods and Prophecies

Several times over the last few months I have encountered people misquoting Moore's law to justify a solution to a problem they are avoiding, or more often failing to comprehend at any real level. At best it is used as a reason not to do anything; at worst it is used to justify not changing poor or degrading solutions.

After far too many of these encounters, I thought it was about time I posted my own view and debunked some of the many falsehoods I have seen Moore's law used to justify, while also giving a view of what the computing industry's change over time actually means.

So what is the law?

Well, first of all, it is not a law; it is in fact an observation and a prediction, one which in general terms has stayed the course for nearly 50 years.

Gordon E. Moore observed that the number of transistors on an integrated circuit doubles roughly every two years. He was initially talking specifically about transistor counts; the prediction has since been generalised to mean that the raw processing power of a computer doubles every two years.
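To make the arithmetic of that prediction concrete, here is a minimal sketch (my own illustration, not anything from Moore) of what a two-year doubling implies over longer periods:

```csharp
using System;

class MooresLawSketch
{
    // If capacity doubles every two years, the growth factor after a
    // given number of years is 2 ^ (years / 2).
    static double GrowthFactor(double years) => Math.Pow(2, years / 2.0);

    static void Main()
    {
        Console.WriteLine($"After 10 years: ~{GrowthFactor(10):F0}x"); // ~32x
        Console.WriteLine($"After 20 years: ~{GrowthFactor(20):F0}x"); // ~1024x
    }
}
```

Roughly a thousandfold increase in two decades, which is why the observation feels so dramatic when it is applied loosely.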

A common trait of all of us is to read into a statement what we want to hear, or more accurately, to hear what we want to hear when we don't fundamentally understand what was said.

But the main problem is that Moore was talking about raw processing power, whereas what we consume as users of computing is a managed level of computing offering functionality well above and beyond that raw power. As time goes on, computing systems offer more and more features and layers of tooling which make the lives of users, developers and IT pros easier and more productive, but this comes at a cost: raw power consumed by the operating system and the applications that run on top of it.

If you look at Moore's law as a more general concept, and then look at our industry, where a lot of very bright people are working hard to improve computer technology, the outcome is likely to be a doubling of overall power every two to three years, manifesting as more transistors, more cores or better threading. That, I think, is closer to what people generally mean by the term.

With the continual growth of the cloud, and specifically Azure bringing Platform as a Service and elastic cloud delivery, the general concept behind Moore's prediction has some time to run yet.

So why do people get it so wrong?

It would be inconceivable for a modern enterprise to build its applications without these successive improvements and changes to the core of computing, which we now take for granted.

Microsoft .NET is a great example of this. It performs a large amount of “heavy lifting” for day-to-day coding chores such as memory management and interoperability, and it provides a large, comprehensive library set which, to be frank, would have cost a fortune to buy or develop in the 80s and 90s, if it was available at all. .NET is free, and people forget the sheer power it puts at a developer's fingertips.
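As a simple illustration of that “heavy lifting” (my own sketch, not code from any particular system), consider how much the runtime and framework libraries do in even a trivial C# program:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class FrameworkHeavyLifting
{
    static void Main()
    {
        // Memory management: the runtime allocates this list on the managed
        // heap and garbage-collects it later; no manual allocation or
        // deallocation bookkeeping is needed.
        var readings = new List<double> { 3.2, 7.8, 1.1, 9.4, 5.0 };

        // Library power: filtering, sorting and aggregation come straight
        // from the framework (LINQ) rather than hand-written loops.
        var kept = readings
            .Where(r => r > 2.0)
            .OrderBy(r => r)
            .ToList();

        Console.WriteLine($"Kept {kept.Count} readings, average {kept.Average():F2}");
    }
}
```

In the 80s or 90s, every line of that convenience would have been code someone had to write, buy and maintain themselves.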

What this means in real terms is that we can deliver new software with huge amounts of functionality in a relatively short timeframe, at the minor cost of requiring a higher level of computing power.

But expecting that your poorly written, under-performing application will be just fine in a few years is not the answer to anything, apart from self-delusion.

Ok but what does this mean in real terms?

In my opinion, software systems increase in base value and functionality by an order of magnitude every 4.5 years. The real problem is that users' expectations of software increase at least twice as fast.
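Taking that opinion at face value, here is what the numbers would look like (again just an illustration of the claim, not a measurement):

```csharp
using System;

class ValueGrowthSketch
{
    // If base value grows by an order of magnitude (10x) every 4.5 years,
    // the factor after a given number of years is 10 ^ (years / 4.5).
    static double ValueFactor(double years) => Math.Pow(10, years / 4.5);

    static void Main()
    {
        Console.WriteLine($"After 4.5 years: ~{ValueFactor(4.5):F0}x"); // 10x
        Console.WriteLine($"After 9 years:   ~{ValueFactor(9.0):F0}x"); // 100x
    }
}
```

If expectations really do grow twice as fast, the gap between what software delivers and what users demand only widens over time.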

So what to do?

Don’t bet on more power in the future; build for the now, and let any future increase in power improve your performance if and when it arrives. There are great tools to help with performance, and great guidance on architecture and design.
