An old friend and colleague reminded me that I have been remiss in not posting recently, for which I apologise. In my defence, I’ve spent most of the last year working with early-stage technology companies. And of course, I’ve posted previously on what corporates can learn from them.
Big data, big hype
However, I recently read an interesting article on the 2018 Gartner Hype Cycle for emerging technologies which set me thinking. The article observed that the hype around big data, machine learning, deep learning and the like has reached its peak. These technologies no longer progress through the hype cycle in the normal way. Instead, they appear at the ‘peak of inflated expectations’ and then, just as often, disappear again in a puff of smoke. Which broadly translates into “nobody is getting much value” from them.
Now I should start by saying that there is a clear set of use cases for big data analytics where value has been realised. The problem is that these applications are found most commonly in places where organisations genuinely do have big data. Most often, though, what we have is small data. Yet there’s an enormous amount of insight and value that companies can still extract from small data.
We do this, right?
Some years ago, I led a piece of work to identify cost-reduction opportunities in a large corporation. This wasn’t the first such exercise, so I wanted to include an analysis of how well previous cost-reduction work had actually delivered. Whoever said ‘those who fail to learn from history are doomed to repeat it’ was right: to my surprise, the stakeholders rejected the suggestion. Not surprisingly, we achieved only limited success. Could we have done better? Certainly.
The case for data-led decision making
We often aspire to be more digital, for which read more like Google, Amazon, and Facebook. And yet we ignore one of their most copyable characteristics: data-led decision making. Jim Barksdale famously said, “If we have data, let’s look at data. If all we have are opinions, let’s go with mine.”
I’m talking again in a couple of weeks’ time on the topic of “Why change fails”. The talk is based on data analysis we undertook in 2010 of our own successful and unsuccessful technology change programmes. From a large set of candidate factors and some straightforward statistical analysis, we isolated a handful of key factors that correlated most strongly with success or failure. Was it rocket science? No. Was it even data science? No. It was a cross-tab classification exercise done in a spreadsheet. But it allowed us to make changes to team composition, delivery processes and stage-gate exit criteria that significantly (statistically speaking) improved our outcomes in delivering large-scale technology programmes … Small data, big insight.
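To make the idea concrete, here is a minimal sketch of what such a cross-tab exercise looks like outside a spreadsheet. Everything in it is invented for illustration: the factor name (`experienced_lead`), the outcome label (`succeeded`), and the counts are hypothetical, not the data from the 2010 analysis. It tabulates one candidate factor against programme outcome and computes a chi-square statistic for the 2×2 table.

```python
# Hypothetical illustration of a cross-tab classification exercise:
# tabulate one candidate factor against programme outcome, then
# compute a chi-square statistic for the resulting 2x2 table.
# Factor names, outcome labels, and counts are all invented.

def crosstab_2x2(records, factor, outcome):
    """Count records into a 2x2 table:
    [[factor & outcome, factor & not outcome],
     [not factor & outcome, not factor & not outcome]]."""
    table = [[0, 0], [0, 0]]
    for r in records:
        i = 0 if r[factor] else 1
        j = 0 if r[outcome] else 1
        table[i][j] += 1
    return table

def chi_square_2x2(table):
    """Chi-square statistic via the shortcut formula for a 2x2 table:
    n * (ad - bc)^2 / ((a+b)(c+d)(a+c)(b+d))."""
    (a, b), (c, d) = table
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Invented sample: 10 programmes tagged with one factor and an outcome.
programmes = (
    [{"experienced_lead": True,  "succeeded": True}]  * 4 +
    [{"experienced_lead": True,  "succeeded": False}] * 1 +
    [{"experienced_lead": False, "succeeded": True}]  * 1 +
    [{"experienced_lead": False, "succeeded": False}] * 4
)

table = crosstab_2x2(programmes, "experienced_lead", "succeeded")
print(table)                   # → [[4, 1], [1, 4]]
print(chi_square_2x2(table))   # → 3.6
```

In practice you would repeat this for each candidate factor and rank them by statistic; the point is that the machinery is a pivot table and one formula, not a data-science platform.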