Carillion, complexity and the dawn of the robot CEO?

I don’t follow all the ins and outs (mainly outs, I guess) of the outsourcing trend, but for me, as for many people, the collapse of Carillion gave pause for thought.

I was particularly taken by Matthew Vincent’s excellent FT article on why Carillion went into liquidation rather than administration, partly because I actually understood most of it, but mainly because it struck a chord with my thoughts about outsourcing, complexity, automation and the ‘tyranny of the deal’ which I’ve shared in other posts on this blog.

These thoughts have led me to conclude, perhaps slightly controversially, that we can avoid another Carillion by replacing CEOs with robots.

Vincent argues that Carillion had to be put into liquidation rather than administration because its only assets were its contracts, and that these contracts could not be sold on by an administrator because the margins were too low, and the layers of contracts were too complex to unpick.

The reference to contract complexity particularly piqued my interest as I saw parallels with my own experience as an IT manager.

During my 33-year career in IT, I have been involved in several revolutions in IT products and services: from tightly integrated programs and databases to loosely coupled services; from corporate mainframes and hardwired office PCs to Cloud services accessible from any device; and from monolithic in-house IT departments to outsourcing, service towers, and service integration and management.

All of this has helped to improve IT capability, flexibility, efficiency and responsiveness, but at a cost of increasing complexity. There are more and more moving parts which have to talk nicely to each other to keep the IT working, and fewer and fewer of these moving parts are under the direct control of the IT function.

Thirty years ago, corporate IT heads had frequently risen through the technical ranks; now, increasingly, they are change agents and deal makers drawn from other parts of the business.

I believe this complexity has grown to the point where traditional approaches to things like testing and fallback provision are no longer adequate. When high-profile IT failures occur these days, the understandably indignant and incredulous cries of ‘Didn’t they test it?’, ‘Why didn’t they have a backup?’ and ‘Why are they still using flaky old legacy systems?’ strike me as a bit naïve.

Actually, they did test it extensively, and they do have several levels of backup, but they’ve replaced the legacy system that worked for 30+ years with a new one that has thousands of separate components which no single test harness can fully exercise, and a backup system which needs to take over juggling those thousands of components in a split second without dropping a single one.

I’m not suggesting for a moment that the answer is to revert to the good old simple days of green screen terminals and vast IT departments of people with beards and Hawkwind T-shirts. We would lose immeasurably more than we would gain by going back to basics.

We have the answer, and it lies in the technology itself. ‘Digital twins’ of complex systems are already being built and deployed, allowing changes and failure scenarios to be tested in an exact virtual replica of the real system. I believe these are critical to our ability to maintain the service levels that businesses and customers demand as our systems grow ever more complex.
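To make the idea concrete, here is a minimal sketch of the digital-twin principle: a toy replica of a system’s dependency graph that lets you rehearse a failure before it happens in the live estate. The component names and topology are entirely hypothetical, and a real twin would model far more than dependencies, but the principle is the same.

```python
# A toy "digital twin": a replica of a system's component dependency
# graph, used to rehearse failure scenarios offline. All component
# names and the topology here are hypothetical.

class DigitalTwin:
    def __init__(self, dependencies):
        # dependencies: component -> list of components it relies on
        self.dependencies = dependencies

    def impacted_by(self, failed):
        """Return every component that (transitively) depends on `failed`."""
        impacted = set()
        changed = True
        while changed:  # iterate to a fixed point
            changed = False
            for component, deps in self.dependencies.items():
                if component in impacted:
                    continue
                if failed in deps or impacted & set(deps):
                    impacted.add(component)
                    changed = True
        return impacted

# A tiny replica of a (hypothetical) payment stack.
twin = DigitalTwin({
    "web_frontend": ["api_gateway"],
    "api_gateway": ["payments_service", "auth_service"],
    "payments_service": ["ledger_db"],
    "auth_service": [],
    "ledger_db": [],
})

# Rehearse "what if the ledger database fails?" against the twin,
# not the live system.
print(sorted(twin.impacted_by("ledger_db")))
```

The payoff is that you can ask awkward ‘what if?’ questions as often as you like, at zero risk to the live service.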

Of course, this takes investment of time and money, as the digital twin has to be developed in concert with the live system. And this investment can be hard to come by. Businesses and customers want ‘fast, good, cheap’, but the hoary Jim Jarmusch quote ‘fast, good, cheap – pick two’ becomes more and more relevant as you add complexity.

Approaches like Lean and Agile can deliver all three through simplification, but there comes a point where complexity is inescapable, even if it’s the complexity of many simple things working together. We have to accept this cost of complexity and harness the capability of IT to meet it in the most efficient way possible.

But how does this relate to Carillion and my Machiavellian plan to replace all CEOs with robots?

I believe the Carillion episode illustrates, amongst many other things, a business trend towards complexity through fragmentation and layering which mirrors the trend in IT.

Subcontractors have existed ever since God outsourced the apple thing to Satan, but increasingly products and services are delivered by assembling and co-ordinating outputs from layered contracts with different levels of subcontractor, to the extent that large and successful businesses now exist which work purely as aggregators.

In such a world, the deal maker tends to rule – the person who blinks last at the negotiation table or who can offer the deal that seems too good to be true. It’s a world where personalities and politics can trump due diligence and data, where the fast and the cheap tend to win out over the good.

It may not be comfortable, but it has to be that way because every deal is built on incomplete information and therefore an incomplete understanding of how it will work.

Personalities, relationships and politics have to fill in the gaps, which increases risk, and can lead ultimately to the Carillions of this world, with their devastating impact on people’s work and lives.

What if we could greatly reduce this gap? What if we had a means of collating and analysing all the data about each deal in the context of thousands of other deals: about the parties involved, the scope of the deal and what it will really take to deliver it in a sustainable way?

That is where I believe technology comes in – if we can use Artificial Intelligence (AI), machine learning, robotics and digital twins to deliver robust IT systems, why can’t we use them to do the same for business deals?
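As an illustration of the kind of analysis I have in mind, here is a deliberately crude sketch of a deal-risk score. Every factor, weight and threshold below is a hypothetical invention for the example – a real system would learn them from thousands of historical deals rather than hard-code them.

```python
# Illustrative only: a toy scoring rule for deal risk. The factors,
# weights and thresholds are hypothetical; a real system would learn
# them from historical deal data.

def deal_risk_score(margin_pct, subcontract_layers, on_time_history):
    """Crude risk score in [0, 1]; higher means riskier."""
    margin_risk = max(0.0, (5.0 - margin_pct) / 5.0)   # thin margins -> risky
    layer_risk = min(1.0, subcontract_layers / 5.0)    # deep layering -> risky
    history_risk = 1.0 - on_time_history               # poor delivery record -> risky
    # Equal weighting is an assumption for the sketch, not a recommendation.
    return round((margin_risk + layer_risk + history_risk) / 3.0, 2)

# A thin-margin, heavily layered deal with a patchy delivery record...
risky = deal_risk_score(margin_pct=1.5, subcontract_layers=4, on_time_history=0.6)
# ...versus a fatter-margin, flatter deal with a strong record.
sound = deal_risk_score(margin_pct=8.0, subcontract_layers=1, on_time_history=0.95)
print(risky, sound)
```

Even a toy like this makes the point: the data that should give a deal maker pause – thin margins, deep subcontract layering, a patchy delivery record – is exactly the data a machine can weigh dispassionately, deal after deal.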

We may balk at the sheer amount and complexity of data involved in handing over business deals to a robot, yet you only need to Google ‘AI in medical diagnosis’ to see what we are already doing in an area which is at least as complex.

Handing over the deal making to the robots may take out all that fun, testosterone-fuelled element, but I believe there’s a good chance it would give us better and more manageable deals. And if a robot is monitoring and managing the deals, they become much more saleable assets if anything does go wrong for the prime contractor.

Just as AI in medicine is seen as an aid for medical professionals rather than a replacement for them, we would still need to keep human control over the deal making process, at least initially.

This is a risk in itself, as politics and vested interests can still win out over rational, fact-based reasoning; as far as I can see, ‘No-one ever got fired for hiring IBM’ syndrome is alive and well and living in boardrooms worldwide.

If we can overcome this kind of cultural obstacle I think we can get better, more sustainable deals, avoid more Carillions and nuke fewer people’s livelihoods and pensions.

Once the robots have a toehold in decision making, the scope of executives’ roles starts to shrink. If the robots are driving the deals, why shouldn’t they run the whole business?

And if the CEOs have already automated or outsourced the rest of the business, as a shareholder I’m going to see one last fixed cost to be tackled: the one that ends up in an executive bank account. So I’m going to start questioning why they can’t automate or outsource themselves.

So the CEOs can join the rest of us doing good works and singing songs around the camp fire. And if they start to miss the cut and thrust and the thrill of beating the other guy, there’s always the golf course.



