My jaded post-festive brain cells have been sparked into life by an excellent, thought-provoking LinkedIn post from Jess Hadleigh off the back of this year’s Consumer Electronics Show.
In part, it’s about moving from the Internet of Things to the ‘Intelligence of Things’ – a world of connected devices with their own out-of-the-box artificial intelligence capability.
An exciting prospect which, for this confirmed IT dinosaur at least, also brings a feeling of déjà vu.
The history of IT is full of the ebb and flow of processing from the centre to the edge to some combination of the two, a kind of slow Hokey Cokey of intelligence.
When I started out as a programmer in the early 1980s, my first direct interactions were with what we’d now call a connected device – a dumb, green-screen terminal connected to a CPU sitting in a data centre a mile away.
Actually, my very first interactions as a programmer were with a bit of glorified graph paper, an HB pencil and a well-worn eraser, but I hesitate to admit this as it makes me feel like Methuselah.
A few years later the first PC appeared in the office; great for presentations and word processing, but with no connectivity to speak of. We were still bound to our green screens, albeit with some of the front-end work now being handled by a mini-computer in the next office.
That was the start of an evolution from centralised to distributed processing which took us through the era of client server, and ultimately on to the Cloud.
Now, as I sit here writing this on my laptop in a digital world exponentially richer than when I started out in IT, it wouldn’t be much of an oversimplification to say that things have come full circle.
I reckon around 80% of my laptop’s function is to connect rather than to compute, and if I were using Office 365 it would be close to 100%. I’d love to say that I write my blog longhand with a goose quill and have my clerk type it in for me, but I’m not quite that much of a Luddite.
So much for the stroll down memory lane. The question is whether and how this history of ebb and flow should influence the design of this ‘Intelligence of Things’, which I am going to take the liberty of calling the ‘Internet of Intelligent Things’. This is partly to reflect the role that connectivity will still play in it, and partly to avoid hashtag confusion (although I guess #IoIT could take us down a dangerous path of Seven Dwarfs-based puns).
I believe we should heed this history, because it reflects the inevitable trade-offs we always face in deciding where to put the intelligence and processing power. I think the best way to manage these trade-offs is to apply the fundamental IT design and architecture principles which have served us well for so long.
Hardly an earth-shattering conclusion, but I believe there’s a natural tendency to think of ‘out of the box’ intelligent devices mostly in terms of their autonomous capabilities rather than their connectivity, and if we get carried away by this it could lead us to make sub-optimal design decisions.
I think it’s fair to say that most people now regard the computer on their desk as a communication device at least as much as a data processing machine.
Will we see the intelligent machines of the #IoIT in the same light, or will we think of them more as robots, which we still tend to perceive as having largely autonomous intelligence?
In ‘The Hitchhiker’s Guide to the Galaxy’, when Marvin the Paranoid Android mentions he has ‘a brain the size of a planet’, I’m pretty sure we all assume he’s referring to his inbuilt mental capacity rather than his connectivity to a rich set of Cloud services.
Luckily, we professional IT people would never be swayed by such stereotypes. When it comes to designing the #IoIT we will haul out our well-thumbed volumes on data and process architecture, functional and non-functional requirements and building customer stories, and let them lead us.
We know that where we put the intelligence will create trade-offs in data and process design, security, operability, maintainability, scalability and all the rest of those design considerations we know and love. We just need to weigh up these trade-offs and put the intelligence where it delivers greatest net benefit.
And that list now has to include environmental impacts. This is not just around the sustainability of the hardware, it’s about how efficiently the intelligence itself is delivered.
Every interaction with the #IoIT will have an environmental impact in terms of device, server and network capacity and power; even with renewable sources of energy, I believe there will be a growing expectation that every interaction is designed to have the smallest possible environmental footprint.
To illustrate a few of these potential design trade-offs from the customer perspective, take my fridge, or at least what I might expect from an #IoIT fridge if I had one.
I expect my fridge to know exactly what I’m eating now, when to order more and when not to (right now I could really do without any more Stilton), but I don’t expect it to sulk for 30 minutes a month while it does software updates.
I expect it to protect me from someone plastering my dietary habits all over the Internet (believe me, that’s something no-one wants to see), but at the same time, if my fridge is unexpectedly destroyed (happens all the time, don’t ask), I expect my new fridge to come pre-loaded with full knowledge of what I eat.
I expect its understanding of what I eat not to diverge for a nanosecond from that of my supermarket delivery provider, my personal trainer and my social media profile.
This kind of customer expectation tends to lead to design dilemmas over whether the processing power should sit at the edge, in the centre, or shared between both. That’s not a reason to hold back on exploiting the capability of intelligent devices, it just means we need to approach the design with our eyes wide open to the trade-offs we’re likely to need to make.
A good start might be to design non-functional intelligence capabilities into #IoIT devices so they can optimise their own interactions and judge for themselves when to act autonomously and when to plug into the wider intelligence of the Internet.
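To make that idea a little more concrete, here’s a purely hypothetical sketch of what such a placement judgement might boil down to. Everything in it – the `Request` fields, the threshold, the function name – is an illustrative assumption of mine, not a real device API; a real #IoIT device would tune these rules empirically against its actual network, privacy and energy constraints.

```python
from dataclasses import dataclass

@dataclass
class Request:
    """One interaction the device needs to process (hypothetical model)."""
    privacy_sensitive: bool   # e.g. personal dietary data
    max_latency_ms: int       # how quickly the user needs an answer
    needs_shared_state: bool  # must agree with other services (supermarket, trainer)

# Illustrative assumption: a typical round trip to the cloud service.
CLOUD_ROUND_TRIP_MS = 150

def place_intelligence(req: Request) -> str:
    """Decide whether a single interaction runs at the edge or in the cloud."""
    if req.privacy_sensitive and not req.needs_shared_state:
        return "edge"   # keep personal data on the device
    if req.needs_shared_state:
        return "cloud"  # a single source of truth beats local autonomy
    if req.max_latency_ms < CLOUD_ROUND_TRIP_MS:
        return "edge"   # a network round trip would blow the latency budget
    return "cloud"      # default: centrally maintained, easier to update
```

So, for instance, the fridge’s ‘no more Stilton’ rule has to match the supermarket’s view of my order, which under this sketch pushes it to the cloud; a quick ‘door left open’ alert stays at the edge. The point isn’t this particular policy, but that the trade-offs can be made explicit and decided per interaction rather than baked in once for the whole device.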
My first experience of #IoT was around putting iBeacons into airports. I recall one of the big selling points was the lack of intelligence in the devices themselves, making them cheap, simple to maintain and easy to integrate with existing tech.
Clearly we’ve moved on from this and we’re ready for the #IoIT; I believe we just need a clear-eyed understanding of the trade-offs around where we put the intelligence, and to be prepared to design for them.
Now if you’ll excuse me, I’m off to have words with my fridge regarding an unfortunate disagreement over a giant Toblerone.