I have written here and there about “Analytics 3.0,” an environment in which companies combine big and small data and pursue analytics at significantly greater scope and scale. And more recently I’ve been working on a book about the third era of automation, in which smart machines take over many types of decision-making from knowledge workers (and, perhaps more important, about what those knowledge workers can do about it).
There is undoubtedly a relationship between these 3.0 domains, but I should also be honest and note that people like me are always defining such eras, and that the numbering of them is essentially arbitrary.
I was attracted to the 3.0 nomenclature because 2.0 is clearly overdone, and you would think that by 2015 we’d have made it to at least the third major version of almost everything. But no matter what number you use to refer to the current era, people are always asking me what the next one will be about. I feel that we just got to 3.0 in analytics and automation, and inquiring minds want to know what 4.0 will be like.
I think I have an idea, but it’s a little scary. And no less an academic institution than MIT is gearing up to address it (although I have a minor faculty and research appointment there, the Institute came up with the idea entirely on its own; it had nothing to do with me), which gives the idea some credibility.
If the 3.0 version of analytics and automation involves their widespread use within organizations, 4.0 is about their application across pervasive, automated networks. Every business and organization in the world will be tied together by ubiquitous communications, apps, sensor networks, and APIs. The business relationships and decisions among those organizations will be governed by automated systems: automated ordering and replenishment, automated billing, automated permission to act, automated action in general. Goods and services, traffic, communications, energy, money—all of these will flow around the network in massive volumes and at unprecedented speed. No humans need apply to run these networks, since they couldn’t keep up with the activity or make decisions rapidly enough to help.
If this seems like science fiction to you, I’d argue that there are already several sectors in which 4.0 arrangements are commonplace. The trouble is that they are not working all that well. Think, for example, of financial trading networks. As Michael Lewis documented in Flash Boys, everybody in that industry is interconnected, and almost all trading decisions are made by computer. The apparent presence of systematic bias in certain trading exchanges is not a comforting aspect of the 4.0 world. Nor is it reassuring that Lewis could not find people who fully understand the networked, automated trading system. Least comforting of all is the occurrence of unpredicted and largely unexplained catastrophic events in global finance, such as the 2010 “flash crash” and the most recent global financial crisis.
For another example, take the electrical grid in large countries like the U.S. Every electrical utility is networked to others through a series of regional ISOs (independent system operators). The supply and demand of electricity are shared, shed, and synchronized across the entire system through rapid and automated decisions. But wait—we still have power outages for some reason. In fact, the data on outages suggest that we have more today than we did when we slow, dim-witted humans managed the electrical grid.
For one final example, take our air transportation system. Airlines were among the earliest industries to apply analytics and automated decisions to their operations. A large airline’s route system is far too complex to be governed only by human beings, with thousands of airplanes landing and taking off in a carefully orchestrated ballet across continents. But an unexpected major snowstorm near a single hub city can still tie the entire system into knots. Of course, our air traffic control system is still a 1.0 artifact, so that could be part of the problem.
These dynamic, interconnected, automated systems constitute the 4.0 world. Such domains are scary because a relatively small problem can be amplified and propagated throughout the entire complex system. If you’ve ever played the “Beer Game” — a “system dynamics” simulation of a beer producer and its supply chain developed at MIT — you understand the problem on at least a small scale: modest shifts in consumer demand are amplified into wild swings in orders as they travel up the chain, the so-called bullwhip effect, as the sketch below illustrates.
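To see the amplification concretely, here is a minimal sketch in Python of a Beer-Game-style chain: four stages, each forecasting demand by exponential smoothing and ordering up to a target of forecast demand plus safety stock. It is a toy model of my own, not MIT’s actual game or any real replenishment system, and every parameter in it (the smoothing factor, the safety stock, the starting inventory, instant replenishment) is an illustrative assumption.

```python
# A minimal sketch of a Beer-Game-style supply chain, written to illustrate
# the amplification argument above; it is not MIT's actual simulation.
# Each stage forecasts demand by exponential smoothing and orders up to a
# target of forecast demand plus safety stock. All parameter values here
# (ALPHA, SAFETY, starting inventory, instant replenishment) are
# illustrative assumptions.

STAGES = ["retailer", "wholesaler", "distributor", "factory"]
ALPHA = 0.3    # exponential-smoothing factor for the demand forecast
SAFETY = 2.0   # desired safety stock, in weeks of forecast demand
WEEKS = 30

def simulate():
    forecast = {s: 4.0 for s in STAGES}    # everyone starts expecting 4 cases/week
    inventory = {s: 12.0 for s in STAGES}
    orders_placed = {s: [] for s in STAGES}

    for week in range(WEEKS):
        # Consumer demand: steady at 4 cases/week, then a permanent step to 8.
        demand = 4.0 if week < 5 else 8.0

        for stage in STAGES:
            # Ship the full demand; negative inventory stands in for a backlog.
            inventory[stage] -= demand

            # Update the demand forecast by exponential smoothing.
            forecast[stage] = ALPHA * demand + (1 - ALPHA) * forecast[stage]

            # Order up to forecast demand plus safety stock.
            target = forecast[stage] * (1 + SAFETY)
            order = max(0.0, target - inventory[stage])
            orders_placed[stage].append(order)
            inventory[stage] += order  # assume replenishment arrives immediately

            # This stage's order becomes the demand seen one stage upstream.
            demand = order

    return orders_placed

if __name__ == "__main__":
    for stage, orders in simulate().items():
        print(f"{stage:>11}: peak weekly order = {max(orders):5.1f} cases")
```

With these made-up numbers, a single step in consumer demand from four to eight cases a week produces peak weekly orders of roughly 12, 18, 31, and 56 cases as you move from the retailer up to the factory. Each stage is behaving sensibly on its own; the instability is a property of the network.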
Perhaps given its background in studying system dynamics, MIT has decided to address this set of problems and research opportunities (here’s a report on the decision to do so). The goal is to understand how such complex systems function and malfunction, and to bring the tools of data science and statistics, social network and behavior analysis, and complex engineered systems analysis to bear on them.
This huge set of problems, of course, cuts across traditional academic boundaries, so MIT is having some difficulty deciding what to call the initiative. Thus far the Institute has referred to it only as the “new entity,” and it has put Munther Dahleh, a professor affiliated with electrical engineering and computer science, engineering systems, and information and decision sciences, in charge. I expect some interesting and useful perspectives and insights to emerge from this new entity once it gets cranked up.
In the meantime, don’t be too eager to get into the world of analytics and automation 4.0. Most organizations have a tough enough time mastering 3.0—not to mention 1.0 and 2.0. But it may be useful to have a sense of what may be coming next to your industry and company. Let’s hope that we understand it better by the time you arrive there.
Originally published in WSJ’s CIO Journal