Organizations like Singularity University are focused on what they call “exponential technologies,” for which “the power and/or speed doubles each year, and/or the cost drops by half.” They classify AI as exponential, but alas it is not. Ray Kurzweil, a co-founder of Singularity University, claims that the “singularity” for AI—the time when machines can do every intellectual task better than humans—will come in 2029. But virtually no other AI expert is that optimistic (or pessimistic), as revealed in Martin Ford’s new collection of interviews with them, Architects of Intelligence.
We’ve been pursuing AI for over 60 years (the famous Dartmouth conference that kicked it off took place in 1956), and if it were an exponential technology it would have already conquered the world. It’s certainly getting better, but at a linear rate. We have more algorithms now, and of course lots more data. We have much more powerful processors on which to crunch the data. But artificial intelligence on a broad scale has been a stubbornly difficult objective. I argue in my recent book that it will be revolutionary over the long term, but only evolutionary in the short run.
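To make the arithmetic concrete, here is a minimal back-of-the-envelope sketch in Python. The starting value of 1.0 and the one-unit-per-year linear increment are arbitrary illustrative assumptions, not measurements of AI capability; the point is only the gap between the two growth patterns. A quantity that doubled every year from 1956 onward would have grown by a factor of more than 10^18 by now, while a linearly improving one has merely added sixty-odd increments.

```python
# Illustrative only: a hypothetical "exponential" quantity (doubling each year)
# versus a linear one over the ~60 years since the 1956 Dartmouth conference.
# The starting value and the linear increment are arbitrary assumptions.

years = 62  # roughly 1956 through 2018

exponential = [1.0 * (2 ** t) for t in range(years + 1)]  # doubles every year
linear = [1.0 + 1.0 * t for t in range(years + 1)]        # adds a fixed amount every year

for t in (10, 30, 62):
    print(f"year {t}: exponential = {exponential[t]:.3g}, linear = {linear[t]:.3g}")

# year 10: exponential = 1.02e+03, linear = 11
# year 30: exponential = 1.07e+09, linear = 31
# year 62: exponential = 4.61e+18, linear = 63
```

On that kind of doubling curve, progress since Dartmouth would dwarf anything we have actually seen; the steady, additive pattern is a much better description of the field's history.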
The linear, evolutionary rate of AI improvement can also be seen with regard to particular objectives like autonomous vehicles. Manuela Veloso, head of the Machine Learning Department at Carnegie Mellon (now on leave from that job to lead AI research at J.P. Morgan Chase), describes its progress in terms of her own career. “When I came to Carnegie Mellon in the mid-1980s,” she said at a recent conference at MIT, “everyone said that autonomous vehicles were just around the corner. Thirty years later they are still just around the corner.” As Christopher Mims, a technology columnist for the Wall Street Journal, put it in a recent column, “Hardly a week goes by without fresh signposts that our self-driving future is just around the corner. Only it’s probably not. It will likely take decades to come to fruition.”
Many managers seem to expect exponential progress from AI, and exponential results as well. In a 2017 Deloitte survey of “cognitive-aware” US executives, 76% of respondents said they believed that cognitive technologies would “substantially transform” their companies within the next three years, and 57% said their industries would be transformed in that time. That’s a really short period of time! I see zero evidence that company or industry transformation is happening that quickly. Indeed, managers are starting to realize this: in Deloitte’s 2018 survey, 20% fewer respondents expected their companies and their industries to be transformed within three years. I’d say they are still too optimistic, however. Major transformation of companies and industries from the Internet, the last great transformational technology, typically required a decade or more.
The IT press, after years of mucho AI hype, is also beginning to describe the reality of the technology. Ed Burns of TechTarget, for example, recently wrote two articles arguing that many companies are beginning to realize that AI may well be a “game changer,” but that it’s going to take quite a while to change the game. In the second of those pieces, he wrote:
Thanks to all the hype that has built up around AI functionality in the last couple years, some enterprises may expect quick and substantial gains from the technology. But that's not likely to be the case. While we have the foundational elements for AI success, building effective tools and using them in ways that move the dial on real business problems can be a long process.
There are several reasons why AI is improving at a linear rate rather than an exponential one. AI supports tasks rather than entire jobs or processes, so it will take a lot of projects to make much of a difference in organizational performance. Most machine learning applications require substantial amounts of labeled training data to fit models well, and that data isn’t easily available to most organizations (the sketch below illustrates the point). Some AI technologies, particularly language-oriented ones, require a heavy investment of time and resources to build a “knowledge graph.” Some AI technologies, like deep learning, are still at a relatively early stage of development. And, of course, people who know AI methods and tools are in short supply. It all adds up to slow but meaningful progress.
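As one small illustration of the training-data point, the sketch below traces a learning curve: held-out accuracy versus the number of labeled examples. The dataset (scikit-learn's bundled digits set) and the model (a plain logistic regression) are arbitrary stand-ins chosen only so the example runs out of the box; the general shape, steep early gains followed by diminishing returns that still demand a lot of data, is what makes data availability such a bottleneck in practice.

```python
# Illustrative learning curve: how held-out accuracy grows with the number of
# labeled training examples. Dataset and model are arbitrary choices for the demo.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import learning_curve

X, y = load_digits(return_X_y=True)
model = LogisticRegression(max_iter=2000)

# Evaluate the model at 5 training-set sizes, from 10% to 100% of the data,
# using 5-fold cross-validation at each size.
train_sizes, _, test_scores = learning_curve(
    model, X, y, cv=5, train_sizes=np.linspace(0.1, 1.0, 5)
)

for n, scores in zip(train_sizes, test_scores):
    print(f"{n:5d} labeled examples -> mean held-out accuracy {scores.mean():.3f}")
```

A real enterprise problem rarely comes with a clean, pre-labeled dataset like this one, which is exactly why assembling the data often takes longer than building the model.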
What are the implications of AI’s linear improvement? Will the mismatch between the hype and the actual pace of improvement yield broad disenchantment? Though some expect another “AI winter,” it seems to me that there is enough adoption and development in the space that it won’t happen. Some of the less dedicated companies will probably drop or cut back on their AI investments, which will put them further behind in the race to get value from it. Gartner’s 2018 hype cycle predictions (a useful concept, but highly subjective in execution) correctly disaggregate AI into multiple components. They view autonomous vehicles and intelligent assistants as further along in the cycle than deep learning, which is supposedly at peak hype now. The “trough of disillusionment” for autonomous vehicles may be particularly deep given the enormous hype about them in the press and the heavy spending by companies and investors. I have certainly noticed more negative articles about autonomous vehicles lately than at any time in the past few years.
But slow and steady wins the race, as they say. Some companies, like Procter & Gamble and American Express, started exploring AI many years ago and never really dropped it when the last period of broad retrenchment came. Certainly there is no evidence at all that aggressive tech adopters like Google and Amazon are easing up on AI. I have always liked Jeff Bezos’s 2017 comment that “much of what we do with machine learning happens beneath the surface… quietly but meaningfully improving core operations.” That’s the right formula to succeed with AI: linear progress + steady investment = lots of value.