Many people and companies seem to think of “cognitive computing” as a separate area from analytics. Most large organizations today have significant analytical initiatives underway, but they think of the cognitive space as an exotic science project. One executive told me, “We have no desire to win Jeopardy,” an allusion, of course, to IBM Watson’s 2011 victory on the quiz show. But cognitive computing is not just about Watson, and it’s not an exotic science project.
In fact, I’d argue that cognitive computing is a straightforward extension of analytics work. It’s the logical next step for any organization that has been pursuing traditional analytics, i.e., analytical models driven by human hypotheses. Any organization that wishes to improve the speed and scale of its analytical activities should be exploring at least some cognitive capabilities now. Cognitive methods are a straightforward extension of previous analytical methods, and there are several reasons why they’re better for many applications.
Most cognitive methods are, in fact, based on statistical models. Your organization may be undertaking “cognitive” work without even knowing it. Perhaps, for example, you’re using some form of “machine learning,” which attempts to automatically improve the fit of models and “learns” its way to a better set of explanations or predictions. Machine learning often uses logistic regression, a statistical method that has been around since the 1930s. Automated fitting of models has been around only since about 1957, when Cornell researchers created the “perceptron.” That same invention was also the beginning of neural networks, which are the basis of the “deep learning” approaches used by many cognitive applications today. So all of these cognitive approaches have deep roots in statistical methods that are very familiar to analytical folks.
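To make the continuity concrete, here is a minimal, illustrative sketch of the perceptron learning rule mentioned above (the data and parameter values are invented for the example, and this is the textbook form of the rule, not Rosenblatt’s original implementation). The model simply nudges its weights toward each misclassified example until it “learns” a separating boundary — automated model fitting in a few lines:

```python
# Minimal sketch of the perceptron learning rule (illustrative example;
# the toy data and learning rate below are assumptions for demonstration).

def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Fit weights w and bias b so that sign(w.x + b) matches each +1/-1 label."""
    n = len(samples[0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            # Predict, then correct the weights only when the prediction is wrong.
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1
            if pred != y:
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1

# Toy data: points with large coordinates are labeled +1, small ones -1.
X = [(0.0, 0.0), (0.2, 0.3), (1.0, 1.0), (0.9, 0.8)]
y = [-1, -1, 1, 1]
w, b = train_perceptron(X, y)
```

The same learn-from-errors loop, scaled up to many layers of weights, is essentially what today’s deep learning systems do — which is why the leap from traditional statistical modeling to “cognitive” methods is smaller than it looks.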
Since cognitive tools are only a small step from traditional analytics, it’s not surprising that vendors are mixing the two. IBM, for example, is clearly blurring the line between analytics and Watson with “Watson Analytics.” The market leader in traditional analytics, SAS, offers a machine learning capability as well as event streaming for automated analytics. Tibco, a leader in both event streaming software and analytics, is increasingly focused on “streaming analytics” for real-time automated decision-making—what it calls “Fast Data.” For these vendors and others, “cognitive” and “analytics” are increasingly intertwined.
The other key benefit of cognitive technologies is that they can solve some problems that traditional analytics can’t. In the world of big data, for example, the data from sensors, social media, and online applications often flows and accumulates much faster than humans could possibly analyze or act on it. Without machine learning to create the models for such data, much of it couldn’t be analyzed at all.
The great challenge for human-centric analytics has always been that human decision-makers often don’t use the models and data provided to them. Researchers Martha Feldman and James March published an article as far back as 1981 arguing that managers often ask for information that they don’t use. They want to appear to be making analytical decisions, but are more comfortable with their intuition. Therefore it’s important to bypass human decision-makers with automated decisions when we know that data and analysis are critical to decision outcomes.
If your organization is interested in moving in a more cognitive direction and you’re already doing work with analytics, there are some easy steps to get started. Machine learning algorithms are available in the cloud on Amazon Web Services, Microsoft Azure, and the Google Cloud Platform. Google and Microsoft have released open-source versions of their machine learning tools (TensorFlow and Computational Network Toolkit, or CNTK, respectively). These tools facilitate exploration of “deep learning” neural network applications like speech and image recognition.
Assuming that your people could use some instruction in machine learning and neural networks, that’s not as much of a barrier as it might have been in the past. There are now many free or inexpensive online courses in these methods. Coursera and Udacity have commercial versions; MIT, Caltech, Stanford, and other schools have noncommercial courses online. And if you want a generalized overview of machine learning, I highly recommend Pedro Domingos’ book, The Master Algorithm.
The key step is to identify a problem that might benefit from a cognitive approach. Perhaps it’s a “knowledge bottleneck”—a situation that might benefit from the application of knowledge that has previously been inaccessible. Or perhaps it’s a situation with so much data that humans couldn’t possibly handle it. Then start your experimentation with cognitive technologies on that problem. The whole process doesn’t need to be exotic, and it doesn’t have to cost very much. But it does offer a lot of potential benefits.