
The Dilemma of Unexplainable Artificial Intelligence

By Bill Franks, Jul 13, 2017

Artificial intelligence has quickly become one of the hottest topics in analytics. For all its power and promise, however, the opacity of AI models threatens to limit AI’s impact in the short term. The difficulty of explaining how an AI process arrives at an answer has been a topic of much discussion; in fact, it came up in several talks in June at the O’Reilly Artificial Intelligence Conference in New York. There are situations where the lack of explainability matters, others where it doesn’t, and some promising work underway to address the issue.

AI Explainability From The Analytics Perspective

From a purely analytical perspective, not being able to explain an AI model isn’t always a problem. To me, the issue of explainability is very similar to the classic problem of multicollinearity within a regression model. I recall having the distinction between (1) prediction and (2) point estimation drilled into my head in graduate school.

If the main goal of a model is to understand which factors influence an outcome and to what extent, then multicollinearity is devastating. Variables that are inter-correlated will have very unstable individual parameter estimates even when the model’s predictions are consistent and accurate. Conceptually, the correlated variables are almost randomly assigned importance, so fitting the model on one subset of data can lead to very different parameter estimates than fitting it on another. Obviously, this is not good, and we spent a lot of time learning how to handle such data so that we could get an accurate answer and also explain it. The point is that multicollinearity makes it very hard to pinpoint the drivers of a model, even when the model is extremely accurate.
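To make the instability concrete, here is a minimal simulated sketch (the data and the scikit-learn usage are purely illustrative, not drawn from the article): two nearly identical predictors receive very different individual coefficients when the same regression is fit on different halves of the data, even though the combined predictions barely change.

import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 400
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(scale=0.01, size=n)      # x2 is nearly a copy of x1
y = 3.0 * x1 + 3.0 * x2 + rng.normal(size=n)  # the true effect is split evenly
X = np.column_stack([x1, x2])

# Fit the same model on two halves of the data.
model_a = LinearRegression().fit(X[:200], y[:200])
model_b = LinearRegression().fit(X[200:], y[200:])
print("Subset A coefficients:", model_a.coef_)  # the individual estimates are unstable
print("Subset B coefficients:", model_b.coef_)  # and typically differ a lot between subsets

# Yet the two fits give nearly identical predictions on new data.
x_new = rng.normal(size=5)
X_new = np.column_stack([x_new, x_new])
print("Subset A predictions:", model_a.predict(X_new))
print("Subset B predictions:", model_b.predict(X_new))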

This is very much like artificial intelligence. You may have an AI process that is performing amazingly well, yet accurately teasing out what factors are driving that performance is difficult. As I’ll discuss later, there is work being done to help address this. For now, though, AI models leave one in much the same spot as multicollinearity does: a great set of predictions whose root drivers can’t be well explained or specified.

Notice, however, that this issue only matters if you need to explain how the answer is derived. Multicollinearity is not a problem if all you care about is getting good predictions. If the individual parameters don’t matter, then model away. The same is true with AI. If you only care about predicting who will get a disease, or which image is a cat, or who will respond to a coupon, then the opacity of AI is irrelevant. It is important, therefore, to determine up front if your situation can accept an opaque prediction or not.

What’s Being Done To Make AI More Explainable?

As one would expect, there are a lot of smart people working to develop ways to determine what’s really driving an AI model under the hood. One of the most interesting examples I’ve come across is a process known as Local Interpretable Model-Agnostic Explanations (LIME). What LIME does is make slight changes to the input data and observe the impact on the predictions. Repeat this many times and you get a good feel for what is really driving the model. An image from the LIME article illustrates the idea.

In this case, you can see that the upper face and eyeball region at the top of the image has a strong influence on the model’s determination that this is a frog. Some of the other information in the picture actually causes the model to do worse. For instance, the heart being held in its hand certainly wouldn’t be typical of a frog.

While this example focuses on image recognition, a very similar process can be used with a problem based on classic structured data. For example, the difference in predicted response probability for a customer can be examined as the input variables are perturbed in different combinations. Certainly, this isn’t quite as satisfying as a classic parameter estimate, but it does take AI a long way toward being understood.
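To make that concrete, here is a minimal sketch of the perturbation idea on tabular data. The simulated “customer” data, the black-box model, and all parameter choices below are illustrative stand-ins, and the probe captures the spirit of LIME rather than the actual LIME implementation (which samples and selects features more carefully): perturb one customer’s inputs many times in different combinations, record the black box’s predicted response probabilities, and fit a simple distance-weighted linear surrogate whose coefficients show which inputs are driving the prediction locally.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import Ridge

# Stand-ins for a real customer dataset and an opaque response model.
X, y = make_classification(n_samples=1000, n_features=6, random_state=0)
black_box = GradientBoostingClassifier(random_state=0).fit(X, y)

customer = X[0]                                   # the single prediction to explain
rng = np.random.default_rng(0)
noise = rng.normal(scale=0.5, size=(2000, X.shape[1]))
perturbed = customer + noise * X.std(axis=0)      # slight changes, many combinations
probs = black_box.predict_proba(perturbed)[:, 1]  # black-box response probabilities

# Weight perturbed points by closeness to the original customer, then fit a
# simple linear surrogate; its coefficients serve as the local explanation.
weights = np.exp(-np.linalg.norm(noise, axis=1) ** 2)
surrogate = Ridge(alpha=1.0).fit(perturbed, probs, sample_weight=weights)
print("Local feature influences:", np.round(surrogate.coef_, 3))

In practice, the open-source lime package implements this kind of procedure in a more refined form for tabular, text, and image models.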

Note also one very important point about LIME: the “model-agnostic” component. LIME really has nothing directly to do with AI and doesn’t know what AI is or does. It is simply a way to take a predictive algorithm and test how changes to the input data change the predictions. Therefore, it can be applied to any situation where there is a need to add transparency to an opaque process. It can even be used on models where firm parameter estimates do exist, in order to validate how well it works.
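As a small check of that last point, under the same illustrative setup as above one can run the perturbation-and-surrogate probe against a plain logistic regression, where firm parameter estimates do exist, and see whether the local explanation lines up with the model’s own coefficients.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression, Ridge

X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
glass_box = LogisticRegression().fit(X, y)        # a model we CAN explain directly

record = X[0]
rng = np.random.default_rng(1)
noise = rng.normal(scale=0.5, size=(2000, X.shape[1]))
perturbed = record + noise * X.std(axis=0)
probs = glass_box.predict_proba(perturbed)[:, 1]
weights = np.exp(-np.linalg.norm(noise, axis=1) ** 2)
surrogate = Ridge(alpha=1.0).fit(perturbed, probs, sample_weight=weights)

# The surrogate's local influences should broadly agree in sign and relative
# size with the logistic regression's own fitted coefficients.
print("Logistic coefficients:", np.round(glass_box.coef_[0], 3))
print("Surrogate influences: ", np.round(surrogate.coef_, 3))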

The Problems That Won’t Go Away

No matter how neat LIME might seem, it isn’t good enough to pass muster with laws and regulations. In many cases, such as credit scoring and clinical trials, amazing predictions mean nothing in the absence of a clear explanation of how the predictions are achieved. As a result, we’ll have to examine our laws and ethical guidelines to determine how they might be altered to allow AI to be utilized effectively while still keeping the proper checks and balances in place. We are certain to have AI capable of solving very valuable problems sooner than we’ll be allowed to actually put those models to use. It will be necessary to find the right balance of laws, ethics, and analytics power so we can make progress. But that’s a topic for another blog!

For now, if you’re considering using AI as it exists today, make sure that what you really care about is a solid model that predicts well. If you actually have to explain how the model works and what drives it, you should stick to more traditional methods for the foreseeable future.

Originally published by the International Institute for Analytics

About the author


Bill Franks is IIA’s Chief Analytics Officer, where he provides perspective on trends in the analytics and big data space and helps clients understand how IIA can support their efforts and improve analytics performance. His focus is on translating complex analytics into terms that business users can understand and working with organizations to implement their analytics effectively. His work has spanned many industries for companies ranging from Fortune 100 companies to small non-profits.

Franks is the author of the book Taming The Big Data Tidal Wave (John Wiley & Sons, Inc., April 2012). In the book, he applies his two decades of experience working with clients on large-scale analytics initiatives to outline what it takes to succeed in today’s world of big data and analytics. Franks’ second book, The Analytics Revolution (John Wiley & Sons, Inc., September 2014), lays out how to move beyond using analytics to find important insights in data (both big and small) and into operationalizing those insights at scale to truly impact a business. He is an active speaker who has presented at dozens of events in recent years. His blog, Analytics Matters, addresses the transformation required to make analytics a core component of business decisions.

Franks earned a Bachelor’s degree in Applied Statistics from Virginia Tech and a Master’s degree in Applied Statistics from North Carolina State University. More information is available at www.bill-franks.com.

