I’ve written in the past about how fuzzy the line between “good” and “evil” data science and artificial intelligence (AI) can be. AI raises ethical issues that are neither clear-cut nor easy to navigate. One of the popular ways to mitigate the risks of AI is to pursue a “human in the loop” strategy, where people retain ultimate authority over major decisions. In this blog, I’ll explain why that approach may be doomed to failure as a primary tool for stopping “evil” AI from being deployed.
Human In The Loop Sounds Great!
The concept of human in the loop AI is straightforward. The idea is that while an AI process may work its magic to determine what actions make sense, a human will still make the final call. This concept has tremendous appeal in high-impact situations. For example, an AI algorithm might flag what it thinks is cancer on a medical image, but a human doctor will still confirm the diagnosis before taking a patient into surgery. Similarly, an AI algorithm might make an initial estimate of automobile or home damage to get a customer an insurance check quickly, but a human adjuster will still finalize the payment amount.
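To make the pattern concrete, here is a minimal sketch in Python of how such a gate might be wired up. The function names, the suspicion score, and the 0.5 threshold are illustrative assumptions rather than any real product’s API; the point is simply that the model proposes and a person decides.

```python
# Minimal human-in-the-loop sketch; names and thresholds are illustrative.

def model_suspicion_score(image) -> float:
    """Stand-in for an AI model that returns a cancer suspicion score (0 to 1)."""
    return 0.87  # placeholder value; real inference would happen here

def human_review(image, score: float) -> bool:
    """Stand-in for a doctor's review; returns True only if the doctor concurs."""
    answer = input(f"Model suspicion score is {score:.2f}. Concur with diagnosis? (y/n) ")
    return answer.strip().lower() == "y"

def decide(image) -> str:
    score = model_suspicion_score(image)
    if score < 0.5:
        return "no action"                 # model sees nothing suspicious
    if human_review(image, score):         # human retains final authority
        return "schedule treatment"
    return "order additional tests"
```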
Very few readers would question the wisdom of including a human in the situations above and other similar situations. However, this approach doesn’t scale to many of AI’s real-world applications, and that lack of scalability can force our hand to turn decisions over to the AI process completely. Let’s see how.
Can It Scale?
Human in the loop procedures work great when there are a “manageable” number of decisions to be made in a “reasonable” amount of time. Both previously mentioned examples fall into that category. After all, even a busy doctor only needs to review so many medical images on a given day. And, the decision to pursue cancer treatment (or not) is time sensitive at the day, week, or month level and not at the millisecond, second, or minute level.
The problem arises when many decisions must be made, and made very quickly. In such situations it isn’t possible to keep a human in the loop. Consider autonomous vehicles. An autonomous vehicle must ingest and analyze masses of data in real time to decide whether to accelerate, brake, or turn. It just isn’t possible to have a human double-check and approve each and every decision. In such settings, human in the loop is simply not realistic.
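A rough back-of-the-envelope calculation shows why. The numbers below are assumptions for illustration only, not measurements from any particular vehicle, but they capture the mismatch between machine decision rates and human review times:

```python
# Illustrative arithmetic only; both rates below are assumed values.
decisions_per_second = 50      # assumed control-loop decision rate for one vehicle
human_review_seconds = 0.5     # assumed time for a person to review one decision

reviewers_needed = decisions_per_second * human_review_seconds
print(f"Full-time reviewers needed per vehicle just to keep up: {reviewers_needed:.0f}")
# Roughly 25 reviewers per vehicle, each adding half a second of latency --
# far too slow when a braking decision is needed within milliseconds.
```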
Note that the issue of scale causing trouble isn’t new or unique to AI. My book The Analytics Revolution was centered on the challenges of creating and deploying fully automated analytical decision processes, albeit in a pre-AI context. The same issues and challenges apply squarely to AI, however. Luckily, there are many situations where even fully automated AI decisions don’t pose much of a threat. For example, when serving up offers on a website, the cost of mistakes is low and a human in the loop isn’t really needed. The major risks arise when decisions are simultaneously high-impact, high-speed, and high-volume, such as in an autonomous vehicle setting.
What About The Ethical Considerations?
One topic discussed in 97 Things About Ethics Everyone In Data Science Should Know was lethal autonomous weapons systems (LAWS). Most people soundly oppose the deployment of LAWS because the concept aligns so closely with the idea of Terminator robots killing people at will. Of course, that is a possible outcome if we aren’t careful about deploying LAWS technologies. In today’s weapons systems, algorithms do intensive analysis of potential targets and make a recommendation. A human must still give the final approval to use lethal force, however.
On the surface that approach sounds great. However, what if one country deploys a swarm of deadly drones set to target and kill civilians rapidly and automatically? A country attempting to defend against those drones could not succeed if each defensive shot had to be manually approved, because swift action against the enemy drones would be necessary.
In effect, we could see an arms race of sorts where we’re forced to deploy LAWS technologies making heavy use of AI and other analytics whether we want to or not. Even if everyone could agree that it wouldn’t be ethical to be the first to deploy such technologies, do the ethics change if you’re deploying LAWS in an effort to defend against someone else’s? There are no easy answers, and it is certainly an area that will be controversial regardless of the direction taken.
Where Does This Leave Us?
At first, keeping humans in the loop sounds like a terrific safety check for AI. In reality, it won’t scale to support the real-time, high-volume processes that are already being created. Rather than having humans in the loop for every decision, we need to pivot to ongoing human monitoring of overall AI process performance, along with kill switches that can be thrown if trouble arises. As a result, those of us in the industry need to start thinking through the alternatives to human in the loop processes and how best to implement them safely and ethically.
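A sketch of what that monitoring pattern might look like is below. Everything here is an assumption for illustration: the metric, the threshold, and the polling interval would all depend on the process being governed.

```python
# Illustrative "human on the loop" monitoring with a kill switch.
# The error-rate metric, threshold, and polling interval are assumed values.
import time

ERROR_RATE_THRESHOLD = 0.05   # assumed maximum acceptable error rate
kill_switch_engaged = False

def recent_error_rate() -> float:
    """Stand-in for pulling a rolling error rate from production monitoring."""
    return 0.02  # placeholder value

def alert_operators() -> None:
    print("Kill switch thrown: automated decisions paused pending human review.")

def monitor(poll_seconds: float = 60.0) -> None:
    """Humans watch aggregate performance; individual decisions stay automated."""
    global kill_switch_engaged
    while not kill_switch_engaged:
        if recent_error_rate() > ERROR_RATE_THRESHOLD:
            kill_switch_engaged = True   # halt automated decisions for review
            alert_operators()
        else:
            time.sleep(poll_seconds)
```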
The most important component of any approach will be a detailed cost/benefit analysis that weighs the benefits of the automated decision process against the costs of the inevitable mistakes. Those mistakes can be very costly in cases like autonomous vehicles or weapons systems, including costing people their lives. However, as with other public policies, we must be rational and aim for an acceptably low risk while recognizing that it is impossible to achieve zero risk. Realistically, relying on human in the loop as a safety net won’t be enough, so we had better start planning around that reality.
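One way to frame such an analysis is as a simple expected-value calculation. The numbers below are entirely made up for illustration; the point is that the error rate and the cost per mistake have to be weighed explicitly against the benefits.

```python
# Toy expected-value framing of the cost/benefit trade-off.
# Every number here is an assumption for illustration, not a real estimate.
decisions_per_year = 1_000_000
benefit_per_decision = 2.00        # assumed average value of an automated decision
error_rate = 0.0002                # assumed rate of mistakes
cost_per_error = 5_000.00          # assumed average cost of one mistake

expected_benefit = decisions_per_year * benefit_per_decision
expected_error_cost = decisions_per_year * error_rate * cost_per_error
net_value = expected_benefit - expected_error_cost

print(f"Expected benefit:    ${expected_benefit:,.0f}")
print(f"Expected error cost: ${expected_error_cost:,.0f}")
print(f"Net value:           ${net_value:,.0f}")
# Whether that net value clears an "acceptably low risk" bar is a policy
# judgment, not something the arithmetic alone can settle.
```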
Originally published by the International Institute for Analytics