
The Achilles Heel of Artificial Intelligence


In a previous blog, I discussed how Artificial Intelligence (AI) today merely has specific intelligence as opposed to generalized intelligence. This means that an AI process can appear quite intelligent within very specific bounds yet fall apart if the context in which the process was built is changed. In this blog, I will discuss why adding an awareness of context into an AI process – and dealing with that context – may prove to be the hardest part of succeeding with AI. In fact, handling context may be the Achilles’ heel of AI! This discussion will expand upon some points made within a recent IIA research brief.

THAT’S IMPRESSIVE! … OR IS IT?

Consider a picture like the one below.

It is possible to build AI processes to recognize a wide range of things about the image (a sketch of how such facts are typically produced follows the list). Examples include:

  • This is a tennis game

  • There are two females playing

  • They are wearing tennis skirts

  • One of them has on a cap or visor

  • One player just hit the tennis ball
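
Under the hood, a description like this is typically assembled from several narrow models, one per fact. The sketch below is a minimal illustration of that architecture; every function name is hypothetical, and the hard-coded scores are stand-ins for real vision models.

```python
# A sketch of the "one narrow model per fact" architecture. Each function
# below is a hypothetical stand-in for a real vision model; the hard-coded
# scores mimic confident outputs.

def scene_is_tennis(image) -> float:        return 0.99  # scene classifier
def two_female_players(image) -> float:     return 0.95  # person detector + attributes
def wearing_tennis_skirts(image) -> float:  return 0.93  # attribute model
def has_cap_or_visor(image) -> float:       return 0.91  # attribute model
def just_hit_ball(image) -> float:          return 0.97  # action recognizer

def describe(image) -> dict[str, float]:
    """Stitch the narrow models' answers into one description."""
    return {
        "This is a tennis game": scene_is_tennis(image),
        "There are two females playing": two_female_players(image),
        "They are wearing tennis skirts": wearing_tennis_skirts(image),
        "One of them has on a cap or visor": has_cap_or_visor(image),
        "One player just hit the tennis ball": just_hit_ball(image),
    }

print(describe("tennis_frame.jpg"))  # placeholder path standing in for image data
```

Each model answers only its own question, which is exactly what sets up the problem below.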

Seeing all of those facts identified automatically in an image is certainly impressive, and I saw a gentleman at a conference discuss a scenario just like this one. However, he also discussed examples where what at first seemed impressive did not hold up under scrutiny.

Imagine that one process uses image analysis to identify whether someone just hit a tennis ball (this is what produced the final bullet above), while another process identifies whether someone is about to hit one. Given the picture above, if the system is asked to determine whether the ball was just hit, it will come back with a very high-confidence “yes”. However, if the system is asked whether the ball is about to be hit, it will also come back with a very high-confidence “yes”.
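
To make this concrete, here is a minimal sketch of the failure, assuming two hypothetical, independently trained classifiers. Nothing in the system connects their answers, so both can confidently say “yes” at once.

```python
# Two independently trained yes/no models (hypothetical stand-ins). A frozen
# mid-swing frame genuinely looks like both cases, and neither model knows
# the other's question even exists.

def p_just_hit(image) -> float:
    """Stand-in for a model scoring 'the ball was just hit'."""
    return 0.97

def p_about_to_hit(image) -> float:
    """Stand-in for a model scoring 'the ball is about to be hit'."""
    return 0.96

image = "tennis_frame.jpg"  # placeholder for real image data
print(f"Just hit?     yes ({p_just_hit(image):.0%} confident)")
print(f"About to hit? yes ({p_about_to_hit(image):.0%} confident)")
# Each answer is reasonable on its own; together they are impossible.
```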

Clearly, only one of those statements can be true! If asked only one of the questions, the AI seems quite impressive. If asked both, a major weakness is exposed by the impossibly inconsistent answers. In reality, of course, the player either a) just hit the ball, or b) is just about to hit the ball. She can’t be doing both at the same time.

WHAT WENT WRONG?

Keep in mind that humans can make the exact same mistake. If I asked you, with no context, “Did she just hit the ball?”, you might well respond “yes” too. Upon further reflection, however, you would realize that she is clearly either 1) about to hit the ball, or 2) has just hit the ball. It isn’t possible to tell from the information in the picture alone which answer is correct. We would need to see a few frames over time to tell whether the ball is approaching or leaving the racquet. We need more context.
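
What would that extra context look like? One plausible version, sketched below with a hypothetical ball-tracking helper and hard-coded distances, is to compare two frames: if the ball is closing on the racquet, the hit is still coming; if it is receding, the hit just happened.

```python
# A minimal sketch of resolving the ambiguity with temporal context.
# The distance values are stand-ins for a real ball-tracking model.

def ball_racquet_distance(frame) -> float:
    """Stand-in for a tracker measuring ball-to-racquet distance in pixels."""
    return frame["ball_distance_px"]

def resolve(frame_t0, frame_t1) -> str:
    d0 = ball_racquet_distance(frame_t0)
    d1 = ball_racquet_distance(frame_t1)
    # Ball getting closer -> hit still coming; getting farther -> just happened.
    return "about to hit the ball" if d1 < d0 else "just hit the ball"

# Example: the ball moved from 40px away to 12px away between frames.
print(resolve({"ball_distance_px": 40}, {"ball_distance_px": 12}))
# -> "about to hit the ball"
```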

Humans, with our generalized intelligence, may be smart enough to recognize this with some thought, but AI processes are not able to do it. To recognize such a situation, an AI process would need to be trained to identify when more information is needed to get the right answer. But, think about the complexity of adding this context into those processes (a rough sketch of one partial tactic follows the list below). How would you train an AI process to:

  • Recognize when it needs more information?

  • Recognize when the context of a new example it is being asked to score differs from the training data’s context?

  • Suggest what information might be helpful to resolve the ambiguity?
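
There is no complete answer to these questions today, but one partial, commonly used tactic gives a feel for what is involved: make the process abstain when its own confidence is low or when the input looks unlike its training data. In the sketch below, both thresholds are illustrative assumptions, and note that it addresses the first two bullets at best; suggesting what information would resolve the ambiguity remains far harder.

```python
import numpy as np

# Illustrative thresholds, not tuned values.
CONFIDENCE_FLOOR = 0.80  # below this, the model should ask for help
NOVELTY_CEILING = 2.5    # beyond this distance, the input looks out-of-context

def needs_more_information(prob_yes: float,
                           embedding: np.ndarray,
                           training_embeddings: np.ndarray) -> bool:
    """Flag inputs the process should not answer on its own."""
    # Case 1: the model itself is unsure about its yes/no answer.
    low_confidence = max(prob_yes, 1.0 - prob_yes) < CONFIDENCE_FLOOR
    # Case 2: the input sits far from anything seen in training
    # (a crude nearest-neighbor novelty check in embedding space).
    nearest = np.linalg.norm(training_embeddings - embedding, axis=1).min()
    out_of_context = nearest > NOVELTY_CEILING
    return low_confidence or out_of_context

# Example: a confident answer on an unfamiliar input still gets flagged.
train = np.random.default_rng(0).normal(size=(100, 8))
novel = np.full(8, 5.0)  # far from the training cluster
print(needs_more_information(0.97, novel, train))  # -> True
```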

To make matters worse, in our tennis example the issue with ambiguous context is only glaringly obvious once the two conflicting questions about hitting the ball are asked. It is easy to miss the problem if only one question is asked. How do you teach an AI process to identify when conflicting questions exist (even if they have not been asked) and what to do in each case?
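
One naive mitigation, sketched below with hypothetical question names, is to hand-maintain a list of mutually exclusive questions and check every answer set against it, whether or not both questions were asked. The catch is exactly the problem described above: a human has to think of each conflicting pair in advance.

```python
# A hand-maintained consistency check (question names are hypothetical).
# It only catches conflicts someone already thought to write down.

MUTUALLY_EXCLUSIVE = [
    ("just_hit_ball", "about_to_hit_ball"),
]

def find_contradictions(answers: dict[str, float],
                        threshold: float = 0.9) -> list[str]:
    """Return conflicts among high-confidence answers."""
    conflicts = []
    for a, b in MUTUALLY_EXCLUSIVE:
        if answers.get(a, 0.0) > threshold and answers.get(b, 0.0) > threshold:
            conflicts.append(f"'{a}' and '{b}' cannot both be true")
    return conflicts

answers = {"just_hit_ball": 0.97, "about_to_hit_ball": 0.96}
print(find_contradictions(answers))
# -> ["'just_hit_ball' and 'about_to_hit_ball' cannot both be true"]
```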

THE HUMAN TOUCH

We could devise a range of similarly ambiguous questions tied to this same image. Accounting for every eventuality explicitly is almost impossible. That’s why humanity’s general intelligence is so powerful. We can think critically, derive new questions, and piece together disparate information to recognize and account for context within these situations. Adding such an awareness of context into an AI process today is incredibly difficult if not impossible.

The critical takeaway here is that AI can appear to be very, very smart within the context in which it was trained. But, it is only smart within that exact context. In the tennis example, we end up with answers to different questions that can’t all be true given the larger context, but that are quite reasonably true within the more limited context of a single question at a time.

To account for such situations, it would be necessary to train an AI process within the larger context and also to train it to identify areas where it has to ask for more information. That’s a tall order. Barring that approach, care must be taken by humans to only use the AI process within the bounds of the context in which it was trained. The onus is upon us to handle the contextual issues.
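
In practice, that human responsibility often ends up encoded as a guardrail around the model: document the trained context explicitly and refuse anything outside it. Here is a minimal sketch, with an illustrative scope definition and a stand-in for the real model.

```python
# A human-authored scope check wrapped around the model. The scope contents
# are illustrative; the point is that people, not the model, define them.

TRAINED_SCOPE = {
    "sport": "tennis",
    "questions": {"is_tennis_game", "player_count", "wearing_visor"},
}

def answer(question: str, metadata: dict) -> float:
    if metadata.get("sport") != TRAINED_SCOPE["sport"]:
        raise ValueError("Out of context: the model was only trained on tennis images")
    if question not in TRAINED_SCOPE["questions"]:
        raise ValueError(f"Out of scope: '{question}' was never a training question")
    return run_model(question, metadata)  # hypothetical call to the real model

def run_model(question: str, metadata: dict) -> float:
    return 0.95  # stand-in for the actual AI process

print(answer("is_tennis_game", {"sport": "tennis"}))   # -> 0.95
# answer("is_baseball_game", {"sport": "baseball"})    # raises ValueError
```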

THE PUZZLE WE’RE LEFT WITH

The reality is that it can be hard even for us humans to be certain we’re keeping the use of an AI process (or any other type of analytics process) within the correct context. Getting algorithms to handle context automatically is much harder than getting them to do what they were trained to do. Hence my assertions that context is an Achilles’ heel of AI and that the onus today is on humans to handle contextual issues.

If you take the time to really consider what it would take to teach context to an AI process, you will quickly realize that the problem is a big, nasty one. As is true with all types of analytics, everything works fine as long as the inputs are as expected and within the expected context. However, as soon as deviations occur, all bets are off. We must be as careful as ever to identify and/or avoid those deviations from the expected context as we continue to roll out AI to new areas. We can’t yet depend on AI to do it for us.