Using Artificial Intelligence to Facilitate Fraud

Any new tool or technology can be put to use for good purposes or, unfortunately, for harmful ones. Artificial intelligence is no different. Amid the rapid progress occurring in the AI space, most of the attention has been paid to the good uses of AI. However, it is inevitable that those with nefarious intent are also studying AI successes with an eye toward twisting them into tools for their less-than-honorable goals.

IS THAT VIDEO ACTUALLY REAL?

Most people today take for granted that a video of someone saying something is proof that they said it. However, researchers have built some impressive AI processes that take historical video and audio of a speaker as input. The process then not only pieces together audio that sounds authentic, but also creates a very realistic fake video of the person speaking the made-up words.
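For the technically curious, here is a minimal sketch of the core idea: a sequence model that learns to map per-frame audio features to mouth-shape parameters, which a separate rendering step would then composite onto footage of the target. This is written in Python with PyTorch; the feature counts, model shape, and random stand-in data are all hypothetical placeholders rather than any researcher's actual pipeline.

```python
# Conceptual sketch: learn a mapping from audio features to mouth-shape
# parameters, the core step in audio-driven "talking head" synthesis.
# All dimensions and data here are toy placeholders, not a real pipeline.
import torch
import torch.nn as nn

class AudioToMouth(nn.Module):
    def __init__(self, n_audio_feats=13, n_mouth_params=20, hidden=128):
        super().__init__()
        # LSTM consumes a sequence of per-frame audio features (e.g. MFCCs)
        self.rnn = nn.LSTM(n_audio_feats, hidden, batch_first=True)
        # Linear head predicts mouth-shape parameters for each video frame
        self.head = nn.Linear(hidden, n_mouth_params)

    def forward(self, audio_seq):
        out, _ = self.rnn(audio_seq)
        return self.head(out)

model = AudioToMouth()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Toy stand-in for (audio features, mouth shapes) pairs that would be
# extracted from hours of real footage of the target speaker.
audio = torch.randn(8, 100, 13)   # batch, frames, audio features
mouths = torch.randn(8, 100, 20)  # batch, frames, mouth parameters

for step in range(100):
    optimizer.zero_grad()
    pred = model(audio)
    loss = loss_fn(pred, mouths)
    loss.backward()
    optimizer.step()

# At inference time, new audio (real or synthesized) drives predicted
# mouth shapes, which a separate rendering step composites into video.
```

A real system layers much more on top, including texture synthesis and video compositing, but the lesson stands: given enough footage of a person, the mapping from sound to a moving face is learnable.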

Check out this faked video of former President Obama. Then, check out this video that fakes words from Presidents Obama, Bush, and Trump. Here is a video you can watch in which former President Obama appears to provide an endorsement. Who is he endorsing in the fake video? The company that created a product that can mimic people’s audio and video through AI model training! The company is quite clear that the video is fake and that it is meant as a demonstration. They are not scheming, but simply showing off their product. However, someone will certainly take such capabilities and apply them in ways they should not.

As realistic as the examples in the previous links are today, such counterfeits will only get better. In today’s highly charged social media and political environment, it isn’t hard to imagine someone releasing a “video” that “proves” a candidate, celebrity, or other high-value target said some truly horrible things. This could take “fake news” to a whole new level. Overnight, that person could have their name and career destroyed, because the public isn’t conditioned to question video evidence. After all, outside of coerced statements, video evidence has historically been almost completely trustworthy. Even if the video can later be proven a counterfeit, the damage will be done.

Of course, there are good uses for AI routines that can fake a video, and those are what the researchers in the examples are focusing on. Instead of using captions to translate what someone says into a foreign language, for example, a modified video can be made that lets people “see” the person speaking their language directly. That can help world leaders reach out to other countries, or to foreign-language speakers within their own country. Another application I have seen mentioned is enabling hearing-impaired people to lip-read, from a generated video, audio that they can’t hear. That’s a very useful and positive application of this approach.

IS THAT ACTUALLY YOUR SIGNATURE?

For many years, our society has based contracts and other major commitments on our signatures. There have always been those who attempted to forge signatures, but forgery was a very manual process and one that didn’t scale. Today, scientists at University College London have an AI process that can learn to mimic anyone’s handwriting very effectively from a series of writing samples. See a video of this process at work here. If fraudsters find sufficient samples of your handwriting, you just might see your bank account drained, or a fraudulent account opened in your name, because the forged signature was good enough to pass whatever signature validation the institution had in place.
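To give a flavor of how such mimicry can work, here is a minimal sketch that treats handwriting as a sequence of pen movements and trains a model to predict each movement from the ones before it, then samples from that model to generate new strokes in a similar style. This is an illustration in Python with PyTorch, loosely in the spirit of well-known recurrent handwriting-generation work, and not necessarily the UCL team’s actual method; the data and dimensions are toy placeholders.

```python
# Conceptual sketch: handwriting as a sequence of pen movements
# (dx, dy, pen_lifted). A sequence model trained on someone's writing
# samples can then generate new strokes in a similar style.
import torch
import torch.nn as nn

class StrokeModel(nn.Module):
    def __init__(self, hidden=256):
        super().__init__()
        self.rnn = nn.LSTM(3, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, 3)  # predict next (dx, dy, pen)

    def forward(self, strokes, state=None):
        out, state = self.rnn(strokes, state)
        return self.head(out), state

model = StrokeModel()

# Toy stand-in for digitized writing samples from the person being
# mimicked: each row is one pen movement (dx, dy, pen_lifted).
samples = torch.randn(4, 200, 3)

# Train the model to predict each pen movement from the ones before it.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(100):
    optimizer.zero_grad()
    pred, _ = model(samples[:, :-1])
    loss = nn.functional.mse_loss(pred, samples[:, 1:])
    loss.backward()
    optimizer.step()

# Generation: feed the model's own predictions back in, one movement
# at a time, to "write" new strokes in the learned style.
strokes = [torch.zeros(1, 1, 3)]
state = None
for _ in range(50):
    nxt, state = model(strokes[-1], state)
    strokes.append(nxt)
```

Real systems use richer output distributions than a simple squared-error fit, but the principle is the same: enough samples of your writing become training data for a generator of more of it.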

Forged handwriting may matter less in the future as society moves away from writing by hand. My children didn’t even learn cursive beyond some basics in school. And, recently, several major credit card companies announced that they will no longer require signatures for purchases. When you think about how sloppy your signature looks on those electronic signature pads at stores, it is obvious that such signatures have little tangible value anymore.

There are actually a lot of good uses for faking your writing, too. If you develop severe arthritis or otherwise badly injure your hand, you would still be able to “write” a thank-you note or letter in your own handwriting. Here, too, translation can be enhanced. Your original note could be translated into another language and rendered in your handwriting so that it looks original. You could write to that foreign exchange student you met in their own language and your own handwriting!

HOW DO WE ADAPT?

In no way am I suggesting that we should stop the progress of AI because it can be misused. The good will certainly outweigh the bad. What I am suggesting is that, as both an analytics community and a society at large, we need to account for how new capabilities can be used for fraud or other nefarious purposes so that we can be on the lookout for abuse.

Cars can be used to get away from a robbery, and matches can be used to start a forest fire. We don’t shun cars or matches because of that; we simply punish those who misuse them. The same approach should apply to AI. We must encourage the progress while also setting appropriate guardrails and policies to counteract the new paths to fraud that are inadvertently created.