She first saw the ad on Facebook. And then again on TikTok. After seeing what appeared to be Elon Musk offering an investment opportunity over and over again, Heidi Swan figured it had to be true.
“Looked just like Elon Musk, sounded just like Elon Musk and I thought it was him,” said Swan.
She contacted the company behind the pitch and opened an account for more than $10,000. The 62-year-old healthcare worker thought she was making a smart investment in cryptocurrency from a businessman and investor worth billions of dollars.
But Swan would soon learn she’d been scammed by a new wave of high-tech thieves using artificial intelligence to create deepfakes.
Even looking back at the videos now, knowing they’re fakes, Swan still thinks they look convincing.
“They still look like Elon Musk,” she said. “They still sound like Elon Musk.”
Deepfake scams on the rise
As artificial intelligence technology evolves and becomes more accessible, these kinds of scams are becoming more common.
According to the consulting firm Deloitte, AI-generated content contributed to more than $12 billion in fraud losses last year and could reach $40 billion in the U.S. by 2027.
Both the Federal Trade Commission and the Better Business Bureau have issued warnings that deepfake scams are on the rise.
A study by AI firm Sensity found that Elon Musk is the most common celebrity used in deepfake scams. One likely reason is his wealth and high profile as an entrepreneur. Another is the sheer number of interviews he has done; the more footage of someone exists online, the easier it is to create convincing deepfakes of them.
Anatomy of a deepfake
At the University of North Texas in Denton, Professor Christopher Meerdo is also using artificial intelligence. But he’s using it to create art.
“It’s not going to replace the creative arts,” Meerdo said. “It’s going to just augment them and change the way that we understand things that we could do in the sphere of creativity.”
Even though Meerdo sees artificial intelligence as a way to be innovative, he sees its dangers.
Meerdo showed the I-Team how scammers can take a real video and use AI tools to replace a person’s voice and mouth movements, making them appear to say something completely different.
Advances in technology are making it easier to create deepfake videos. Someone familiar with AI tools needs only a single still image and a video recording to make one.
To demonstrate this, Meerdo took a video of investigative reporter Brian New to create a deepfake of Elon Musk.
These AI-generated videos are hardly perfect, but they just need to be convincing enough to deceive an unsuspecting victim.
“If you are really trying to scam people, I think you can do some really bad things with this,” Meerdo said.
How can you spot a deepfake?
Some deepfakes are easier to spot than others; there can be signs like unnatural lip movements or odd body language. But as the technology improves, it will get harder to tell just by looking.
There are a growing number of websites claiming they can detect deepfakes. Using three known deepfake videos and three authentic ones, the I-Team put five of these websites to an unscientific test: Deepware, Attestiv, DeepFake-O-Meter, Sensity and Deepfake Detector.
In total, these five online tools correctly identified the tested videos nearly 75% of the time. The I-Team reached out to the companies with the results; their responses are below.
Deepware
Deepware, a website that is free to use, initially failed to flag two of the fake videos the I-Team tested. In an email, the company said the clips used were too short and that, for best results, uploaded videos should be between 30 seconds and one minute. Deepware correctly identified all of the longer videos. The company said its 70% detection rate is considered good for the industry.
The frequently asked questions section of Deepware’s website states: “Deepfakes are not a solved problem yet. Our results indicate the likelihood of a specific video being a deepfake or not.”
Deepfake Detector
Deepfake Detector, a tool that charges $16.80 per month, identified one of the fake videos as “97% natural voice.” The company, which specializes in spotting AI-generated voices, said in an email that factors like background noise or music can impact results, but it has an accuracy rate of approximately 92%.
In response to a question about guidance for average consumers, the company wrote: “Our tool is designed to be user-friendly. Average consumers can easily upload an audio file on our website or use our browser extension to analyze content directly. The tool will provide an analysis to help determine if a video may contain deepfake elements using probabilities, making it accessible even for those unfamiliar with AI technology.”
Attestiv
Attestiv flagged two of the real videos as “suspicious.” According to the company’s CEO Nicos Vekiarides, false positives can be triggered by factors like graphics and edits. Both authentic videos flagged as “suspicious” included graphics and edits. The site offers a free service, but it also has a paid tier, where consumers can adjust settings and calibrations for more in-depth analysis.
While he acknowledges that Attestiv isn’t perfect, Vekiarides said that as deepfakes become harder to spot with the naked eye, these kinds of websites are needed as part of the solution.
“Our tool can determine if something is suspicious, and then you can verify it with your own eyes to say, ‘I do think that’s suspicious,’” Vekiarides said.
DeepFake-O-Meter
DeepFake-O-Meter is another free tool, supported by the University at Buffalo and the National Science Foundation. It assigned two of the real videos a high probability of being AI-generated.
In an email, the creator of the open platform said a limitation of deepfake detection models is that video compression can lead to sync issues with video and audio and inconsistent mouth movements.
In response to a question about how everyday users can use the tool, the company emailed: “Currently, the main result shown to users is the probability value of this sample being a generated sample across different detection models. This can be used as a reference if multiple models agree on the same answer with confidence (e.g., over 80% for AI-generated or below 20% for real video). We are currently developing a more understandable way of showing the results, as well as new models that can output comprehensive detection results.”
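The rule the platform describes, trusting a verdict only when multiple detection models agree with confidence, can be sketched in a few lines. The function name, thresholds as parameters, and the example scores below are illustrative assumptions, not actual DeepFake-O-Meter output:

```python
# Sketch of the ensemble-agreement rule described above: a verdict counts
# only when every model is confident in the same direction (e.g., all
# above 80% for AI-generated, or all below 20% for real).
# Scores and thresholds here are hypothetical, for illustration only.

def ensemble_verdict(scores, high=0.80, low=0.20):
    """Given per-model probabilities that a sample is AI-generated,
    return 'ai-generated', 'real', or 'inconclusive'."""
    if all(s >= high for s in scores):
        return "ai-generated"
    if all(s <= low for s in scores):
        return "real"
    return "inconclusive"

print(ensemble_verdict([0.91, 0.88, 0.95]))  # all models confident: ai-generated
print(ensemble_verdict([0.05, 0.12, 0.09]))  # all models confident: real
print(ensemble_verdict([0.91, 0.35, 0.60]))  # models disagree: inconclusive
```

The point of the rule is conservatism: a single confident model is not treated as proof, which matches the platform's caution that its numbers are a reference, not a final answer.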
Sensity
Sensity’s deepfake detector correctly identified all six clips, showing a heatmap indicating where AI manipulation is most likely.
The company offers a free trial period to use its service and told the I-Team that while it’s currently tailored for private and public organizations, its future goal is to make the technology accessible to everyone.