New year heralds major threat from deepfake videos: Internet Scambusters #1,049
No apologies for returning to the topic of deepfake videos, which look set to become one of the major scam threats of 2023.
In this week's issue, we have news of the latest tactics crooks and scammers are using, adding to the topics we've previously reported.
We'll also point you in the right direction so you can adopt a new, skeptical approach to what you see on your computer screen.
Let's get started…
How Deepfake Videos Could Land You In Trouble
One of the big challenges for us ordinary mortals in 2023 is to be able to tell deepfake videos from the real thing.
Over the past year, the artificial intelligence behind deepfakes - digitally altered videos of a person usually saying something controversial - has come on by leaps and bounds.
This is particularly bad news in the run-up to the 2024 presidential election. We can expect to see all manner of deepfakes pretending to be speeches delivered by political candidates.
And that's not all. We've already reported how deepfake technology has been used to trick investors into believing they were watching a company boss making financial statements. It's also being used in job interviews (see Holograms and Deepfake Videos Used in Job and Investment Scams). And a couple of months ago, the Mayor of Berlin was tricked into thinking he was talking to his opposite number in Kyiv.
Moreover, there's growing concern that the technology will be used to fool us ordinary folk with convincing videos. Security experts say it won't be long before fake videos can run in real time, with the digital character behaving naturally and answering questions in live situations.
One scenario, reported by tech news site ZDNet, involves an employee who believes they're speaking to their boss on camera and is told to confidentially transfer money to a particular bank account.
The email version of this scam is known as business email compromise (BEC), but many firms have wised up to that trick.
"But if cyber criminals could use a deepfake to make the request, it could be much more difficult for victims to deny the request because they believe they're actually speaking to their boss on camera," says the tech site.
All the crooks need to do is scour the internet for a genuine video of the person they want to imitate. Artificial intelligence does the rest.
According to the Wall Street Journal, this has already happened in the UK with a voice deepfake: a company CEO thought he was talking to the boss of his parent company and transferred hundreds of thousands of dollars to an untraceable Hungarian account.
Could You Spot A Deepfake?
It's going to be crucial to spot the fakes. But could you? Possibly - if you're paying sufficient attention.
Consider this: in November, a video purporting to feature actress Scarlett Johansson was posted on Twitter alongside a genuine video of the actress. Although it was possible to tell which was the real person when the two were compared, the deepfake by itself would have been truly convincing.
The biggest challenge in spotting a deepfake is our willingness to believe what we see, usually without even thinking about it.
So, the first thing to do before reposting a video, or acting on an instruction from someone on a video call, is to slow down and mentally question what you're seeing.
One simple tactic for video interviews or meetings, which has recently surfaced, is to ask the other person to turn their face sideways by a full 90 degrees. According to AI specialists at Metaphysic, the technology is good at front-facing shots but struggles with side profiles, which tends to distort the image.
Another good trick is to ask the video subject to wave their hands in front of their face. Artificial intelligence takes a little time to work out what's happening, resulting in distortion and poor lip synchronization.
It may feel strange to ask someone to perform these actions, but it's a whole lot more comfortable than having to explain how you transferred money to an untraceable account.
Experts say that companies will have to create protocols such as these for their own corporate security.
There are still a lot of other things you can do to spot a deepfake. See our earlier report for more tips: Deep Fake Videos Threaten Turmoil for All Users.
Sadly, deepfakes are set to become a bigger source of scams than ever as AI catches up with the few tell-tale signals we mention above.
Rachel Tobac, of the online security firm SocialProof Security, recently declared that we're nearing the stage where deepfakes become totally undetectable.
She tweeted: "Deepfakes will impact public trust, provide cover and plausible deniability for criminals/abusers caught on video or audio, and will be (and are) used to manipulate, humiliate, and hurt people."
The best hope lies with AI itself. Several companies have taken the lead in developing AI software that can analyze videos in far greater detail than the human eye can, detecting inconsistencies in a matter of seconds.
That's all for today -- we'll see you next week.