Deepfake is the name given to technology that creates convincing copies of images, videos and voices using AI. Deepfake technologies have been developing rapidly for around five years now. The idea of creating fakes by combining real and generated data is not new, but it’s the use of neural networks and deep learning that has allowed researchers to automate the process and apply it to images, video and audio.
In the past the quality of such fakes was low and they were easy to spot with the naked eye; now recognizing a fake has become much harder. The trend is exacerbated by falling costs of data storage and processing and by the spread of open-source software, which makes deepfakes one of the most dangerous technologies of the future.
How real can it look?
In July 2021, enthusiasts published a deepfake video of Morgan Freeman talking about the perception of reality.
It looks very realistic, but it’s not Morgan Freeman. Facial expressions, hair… it’s all high quality, and there aren’t even any noticeable video artifacts. It’s a well-made deepfake, and it shows how easy it has become to deceive our perception of reality.
What’s the danger?
The first and most obvious area where deepfakes immediately found a place was pornography. Celebrities were the first to suffer, but soon even lesser-known people had cause to worry. Many scenarios were envisaged: school bullying, fraudulent phone calls requesting money transfers, blackmail of company managers, industrial espionage. Early on these were seen as potential threats; now they’re real.
The first known case of an attack on a business came in 2019, when scammers used voice-changing technology to rob a British energy company: impersonating the CEO, the attacker tried to steal €220,000. The second known case occurred in 2020 in the UAE, where attackers, also using a voice deepfake, managed to deceive a bank manager and steal $35 million! Scammers have thus moved on from emails and fake social media profiles to more advanced attack methods built on voice deepfakes. Another similar case came to light in 2022, when scammers tried to fool Binance, the largest cryptocurrency platform. A Binance executive was surprised to start receiving thank-you messages about a Zoom meeting he never attended: using his publicly available images, the attackers had generated a deepfake and successfully used it during an online meeting.
Thus, in addition to traditional cyberfraud techniques such as phishing, we now have a new one: deepfake fraud. It can be used to augment traditional social engineering schemes, as well as for disinformation, blackmail and espionage.
According to an FBI alert, HR managers have already encountered deepfakes used by cybercriminals applying for remote work. Attackers can use images of people found on the internet to create a deepfake, then use stolen personal data to trick HR into hiring them. This can give them access to employer data and even let them plant malware in the corporate infrastructure. Potentially, any business is at risk of this type of fraud.
And those are just the most obvious applications of deepfake fraud; attackers are constantly inventing new ones.
How real is the danger?
All that sounds quite creepy. But is it really so bad? Actually, not quite: creating a high-quality deepfake is an expensive process.
First, making a deepfake requires a lot of data: the more diverse the dataset, the more convincing the result. For still images this means that a quality fake needs source photos shot from different angles, under different brightness and lighting conditions, and with different facial expressions of the subject. A fake snapshot also needs to be fine-tuned manually (automation isn’t much help here).
Second, if you want to make a truly indistinguishable fake, you need specialized software and lots of computing power, which means a significant budget. Downloading free software and trying to make a deepfake on your home PC will produce unrealistic-looking results.
The above-mentioned deepfake Zoom calls add another layer of complexity. Here the bad guys need not only to make a deepfake, but to generate it in real time while maintaining high image quality without noticeable artifacts. There are indeed applications that can produce a deepfake video stream in real time, but they can only render a digital clone of a person the model was trained on, not create a new fake identity, and the default choice is usually limited to famous actors (because there are a lot of their images on the internet).
In other words, a deepfake attack is quite possible today, but such fraud is very expensive. Committing other types of fraud is usually cheaper and more accessible, so deepfake fraud remains within reach of only a few cybercriminals, especially where high-quality fakes are concerned.
Of course, that’s no reason to relax: the technology doesn’t stand still, and within a few years the threat level may rise significantly. There have already been attempts to create deepfakes using modern generative models such as Stable Diffusion, and such models can not only swap faces but also replace objects in an image with almost anything you like.
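To give a sense of how accessible this kind of image manipulation has become, here’s a minimal inpainting sketch using the open-source diffusers library. The checkpoint ID and file names are illustrative assumptions, and real edits would need far more careful masks and prompts:

```python
# A minimal inpainting sketch with Hugging Face diffusers.
# Assumptions: photo.png is the source image; mask.png is a
# black-and-white mask where white marks the region to replace.
import torch
from diffusers import StableDiffusionInpaintPipeline
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting"  # a public inpainting checkpoint
).to(device)

image = Image.open("photo.png").convert("RGB").resize((512, 512))
mask = Image.open("mask.png").convert("RGB").resize((512, 512))

# The masked region is repainted to match the text prompt.
result = pipe(
    prompt="a bouquet of flowers on the desk",
    image=image,
    mask_image=mask,
).images[0]
result.save("edited.png")
```

Even an off-the-shelf pipeline like this can plausibly replace an object in a photo within seconds on a consumer GPU, which is exactly why the barrier to entry keeps falling.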
Ways to protect against deepfakes
Is there a way to protect yourself and your organization from deepfake fraud? Unfortunately, there’s no silver bullet; we can only reduce the risk.
As with any other social engineering method, deepfake fraud targets humans, and the human factor has always been the weakest link in any organization’s security. So first of all it’s worth educating employees about the possibility of such attacks: explain this new threat to your colleagues, show them what to look for to spot a deepfake, and maybe demonstrate and publicly analyze a few cases.
What to look for in an image:
- Unnatural eye movement
- Unnatural facial expressions and movements
- Unnatural hair and skin color
- Awkward facial-feature positioning
- A lack of emotion
- Excessively smooth faces
- Double eyebrows
It’s probably also a good time to strengthen your overall security processes. It’s worth implementing multi-factor authentication for all processes that involve the transfer of sensitive data, and perhaps deploying anomaly detection technologies that flag unusual user behavior and let you respond to it.
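As a rough illustration of what such anomaly detection can look like, here’s a minimal sketch using scikit-learn’s IsolationForest. The features and values are invented for the example; a production system would model far richer behavioral signals:

```python
# A toy behavioral anomaly detector with scikit-learn.
# Each row summarizes a user session as simple features:
# [hour of day, megabytes transferred, failed login attempts].
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
# Synthetic "normal" history: daytime sessions, modest traffic.
normal_sessions = np.column_stack([
    rng.integers(8, 19, size=200),   # working hours
    rng.normal(100, 20, size=200),   # roughly 100 MB transferred
    rng.integers(0, 2, size=200),    # rare failed logins
])

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_sessions)

# A 3 a.m. session moving 1.5 GB after six failed logins.
suspicious = np.array([[3, 1500.0, 6]])
print(detector.predict(suspicious))  # -1 flags an anomaly, 1 means normal
```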
Also, deepfake fraud can be fought with the same tools that enable its creation: machine learning. Large companies such as Twitter and Facebook have already developed their own deepfake detection tools but, unfortunately, these are unavailable to the general public. Still, this shows that the cybersecurity community understands the significance of the deepfake threat and is already developing and improving ways to protect against it.
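While those in-house tools aren’t public, the general approach is well known: train an image classifier on labeled real and fake samples. Below is a sketch of such a detector built on a pretrained torchvision backbone; the file name is an illustrative assumption, and the scores would only be meaningful after fine-tuning on a proper deepfake dataset:

```python
# Sketch of a real-vs-fake frame classifier with PyTorch/torchvision.
import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image

# Start from an ImageNet-pretrained backbone and replace the head
# with two outputs: real (0) vs. fake (1). In practice this head
# must be fine-tuned on a labeled deepfake dataset first.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)
model.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Score a single video frame (file name is illustrative).
frame = Image.open("suspect_frame.png").convert("RGB")
with torch.no_grad():
    logits = model(preprocess(frame).unsqueeze(0))
    p_fake = torch.softmax(logits, dim=1)[0, 1].item()
print(f"P(fake) = {p_fake:.2f}")
```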