In the future, deepfakes will take cyberattack scenarios to a whole new dimension. Since no mature technical defense mechanisms are currently available, organizations must be extremely cautious and recognize the potential risks.
Deepfake videos first appeared on a large scale in 2017, with fake videos of Hollywood stars like Scarlett Johansson, Emma Watson and Nicolas Cage spreading quickly online. Even politicians were not spared: in one manipulated speech, Angela Merkel suddenly took on the features of Donald Trump.
Deepfakes, a term derived from deep learning and fake, are manipulated media such as images, videos or audio files. The technologies behind them, artificial intelligence and machine-learning algorithms, have developed so rapidly in recent years that it has become almost impossible to distinguish original from counterfeit content.
What’s concerning is that deepfakes keep getting better, and they are becoming harder and harder to recognize. It won’t be long before we see their impact on businesses. Potential attack scenarios range from assuming someone’s identity to blackmailing companies.
The following three deepfake-based attack methods are likely:
- C-level fraud: This is the most prominent method. Instead of persuading an employee with a fake email to transfer money, fraudsters place a phone call that makes the caller sound exactly like the CFO or CEO.
- Extorting companies or individuals: With deepfake technology, faces and voices can be transferred to media files that show people making fake statements. For example, a video could be produced with a CEO announcing that a company has lost all customer information or that the company is about to go bankrupt. With the threat of sending the video to press agencies or posting it on social networks, an attacker could blackmail a company.
- Manipulation of authentication methods: Likewise, deepfake technology can be used to circumvent camera-based authentication mechanisms, such as legitimacy checks through tools such as Postident.
In principle, however, any attack in which the attacker poses as someone else virtually, for example by phone, email or video message, can be enhanced by deepfake technology and become much harder to detect. It is therefore quite conceivable that deepfakes will also be used in attacks on private individuals.
The danger posed by deepfakes will become much more significant in the future, at the latest in 2019, as the underlying machine-learning methods are further optimized and creating deepfakes ceases to be a time-consuming and costly challenge.
For example, video deepfakes can be created using tools freely available on the internet. All you need is a webcam for around EUR 80, a green screen for around EUR 90 and a graphics card for around EUR 1,000.
Even an audio deepfake is now easy to produce. In the past, a model had to be trained on at least five hours of voice recordings. Today, publicly available tools can synthesize new voices from an existing model with just one minute of audio.
So how can companies prepare for the rise in deepfake attacks?
Most companies are not yet aware of the risks of deepfake attacks, because it is a whole new type of attack that flies under the radar. For now, organizations can only raise internal awareness that such attacks are possible. That also means letting go of familiar certainties: for example, the voice on the other end of the line may not belong to the person it sounds like.
Although no technical defense mechanisms are currently available, programs to detect deepfakes are in development, and some are nearly ready for market. NTT Ltd. is also looking to build capabilities in this area together with the manufacturers of such applications.
In the meantime, however, we are already working closely with organizations to support them through extensive security awareness training and interactive courses. These address the important topics around cybercrime risk, including CEO fraud, management hacks and, not least, deepfake threats.