
Deepfakes as a cyberweapon

Manipulated videos are becoming increasingly difficult to detect. Find out here what this means and how to protect yourself.

Lauralie Mylène Schweiger, 14.11.2022
'Face swapping' means replacing a person's face with a different one. © AdobeStock

Deepfakes are manipulated video files, or sometimes audio files. The term combines 'deep learning' and 'fake': artificial intelligence learns to imitate targeted people from existing material. As a result, a deceptively real-looking Tom Cruise can fascinate an audience of millions on TikTok – or figures from politics and business can become victims of this dangerous cyberweapon.

[Embedded TikTok video by @deeptomcruise, set to 'Footloose' by Kenny Loggins]

Few barriers to getting started

The main danger is that deepfakes make live manipulation possible. In video calls in particular, attackers don't need to establish trust first, as they do with phishing calls. Anyone with the right software, enough processing power and a few video clips of the targeted person can create a deepfake and recreate the target against a background and in lighting conditions that match the original material as closely as possible.

Watch out for artifacts

Matthias Neu from Germany's Federal Office for Information Security (BSI) explains how to detect deepfakes. In videos or video calls, he points to systematic depiction errors, so-called 'artifacts'. These can occur around the face when the target's head is placed on a randomly chosen body. "It often happens that face-swapping processes don't properly learn to generate sharp contours such as those found in the teeth or in the eyes," Neu explains. On closer inspection, these areas appear slightly blurred. The less original footage the AI has learned from, the more limited the imitated head's facial expressions and lighting will be. If you're not sure whether the person in a video call is real, you can ask them to tap their cheek (occlusions such as a hand in front of the face are hard for face-swapping software to render convincingly); if it's a phone call, ask to call them back.
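
To make the blur cue concrete, here is a minimal sketch in Python using OpenCV. It is not a BSI method, just an illustration of the idea that eye regions in face-swapped footage are often softer than the rest of the face. The input file name, the eye-to-face sharpness comparison and the 0.5 threshold are illustrative assumptions; variance of the Laplacian is a common rough proxy for local sharpness.

```python
# Illustrative heuristic only: flag frames where the eye region is
# noticeably blurrier than the surrounding face, one of the artifact
# types described above. Thresholds are assumptions, not tuned values.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def sharpness(gray_patch):
    # Variance of the Laplacian: higher means sharper edges.
    return cv2.Laplacian(gray_patch, cv2.CV_64F).var()

def check_frame(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        face = gray[y:y + h, x:x + w]
        face_score = sharpness(face)
        for (ex, ey, ew, eh) in eye_cascade.detectMultiScale(face):
            eye_score = sharpness(face[ey:ey + eh, ex:ex + ew])
            # Hypothetical rule of thumb: eyes much softer than the
            # rest of the face are worth a closer manual look.
            if eye_score < 0.5 * face_score:
                print("Eye region unusually soft - inspect manually")

cap = cv2.VideoCapture("call_recording.mp4")  # hypothetical input file
ok, frame = cap.read()
if ok:
    check_frame(frame)
cap.release()
```

In practice such a heuristic would produce many false positives (motion blur, video compression), which is consistent with Neu's point below that automated detection is not yet ready for everyday use: at best it can flag frames for manual inspection.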

Attack is the best defence

There are already AI systems that can expose attacks by other AI systems, and several projects funded by the Federal Ministry of Education and Research are working on them. However, Matthias Neu reports that automated methods for detecting deepfakes are not yet ready for practical use: a detection AI can only learn from attack methods it has encountered during training, and the range of possible methods is wide. To assess the threat potential, the BSI therefore analyses known methods. Detection systems need to be evaluated, and automated detection developed further. The BSI also offers lectures, publications and a topic page on the technology, Neu says, because: "One of the most important countermeasures is to educate people about the existence of these counterfeiting procedures."

© www.deutschland.de
