
Deepfakes and fraud prevention in biometrics

Looking at biometrics and its relevance to the internet and our online presence, identity fraud is an important issue to address. To date, most people do not have a clear understanding of the potential risks in today’s technologically advanced world. For that very reason, it is essential to clarify and explain one of the terms associated with fraud: deepfakes. This phenomenon is an example of a possible threat to people’s digital identities.

 

What are deepfakes?

Deepfakes are fake videos, photos and, in some cases, even fake audio recordings produced by artificial intelligence, in which people’s faces, actions or voices are manipulated in a certain way. The line between reality and fabrication is blurred, leaving plenty of room for manipulation and fraud. Created with the intent to deceive and defame their victims, deepfakes are turning into dangerous cyber weapons. Those victims include not only celebrities and politicians but, increasingly, private individuals.
 


As technology and software evolve, so does the creation of deepfakes, and their spread is increasingly difficult to contain. Not only is the quality of forged videos and photos becoming more realistic over time, but it is also becoming easier to create deepfakes from very limited image material.

This deepfake video, for instance, was created by one of our employees with a free online tool using only a single image. Can you recognize it as a deepfake? No? In a later section, we will show you how.
 
 

Deepfake: meaning and origin

The term “deepfake” results from merging the words “deep learning” and “fake”, highlighting the use of machine learning methods to create fakes almost autonomously. Due to their potentially destructive content, there have been recent efforts for change in industry and politics. The goal is to limit the use of such faking software and to penalize any unauthorized creation of “face swaps” or “body puppetry”. It is not only data protection concerns that move regulators to create new laws, but also the threat to legal and democratic processes beyond the online world. In fact, deepfakes can be found in various areas of life: politics, the arts and especially pornography, where users misrepresent people by displaying their faces, voices or bodies, sometimes for criminal purposes. Did you know that, at 96 %, deepfakes are most prevalent in pornography?

 

Celebrity deepfakes and how to create a deepfake in general

The first deepfakes appeared online back in 2017 on Reddit, when a user posted fakes showing celebrities’ faces superimposed on actors in adult videos. Those deepfakes went viral, along with the shared computer code used to produce such manipulated videos and images. This led to an explosion of fake content across social media, with celebrities being the primary initial targets. One of the most famous deepfakes is a video called “Synthesizing Obama”, in which Barack Obama seemingly calls Donald Trump “a total and complete dipshit”.
 


Modern technology allowed the creator of the video to modify existing audio footage and use lip-syncing tools to create the illusion that it is real. Multiple factors contribute to the success of such manipulated content, whose popularity continues to rise. The believability of the rigged material is an especially important component: images and videos are perceived as photographic evidence and therefore as reality, so few people question their credibility.

 

Deepfake technology and how to detect deepfakes

The increasing accessibility of software and apps that offer deepfake creation services is another contributing factor. The most popular applications include FakeApp, ReFace and DeepFaceLab. The creation process works similarly in all of them: the first step is uploading two images or videos showing two different faces. The software then analyses the facial features and learns, with the help of artificial intelligence, how to transfer them onto other photos or videos.
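To illustrate the underlying idea, here is a minimal sketch of the widely used “shared encoder, two decoders” autoencoder design behind many face-swap tools. The layer sizes, stand-in data and training step below are purely illustrative assumptions, not the actual implementation of FakeApp, ReFace or DeepFaceLab:

```python
# Illustrative sketch of the classic "shared encoder, two decoders" face-swap
# architecture (not the actual code of any of the apps mentioned above).
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Compresses a 64x64 face crop into a shared latent representation."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(), # 16 -> 8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstructs a face of one specific identity from the shared latent code."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),  # 8 -> 16
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),   # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(), # 32 -> 64
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 128, 8, 8))

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()  # one decoder per identity

# Training: each decoder learns to reconstruct its own identity from the shared code.
faces_a = torch.rand(8, 3, 64, 64)  # stand-in for real face crops of person A
faces_b = torch.rand(8, 3, 64, 64)  # stand-in for real face crops of person B
loss = nn.functional.mse_loss(decoder_a(encoder(faces_a)), faces_a) + \
       nn.functional.mse_loss(decoder_b(encoder(faces_b)), faces_b)
loss.backward()

# The "swap": encode a face of person A, but decode it with person B's decoder.
with torch.no_grad():
    swapped = decoder_b(encoder(faces_a))
```

Because both decoders are trained against the same shared encoding, feeding person A’s face through person B’s decoder yields B’s face with A’s pose and expression, which is what makes the swap look natural.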
 

Deepfake detection

With all that said, it is very important to question the credibility of online sources, whether through extended research on the topic or by detecting manipulated content oneself. Even though deepfake creation software is constantly improving, there are still a few ways to tell whether a video or image is real or fake.
 

How to detect a deepfake:

  1. Facial transformations:

    As most deepfakes are facial transformations, that is where manipulation can be easiest to detect. Often, the facial features and expressions as well as the contour lines around the face do not match when looking closely at the image. Furthermore, the skin and the signs of aging of the two people mixed in the created deepfake are generally not identical. Other signs of manipulation in facial transformations include unnatural eye movement or blinking frequency, unnatural teeth, or a lack of emotion that does not fit the conveyed message (see the blink-rate sketch after this list).

  2. Body poses:

    When it comes to “body puppetry”, on the other hand, it is essential to look at the body poses and posture of the people appearing in the video. Often, the hair and lip movements are out of sync, so the overall motion in the video indicates manipulation. Further clues can be blurring and misalignment along edges as well as inconsistent audio.

  3. Digital fingerprint of the creator:

    While superficial indicators are very often the decisive factor in judging a source as fake, details about the creator’s digital fingerprint can also reveal a lot about the origin and authenticity of a video. With the help of blockchain-based verification, a video creator can prove the authenticity of the released content.
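As a rough illustration of the eye-movement cue from point 1, the sketch below computes the eye aspect ratio (EAR) per frame and checks whether the resulting blink rate falls into a plausible range. The landmark input, thresholds and plausibility band are illustrative assumptions; extracting the landmarks themselves would require a separate facial landmark detector.

```python
# Hypothetical helper for the eye-movement cue: the eye aspect ratio (EAR) drops
# sharply when an eye closes, so an implausibly low or high blink rate over a
# video can hint at manipulation. Landmark extraction is assumed to be done
# elsewhere (e.g. by an off-the-shelf facial landmark detector).
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """eye: six (x, y) landmark points around one eye, in the usual ordering."""
    vertical_1 = np.linalg.norm(eye[1] - eye[5])
    vertical_2 = np.linalg.norm(eye[2] - eye[4])
    horizontal = np.linalg.norm(eye[0] - eye[3])
    return (vertical_1 + vertical_2) / (2.0 * horizontal)

def count_blinks(ear_per_frame, closed_threshold=0.2):
    """Count blinks as transitions from open (EAR above threshold) to closed."""
    blinks, eye_was_open = 0, True
    for ear in ear_per_frame:
        if eye_was_open and ear < closed_threshold:
            blinks += 1
            eye_was_open = False
        elif ear >= closed_threshold:
            eye_was_open = True
    return blinks

def blink_rate_is_plausible(ear_per_frame, fps=30, min_per_min=2, max_per_min=40):
    """Very rough plausibility band for human blink frequency (illustrative values)."""
    minutes = len(ear_per_frame) / fps / 60
    rate = count_blinks(ear_per_frame) / max(minutes, 1e-6)
    return min_per_min <= rate <= max_per_min
```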

 


Overall, detecting a deepfake is a complicated and complex task: people without detailed knowledge of the subject can only rarely expose one with the naked eye. Consequently, alternative approaches must be applied, and biometrics can be a helpful starting point. You can test your deepfake detection skills with the following deepfake video of our Business Development and Marketing Manager Ann-Kathrin Freiberg. Pay special attention to the hair, the glasses and the transition from her face to her neck.
 

Deepfake detection through deep learning

Professional deepfakes can be detected with ease neither by the human eye nor by biometric systems. Ongoing research therefore aims to detect deepfakes in photo and video material directly. As part of this research, BioID is participating in a BMBF-funded consortium that includes multiple universities as well as the German Bundesdruckerei. With this project, the German BMBF (Federal Ministry of Education and Research) supports research into countermeasures against video manipulation and misuse. The goal is to develop deepfake detection methods and to determine the genuineness of photo and video material. The resulting methodologies should generate trust levels that allow the decisions to be used in court.

You can find more information about our project “FAKE-ID” in the press release.
 

While traditional approaches using handcrafted features will also be developed and analyzed, the most promising techniques involve deep learning. Since deepfakes originate from artificial intelligence, their detection appears to require the same methods.
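As a rough idea of what such a deep-learning detector can look like, the sketch below trains a small convolutional network to classify face crops as genuine or manipulated. It is a generic illustration with made-up layer sizes and stand-in data, not the method developed within FAKE-ID:

```python
# Minimal sketch of a deep-learning deepfake detector: a small CNN classifying
# individual face crops as genuine or manipulated (generic illustration only).
import torch
import torch.nn as nn

class DeepfakeClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 128 -> 64
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 64 -> 32
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2), # 32 -> 16
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(128, 1)  # single logit: manipulated vs. genuine

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = DeepfakeClassifier()
criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# Stand-in batch: real training would use labelled genuine/manipulated face crops.
face_crops = torch.rand(16, 3, 128, 128)
labels = torch.randint(0, 2, (16, 1)).float()  # 1 = manipulated, 0 = genuine

optimizer.zero_grad()
loss = criterion(model(face_crops), labels)
loss.backward()
optimizer.step()

# At inference time, the sigmoid of the logit could serve as a simple trust level.
trust = 1.0 - torch.sigmoid(model(face_crops)).mean().item()
```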
 

Biometric fraud prevention

As more and more processes move online through digitization and COVID-19, identity credentials need to be derived through trusted processes. Unsupervised identity verification opens the door to fraud and thus needs to incorporate sophisticated anti-spoofing. For both agent-based video verification and fully automated identity verification, deepfakes are becoming a growing challenge. The question therefore arises: what can be done against deepfakes created with the intent of fraud or identity theft?
 
Biometric identity proofing with liveness detection is one approach to defending against such attacks. It is therefore increasingly embraced by organizations striving to protect their customers’ data. Companies like BioID provide biometric security mechanisms through software that detects identity fraud attempts. Combined with secure applications that block virtual cameras and modified video streams as input, manipulated photos and videos such as deepfakes can be exposed using liveness detection and facial recognition.
 

Liveness detection for fraud prevention


Presentation attacks

Face liveness detection is an anti-spoofing method for facial biometrics. Scientifically, it is called presentation attack detection (PAD). The core function of a PAD mechanism is to determine whether a biometric sample (e.g., a picture) was captured from a live person. State-of-the-art, ISO/IEC 30107-3 compliant liveness detection from BioID prevents biometric fraud via printed photos, cutouts, prints on cloth, 3D paper masks, videos on displays, video projections and more. Deepfakes presented at the sensor level (e.g., on displays) can be rejected by the same BioID methods, e.g., texture analysis and artificial intelligence.
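For illustration, the following sketch shows a generic texture-analysis approach to presentation attack detection: local binary pattern histograms of face crops fed into a classifier. Replayed or printed faces tend to leave texture artefacts (moiré patterns, print dots, unnatural highlights) that such features can capture. This is a simplified example of the general technique with stand-in data, not BioID’s proprietary liveness detection:

```python
# Generic sketch of texture-based presentation attack detection (PAD) using
# local binary patterns and a simple classifier; illustrative only.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

def lbp_histogram(gray_face: np.ndarray, points: int = 8, radius: int = 1) -> np.ndarray:
    """Uniform LBP histogram of a grayscale face crop, normalised to sum to 1."""
    lbp = local_binary_pattern(gray_face, points, radius, method="uniform")
    bins = points + 2  # number of uniform LBP codes
    hist, _ = np.histogram(lbp, bins=bins, range=(0, bins), density=True)
    return hist

# Stand-in data: real training would use labelled live vs. attack face crops.
rng = np.random.default_rng(0)
live_faces = [rng.integers(0, 256, (64, 64), dtype=np.uint8) for _ in range(20)]
attack_faces = [rng.integers(0, 256, (64, 64), dtype=np.uint8) for _ in range(20)]

X = np.array([lbp_histogram(face) for face in live_faces + attack_faces])
y = np.array([0] * len(live_faces) + [1] * len(attack_faces))  # 1 = attack

classifier = SVC(kernel="rbf", probability=True).fit(X, y)

# Score a new crop: probability that it is a presentation attack.
new_crop = rng.integers(0, 256, (64, 64), dtype=np.uint8)
attack_probability = classifier.predict_proba(lbp_histogram(new_crop)[None, :])[0, 1]
```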

 

Application-level attacks

To fight attacks at the application level, e.g. through the injection of modified camera streams, challenge-response mechanisms can be used to reject prerecorded videos and deepfakes. In the future, it will be possible even for non-professionals to perform feature modification in real time, projecting a deepfake face onto a live moving face. The importance of secure applications that prevent attacks through virtual cameras will therefore increase even more.
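The sketch below shows, in simplified form, how such a challenge-response check can work: the server issues a random, short-lived challenge (e.g. “turn your head left”) and accepts the capture only if the requested action is observed within the time window. The challenge names, timings and API are illustrative assumptions, not a description of any particular product:

```python
# Simplified challenge-response check against prerecorded videos: a replayed
# recording cannot anticipate a freshly issued, random, short-lived challenge.
import secrets
import time

CHALLENGES = ("turn_head_left", "turn_head_right", "nod", "smile")

class ChallengeResponseCheck:
    def __init__(self, validity_seconds: float = 10.0):
        self.validity_seconds = validity_seconds
        self.issued = {}  # nonce -> (challenge, issue_time)

    def issue_challenge(self) -> dict:
        """Pick a random challenge and bind it to a one-time nonce."""
        nonce = secrets.token_hex(16)
        challenge = secrets.choice(CHALLENGES)
        self.issued[nonce] = (challenge, time.monotonic())
        return {"nonce": nonce, "challenge": challenge}

    def verify(self, nonce: str, observed_action: str) -> bool:
        """Accept only if the observed action matches the fresh, unused challenge."""
        entry = self.issued.pop(nonce, None)  # nonce is single-use
        if entry is None:
            return False
        challenge, issued_at = entry
        if time.monotonic() - issued_at > self.validity_seconds:
            return False  # too old: could be a replayed recording
        return observed_action == challenge

# Usage: the observed action would come from analysing the live video stream.
check = ChallengeResponseCheck()
request = check.issue_challenge()
is_live = check.verify(request["nonce"], observed_action="turn_head_left")
```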
Fortunately, BioID’s liveness detection methods prevent presentation attacks using deepfakes with the same level of assurance as those using any other video, animated avatar and the like. The German company’s anti-spoofing is used worldwide by digital identity providers to secure AML/KYC-compliant processes. Multiple independent biometric testing laboratories have confirmed the technology’s ISO/IEC 30107-3 compliance, even resulting in a customer’s FIDO-certified solution. BioID Liveness Detection can be tested and evaluated on the BioID Playground.
In combination with a secure app that prevents virtual camera access and blacklists the most common virtual cameras, criminals can be stopped from harming individuals’ identities.

 

Positive implications of deepfakes

As already established, deepfakes can be deeply destructive when used for public defamation out of spite or revenge. However, the underlying technologies and algorithms are not only harmful but can also be used for positive change. Visual effects, digital avatars and Snapchat filters are just a few examples of positive use cases for deepfakes. For society, the advantages can go even further. In education, for instance, deepfakes could be used to create a more innovative and engaging learning environment; famous figures like JFK have already been recreated with deepfakes to offer history lessons in schools. Another field where deepfakes are of interest is art and the film industry. Here, they are used to create synthetic material that tells captivating stories, in some cases creating avatar-like experiences. David Beckham used deepfakes to broaden the reach of a globally relevant message: in an effort to raise awareness of malaria, his voice was used to let him speak in multiple languages.
 
All in all, there are various dimensions to deepfakes, and while they can have a positive impact, the negative consequences for individuals, politics and society outweigh them. Especially because women are often the victims, deepfakes also have an important gender dimension. Protecting women and their reputations should therefore be a primary concern for politicians passing new laws. The legal route is nevertheless tough for victims, as the perpetrators often act anonymously, so tracing deepfakes back to their origin is almost hopeless. For this reason, it is essential to tackle the problem at its root and try to prevent distribution in the first place. Platforms like Instagram, where this spread of fake content takes place, could help in taking action.
 
Supporting biometrics and authentication algorithms is crucial in combating these online threats. In this way, deepfakes and other damaging online defamation attacks can be exposed and trust in the digital world can be restored.
 
More information about our company and our products that help prevent such criminal acts can be found here: https://www.bioid.com/liveness-detection/
 
Contact us for more information.

 

Contact

Kathrin Kellner
+49 911 9999 898 0
info@bioid.com