Running race: biometric technologies vs deepfake

At the beginning of 2022, China introduced a bill regulating the distribution of video and audio recordings generated using face or voice substitution.

The document was developed by the Cyberspace Administration of China. Its authors intend to prohibit the publication of deepfakes that infringe on people's rights, disturb public order, or provoke the spread of fake news. If the law is approved, China will become the first country in the world to fight deepfakes at the legislative level. What makes deepfake technology dangerous, and how does modern biometrics counter it? Let's find out together with the RecFaces team.

Content

How to create deepfakes
Are deepfakes dangerous?
Deepfakes and biometric identification: the struggle between good and evil
Protection Methods
Liveness Detection Algorithms
Multi-Factor Authorization
Use of additional cameras
Protection of the biometric template database
Use of deepfake recognition algorithms in incident investigation

How to create deepfakes

Audio and video content synthesis technologies began to develop in the 1990s. However, the history of deepfakes in the modern sense began only in 2014, thanks to engineer Ian Goodfellow, who as a graduate student explored the possibility of using neural networks to create artificial images and videos of real people. For a while the method he invented was known only to a relatively small circle of engineers and developers. The turning point came in 2017, when a user with the nickname “deepfakes” posted several explicit videos on Reddit in which the faces of adult film actresses were replaced with the faces of celebrities. The publication had the effect of an exploding bomb, and the author's nickname gave the new phenomenon its name. From that moment, the technology began to develop by leaps and bounds. Soon, companies specializing in ready-made solutions for deepfake development entered the market, and eventually the technology was simplified to the point where anyone can use it: installing an application such as FaceApp, Zao, or Reface is enough.

Different neural network models can be used to create deepfakes. For example, the generative model known as the variational autoencoder (VAE) was popular at first. It made it possible to transfer a person's facial expressions to a computer model quickly, but the resulting fakes could be recognized just as quickly. Today the most common approach is the generative adversarial network (GAN). It works through continuous competition between two interconnected neural networks: a generator and a discriminator. The first creates new images, while the second tries to determine whether the image in front of it is fake. The more cunning the generator becomes, the more accurate and realistic the deepfakes it creates.
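
As a rough illustration of this adversarial idea, here is a minimal PyTorch sketch that trains a toy generator and discriminator on random vectors rather than real face images. All names, layer sizes, and hyperparameters are illustrative assumptions, not part of any real deepfake pipeline.

```python
# Minimal GAN training loop on toy data (illustrative only, not a production deepfake model).
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64           # assumed sizes for the toy example

generator = nn.Sequential(              # maps random noise to a fake "face vector"
    nn.Linear(latent_dim, 128), nn.ReLU(),
    nn.Linear(128, data_dim), nn.Tanh(),
)
discriminator = nn.Sequential(          # scores how "real" a vector looks
    nn.Linear(data_dim, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

for step in range(1000):
    real = torch.randn(32, data_dim)    # stand-in for real training samples
    noise = torch.randn(32, latent_dim)
    fake = generator(noise)

    # Discriminator step: learn to tell real samples from generated ones.
    d_opt.zero_grad()
    d_loss = loss_fn(discriminator(real), torch.ones(32, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(32, 1))
    d_loss.backward()
    d_opt.step()

    # Generator step: learn to fool the discriminator.
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    g_loss.backward()
    g_opt.step()
```

The constant back-and-forth in this loop is exactly the "competition" described above: as the discriminator improves, the generator is forced to produce ever more convincing fakes.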

Are deepfakes dangerous?

At first glance, deepfakes may seem like harmless entertainment. Making Elon Musk sing “And dream we not of the thunderous spaceport”, shooting an advertisement with a “not real” Bruce Willis, or attaching a grandmother's face to Michael Jackson all looks funny and curious, and certainly does not suggest any danger. Indeed, today deepfakes are actively used in advertising, the entertainment industry, fashion, and cinema, and research in the field is carried out by the world's largest companies, such as Amazon and Disney.

However, there is a flip side to this coin. A significant share of deepfakes today is created for purposes other than entertainment: discrediting famous people, blackmail (including with pornography), fake news, political provocations, and financial scams give fraudsters plenty of opportunities for misuse. Not only celebrities but also ordinary people, and even large businesses, are at risk. For example, a deepfake attack on the head of a large energy company attracted considerable attention in 2019, becoming the first widely publicized incident involving speech forgery technology.

Having synthesized the voice of the executive's boss, the fraudster called the company's CEO and demanded an immediate transfer of about $243,000 to a specified account. The synthetic voice was so convincing that it reproduced even the smallest nuances of the boss's speech, leaving no reason to suspect fraud. The victim complied without hesitation.

Deepfakes and biometric identification: the struggle between good and evil

Deepfakes can also affect various areas where biometric technologies are used, first of all those based on facial biometrics. A notorious fraud involving a facial recognition system was uncovered in China in 2021. Back in 2018, two accomplices had purchased high-resolution photos on the online black market and “revived” them using deepfake applications. They then bought several smartphones with modified firmware that allowed a prepared video to be fed to the identification system instead of the live image from the front camera. Using this scheme, the fraudsters managed to deceive the identity verification system of the Chinese tax service for two years and trade fake tax invoices. The damage to the treasury exceeded $76 million.

“The development of technology and the availability of open algorithms and facial databases for training neural networks make it possible to apply biometric analytics in many directions. While some developers create applications that improve security in various areas, others look for ways to deceive biometric technologies. For biometrics, the main threat of deepfakes is the possibility of impersonation. Face substitution can be carried out both with simple methods (good-quality printed photos, a video played on a phone) and with more complex schemes,” says Sergey Novikov, Technical Director of RecFaces.

The main difficulty lies in the continuous development of deepfake technology. The first fakes were rather clumsily made and easy to identify: they could be recognized by an unnatural manner of blinking, poor synchronization of lip movement with speech, and robotic intonation. But as soon as another flaw was discovered in the technology, its creators immediately threw all their efforts into eliminating it. As a result, today it is often almost impossible to recognize a deepfake with the unaided eye.

Nevertheless, the technology still has limitations that can be removed only through painstaking manual processing. It is also important to understand that algorithms for synthesizing fake faces are based on transforming two-dimensional images. When a new image is formed, key anthropometric points shift, their positions relative to each other break down, and the final “picture” no longer matches the biometric pattern of a real person. In addition, control techniques that detect unavoidable digital artifacts can be used to recognize deepfakes: a fake face, for example, often shows different shades of eye color or a mismatched distance from the center of the pupil to the edge of the iris.
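
As a toy illustration of this kind of artifact check, the sketch below compares the average hue of the two detected eye regions in a face image. It is a minimal example built on OpenCV's stock Haar eye detector, with an arbitrarily chosen threshold, and is far simpler than the detectors used in real deepfake forensics.

```python
# Toy artifact check: compare the average hue of the two eye regions of a face image.
import cv2
import numpy as np

def eye_color_mismatch(image_path: str, hue_threshold: float = 12.0) -> bool:
    """Return True if the two detected eyes differ noticeably in average hue."""
    img = cv2.imread(image_path)
    if img is None:
        raise FileNotFoundError(image_path)

    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")
    eyes = eye_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(eyes) < 2:
        return False  # not enough detections to compare

    hues = []
    for (x, y, w, h) in eyes[:2]:
        region = cv2.cvtColor(img[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
        hues.append(float(np.mean(region[:, :, 0])))  # mean of the hue channel

    return abs(hues[0] - hues[1]) > hue_threshold

# Example with a hypothetical file name:
# print(eye_color_mismatch("suspect_face.jpg"))
```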

Protection Methods

Modern biometric products offer several ways to protect against possible deepfake fraud. Let's take a look at the main ones:

Liveness Detection Algorithms

Liveness algorithms make it possible to verify that a real person, and not a photo or a video played on a smartphone screen, is in front of the camera at the moment of biometric identification. Two types of checks can be implemented: passive and active. In passive mode, the program additionally tracks blinking and eye or lip movement, and nothing at all is required of the person. During an active check, the system works on a challenge-response principle: the user may be asked to do something, such as smile, wave a hand, or turn their head. Liveness algorithms are provided in all RecFaces software biometric products.
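
To give a flavor of a passive check, the sketch below computes the classic eye aspect ratio (EAR) from six eye landmark points and counts blinks when the ratio dips below a threshold. The landmark coordinates are assumed to come from whatever face landmark detector is available, and the threshold is a common rule of thumb, not a RecFaces setting.

```python
# Passive liveness cue: detect blinks via the eye aspect ratio (EAR).
# The six (x, y) landmarks per eye are assumed to come from an external
# face landmark detector; this sketch only shows the EAR math itself.
import math

def eye_aspect_ratio(eye: list[tuple[float, float]]) -> float:
    """eye: six landmarks ordered around the eye contour (p1..p6)."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    vertical = dist(eye[1], eye[5]) + dist(eye[2], eye[4])
    horizontal = dist(eye[0], eye[3])
    return vertical / (2.0 * horizontal)

def count_blinks(ear_per_frame: list[float], threshold: float = 0.21) -> int:
    """Count how many times the EAR dips below the threshold and recovers."""
    blinks, closed = 0, False
    for ear in ear_per_frame:
        if ear < threshold and not closed:
            closed = True            # the eye just closed
        elif ear >= threshold and closed:
            blinks += 1              # the eye reopened -> one blink
            closed = False
    return blinks

# A stream with no blinks at all over several seconds is a hint that the
# "face" may be a printed photo rather than a live person.
```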

Multi-Factor Authorization

Today, authorization in two or more steps remains the main method of counteracting image substitution. Using biometric data in conjunction with passwords and one-time codes helps make sure that access is requested by a “real” user. This is particularly important in areas requiring special protection, such as banking transactions.
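
Here is a minimal sketch of the idea, assuming a face matcher that returns a similarity score and a standard time-based one-time password (TOTP, RFC 6238) as the second factor. The threshold, secret, and function names are illustrative only.

```python
# Two-factor check: a face-match score AND a time-based one-time password.
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Compute the current TOTP code for a base32-encoded shared secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = struct.pack(">Q", int(time.time()) // interval)
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

def authorize(face_score: float, submitted_code: str, secret_b32: str,
              face_threshold: float = 0.85) -> bool:
    """Grant access only if BOTH the biometric factor and the OTP factor pass."""
    biometric_ok = face_score >= face_threshold      # score from some face matcher
    otp_ok = hmac.compare_digest(submitted_code, totp(secret_b32))
    return biometric_ok and otp_ok

# Example with made-up values:
# authorize(face_score=0.91, submitted_code="123456", secret_b32="JBSWY3DPEHPK3PXP")
```

Requiring both factors means that a convincing deepfake alone is not enough: the attacker would also need the victim's one-time code.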

Use of additional cameras

Another way to combat facial forgery is to use several cameras, one of which can see in the infrared spectrum. An infrared picture is much more difficult to fake than an ordinary one. In modern practice, most ATMs and payment terminals, where impersonation poses the main threat, are equipped with such modules.
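
As a simplified illustration, the sketch below accepts a face only if a region detected in the visible frame also shows enough signal in an aligned infrared frame. The frame alignment, the detector, and the intensity threshold are all assumptions made for the example.

```python
# Cross-check a visible-light face detection against an aligned infrared frame.
# Real systems calibrate and align the two cameras; here the frames are assumed
# to be already co-registered, and the intensity threshold is arbitrary.
import cv2
import numpy as np

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def face_visible_in_ir(visible_bgr: np.ndarray, ir_gray: np.ndarray,
                       min_ir_intensity: float = 40.0) -> bool:
    """True if a face found in the visible frame also emits signal in the IR frame."""
    gray = cv2.cvtColor(visible_bgr, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        ir_region = ir_gray[y:y + h, x:x + w]
        if ir_region.size and float(np.mean(ir_region)) >= min_ir_intensity:
            return True   # the region is bright in IR, as a live face would be
    return False
```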

Protection of the biometric template database

Biometric templates are the most sensitive part of any biometric system. By themselves they are of little interest to fraudsters, since a photo cannot be reconstructed from them. However, a fraudster's own biometric template could be added to the database and then used for dishonest purposes. This scenario is unlikely, and yet it is right to take care of the security of the biometric template database. For example, in RecFaces biometric products, user data and biometric templates are stored in separate, independent stores in encrypted form.
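
Below is a minimal sketch of the "separate, encrypted stores" idea, using the symmetric Fernet scheme from the Python cryptography package. The key handling, store layout, and field names are illustrative assumptions, not a description of how RecFaces products actually store data.

```python
# Store personal data and biometric templates separately, with templates encrypted.
import json
from cryptography.fernet import Fernet  # pip install cryptography

template_key = Fernet.generate_key()    # in practice, kept in a key management system
cipher = Fernet(template_key)

profile_store: dict[str, dict] = {}     # personal data (name, department, ...)
template_store: dict[str, bytes] = {}   # encrypted biometric templates only

def enroll(user_id: str, name: str, template_vector: list[float]) -> None:
    """Save the profile and the encrypted template in two independent stores."""
    profile_store[user_id] = {"name": name}
    payload = json.dumps(template_vector).encode()
    template_store[user_id] = cipher.encrypt(payload)

def load_template(user_id: str) -> list[float]:
    """Decrypt a template only when it is actually needed for matching."""
    return json.loads(cipher.decrypt(template_store[user_id]))

enroll("u-001", "Jane Doe", [0.12, -0.34, 0.56])
print(load_template("u-001"))
```

Keeping the two stores independent means that leaking one of them reveals neither a person's identity together with their template nor a usable plaintext template.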

Use of deepfake recognition algorithms in incident investigation

Modern biometric solutions can analyze not only live video streams from cameras but also archived video files, including identifying potential deepfakes in the recordings. Such a function is provided, for example, in Id-Guard, the RecFaces biometric solution for video surveillance systems. This capability is useful when investigating serious incidents.
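
A rough sketch of offline scanning is shown below, assuming some external detection model exposed here as a hypothetical deepfake_score() function. Only the frame iteration with OpenCV is standard; everything else is a placeholder and not the Id-Guard implementation.

```python
# Scan an archived video file and flag frames a (hypothetical) model considers suspicious.
import cv2

def deepfake_score(frame) -> float:
    """Placeholder for a real deepfake detection model; returns a score in [0, 1]."""
    raise NotImplementedError("plug in an actual detector here")

def scan_archive(path: str, threshold: float = 0.8, step: int = 25) -> list[float]:
    """Return timestamps (in seconds) of sampled frames scoring above the threshold."""
    capture = cv2.VideoCapture(path)
    fps = capture.get(cv2.CAP_PROP_FPS) or 25.0
    suspicious, index = [], 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if index % step == 0 and deepfake_score(frame) >= threshold:
            suspicious.append(index / fps)   # keep the timestamp for investigators
        index += 1
    capture.release()
    return suspicious
```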

Today, the development and improvement of deepfake protection technologies is one of the priority areas of global cybersecurity. DARPA, the US Defense Advanced Research Projects Agency, has been developing software to identify fake images in photos and videos for several years. Facebook and Microsoft have held the Deepfake Detection Challenge, in which thousands of developers test their methods of detecting deepfakes.

For biometrics, the fight against deepfakes is a serious challenge, and the answer to it will be an even more dynamic development of the industry. So far, the standoff between biometrics and generated content technologies resembles a running race: the more sensitive and refined face recognition algorithms become, the more sophisticated the fraudsters' attempts to circumvent them. At the same time, it becomes ever harder for deepfake creators to find “gaps” in biometric algorithms. According to forecasts, the accuracy of deepfake recognition products will reach 70% by 2024 and 90% by 2030. This means that in the near future, creating deepfakes to deceive biometric algorithms will hardly be worthwhile, because the process will become too time-consuming and complex.
