Deepfake: everything you need to know about what it is & how it works

Over the past few decades, AI has developed rapidly. It has applications in countless fields and has improved the quality and efficiency of many industries. But not every AI product has a positive impact on society: sometimes a technology is created and later abused by criminals. One such technology is the deepfake.
AI-based programs are trained to replace a person's face with that of another individual, either in an image or in a video. The victims of such manipulation are very often famous people, celebrities, and politicians. This article explains what the technology is based on and which methods exist to detect it.
Table of Contents
- What Is a Deepfake?
- How Does Deepfake Work?
- Autoencoders
- GANs
- How Dangerous Are Deepfakes?
- How Does Deepfake Affect Our Society?
- Is Deepfake Legal?
- How Are Deepfakes Used?
- Blackmail
- Politics
- Art
- Acting
- Movies
- Social media
- Sockpuppets
- How to Detect Deepfakes
- How Do We Fight Deepfakes Today?
- Can Biometrics Help with Deepfakes?
- Summary
- FAQ
- How is Deepfake made?
- Who created Deepfake?
- How can you tell if it is a Deepfake?
- What is Deepfake in AI?
- How many Deepfakes are there?
What Is a Deepfake?
The word “deepfake” is quite clearly a combination of two everyday words, “deep” and “fake”. “Deep” refers to the AI technology involved, known as deep learning. Deepfake technology is used to create falsified synthetic media: it replaces or synthesizes faces and speech and manipulates expressed emotions, making a person appear to do or say something he or she never did. The first steps towards this technology were taken in the 1990s by academic institutions, and it was later adopted by a wider audience. Although creating deepfake programs is not a mainstream activity, the concept has nonetheless managed to make a lot of noise in the media space. In the next section, we will explain how deepfake content is developed.
How Does Deepfake Work?
There are several techniques for creating deepfake software, all of them based on machine learning algorithms. Simply put, these are algorithms that can generate new content based on the data they are given. If a program is tasked with creating a new face or replacing part of a person's face, it must first be trained: it is fed a vast amount of data, which it then uses to learn to generate new data of its own. Deepfake systems are primarily based on autoencoders and sometimes on generative adversarial networks (GANs). Let's look at what these methods are and how they work.
Autoencoders
Autoencoders are a family of self-supervised neural networks, mainly used for dimensionality reduction, that learn to copy their own input. They are data-specific, which means an autoencoder can only compress data similar to what it has been trained on. The output is also never perfectly identical to the input, because it is reconstructed from a compressed representation. An autoencoder consists of three components: an encoder, a code, and a decoder. The encoder compresses the input data and produces the code, after which the decoder reconstructs the input based only on that code. There are various types of autoencoder: denoising autoencoders, deep autoencoders, contractive autoencoders, convolutional autoencoders, etc.
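To make the encoder, code, and decoder components more concrete, here is a minimal sketch of an autoencoder in PyTorch. The layer sizes, the flattened 64x64 face-crop input, and the single training step are illustrative assumptions, not the architecture of any particular deepfake tool.

```python
# Minimal autoencoder sketch: encoder -> code -> decoder (illustrative sizes).
import torch
from torch import nn

class Autoencoder(nn.Module):
    def __init__(self, input_dim=64 * 64, code_dim=128):
        super().__init__()
        # Encoder: compresses the input into a small "code" vector.
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 512), nn.ReLU(),
            nn.Linear(512, code_dim),
        )
        # Decoder: reconstructs the input using only the code.
        self.decoder = nn.Sequential(
            nn.Linear(code_dim, 512), nn.ReLU(),
            nn.Linear(512, input_dim), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = Autoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Dummy batch standing in for flattened 64x64 grayscale face crops.
batch = torch.rand(32, 64 * 64)

optimizer.zero_grad()
reconstruction = model(batch)
loss = loss_fn(reconstruction, batch)  # how far the output is from the input
loss.backward()
optimizer.step()
```

Face-swap tools are commonly described as extending this idea by training one shared encoder with a separate decoder per identity; swapping the decoders at inference time is what produces the face replacement.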
GANs
A GAN is an approach to generative modeling: it learns from an input dataset in order to generate new data. The system consists of two competing neural networks: a generator and a discriminator. The generator discovers regularities or patterns in the input dataset and learns to reproduce them. The generated data is then passed to the discriminator together with real data for evaluation, and the generator's purpose is to fool the discriminator. The system is trained until the discriminator can no longer reliably tell the generated data from the real data; the harder the generated data is to distinguish from the real data, the better the system is trained. GANs are harder to train and require more resources, and they are more often used for generating photos than video.
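Below is a minimal sketch of a single GAN training step in PyTorch, just to illustrate the generator-versus-discriminator game described above. The tiny fully connected networks and the random stand-in for "real" data are assumptions for illustration, not a production face generator.

```python
# Minimal GAN training step: the generator tries to fool the discriminator.
import torch
from torch import nn

latent_dim, data_dim = 16, 64 * 64

generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, data_dim), nn.Tanh(),
)
discriminator = nn.Sequential(
    nn.Linear(data_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCELoss()

real = torch.rand(32, data_dim) * 2 - 1   # stand-in for real face images
noise = torch.randn(32, latent_dim)
fake = generator(noise)                    # generated "fake" samples

# 1) Train the discriminator to label real data as 1 and generated data as 0.
d_opt.zero_grad()
d_loss = bce(discriminator(real), torch.ones(32, 1)) + \
         bce(discriminator(fake.detach()), torch.zeros(32, 1))
d_loss.backward()
d_opt.step()

# 2) Train the generator to make the discriminator output 1 for fakes.
g_opt.zero_grad()
g_loss = bce(discriminator(fake), torch.ones(32, 1))
g_loss.backward()
g_opt.step()
```

In practice this step is repeated over many batches until the discriminator's accuracy hovers around chance, which is the sign that the generated data has become hard to distinguish from the real data.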
How Dangerous Are Deepfakes?
Deepfake is considered to be one of the most dangerous applications of AI, since most of its real-world uses so far have had discrediting or fraudulent intentions. One of the first cases of fraud involving a deepfake targeted the CEO of a UK-based energy company: scammers called him and, faking the voice of his boss at the firm's German parent company, ordered him to transfer €220,000 to a third-party bank account.
The end result of a deepfake can be indistinguishable from reality. It can undermine a person's reputation or make it appear that someone said or did something they never did, which makes it a perfect weapon in the hands of scammers. Creating a system that recognizes deepfakes takes time, because such programs also need to be trained.
How Does Deepfake Affect Our Society?
Falling into the wrong hands, deepfakes can lead to chaos and uncertainty. In 2018, a video circulated on WhatsApp in India appeared to show two men on a motorbike kidnapping a child. The video triggered mass panic in the population that claimed the lives of several people.
Our society is built on the authority of leaders, stars, and influencers. Misinformation featuring them can also change the mood of the masses and push people to act, and things like this have already happened: on multiple occasions, deepfake porn videos featuring celebrities have appeared on the internet.
The topic has even engaged several US presidents. Fake news involving political leaders can undermine the reputation of a country's ruling power and lead to a loss of confidence in it.
Is Deepfake Legal?
Since deepfakes only began to spread in the last few years, laws regarding their use have not yet caught up with the technology, and in many countries they are not regulated at all. One country where there is a law on the use of deepfakes is China: the Cyberspace Administration of China announced that fake news created using deepfakes is illegal. In the US, a number of states have laws regarding deepfake porn, while other legislation prohibits deepfake content that targets candidates running for public office.
How Are Deepfakes Used?
The spread of deepfakes began with fake porn videos, and many female celebrities fell victim to deepfake porn, among them Daisy Ridley, Jennifer Lawrence, Emma Watson, and Gal Gadot. The topic also affected women associated with the leaders of different countries, such as Michelle Obama, Ivanka Trump, and Kate Middleton. In 2019, a desktop application called DeepNude was released that could digitally remove clothing from photos of women. It was taken down some time later, but copies of the app can still be found floating around in cyberspace.
The next group of people affected by deepfakes is politicians. There have been videos of President Obama insulting President Trump, a video in which Nancy Pelosi's speech was processed to make the audience believe that she was drunk, and another in which President Trump appeared to mock Belgium for its membership in the Paris Climate Agreement.
It should be noted that there are also legitimate uses. You can find plenty of face-swapping apps for photos and videos in the App Store and on Google Play, and some believe deepfakes are the future of content creation. The South Korean channel MBN, for example, used a deepfake to replace its news anchor.
Blackmail
Deepfake is a very sinister tool in the hands of those who seek to blackmail or denigrate others. Our society, especially the older generation, is not yet fully aware that technology exists that can replace a face in a video. Raffaela Spone from Pennsylvania shared deepfake videos that appeared to show members of her daughter's cheerleading squad naked, drinking, and smoking, in an attempt to eliminate her daughter's competitive rivals. Fortunately, the victims' families contacted the police, and her plan was thwarted.
Politics
As already mentioned, politicians are among those who most often become the subject of deepfake videos. Compromising videos involving candidates in elections can cause serious damage to a candidate's rating.
In 2019, media artists Francesca Panetta and Halsey Burgund from MIT created a deepfake video starring former US president Richard Nixon. In it, Nixon announces the failure of the Apollo 11 mission, stating that the crew did not return from the Moon. The speech itself was real: a contingency address prepared in case exactly such a scenario unfolded. The artists worked on the video for six months with a group of specialists, aiming to make the material as close to the real thing as possible in order to show the potential of deepfake technology. Their website provides interactive educational resources that help users understand what deepfakes are and how to identify them.
Art
One popular use of deepfakes in art is making famous portraits talk; researchers have done this with da Vinci's Mona Lisa. The Dalí Museum in Florida recreated its namesake from archival video footage to attract visitors.
Acting
In films, it is quite often necessary to make up or digitally alter the faces of actors. In “Rogue One: A Star Wars Story”, face-replacement technology of this kind was used to recreate the faces of Princess Leia and Grand Moff Tarkin.
Movies
Disney Research Studios is working on its own deepfake visual-effects technology, which should significantly reduce the time and money spent on recreating a desired face. The problem is that, so far, it produces high-quality results only at low resolutions; getting a satisfactory result at higher resolutions requires much more effort. This is the same kind of face-replacement technology as that used in the Star Wars movie mentioned above.
Social media
While some social media sites like Facebook and Twitter fight deepfakes and ban synthetic media through policy updates, others are embracing the technology. Snapchat has offered face-swapping camera features since 2016, and TikTok has adopted a feature that lets users swap faces in videos.
Sockpuppets
Earlier in the article, we mentioned how a South Korean channel used a deepfake to replace its news anchor. The same technology has been used for similar purposes but with a fully generated character. One of these characters was Oliver Taylor, supposedly a student at the University of Birmingham. His social media profiles showed that he grew up in a Jewish family and was actively involved in campaigning against anti-Semitism. He attracted attention when he accused the London academic Mazen Masri and his wife of sympathizing with terrorists. Who was behind the profile was never revealed, as the investigation did not uncover enough information. In 2017, a group of researchers and entrepreneurs founded Synthesia, a company whose AI-powered software creates audiovisual synthetic media and allows its clients to produce realistic videos with fictional characters.
How to Detect Deepfakes
Technologies that recognize deepfakes are also built on artificial intelligence, using algorithms similar to those that create the deepfakes themselves. They look for signs that would not appear in genuine photos or videos. Early on, unnatural eye blinking, or its complete absence, was a good indicator of a deepfake, but over time the systems learned to fake blinking as well. Telltale signs include the following (a rough heuristic sketch for one of them follows the list):
- Unnatural skin color or shifts in skin tone;
- Jerky movements;
- Poor synchronization of speech with lip movement;
- Faces or figures of the person that are blurrier than the background;
- Lighting problems;
- Additional pixels in the frame.
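As a very rough illustration of one sign from the list, the face that is blurrier than the background, here is a hedged sketch that compares the sharpness of a detected face with the sharpness of the whole frame using OpenCV. The 0.5 threshold and the file name frame.jpg are arbitrary assumptions; this is a single-frame heuristic, not a validated deepfake detector.

```python
# Heuristic check: is the detected face much blurrier than the whole frame?
import cv2

def face_vs_frame_sharpness(image_path):
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if img is None:
        return None  # file missing or unreadable
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(img, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None  # no face found in this frame
    x, y, w, h = faces[0]
    face = img[y:y + h, x:x + w]
    # Variance of the Laplacian is a common, simple sharpness measure.
    face_sharpness = cv2.Laplacian(face, cv2.CV_64F).var()
    frame_sharpness = cv2.Laplacian(img, cv2.CV_64F).var()
    return face_sharpness / (frame_sharpness + 1e-6)

ratio = face_vs_frame_sharpness("frame.jpg")
if ratio is not None and ratio < 0.5:
    print("Face is much blurrier than the background - worth a closer look.")
```

Real detection systems combine many such cues, and usually learned features rather than hand-written rules, across many frames of a video.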
How Do We Fight Deepfakes Today?
Several leading technology companies are already developing their own defenses against deepfake content. Microsoft and Google provide datasets that developers can use to train systems to detect deepfakes. Facebook partnered with Microsoft, Amazon Web Services, and leading universities worldwide to launch the Deepfake Detection Challenge, a competition for deepfake video detection solutions with a $500,000 prize for the winner; the materials, which involved approximately 3,500 actors, were open-sourced so that other researchers could use them. Within the framework of the MediFor project, DARPA has signed contracts with SRI International aimed at creating programs able to find altered photos and videos. Sensity provides its own solution for companies that want to protect themselves from deepfakes, and Operation Minerva is a system that finds deepfake porn on popular porn sites and sends notices to remove the content.
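As an illustration of how such public real/fake datasets can be put to work, here is a hedged sketch that fine-tunes a pretrained image classifier on face crops sorted into "real" and "fake" folders, using PyTorch and torchvision. The folder layout, model choice, and hyperparameters are assumptions for illustration, not the setup used by any of the projects named above.

```python
# Fine-tune a pretrained CNN as a binary real/fake frame classifier (sketch).
import torch
from torch import nn
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
# Assumed layout: dataset_root/real/*.jpg and dataset_root/fake/*.jpg
dataset = datasets.ImageFolder("dataset_root", transform=transform)
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)  # two classes: real / fake

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
```

A production detector would add a held-out validation split, face detection and cropping as a preprocessing step, and aggregation of per-frame scores into a per-video verdict.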
Can Biometrics Help with Deepfakes?
Biometrics are the physical characteristics of our bodies, and knowing a person's biometrics allows us to identify that individual. The key to using biometrics to uncover deepfakes may lie in behavioral biometrics and in another AI-based technology: facial recognition.
Summary
Deepfake is a very young and promising technology. Humanity is still getting acquainted with it and has not yet found its full application within our society. Like many technologies, it has its advantages and disadvantages. It could harm or improve our world. We will need time to understand how to make the most out of it in different industries. Over time there will be many ways to control it, as there have been for other innovations in the past. Here's what you need to take home from this article to understand this technology properly:
- What is a deepfake, and how does it work?
- What are the technologies behind deepfake?
- What threats could it carry, and how might it affect our world?
- Is it necessary to regulate it by law and to restrict its usage?
- How can we detect its usage and fight against its detrimental usage?
FAQ
How is Deepfake made?
A deepfake is made using AI technologies: a program is trained to replace or synthesize faces and speech and to manipulate expressed emotions, making a person appear to do something he or she never did. There are programs created specifically for this purpose, as well as programs with a wide range of functionality that includes generating deepfake content.
Who created Deepfake?
Deepfake doesn't have a single concrete inventor. It was developed by researchers at academic institutions and later adopted by the wider masses. There are experts who are known for creating deepfakes, such as Chris Ume, who created a deepfake starring Tom Cruise.
How can you tell if it is a Deepfake?
Some deepfakes are made very poorly; in that case, it is easy to spot them without specialized software. When it is harder to tell, there are AI-based tools available: programs trained to recognize inaccuracies in visual content. We also suggest trying one of the online simulators where you can check how well you are able to recognize deepfakes yourself.
What is Deepfake in AI?
Deepfake is a technology based on AI's deep learning algorithms, primarily autoencoders and generative adversarial networks. These work differently, but the goal is the same: they analyze large amounts of data and, based on it, learn to generate similar data.
How many Deepfakes are there?
No one can say for sure. In October 2019, CNN published an article stating that there are at least 14,678 deepfake videos online. At the moment, no technology can uncover all the deepfake content on the Internet.