Truth or lie: the truth about hacking biometrics
Biometric face identification is used to solve problems in many fields: banking, industry, transport, retail and beyond.
But whatever the field of application, the main goal of facial biometrics is to raise the level of physical security. In other words, the technology plays the role of a kind of «lock». And where there is a lock, there will always be a cracker who wants to test its strength: for petty fraud, grand theft or just for the sport of it. That is why stories continually appear in the media about how yet another «tinkerer» managed to deceive a biometric algorithm. Is facial recognition technology really so vulnerable to hacking? Let's figure out what kinds of attacks biometric systems face, and what the technology can set against attackers.
1. User Database Theft
Theft of a biometric database is one of the most common fears among users. And this fear is understandable, because biometrics are inalienable: unlike a compromised password, we cannot change them.
One of the most high-profile incidents occurred in 2018 in India, where unknown attackers stole the database of more than a billion users of the national biometric system Aadhaar. Curiously, the fraudsters priced access to the stolen information at only $8. It was for this amount that journalists of the local newspaper The Tribune managed to buy access to the database. Moreover, the «deal» was carried out over WhatsApp.
In fact, most concerns about breaches of biometric databases prove groundless. Yes, you cannot change your biometrics in case of theft. But fraudsters usually cannot use the stolen information either. Modern biometric systems apply strict depersonalization to each data set and use distributed storage. Photos of people, biometric templates, personal data: all of this is stored in separate databases in encrypted form. A biometric template itself is essentially just 1 KB of abstract data, a set of bits and bytes from which the neural network builds a vector in a multidimensional space. It is impossible to reconstruct a person's photo or any other information from it. Even after stealing a biometric terminal, intruders would have to spend years decrypting the databases. It is an incredibly time-consuming and resource-intensive task, a case where no end can justify the means.
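To make the «vector in a multidimensional space» idea concrete, here is a minimal, purely illustrative sketch in Python. The 256-dimensional embedding, the 0.6 threshold and the helper names are assumptions invented for the example, not details of any real product: a template is just a normalized array of numbers compared by cosine similarity, with no way to run it backwards into a photo.

```python
import numpy as np

def to_template(embedding: np.ndarray) -> np.ndarray:
    """L2-normalize a face embedding into a compact template.

    A 256-dimensional float32 vector occupies exactly 1 KB:
    abstract numbers, not a recoverable photo of the person.
    """
    return (embedding / np.linalg.norm(embedding)).astype(np.float32)

def match(a: np.ndarray, b: np.ndarray, threshold: float = 0.6) -> bool:
    """Compare two templates by cosine similarity (dot product of unit vectors)."""
    return float(np.dot(a, b)) >= threshold

# Simulated captures: two noisy readings of the same face embedding
rng = np.random.default_rng(0)
face = rng.normal(size=256)
capture_1 = to_template(face + rng.normal(scale=0.1, size=256))
capture_2 = to_template(face + rng.normal(scale=0.1, size=256))
stranger = to_template(rng.normal(size=256))

print(capture_1.nbytes)               # 1024 bytes, i.e. 1 KB
print(match(capture_1, capture_2))    # same face: True
print(match(capture_1, stranger))     # different face: False
```

Note the asymmetry this sketch demonstrates: two captures of the same face land close together in the vector space, while even a perfect copy of the stored template tells an attacker nothing about what the face looks like.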
What to look for:
It is generally believed that the main threat of biometric data theft comes from the companies that work with such data: banks, shops, business centers, etc. In reality, everything is much simpler. The main source of vulnerability is the person. Without noticing it, we constantly give away our main biometric data, voice and face: when we post vacation selfies on the Internet, change profile pictures, record stories on social networks or in popular messengers, even when we strike up a dialogue with phone scammers. Our social networks are a huge library of open data. And in the case of public figures, collecting information is even easier: the Internet holds millions of high-quality photos and videos of actors, politicians and singers. To get this data, there is no need to hire a team of hackers, develop an algorithm for breaking into a bank's database, or spend time on decryption. So before fearing «abstract» breaches and hacks, it is worth asking yourself: have I secured my own data well?
2. Spoofing
Spoofing is an attack using fake data, in which one person impersonates another at the moment of verification. This attack vector arouses the greatest interest among cybercriminals. Depending on the class and complexity of the algorithm, high-resolution photos (on paper or on a screen), paper or silicone masks, and deepfake videos are used for spoofing.
As soon as facial recognition algorithms appeared in smartphones, attempts to hack the technology were immediately recorded. In 2018, for example, a Forbes journalist put biometric protection to the test. He created a plaster 3D model of his head and used it to try to unlock phones from different manufacturers: Samsung, LG and Apple. Only the Apple gadget withstood the test.
Any biometric algorithm is based on a neural network. And no matter how well «trained» it is, the likelihood of error (falsely denying access to the right person or falsely granting it to someone else) still persists. However, every year neural networks become smarter and algorithms more accurate. If a few years ago it was possible to deceive a system with a medical mask, a false mustache or theatrical makeup, today such methods are simply useless, including when it comes to the security of our gadgets. Initially, smartphones and laptops were equipped with sensors that performed 2D identification. Those really could be «hacked» with a photo. But step by step, 2D was replaced by technology using 3D facial imaging. Here, to deceive the system, you already need to create a high-precision deepfake or a volumetric mask.
It is also important to understand that most of the flashy media headlines about the «unreliability of biometrics» concern, so to speak, «everyday» hacks: usually smartphones, tablets or laptops. Does this mean that the biometric algorithms in an iPhone or a Samsung are worse than those in a bank? Well, yes and no. From an academic point of view, such systems are quite accurate. In general, the TOP-50 biometric algorithms in the world according to NIST (the US National Institute of Standards and Technology) differ from each other by hundredths, or even thousandths, of a percent. The real question is how these accurate algorithms are adapted to user needs and security standards. A bank has higher security requirements, so customer convenience can be sacrificed for it, and clients understand this: with biometric authentication in a banking application, we are subconsciously prepared to wait longer. For smartphone manufacturers, verification speed matters more, so that users do not complain that the phone freezes while slowly recognizing their face. Biometric systems there are therefore of the «mass market» level: good for everyday use, but more error-prone than the more serious ones.
The accuracy of a system also depends on the conditions in which it is meant to be used. No algorithm can be deceived with a snap of the fingers. Any attack requires serious preparation, a clearly developed methodology and certain competencies. But above all, it requires privacy. You can experiment as much as you like with plaster heads, realistic 3D masks, makeup and so on (which, incidentally, is exactly what developers of biometric solutions do when testing their own products). However, most methods are simply unworkable in real conditions. Of course, you can try to pass the access control system (ACS) at the checkpoint of a business center using a deepfake on a smartphone. But such a maneuver will be immediately noticed by other people and by security officers. Remote banking operations are another matter. Here, the requirements for the algorithms are accordingly higher.
- Environment check
Deepfake technology is an invention of moviemakers. For decades it was used in film production, and only fairly recently became popularized. Anyone can create a deepfake today. No special knowledge or skills are required: it is enough to use one of the dedicated programs. Creating a higher-quality deepfake requires additional investment; on average, the price ranges from 300 to 3,000 euros. But merely convincing the biometric system of the authenticity of such a model is not enough. That is only part of the battle. Modern protection against deepfakes focuses not only on the face itself, but also on the environment: textures, background light, glare, the presence of foreign objects. Even if you bring a high-class deepfake to a biometric terminal, one whose characteristics match the biometric template in the database, the system will still understand that it is not a living person but a tablet with an image, simply because of inconsistencies between the real background and the background in the picture.
- Color spectrum check
On the Internet you can find a life hack: to deceive biometrics, use a photo taken without an infrared filter (in night shooting mode) or printed without black ink. However, in such a picture the face looks unnatural, with whitish pupils and colorless eyebrows. Most modern biometric terminals easily recognize the substitution. This method therefore works only against very simple camera models with a limited spectral range.
In general, 90% of biometric terminals today can analyze images in several spectra at once: infrared, 3D infrared and the ordinary visible spectrum. During the check, they compare how well the color balance of the visible-light image corresponds to the IR picture.
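As a rough illustration of that cross-spectrum comparison (a toy heuristic, not how any production terminal actually works): a live face radiates heat, so its IR intensity should correlate with the luminance of the visible-light image, while a photo on paper or a screen looks roughly uniform in IR. The function name and the 0.5 correlation cutoff are invented for this example.

```python
import numpy as np

def spectra_consistent(visible_rgb: np.ndarray, infrared: np.ndarray,
                       min_corr: float = 0.5) -> bool:
    """Toy cross-spectrum check: does the IR picture track the
    luminance of the visible image, as it would for a live face?"""
    luminance = visible_rgb.mean(axis=-1)  # crude grayscale from RGB
    corr = np.corrcoef(luminance.ravel(), infrared.ravel())[0, 1]
    return corr >= min_corr

# Simulated 32x32 capture of a live face: IR intensity follows luminance
rng = np.random.default_rng(1)
pattern = rng.random((32, 32))
visible = np.stack([pattern] * 3, axis=-1)
live_ir = pattern + rng.normal(scale=0.05, size=(32, 32))

# A printed photo looks the same in visible light but flat in IR
flat_ir = 0.5 + rng.normal(scale=0.01, size=(32, 32))

print(spectra_consistent(visible, live_ir))  # True: IR matches the face
print(spectra_consistent(visible, flat_ir))  # False: uniform IR, likely a photo
```

Real terminals replace this single correlation score with trained models over several spectral channels, but the underlying question is the same: do all the spectra tell a consistent story about the same physical object?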
- Use of multiple biometric products
If we are talking about the physical security of a perimeter, then using several biometric products at the same time helps minimize the damage from a possible hack of any one algorithm: in the ACS, in video surveillance systems, and for authenticating users at workstations. In that case, even if a fraudster manages to deceive the ACS and enter the territory illegally, biometric video surveillance running in the background will still continuously check his right of access. And if that right is violated, sooner or later the security service will receive the corresponding notification.
- Multi-factor verification
For maximum security, it is recommended to use biometrics together with other factors for verifying a person's identity. Additional methods include passwords, SMS codes, PIN codes, access cards, identity documents, etc.
- Liveness check algorithms
The most important means of protection are the Liveness detection algorithms used in modern biometric products. Their task is to make sure that a living person is standing in front of the camera. Today, the market offers a huge number of variations of such algorithms, each with its own approach to verification. For example, there is contextual or frame analysis, when a neural network checks whether a human face appears in natural conditions or whether it is surrounded by the frame of a phone or tablet. Texture analysis is developing rapidly, where the system examines the position of shadows and glare on the face. There are also two-factor verification tools that require additional sensors: infrared, ultraviolet or 3D. IR sensors, for instance, neutralize the entire visible color spectrum on a person's face, including makeup or even stubble, while 3D lasers build a volumetric model of the face and analyze its accuracy.
A popular method is online (active) verification. In this case, the system asks a person to perform several actions: turn the head, smile, make a sad face, blink. What is analyzed here is not so much the execution of the commands as the naturalness and smoothness of facial movement, along with the uniformity of textures during that movement. The online Liveness check is the most commonly used one, yet it has two significant drawbacks. Firstly, because of the method's popularity, most deepfakes are aimed at deceiving precisely it. And secondly, complex checks are often inconvenient for the users themselves. The active mode is therefore gradually becoming outdated, giving way to more modern methods such as the texture analysis mentioned above.
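The active («online») flow described above can be sketched as a simple challenge-response loop. Everything here is hypothetical: the challenge names and the detector interface with its `respond` method are invented for illustration; a real product would drive camera frames through trained models rather than stub objects.

```python
import random

# Hypothetical challenge set for active liveness verification
CHALLENGES = ["turn_head_left", "turn_head_right", "smile", "blink"]

def active_liveness_check(detector, num_challenges: int = 3) -> bool:
    """Ask for random actions; pass only if every action is performed
    AND the facial motion between frames stays smooth and natural."""
    for challenge in random.sample(CHALLENGES, num_challenges):
        performed, motion_natural = detector.respond(challenge)
        if not (performed and motion_natural):
            return False
    return True

class LivePerson:
    """Stub: a real person performs every action with natural motion."""
    def respond(self, challenge):
        return True, True

class ReplayedDeepfake:
    """Stub: a pre-recorded video cannot follow randomly chosen commands."""
    def respond(self, challenge):
        return False, False

print(active_liveness_check(LivePerson()))        # True
print(active_liveness_check(ReplayedDeepfake()))  # False
```

The randomness of the challenge order is the whole point of the design: a pre-recorded or pre-rendered deepfake cannot know in advance which actions will be requested, which is exactly why attackers invest in real-time deepfakes aimed at this method.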
3. Database Forgery
The most vulnerable «link» in any security system is most often a person. In the case of biometric identification, this primarily means possible abuses by security officers.
Imagine you decide to equip the access control system in your office with facial recognition. You choose high-quality software from a proven manufacturer, with a top-end facial recognition algorithm, a Liveness check and secure databases. But at some point, an unscrupulous security guard or reception employee conspires with an attacker and enters his biometric profile into the general database. As a result, the fraudster does not need to deceive the algorithm at all: the system already takes him for «a right person».
Compliance with the principles of the Zero Trust security model helps minimize the human factor. Firstly, there is the principle of least privilege: any employee gets access only to the set of functions and data necessary for his work, in accordance with his position and access rights. For example, only the head of the security service, not ordinary guards, will have the right to create new profiles in the database, and even then the operation will have to be confirmed by someone from the company's top management. Secondly, there is continuous identity verification of the authorized person: when issuing a pass or performing other important actions, the security officer has to confirm his identity every time.
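The two Zero Trust rules above can be sketched as a simple permission check. The roles, action names and co-approval flag are invented for illustration; a real deployment would tie such checks to the ACS backend and an audit log.

```python
# Hypothetical role-to-permission mapping: each role holds only
# the actions its job requires (principle of least privilege).
PERMISSIONS = {
    "guard": {"open_door", "view_log"},
    "security_head": {"open_door", "view_log", "enroll_profile"},
}

def can_perform(role: str, action: str, approved_by_management: bool = False) -> bool:
    """Allow an action only if the role holds it, and additionally
    require management co-approval for enrolling new biometric profiles."""
    if action not in PERMISSIONS.get(role, set()):
        return False
    if action == "enroll_profile" and not approved_by_management:
        return False
    return True

print(can_perform("guard", "open_door"))               # True
print(can_perform("guard", "enroll_profile"))          # False: not privileged
print(can_perform("security_head", "enroll_profile"))  # False: needs co-approval
print(can_perform("security_head", "enroll_profile",
                  approved_by_management=True))        # True
```

Under this scheme, the corrupt guard from the scenario above simply has no code path for adding a profile, and even a corrupt head of security cannot act alone.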
You can also use facial biometrics itself to delineate access rights and to additionally verify employees' identities. Read more about how biometric technologies help implement the Zero Trust concept in a dedicated article on our blog.
Neural networks, and biometric technologies in general, are developing incredibly quickly. And although reports of biometrics scams still periodically appear in the media, every year the algorithms become less vulnerable to attack. It is no longer possible to deceive them with a good photo or a mask; doing so takes far more expertise, effort and, most importantly, money. The development of biometric systems thus calls into question the very economic viability of hacking them, in both money and time.