August 2, 2023

Over the past couple of years, iProov has witnessed a huge rise in a novel kind of face swap. This type is considerably more advanced than traditional face swaps: it is three-dimensional and more resistant to established detection techniques. Our Threat Intelligence Report found that the frequency of this novel threat grew sharply – by 295% from H1 to H2 2022.

Bad actors are using sophisticated generative AI technology to create and launch attacks in attempts to exploit organizations’ security systems and defraud individuals. Accordingly, we believe that awareness and understanding of deepfake technologies must be expanded and discussed more widely to counter these efforts. Without insight into the threat landscape, organizations will struggle to employ the appropriate verification technology to defend against these attacks.

This article explains what face swaps are and why they’re uniquely dangerous, and discusses solutions to the growing threat.

What Are Face Swaps?

Face swaps are a type of synthetic imagery created from two inputs: software takes an existing video or live stream and superimposes another identity over the original feed in real time.

The end result is a fake 3D video output merged from more than one face.

To summarize, face swaps boil down to a three-step process (see the code sketch after this list):

  1. Input one is video of the attacker.
  2. Input two is video of the target identity they are trying to impersonate.
  3. Software amalgamates the two inputs into one final output – i.e., a falsified 3D video of the targeted individual.
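
To make that pipeline concrete, here is a minimal, illustrative sketch of the classical detect-align-blend version of a face swap, using only OpenCV’s built-in Haar face detector and Poisson blending. Modern real-time deepfake swaps replace the crude resize-and-blend step with a generative neural network, but the overall shape is the same. The function names are our own; this is a sketch, not production tooling.

```python
# A minimal sketch of the classical face swap pipeline: detect a face in
# each input, align the target face to the attacker's face region, and
# blend it in. Deepfake-era swaps replace the resize-and-blend step with
# a generative model, but the detect-align-blend structure is the same.
import cv2
import numpy as np

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def largest_face(frame):
    """Return the largest detected face box (x, y, w, h), or None."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return max(faces, key=lambda f: f[2] * f[3]) if len(faces) else None

def swap_frame(attacker_frame, target_frame):
    """Overlay the target identity's face onto the attacker's frame."""
    src = largest_face(target_frame)    # input two: the impersonated identity
    dst = largest_face(attacker_frame)  # input one: the attacker
    if src is None or dst is None:
        return attacker_frame
    x, y, w, h = src
    face = cv2.resize(target_frame[y:y + h, x:x + w], (dst[2], dst[3]))
    mask = 255 * np.ones(face.shape[:2], dtype=np.uint8)
    center = (int(dst[0] + dst[2] // 2), int(dst[1] + dst[3] // 2))
    # Poisson blending hides the seam between the two inputs.
    return cv2.seamlessClone(face, attacker_frame, mask, center,
                             cv2.NORMAL_CLONE)
```

Run frame by frame over a live camera feed, this is what gives face swaps the real-time property discussed below: the output face follows the attacker’s movements.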

A face matcher without adequate defenses in place would identify the output as the genuine individual.

A face swap attack refers specifically to the use of such synthetic imagery, alongside a chosen deployment methodology (such as man-in-the-middle or camera bypass), to launch a targeted attack on a system or organization.

Why Should You Be Concerned About Face Swap Attacks?

Criminals can use face swaps to commit crimes such as new account fraud, account takeover fraud, or synthetic identity fraud. You can imagine how effective a face swap could be during an online identity verification process, as a fraudster can control the actions of the output face at will. What makes face swaps especially dangerous is that they can be used in real time.

Picture this: a fraudster needs to pass a video call verification check. A traditional pre-recorded or 2D deepfake would be useless here, because it couldn’t be used to answer questions in real time. Using a live face swap, however, a criminal could pass the check by mapping their own movements (and even speech) onto imagery of the genuine person they’re impersonating – producing a synthetic output that fools the verification process.

Let’s consider a few additional issues associated with novel face swap attacks:

  • Face swap attacks are increasing in frequency: Up 295% from H1 to H2 in 2022 alone. This growth rate indicates that low-skilled criminals are gaining access to the resources necessary to launch sophisticated attacks.
  • Crime-as-a-Service is growing: The availability of online tools is accelerating the evolution of the threat landscape, enabling criminals to launch advanced attacks faster and at larger scale. When attacks succeed, they rapidly escalate in volume and frequency as they are shared amongst established crime-as-a-service networks or on the dark web, amplifying the risk of serious damage.
  • Manual intervention is no longer effective: Although 57% of global consumers believe they can successfully spot a deepfake, one study found that only 24% of people actually can. We would expect such results to vary widely with the quality of the deepfake and the training of the viewer, but many deepfakes today are genuinely indistinguishable from real imagery to the human eye.

How Are Face Swap Attacks Launched?

Face swap attacks are delivered via digital injection techniques in an attempt to spoof a biometric authentication system.

In a digital injection attack, the imagery is injected directly into an application or a network server connection, bypassing the camera sensor entirely.

A recorded video could be held up to a camera (which is referred to as a presentation attack), but we would not classify this as a face swap attack. A face swap uses software to apply a false digital representation of a person’s face (in whole or in part) and overlay it on the actor’s own face. This is delivered by digital injection.

Digital injection attacks are the most dangerous deployment method because they are highly scalable and replicable. While presentation attack detection can be accredited through programs such as NIST FRVT and iBeta, no such testing exists for the detection of digital injection attacks – so organizations are advised to do their own research on how vendors mitigate this growing attack methodology and keep users safe on an ongoing basis.
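
What does mitigation look like in practice? Vendors do not publish their detection stacks, but a toy example makes the idea of device-level signals concrete. The sketch below is entirely our own illustration: it lists V4L2 video devices on Linux and flags reported names commonly associated with virtual cameras (such as v4l2loopback, which tools like OBS use to inject frames). A determined attacker can trivially evade a name check; real systems layer many device-, network-, and imagery-level signals.

```python
# Illustrative only: one weak device-level signal for injection detection
# on Linux. Enumerate V4L2 capture devices via sysfs and flag reported
# names associated with virtual cameras. The SUSPECT_NAMES list is an
# assumption for this sketch, not any real vendor's detection logic.
from pathlib import Path

SUSPECT_NAMES = ("v4l2loopback", "obs", "virtual", "dummy")

def find_suspect_cameras():
    """Return (device path, reported name) pairs that look virtual."""
    base = Path("/sys/class/video4linux")
    if not base.is_dir():  # non-Linux host: nothing to inspect
        return []
    hits = []
    for dev in sorted(base.glob("video*")):
        name = (dev / "name").read_text().strip()
        if any(s in name.lower() for s in SUSPECT_NAMES):
            hits.append((f"/dev/{dev.name}", name))
    return hits

if __name__ == "__main__":
    for device, name in find_suspect_cameras():
        print(f"Possible virtual camera: {device} ({name})")
```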

Biometric Face Verification Solutions Must Defend Against Face Swap Attacks

As more and more activities move online and digital transformation and digital identity projects mature, the need for strong user verification and authentication will only grow.

The truth is that traditional verification methods have failed to keep users safe online. You cannot trust data alone as confirmation of who someone is. Passwords can be cracked, stolen, lost, or shared. OTPs can be intercepted. Video call verification can be spoofed and relies on manual judgment, which can no longer reliably distinguish genuine from synthetic imagery.

So, biometric face verification has emerged as the only secure and convenient method of verifying user identity online. The crux, however, is that not all biometric face verification solutions are created equal.

Biometric solutions are differentiated by how successfully they can establish liveness and provide an inclusive user experience (across age, gender, ethnicity, cognitive ability, and so on). For more information on liveness and the different biometric face verification technologies on the market, alongside their key differentiators, read our Demystifying Biometric Face Verification ebook here.

As we’ve highlighted in this article, biometric face verification faces serious and growing security threats (as does any identity assurance technology). When choosing a biometric solution, you must understand these challenges in order to employ the appropriate verification technology.

Let’s consider a few key factors when choosing a biometric vendor that defends against face swaps:

  • Ongoing and evolving security: Given the transformative nature of generative AI, and the scalability of digital injection attacks, biometric security must learn from threats on an ongoing basis and be actively managed 24/7.
  • Passive authentication: Our threat report found that active, motion-based verification systems – which ask users to smile, nod, or blink as indicators of liveness – were more frequently targeted by face swap attacks. Advanced synthetic imagery such as a face swap can perform these actions in real time, making active systems more susceptible to the growing threat. Passive biometric systems are therefore recommended.
  • Digital injection attack protection: While many liveness detection solutions can detect presentation or replay attacks that are presented to the camera, most cannot detect attacks that have been digitally injected into a system. Digital injection attack mitigation is essential.

Novel Face Swap Attacks and Biometric Face Verification: A Summary

The technology to create deepfakes is continually getting better, cheaper, and more readily available. That’s why deepfake protection will only become more crucial as the threat grows and awareness of the dangers spreads.

Organizations need to evaluate their verification solutions for resilience in the face of complex attacks such as face swaps. For more information about face swaps and the evolving threat landscape, read our latest report, The “iProov Threat Intelligence Report 2024: The Impact of Generative AI on Remote Identity Verification”. Inside, we illuminate the key attack patterns witnessed throughout 2023. The first of its kind, it highlights previously unknown in-production patterns of biometric attacks, helping organizations make informed decisions on which technology and what level of security to deploy. Read the full report here.