Generative AI

Generative AI refers to artificial intelligence systems that can generate new content, such as images, video, audio, and text, rather than simply analyzing existing data. It uses machine learning models, such as Generative Adversarial Networks (GANs) and diffusion models, trained on large datasets to produce novel outputs that mimic the patterns in the training data.
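
To make this concrete, the sketch below shows the adversarial training loop at the heart of a GAN: a generator learns to produce samples that fool a discriminator, while the discriminator simultaneously learns to tell real data from generated data. This is a minimal illustration assuming PyTorch, with toy model sizes and random stand-in data in place of a real image dataset.

```python
# Minimal GAN sketch (PyTorch): a generator learns to produce samples
# that a discriminator cannot distinguish from real training data.
import torch
import torch.nn as nn

LATENT_DIM, IMG_DIM = 64, 28 * 28  # toy sizes; production models are far larger

generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_DIM), nn.Tanh(),  # outputs scaled to [-1, 1]
)
discriminator = nn.Sequential(
    nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),  # single real-vs-fake logit
)

loss_fn = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

real_batch = torch.rand(32, IMG_DIM) * 2 - 1  # stand-in for real images

# Discriminator step: push real images toward label 1, generated toward 0.
fake_batch = generator(torch.randn(32, LATENT_DIM)).detach()
d_loss = (loss_fn(discriminator(real_batch), torch.ones(32, 1))
          + loss_fn(discriminator(fake_batch), torch.zeros(32, 1)))
d_opt.zero_grad()
d_loss.backward()
d_opt.step()

# Generator step: try to make the discriminator label fakes as real (1).
g_loss = loss_fn(discriminator(generator(torch.randn(32, LATENT_DIM))),
                 torch.ones(32, 1))
g_opt.zero_grad()
g_loss.backward()
g_opt.step()
```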

Common generative AI applications include creating deepfakes (e.g., face swaps or voice cloning), photorealistic images, music/art generation, and generating human-like text. While enabling new creative possibilities, the ability of generative AI to produce highly realistic synthetic media also introduces risks around disinformation, impersonation, and identity fraud that robust liveness detection must defend against.

Generative AI has revolutionized the field of synthetic media, enabling the creation of highly realistic face swaps, voice clones, and other convincing synthetic content.

What’s the Difference between Deepfakes and Generative AI?

The term deepfake has historically referred to synthetic imagery created using deep neural networks. In practice, generative AI and deepfake are now often used interchangeably because the two overlap heavily, but deepfake more typically refers to a video, image, or piece of audio created for malicious purposes.

In contrast, generative AI encompasses output in any medium and for any purpose, including text produced by large language models (LLMs) such as ChatGPT. In short, deepfakes are a subset of generative AI.

What Can We Do to Combat Generative AI Attacks?

To detect synthetic media created using generative AI, verification technologies that fight AI with AI are essential.

When used with biometric face verification, AI enhances the accuracy, security, and speed of the identity verification process. Users can remotely verify their identity by scanning a trusted identity document and their face. Deep learning models such as Convolutional Neural Networks (CNNs) detect and match the two images. At the same time, liveness detection uses computer vision to ensure that the imagery is of a real person and not a non-living spoof, such as a deepfake, mask, or other synthetic media.
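
As a rough illustration of the matching step, the sketch below embeds two face crops with a CNN and compares the embeddings by cosine similarity. It assumes PyTorch and torchvision; the general-purpose ResNet, the similarity threshold, and the input tensors are all illustrative stand-ins for the purpose-trained networks and calibrated thresholds used in production systems, and the liveness check is left as a placeholder.

```python
# Sketch of the CNN matching step: embed the document photo and the selfie
# with a shared network, then compare embeddings by cosine similarity.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

embedder = resnet18(weights="DEFAULT")
embedder.fc = torch.nn.Identity()  # drop the classifier head, keep 512-d features
embedder.eval()

def embed(face_crop: torch.Tensor) -> torch.Tensor:
    """face_crop: a preprocessed (1, 3, 224, 224) image tensor."""
    with torch.no_grad():
        return F.normalize(embedder(face_crop), dim=1)  # unit-length embedding

def faces_match(doc_face, selfie_face, threshold=0.8) -> bool:
    # Cosine similarity of unit embeddings; the threshold is illustrative,
    # not a calibrated operating point.
    similarity = (embed(doc_face) * embed(selfie_face)).sum().item()
    return similarity >= threshold

def is_live(selfie_frames) -> bool:
    # Placeholder: a real liveness model analyzes texture, depth, and motion
    # cues across frames to reject masks, replays, and deepfakes.
    raise NotImplementedError

# Random tensors stand in for detected, aligned, and normalized face crops.
doc_face = torch.randn(1, 3, 224, 224)
selfie_face = torch.randn(1, 3, 224, 224)
print("match:", faces_match(doc_face, selfie_face))
```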

Biometric face verification remains one of the most reliable and convenient methods to verify identity remotely and defend against AI-generated attacks. However, not all facial biometric technologies are created equal. Vendors must implement robust passive challenge-response mechanisms, powered by science-based AI, to confidently confirm that a remote individual is a 'live' person (not a generative AI spoof) and that they are genuinely present and verifying in real time.
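
To see why a random, single-use challenge defeats replay, consider the toy flow below. It is a generic cryptographic sketch, not iProov's implementation: the server issues an unpredictable challenge, requires the response to be bound to that specific challenge, and consumes the challenge on first use, so a response captured from an earlier session fails verification.

```python
# Toy single-use challenge-response flow (a generic sketch, not iProov's
# Flashmark): unpredictable challenges plus one-time verification make a
# replayed response from a previous session worthless.
import hashlib
import hmac
import secrets
import time

SERVER_KEY = secrets.token_bytes(32)
issued: dict[str, float] = {}  # outstanding challenge -> time it was issued

def issue_challenge() -> str:
    challenge = secrets.token_hex(16)  # random, never reused
    issued[challenge] = time.time()
    return challenge

def respond(challenge: str, capture: bytes) -> bytes:
    # Stand-in for the client side: the biometric capture is bound to this
    # specific challenge, so the response cannot answer any other challenge.
    return hmac.new(SERVER_KEY, challenge.encode() + capture, hashlib.sha256).digest()

def verify(challenge: str, capture: bytes, response: bytes, ttl: float = 30.0) -> bool:
    issued_at = issued.pop(challenge, None)  # single use: consumed here
    if issued_at is None or time.time() - issued_at > ttl:
        return False  # unknown, already used, or stale challenge
    expected = hmac.new(SERVER_KEY, challenge.encode() + capture, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

challenge = issue_challenge()
response = respond(challenge, b"face-capture-bytes")
print(verify(challenge, b"face-capture-bytes", response))  # True: fresh and bound
print(verify(challenge, b"face-capture-bytes", response))  # False: replay rejected
```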

iProov’s passive challenge-response mechanism is Flashmark. The random nature of the technology makes it unpredictable, impervious to replay attacks (in which threat actors inject previously captured authentication attempts to bypass the system), and extremely difficult to reverse engineer. It is the only way to mitigate generative AI and digital injection attacks effectively by detecting genuine presence with high assurance.
