October 28, 2024

Are your remote users real people? Or are they patched together from stolen or falsified information and brought to life using AI?

Synthetic identity fraud (SIF) — aptly nicknamed “Frankenstein Fraud” — has emerged as one of the most terrifying threats facing financial services and governments today. Like Mary Shelley’s fictional creation, these identities are stitched together from stolen parts. Instead of body parts, criminals use fragments of stolen personal information to create identities that walk among us undetected.

Verifying whether a synthetic identity is real is difficult enough on its own; fraudsters compound the problem by using Social Security numbers that are likely to fly under the radar: those belonging to children, recent immigrants, elderly individuals, incarcerated people and, more terrifyingly still, the deceased.

In recent years, fraudsters have added a frightening ingredient: generative AI and deepfake technology. These tools breathe life into fake identities, creating realistic digital personas. The result? Complete reanimation: an identity with a convincing voice and face to go with it.

These are often extremely complex crimes, and traditional fraud detection models are ill-equipped to tackle them. Mitigating them calls for silver-bullet technologies that can intervene as early as possible, ideally at the point of account creation.

Understanding Synthetic Identity Fraud

Synthetic identity fraud involves creating identities from stolen, fictitious, or manipulated information to deceive organizations. Unlike traditional identity theft, where criminals steal or misuse an existing person’s identity, SIF creates entirely new blended personas that are harder to trace and detect.

This modern horror story is the fastest-growing type of fraud in the world and has overtaken traditional identity theft:

- Organized crime rings exploit synthetic identity fraud to take advantage of vulnerabilities in systems, posing significant risks to both financial institutions and government programs.

- Key target industries are government public services and banks, though the credit sector flags the highest volume of synthetic identities.

How the Horror Spreads: Why Traditional Detection Fails

85% of synthetic identities go undetected by traditional fraud models. Unlike traditional fraud, where stolen identities trigger alerts, SIF often bypasses standard detection systems because the data used appears legitimate. Since no actual person’s account or identity is compromised, organizations can’t rely on victims to report it. The key to combating SIF lies in biometric liveness detection, which verifies whether someone is a real, live individual, enabling real-time authentication and reducing fraud risk.

Synthetic identity fraud is attractive to criminals because blending real and fake information makes detection difficult; even when a scheme is eventually discovered, often years after the fact, tracing the true perpetrator and recovering losses is highly challenging.

A chilling technique known as ‘piggybacking’ allows fraudsters to link synthetic identities to legitimate customers’ credit accounts, letting the synthetic identity build credibility before launching its attack. The identity can then open its own credit lines, which fraudsters exploit before disappearing. This technique underscores the challenge of detecting synthetic identities that mimic legitimate credit behavior, often raising no red flags until it’s too late.

The Evolution: How Synthetic Identities Come To Life With Generative AI and Deepfake Technology

The rise of generative AI has turbocharged synthetic identity fraud. The ease of creating highly realistic synthetic images and voices makes these personas more convincing during onboarding and security checks. This isn’t just about forged documents anymore — it’s about entire identities crafted from digital deception.

The factors fueling SIF are not diminishing. In 2022, 1,774 organizational data compromises exposed the PII of over 392 million individuals globally. When that stolen PII is melded with generative AI tools, the resulting synthetic identities become ever more believable. These breaches give criminals a head start, enabling them to combine existing data with AI to execute scalable attacks such as credential stuffing. At the same time, deepfake technology is growing ever more sophisticated and lifelike, compounding the threat.

Organizations can no longer rely solely on data integrity; they must implement stronger verification measures with liveness detection to confirm that the individual behind the data is real.

How Can Biometric Liveness Technologies Spot if Synthetic Identities are Really “Alive”?

Synthetic identity fraud can bypass traditional security checks, especially when speed is prioritized over scrutiny. Effective detection involves biometric face verification, where users scan their government ID and their face, ensuring the person matches the claimed identity. Liveness detection, a key capability in advanced biometric solutions, is essential to counter advanced spoofing attempts, including deepfakes and digital injection attacks.

Advanced liveness detection establishes the “genuine presence” of an individual in real time, preventing spoofing with photos, masks, or deepfakes. Additionally, some cloud-based systems offer continuous threat detection and response to stay ahead of evolving threats, all while maintaining a smooth user experience.
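To make the flow concrete, here is a minimal, hypothetical sketch of how an onboarding decision might combine a document-to-face match score, a liveness score, and an injection-attack flag. The class names, scores, and thresholds are illustrative assumptions, not iProov’s API; a real deployment would rely on a certified biometric verification service.

```python
"""Illustrative sketch only: a simplified onboarding decision that combines
document-to-face matching with liveness and injection checks. All scores,
thresholds, and class names here are hypothetical stand-ins."""
from dataclasses import dataclass


@dataclass
class VerificationResult:
    face_match_score: float   # similarity between ID photo and selfie (0-1)
    liveness_score: float     # confidence the subject is a live person (0-1)
    injection_detected: bool  # True if a digital injection attack is suspected


def decide_onboarding(result: VerificationResult,
                      match_threshold: float = 0.90,
                      liveness_threshold: float = 0.95) -> str:
    """Return 'approve', 'reject', or 'review' for a remote onboarding attempt."""
    # Injected or replayed imagery (e.g. a deepfake streamed into the camera
    # feed) fails outright, no matter how convincing the face looks.
    if result.injection_detected:
        return "reject"
    # The presented face must match the government ID *and* belong to a live
    # person present at the moment of capture ("genuine presence").
    if (result.face_match_score >= match_threshold
            and result.liveness_score >= liveness_threshold):
        return "approve"
    # Borderline cases go to manual review rather than silent approval, since
    # synthetic identities exploit frictionless auto-approval paths.
    return "review"


if __name__ == "__main__":
    # Example: a convincing deepfake delivered via digital injection.
    attempt = VerificationResult(face_match_score=0.97,
                                 liveness_score=0.40,
                                 injection_detected=True)
    print(decide_onboarding(attempt))  # -> "reject"
```

The point of the sketch is the ordering: injection and liveness failures are hard stops, while borderline matches escalate to review instead of quietly passing.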

A key resource for organizations looking to evaluate vendors that can provide SIF mitigation solutions is the U.S. Federal Reserve.

Synthetic identity fraud thrives on organizations accepting a ‘truth’ built on lies. As Mark Twain wrote, “Fiction is obliged to stick to possibilities; Truth isn’t.” Identity verification backed by genuine presence liveness technology exists to uncover the truth behind a presented identity: that the face is real and alive.

The scalability and accuracy of biometric solutions can be the difference between stopping a fraud attempt in its tracks and suffering significant financial losses. Against a backdrop of growing fraud, increasing reliance on remote identity, and the accessibility of AI and synthetic imagery, science-based biometric technology will only become more indispensable in the struggle against SIF.

Real-World Haunting: A Cautionary Tale

Consider the case of Adam Arena, who, with his co-conspirators, created a network of synthetic identities to steal over $1 million from banks. They nurtured these false identities for years, building legitimate credit histories before “busting out” – maxing out credit limits and vanishing without a trace. The scheme was so successful that Arena repeated it, targeting the U.S. government’s Paycheck Protection Program during the pandemic.

Trick or Treat? Prevention Over Cure in the Fight Against Synthetic Identity Fraud

Synthetic identity fraud is expected to become an even larger monster. Traditional security measures — passwords, OTPs, and even device-based biometric verification — are ineffective. Fraudsters are evolving, using AI to create identities that look like the living but carry the soul of deception.

To stay ahead, financial institutions must adopt advanced biometric solutions with genuine presence detection. By identifying and stopping synthetic identities at the point of account creation, these technologies offer the best defense against a growing threat.

In the words of Gartner, “Liveness detection technologies are becoming critical for defending against deepfakes and verifying the genuine presence of an individual,” and, in turn, for combating synthetic identity fraud. Embracing resilient identity verification solutions is not just a recommendation; it’s a necessity. Once onboarded, synthetic identities are extremely difficult to remove.

iProov supplies biometric face verification technology to the world’s most security-conscious organizations. We are particularly well-equipped to combat synthetic identity fraud bolstered by generative AI technology. This Halloween, remember: the most dangerous monsters aren’t supernatural — they’re the synthetic identities lurking in your verification systems.

If you’d like to see how iProov’s technology can bring effortless security to your onboarding and authentication processes – while helping to combat synthetic identity fraud – book your demo here.