August 20, 2024

In an era of widespread remote hiring, a recent high-profile incident has highlighted the growing threat of sophisticated identity fraud in the workforce. A major cybersecurity firm, KnowBe4, fell victim to an elaborate scheme where an individual used stolen identity information and AI-enhanced imagery to secure a remote IT position. 

This case underscores the urgent need for advanced identity verification measures for remote workforce onboarding – a challenge that iProov’s cutting-edge biometric technology is uniquely positioned to address.

The Incident: When Traditional Hiring Protocols Fail

KnowBe4, which specializes in security awareness training, discovered that a remote software engineer it had recently hired was in fact a North Korean threat actor using a stolen U.S. identity and an AI-enhanced photograph. Despite thorough hiring protocols, including video interviews, background checks, and reference verifications, the deception was only uncovered after the new hire began loading malware onto company devices.

This attack was highly sophisticated. The hacker used a valid but stolen U.S. identity and an AI-enhanced photo derived from stock imagery to pass the company’s hiring protocols. They even had the workstation shipped to an address used as an “IT mule laptop farm” and accessed it via VPN to simulate working U.S. business hours.

This incident demonstrates the potential for catastrophic security breaches and highlights the limitations of traditional hiring and verification processes in the digital age – dangers iProov has been warning against since our “Work From Clone” campaign over two years ago (read more on deepfake working scams here).

The Deepfake Threat Is More Than A Trend

Deepfakes are AI-generated videos or images showing people saying or doing things they never actually did, posing a significant challenge to traditional identity verification methods.

While our focus has been on deepfakes or manipulated imagery in professional settings, identity spoofs and deepfakes are prevalent in other areas too. Consider these real-world examples:

  1. Politics: Deepfakes are becoming a powerful tool in political disinformation campaigns, with potential implications for global democracy and election integrity; the deepfake era of politics is already here. Recent high-profile cases include an AI-generated robocall imitating President Biden that urged New Hampshire residents not to vote in the state’s primary, and an AI-generated image of an explosion at the Pentagon that caused a brief dip in the stock market.
  2. Social media: Instagram and TikTok have seen a rise in “digital avatars” – AI-generated influencers that don’t represent real people but engage with followers as if they were human.
  3. Dating apps: There have been increasing reports of “catfishing” and romance scams using AI-generated profile pictures and even real-time deepfakes, making it harder for users to distinguish between real and fake profiles. This is why many apps, such as Tinder, have introduced identity verification measures.
  4. Financial/Cryptocurrency scams: Deepfake videos of well-known figures like Elon Musk have been used to promote fraudulent cryptocurrency schemes – as highlighted in the video below:

These examples highlight how deepfakes and identity spoofing technologies are being used across various spheres of life. By being aware of these broader trends, we can better prepare ourselves to identify and protect against potential deepfake threats in the workforce.

This type of workplace deepfake fraud is part of a larger trend the FBI publicized in June 2022, in which cybercriminals use synthetic identities – usually a combination of deepfakes and stolen personally identifiable information (PII) – to apply for remote work positions.

Ajay Amlani – SVP, Head of Americas at iProov – stressed the sophistication of these attacks in a recent interview with IT Brew: “You could alter the way that you look while you’re on a call to be able to represent yourself as anyone you’d want to represent yourself as – a male, as a female, a 12-year-old, a 46-year-old, a different country, different ethnicity.”

This threat is now more pressing than ever and requires mission-critical security that ensures a given person is the right person, a real person, interacting in real time. 

The Motivations Behind These Attacks

While a monthly paycheck is sometimes the obvious motivation, there can be a far more sinister goal. Amlani clarifies: “By securing a tech role within the company, the attacker then has access to customer PII, financial data, corporate IT databases and/or proprietary information.”

Cybercriminals can use this access to:

  • Hold the company to ransom
  • Carry out further cyberattacks
  • Steal intellectual property
  • Sell sensitive data on the dark web

What makes these incidents particularly concerning is scalability: deepfakes are quickly reproducible (particularly with advancements in generative AI), so a criminal can repeat the same attack over and over again. Gaining employment at a number of different organizations could yield significant financial gains for the criminal or their criminal organization, as well as substantial harm to the employing organizations.

The Limitations of Current Verification Methods (Why We Can’t Rely on Our Eyes Alone)

Many organizations rely on video interviews as part of their remote hiring process. However, this method is increasingly unreliable for detecting deepfakes. As iProov’s research shows, with a simple plug-in attackers can create what’s called a “real-time deepfake”; the video can then be streamed into video conferencing calls and other communication channels. This was also the attack methodology used in the $25 million Arup Zoom-based deepfake incident.

Moreover, human ability to detect the most sophisticated deepfakes is almost non-existent. An iProov survey found that 57% of respondents believed they could tell the difference between a real video and a deepfake. However, this confidence is misplaced: studies have shown that humans are overwhelmingly inept at distinguishing real faces from deepfakes, with one study finding that only 24% of subjects could detect well-made deepfakes.

The Alan Turing Institute corroborates this, noting that it is now “increasingly challenging, potentially even impossible, to reliably discern between authentic and synthetic media”.

Human inspection and video call interviews are not a viable solution – science-based biometric liveness detection is essential.

iProov’s Solution: Science-Based Biometrics

The very nature of remote onboarding and working creates physical distance between the employer and the employee. They may never meet in person, making it increasingly difficult to verify that someone is who they say they are when applying for a role.

The challenge of remote identity verification also encompasses a diverse extended workforce. Contractors, supply chain employees, seasonal workers, temporary staff, and other non-employee workers all present unique verification challenges in the digital age. These varied workforce categories often have limited physical interaction with the organizations they support, making verification even more difficult. As organizations increasingly rely on a flexible, distributed workforce, the need for robust remote authentication becomes paramount across all worker types.

To combat this evolving threat, iProov offers a Biometric Solutions Suite covering Remote Onboarding and Authentication. It goes beyond traditional identity verification methods, offering a versatile approach that can be applied consistently across modern employment.

The Biometric Solutions suite provides:

  • Protection Against Presentation Attacks: distinguishes between a real person and a presentation attack, preventing the use of photos, videos, or masks to spoof the system.
  • Protection Against Digital Injection Attacks: detects digitally injected attacks, including those using deepfakes that bypass device sensors.
  • Active Threat Management: detects and responds to emerging threats in real time, with continuous updates delivered without any disruption to the customer or end user.
  • Industry-Leading Outcomes: >98% success rates.

… And much more. By implementing iProov’s Biometric Solutions Suite, organizations can significantly mitigate the risks associated with remote hiring fraud. This technology not only protects against immediate threats but also builds a foundation for secure remote work practices in the future.

Learn more about the iProov Biometric Solutions Suite in this infographic.

You can use our Biometric Adoption Navigator tool here to see if the iProov solutions suite is right for you.

How It Works: Implementing iProov in Your Hiring Process

In a remote hiring scenario, an applicant could verify themselves when they submit their application. They would scan a photo ID document, such as a driver’s license or passport, and then scan their physical face to prove that they are who they say they are.

iProov’s technology goes beyond simple facial matching. It ensures that the person is not only who they claim to be but is also a real person and is authenticating right now. This three-pronged approach – right person, real person, authenticating now – is crucial in combating sophisticated identity fraud such as the KnowBe4 incident.
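To make the flow concrete, the sketch below shows how a hiring platform might orchestrate these three checks when an application is submitted. It is a minimal illustration only: the VendorClient interface, its method names, and the thresholds are hypothetical placeholders rather than the iProov API, and a real integration would follow the vendor’s own SDK and server-side result retrieval.

```typescript
// Hypothetical orchestration of a remote-onboarding identity check.
// VendorClient and its methods are illustrative stand-ins for whichever
// document-verification and biometric liveness SDK an organization uses.

interface DocumentResult {
  authentic: boolean;   // the ID document passed security-feature checks
  portrait: string;     // base64 face image extracted from the document
}

interface LivenessResult {
  genuinePresence: boolean; // a real person, present right now (not a replay or injection)
  selfie: string;           // base64 frame captured during the liveness challenge
  capturedAt: number;       // epoch milliseconds of the capture
}

interface VendorClient {
  verifyDocument(idImage: string): Promise<DocumentResult>;
  checkLiveness(sessionId: string): Promise<LivenessResult>;
  matchFace(a: string, b: string): Promise<number>; // similarity score, 0..1
}

const MATCH_THRESHOLD = 0.9;              // illustrative threshold; tune per vendor guidance
const MAX_CAPTURE_AGE_MS = 5 * 60 * 1000; // reject stale captures

export async function verifyApplicant(
  vendor: VendorClient,
  idImage: string,
  sessionId: string,
): Promise<boolean> {
  // 1. Right person: the ID document is genuine and carries the applicant's portrait.
  const doc = await vendor.verifyDocument(idImage);
  if (!doc.authentic) return false;

  // 2. Real person, right now: liveness defeats photos, masks, and injected
  //    deepfakes, and the capture must be recent rather than a recording.
  const live = await vendor.checkLiveness(sessionId);
  const fresh = Date.now() - live.capturedAt < MAX_CAPTURE_AGE_MS;
  if (!live.genuinePresence || !fresh) return false;

  // 3. The live face must match the face on the document.
  const similarity = await vendor.matchFace(doc.portrait, live.selfie);
  return similarity >= MATCH_THRESHOLD;
}
```

The point of the design is that all three checks must pass together: a genuine document paired with a stolen photo fails the face match, an impostor’s live face fails the link to the document, and a replayed or injected video fails the liveness and freshness checks.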

The Urgency of the Threat: Protect Your Organization Today

The threat of sophisticated identity fraud in remote hiring is not a future concern — it’s happening now. Amlani warns, “It’s a pretty fast-growth fraud field to impersonate somebody else, especially during the hiring process. I’m hearing from numerous hiring leaders that they’re starting to see these types of attacks coming not just from individuals hoping to collect multiple paychecks, but actually from adversaries trying to get access to their systems.”

Stu Sjouwerman, KnowBe4’s founder and CEO, emphasized the importance of learning from their incident: “If it can happen to us, it can happen to almost anyone. Don’t let it happen to you.”


A Wake-up Call For Organizations Worldwide

As remote work and the use of extended workforces continue to grow, organizations must take the threat of sophisticated identity fraud seriously. Implementing robust identity verification processes can significantly reduce the risk of hiring fraud and protect against potentially catastrophic security breaches.

By leveraging iProov’s advanced biometric verification technology, companies can navigate this complex landscape, ensuring the integrity of their workforce and the security of their systems in an increasingly digital world. As deepfake technology evolves, so too must our defenses. iProov’s commitment to ongoing research and development ensures that its technology stays ahead of emerging threats, providing organizations with a robust shield against the ever-changing landscape of identity fraud.
