The impact of AI and deepfakes on identity verification

A person's face against a digital background. (Image credit: Shutterstock / meamorworks)

In a digital landscape where identities are woven into every aspect of our online interactions, AI-driven deepfakes have emerged as a disruptive force, challenging the very foundations of identity verification. To navigate this evolving terrain, CIOs and IT leaders must understand how these emerging technologies affect the integrity of identity verification processes, and what can be done in response.

Online identity verification today consists of two key steps. First, the user is asked to take a picture of their government-issued identity document, which is inspected for authenticity. Second, the user is asked to take a selfie, which is biometrically compared to the photo on the identity document. Traditionally confined to regulated know-your-customer (KYC) use cases such as online bank account opening, identity verification is now used in a range of contexts, from interactions with government services and preserving the integrity of online marketplace platforms to employee onboarding and improving security during password resets.
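
To make the flow concrete, the sketch below models these two steps in Python. The function names, score scale and threshold are illustrative assumptions, not a description of any particular vendor's API.

```python
# A minimal sketch of the two-step identity verification flow described
# above. All names, thresholds and return values are hypothetical
# placeholders, not any specific vendor's API.

from dataclasses import dataclass


def check_document_authenticity(document_image: bytes) -> bool:
    """Placeholder: a real service runs forensic checks on the document."""
    return True


def compare_face_to_document(selfie_image: bytes, document_image: bytes) -> float:
    """Placeholder: a real service returns a biometric similarity score (0.0-1.0)."""
    return 0.92


@dataclass
class VerificationResult:
    document_authentic: bool
    face_match_score: float
    passed: bool


def verify_identity(document_image: bytes, selfie_image: bytes,
                    match_threshold: float = 0.85) -> VerificationResult:
    # Step 1: inspect the government-issued identity document for authenticity.
    document_authentic = check_document_authenticity(document_image)
    # Step 2: biometrically compare the selfie to the photo on the document.
    face_match_score = compare_face_to_document(selfie_image, document_image)
    passed = document_authentic and face_match_score >= match_threshold
    return VerificationResult(document_authentic, face_match_score, passed)
```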

Subversion of the identity verification process through fraudulent identity presentation, for example by using a deepfake of an individual to defeat the selfie step, thus introduces considerable risk to an organization.


1. Mechanisms to counter deepfake attacks

As attackers leverage the relentless progress of GenAI to craft increasingly convincing deepfakes, CIOs and IT leaders must adopt a proactive stance, bolstering their defenses with a multifaceted approach. Key to this is ensuring that your identity verification vendor deploys robust liveness detection.

This capability is deployed during the second step when the selfie is being taken, to check whether the selfie is in fact being taken of a live person who is genuinely present during the interaction. This liveness detection can be active, in which a user responds to a prompt such as turning their head, or it may be passive, in which subtle features such as micro movements or depth perspective are assessed without the user having to move.
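
As a rough illustration, a decision that combines the two approaches might look like the sketch below; the function names, scoring scale and threshold are assumptions for illustration only, not a real vendor implementation.

```python
# A simplified sketch of combining active and passive liveness checks.
# Names, scores and thresholds are illustrative assumptions.

def active_liveness_passed(frames: list) -> bool:
    """Placeholder: did the user correctly complete the prompted action
    (for example, turning their head)?"""
    return True


def passive_liveness_score(frames: list) -> float:
    """Placeholder: score (0.0-1.0) derived from passive cues such as
    micro-movements and depth perspective."""
    return 0.81


def selfie_is_live(frames: list, passive_threshold: float = 0.7) -> bool:
    # Treat the selfie as live only if both checks agree.
    return (active_liveness_passed(frames)
            and passive_liveness_score(frames) >= passive_threshold)
```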

The integration of active and passive liveness detection techniques, coupled with additional signals indicative of an attack, offers a holistic defense against evolving deepfake attacks. These additional signals can be surfaced through device profiling, behavioral analytics and location intelligence. Identity verification vendors may develop some of these capabilities natively or deliver them through partners, but they should be packaged as a single solution for you to deploy.
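
Conceptually, such a packaged solution blends the liveness result with a weighted risk score along the lines of the sketch below; the signal names, weights and threshold are purely illustrative assumptions.

```python
# A simplified sketch of blending liveness with additional risk signals.
# Signal names, weights and thresholds are assumptions for illustration.

RISK_WEIGHTS = {
    "device_anomaly": 0.4,      # e.g. emulator or virtual camera detected
    "behavioral_anomaly": 0.3,  # e.g. unnatural interaction patterns
    "location_anomaly": 0.3,    # e.g. IP address / geolocation mismatch
}


def combined_risk(signals: dict) -> float:
    """Weighted sum of normalized (0.0-1.0) risk signals."""
    return sum(RISK_WEIGHTS.get(name, 0.0) * value
               for name, value in signals.items())


def accept_verification(liveness_passed: bool, signals: dict,
                        risk_threshold: float = 0.5) -> bool:
    # Pass only if the selfie is live AND the combined risk stays low.
    return liveness_passed and combined_risk(signals) < risk_threshold


# Example: a selfie that passes liveness but comes from a suspicious device
# and shows odd behavior is still rejected (prints False).
print(accept_verification(True, {"device_anomaly": 0.9,
                                 "behavioral_anomaly": 0.5,
                                 "location_anomaly": 0.2}))
```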

2. Leveraging GenAI to improve identity verification

The versatility of GenAI presents intriguing opportunities for defense against deepfake attacks. By leveraging GenAI’s ability to develop synthetic datasets, product leaders can reverse-engineer attack variants and fine-tune their algorithms for improved detection rates. Beyond cybersecurity applications, GenAI can also address issues of demographic bias in face biometrics processes.

Traditional methods of obtaining diverse training datasets pose challenges in terms of cost and effort, often resulting in biased machine-learning algorithms. The creation of synthetic faces using GenAI offers a solution: large training datasets can be generated that deliberately boost the representation of demographic groups that would otherwise be underrepresented. This not only lowers the barriers to obtaining diverse datasets but also helps minimize bias in biometric processes. Challenge your identity verification vendors on whether they are innovating and using GenAI for positive purposes, not just treating it as a threat.
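
As a simplified illustration of the idea, the sketch below tops up underrepresented demographic groups with synthetic faces; the generator function is a hypothetical stand-in for a GenAI model, not a real API.

```python
# A simplified sketch of balancing a training set with synthetic faces.
# `generate_synthetic_face` stands in for a GenAI model; everything here
# is an illustrative assumption.

from collections import Counter


def generate_synthetic_face(group: str) -> bytes:
    """Placeholder for a generative model producing a synthetic face image."""
    return b"synthetic-face-bytes"


def balance_dataset(samples: list) -> list:
    """samples: list of (image_bytes, demographic_group) tuples."""
    counts = Counter(group for _, group in samples)
    target = max(counts.values())
    balanced = list(samples)
    for group, count in counts.items():
        # Generate enough synthetic faces to bring each group up to parity.
        balanced.extend((generate_synthetic_face(group), group)
                        for _ in range(target - count))
    return balanced
```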

Select vendors who have embraced this new world and taken proactive measures, such as introducing bounty programs that challenge hackers to defeat their liveness detection processes. By incentivizing individuals to identify and report potential vulnerabilities, vendors, and by extension the organizations that rely on them, can bolster their defenses against deepfake attacks.

As we chart a course towards a secure digital future, collaboration emerges as the cornerstone of our collective defense against deepfake adversaries. By fostering dynamic partnerships and cultivating a culture of vigilance, CIOs and IT leaders can forge a resilient ecosystem that withstands the relentless onslaught of AI-driven deception. Armed with insight, innovation, and a steadfast commitment to authenticity, look to embark on a journey towards a future where identities remain inviolable in the face of technological upheaval.


This article was produced as part of TechRadarPro’s Expert Insights channel, where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing, find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro

Akif Khan, VP Analyst, Gartner.
