Biometrics has been drafted into the battle between AI fraud and AI defenses

Biometrics implementations for public services are expanding in countries around the world, even as AI-enabled fraud and deepfakes threaten online trust. Businesses and governments are grappling with a rash of fraud enabled, supercharged and scaled by AI, from biometric spoof attacks and mass-produced fake IDs to phishing bots. The treatment for this global pandemic, by consensus among the professionals, is more AI, deployed as one or more layers in a layered defense.
Fake people and real webinars
The layers must include injection attack detection to spot the delivery of deepfakes created with GenAI. Mitek Director of Market Strategy and Intelligence Carmel Maher explained the approach and provided some examples of how AI is used in payment fraud in a recent webinar hosted by Biometric Update.
Fortunately, AI can also detect both content generated by AI and content delivered by AI to commit fraud, as AuthenticID and Darwinium will explain in the next Biometric Update webinar.
And the stakes are high. Sift CMO Armen Najarian argues in a Biometric Update guest post that transactions declined due to false positives are the hidden cost of AI-enabled fraud, and offers some advice on the capabilities organizations need.
The layers and capabilities needed also go beyond AI, to measures including a fine balance of regulation, as discussed in a webinar conversation between AuthenticID, Keyless and DuckDuckGoose executives. The debate, hosted by Peak IDV and the Prism Project, also raised questions like whether it even makes sense to continue conducting job interviews online.
Fake and real, for good and bad
AI agents further complicate the situation. With AI agents expected to grow to a $52.6 billion market by 2030, tools like those from ZeroBiometrics, Frontegg and Anetac for telling humans from non-humans and good bots from bad ones will be in high demand.
World’s attempt to scale its own answer to the problem of telling human identities from non-human ones hit another barrier this week, with Kenya’s High Court ordering the company to delete all biometric data collected from Kenyans, under the supervision of the country’s data regulator.
Canadian, Dutch and Danish media outlets appear to be responsible for the shutdown of the world’s largest non-consensual deepfake pornography site, Mr. Deepfakes. As usual, those least concerned about other people’s privacy are highly protective of their own.
Meanwhile, another notorious use of AI, the pervasive surveillance of Xinjiang’s Uyghurs with tech from Huawei, Hikvision and Dahua, has landed those companies in a French court.
Digital public services now, REAL ID eventually
Gov.uk is switching from SMS-based verification to passkeys as the UK government attempts to get its digital public services house in order. Similarly, governments in Nigeria, Ghana and Somalia each unveiled improvements to their digital ID systems. All three will be discussed at ID4Africa’s 2025 AGM, May 20 to 23, along with the recent advances in digital transformation for public services by host country Ethiopia. Biometric Update will cover the event on location in Addis Ababa.
The U.S. government’s new approach to online trust involves deregulation and less funding for deepfake research, and comments on its AI Action Plan, including from iProov and Pindrop, reflect concerns about this direction. If whatever regulatory measures remain in place are enforced like this week’s spoof of a deadline for REAL ID, other means of protection only take on greater importance.
The USPTO is turning to the private sector, ID.me specifically, to secure its Patent Center platform.
Please tell us about any podcasts, webinars or other content we should share with the biometrics and digital identity community, either in the comments below or through social media.