Deepfake attacks: Rising fraud trends and our experience in preventing them

Matt Prendergast · 2 min read

In 2024, we witnessed a significant increase in deepfake and injection attacks during age and identity verification checks, with the attack rate rising from 1.6% to 3.9%.

In absolute terms, this represents a substantial rise in the total number of attacks we have detected, as we expanded our services considerably in 2024 and now perform over 5 million checks per week across all our services. With the introduction of various regulations globally, companies have been obliged to implement more robust age and identity checks for their users.

Across the Yoti network, injection attacks on identity verification and facial age estimation rose from a daily average of around 1,000 in February 2024 to a daily average of over 6,000 in January 2025.

Injection attacks target remote verification services by attempting to bypass liveness detection, making them a growing threat to digital identity verification. Unlike direct attacks, which attempt to spoof a system using paper images, 2D or 3D masks, screen images, or video recordings, injection attacks try to manipulate the verification process by replacing the live camera feed with pre-recorded or synthetic images or videos.

This method allows fraudsters to present an altered identity, making them appear older, impersonate someone else, or pass as a real person when they are not. With minimal technical skill and free software, bad actors can inject these false visuals to trick authentication systems.
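To make the contrast with direct attacks concrete, the sketch below shows one common way verification systems push back against injected footage: an active challenge-response liveness step. A randomly chosen challenge issued at verification time is hard to satisfy with pre-recorded or injected video, because the attacker cannot know the challenge in advance. This is not a description of Yoti's implementation; the challenge names, timing window and function names are illustrative assumptions only.

```python
import secrets
import time

# Hypothetical active challenge-response liveness sketch.
# The challenge list, timing window and field names are assumptions for illustration.
CHALLENGES = ["turn_head_left", "turn_head_right", "smile", "blink_twice"]

def issue_challenge() -> dict:
    """Issue a random challenge and record when it was sent."""
    return {
        "challenge": secrets.choice(CHALLENGES),
        "issued_at": time.monotonic(),
        "nonce": secrets.token_hex(16),
    }

def verify_response(session: dict, observed_action: str, responded_at: float,
                    max_delay: float = 5.0) -> bool:
    """Accept only if the observed action matches the challenge and arrives
    within a short window, which pre-recorded footage cannot anticipate."""
    if observed_action != session["challenge"]:
        return False
    if responded_at - session["issued_at"] > max_delay:
        return False
    return True

# Example usage: in a real flow, observed_action would come from analysing the live feed.
session = issue_challenge()
ok = verify_response(session, session["challenge"], time.monotonic())
print(ok)  # True when the correct action arrives within the time window
```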

To counter these threats, Yoti’s AI Services deploy multi-layered anti-spoofing measures. These ensure that the person being verified is both real and present, rather than a static image, a mask, a deepfake or generative AI content, protecting businesses from identity fraud and unauthorised access.
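As a rough illustration of the multi-layered idea, the sketch below combines several independent signals so that bypassing any single layer is not enough to pass verification. The signal names and threshold are hypothetical and do not describe Yoti's actual checks.

```python
from dataclasses import dataclass

# Illustrative only: a layered decision over independent anti-spoofing signals.
# Signal names and the threshold are assumptions, not Yoti's real checks.
@dataclass
class VerificationSignals:
    passive_liveness_score: float   # 0.0 (spoof) to 1.0 (live), from a liveness model
    injection_detected: bool        # e.g. virtual-camera or feed-tampering indicators
    challenge_passed: bool          # result of an active challenge-response step

def is_verified(s: VerificationSignals, liveness_threshold: float = 0.9) -> bool:
    """Every layer must pass; failing any one check rejects the session."""
    return (
        s.passive_liveness_score >= liveness_threshold
        and not s.injection_detected
        and s.challenge_passed
    )

print(is_verified(VerificationSignals(0.97, False, True)))   # True: all layers pass
print(is_verified(VerificationSignals(0.97, True, True)))    # False: injected feed detected
```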

Read our white paper on deepfake attacks or get in touch to learn more.