Thoughts from our CEO

Rachael Trotman · 7 min read
The thoughts of Robin Tombs, Yoti CEO, May 2024

In this blog series, our CEO Robin Tombs will be sharing his experience, whilst focusing on major themes, news and issues in the world of identity verification and age assurance.

This month, Robin gives a summary of recent age verification events, looks at the growing threat of AI voice cloning scams, and explains how Digital IDs can protect you from online scams.


Summary of age verification events

The last few months have been a really busy time in the age verification world. Here’s a snapshot of some of the key events from the last 10 weeks:

11 March – the US Court of Appeals for the 5th Circuit rejects the injunction against the Texas law requiring age verification for online porn. 

25 March – US Florida Governor Ron DeSantis signs an age verification law that requires social media platforms to prevent children under the age of 14 from creating accounts. The bill also requires commercial apps and websites that have a “substantial portion of material harmful to minors,” defined as more than a third of content on the site, to verify the age of their users. Businesses must give users the option of “anonymous age verification”, defined as verification by a third party that cannot retain identifying information after the age check is complete. 

29 March – The UK Home Office consultation period closes on changing the law to allow digital proof of age for the sale of alcohol in licensed premises.

8-12 April – The first Global Age Assurance Summit takes place in Manchester, UK, with over 600 attendees. It was an action-packed event, with lots of debate and discussion around the importance of age assurance for online safety, and why age assurance needs to be private, secure and reliable.

10 April – The National Institute of Standards and Technology (NIST) shares a preview of their independent evaluation of facial age estimation algorithms, with the full results expected to be published by the end of May.

15 April – US Kansas Governor Laura Kelly allows an age verification law for online porn to become law.

23 April – US Georgia Governor Brian Kemp signs an age verification law for social media. The law requires social media platforms to make “commercially reasonable efforts” to verify someone’s age by 1st July 2025. Parents of children under 16 will need to consent to their child using a platform.

30 April – The US Supreme Court refuses to block the Texas law requiring age verification for online porn. 

30 April – Edeka, Germany’s largest supermarket group, announces it is using Diebold Nixdorf’s Vynamic Smart Vision, which uses our facial age estimation technology. This is enabling automation of over 80% of age-restricted purchases.

30 April – The Australian Government announces it will test online age verification methods to protect children from harmful content, like pornography and other age-restricted online services. The pilot will identify available age assurance products to protect children from online harm, and test their efficacy, including in relation to privacy and security.

8 May – Ofcom publishes their proposed measures to improve children’s online safety. From summer 2025, Ofcom expects “much greater use of highly-effective age-assurance so that services know which of their users are children in order to keep them safe.”

With so much going on in the age assurance world, I expect the next few months will be just as busy. 


Highly effective age checks

There’s been some confusion recently as to which age checks Ofcom will allow or deem effective to protect children from online harm. For example, one news article said that children will have to show ID to use social media and that firms are told to use facial recognition technology to prevent children accessing harmful content.

Firstly, children will not be forced to upload an identity document to use social media. Submitting an ID document and a live selfie (to prove that the person completing the age check is the document’s owner) will be one effective method businesses can use. But it will not be the only option.

Secondly, Ofcom is not telling businesses to use facial recognition as part of their age checks. 1:1 facial recognition is needed to match a user’s live selfie to the photo on their identity document. But Ofcom has made it clear that businesses can offer facial age estimation, which is not facial recognition because it does not identify anyone; it simply estimates a person’s age from their facial image.

Businesses should offer users a range of options so that users can choose how they want to prove their age online. From our experience working across social media, vaping, gaming, dating and adult sites, 65-90% of users choose facial age estimation. This is because it’s easy, quick and private (no personally identifiable information is shared, and we delete the facial image immediately after estimating age).

Facial age estimation is by far the most inclusive method for the majority of teenagers to prove they’re over 13 and for the vast majority of adults to prove they’re over 18.


AI voice cloning

AI voice cloning is getting cheaper, quicker and easier for scammers to use, as explained in a recent Sunday Times article. In the article, both Alex West from PwC and Oliver Devane from McAfee give the public some good tips to help them avoid losing money to fraudsters, such as:

  • If you are in a pressured situation, take a step back and ask if it all stacks up.
  • If in doubt, verify who the caller is by asking a question that only the real person would know the answer to.
  • If you are not sure, hang up and call the person back on the number you have stored for them.

Unfortunately, these types of scams are becoming more common. Just this month the CEO of WPP, the world’s biggest ad agency, was the target of a deepfake scam that involved an AI voice clone.

Our Digital ID apps give people a simple and effective way to protect themselves from online scams. If a family member or employee in a finance department is being asked for money and is not sure whether the request is real, they can ask the person to swap verified details (like name and photo) using our Digital ID.

Only the individual can access and use their Digital ID, as it’s linked to their phone and unique biometrics. This gives people confidence that the details shared with them are real and shared by the correct person. A scammer using a convincing AI voice clone or deepfake video can’t share verified details about the person they are impersonating.

Scammers, who have had a field day targeting UK people online over the last decade, will come to hate high quality reusable Digital IDs.


How Digital IDs can protect you from scams

The ability to swap verified information with a Digital ID can also be used when talking to people online – for instance, on a dating site or when buying or selling a secondhand item.

I read a depressing article this month about a fraudster scamming victims he met on Tinder. Very few dating sites require background identity checks for all daters, and where checks do exist they are usually voluntary. This means fraudsters can easily create fake accounts and pretend to be another real or fictitious person.

And a recent Which? survey of 1,300 buyers found that a whopping 32% had been scammed on secondhand marketplaces in the last two years.

With the free Yoti ID or Post Office EasyID app, you can share a verified name and photo with someone you meet online. Whether it’s on a dating site or a secondhand marketplace, it’s a simple way to protect yourself from scams and create more trust about who you’re talking to online.