Here at Yoti, we’re on a mission to become the world’s most trusted identity platform. This isn’t something we plan on doing on our own but with the input, expertise and knowledge of people from all across society.
We have our Guardian Council of influential individuals who ensure that we always seek to do the right thing, and that we are transparent about what we are doing and why.
We also have an internal trust committee, which oversees the development and implementation of our ethical approaches.
Earlier this year, we held two roundtables with experts in their fields to discuss our approach to responsible research and the development of AI tools. We wanted to share the outcomes, so here they are.
The first roundtable: An introduction to our age estimation tech
Gavin Starks, the newest addition to our Guardian Council, hosted this roundtable in January. It brought together representatives from the likes of Yo-Da, the University of Warwick, the Home Office Biometrics Ethics Committee, Women Leading in AI, the University of Keele, and techUK – to name a few.
The AI tool we discussed was our revolutionary facial age estimation technology (formerly known as Yoti Age Scan). It is currently available for people aged 13 and over, but we are looking at opening it up to younger children too. We want to ensure that people understand exactly how it works and can be reassured by the steps we’ve taken to mitigate bad outcomes for individuals. We’ve also published a white paper, which you can find here.
In the session, we demoed a self-checkout machine that had been integrated with our facial age estimation, so everyone could see exactly how the technology worked. We then looked at the white paper to make sure everyone had a more in-depth understanding.
This roundtable was hugely insightful and left us with a lot to think about. We’ve been taking steps to ensure our approach to responsible AI is as robust as possible. The session also sparked a discussion about how we obtain user consent to use their data for R&D purposes. We’re now working on granular opt-out choices for individuals.
The second roundtable: Ethical challenges
We’re really proud of the positive impact our facial age estimation is already having, such as our work with Yubo in making their community safe for everyone. However, at the moment our age estimation technology only works for people over 13 years old. We want to make sure that it works for everyone.
Ensuring our facial age estimation works for under 13s, and doing so in our usual responsible way, involves lots of challenges. This led us to hold our second roundtable, focused on anticipating these challenges before we face them.
We were lucky to have Gavin Starks host again and were joined by representatives from the Children’s Commissioner for England, the NSPCC, the ICO and GCHQ, amongst others.
Here’s what happened
We organised the session using a framework created by Doteveryone, a London-based think tank that champions responsible technology to build a fairer future. Once we had explained the age estimation technology behind our product and grounded it in the wider context, we split attendees into smaller groups. We then horizon-scanned for the intended and unintended, positive and negative consequences of developing and deploying our facial age estimation for under 13s.
Just like the first roundtable, the session involved deep deliberation and produced some valuable insights. One of the unintended positive consequences of using our facial age estimation for under 13s was that it might increase the autonomy of children at a time when they are forging their own identities. However, we also raised the issue that the technology might facilitate the exclusion of young people from digital spaces.
What’s next?
We’ll use this feedback to help us make age estimation for under 13s as robust as possible. We’ve got a lot planned, such as further sessions with industry leaders. We’ll keep you posted on our progress and would love to hear your thoughts.