Preparing for the EU’s new AI Act

Amba Karsondas · 7 min read

Artificial intelligence (AI) is changing our world at a speed that, just a decade ago, we never could’ve anticipated. As AI finds its way into our everyday lives, regulators are racing to catch up with its development.

In response, last month, the EU voted to bring in the Artificial Intelligence Act, also known as the AI Act. The Act is expected to enter into force in May or June 2024.

This blog looks at what the legislation means for businesses and how they can comply.

 

Why is there an AI Act?

In recent years, AI seems to be everywhere. It’s shaping the world around us and has many useful applications: from unlocking your phone with your face, to personalising your social media feeds, to its use in the healthcare industry.

But as AI becomes increasingly widespread, regulators have grown concerned about its risks and limitations. There are growing calls for ethical safeguards and for AI systems to be regulated so that they are designed to uphold our fundamental rights, values and freedoms.

For this reason, the EU proposed the AI Act in 2021. It’s worth noting that even in the time that the Act has moved through the relevant legislative bodies, there have been significant developments in the world of AI. The explosive growth of generative AI over the past year has meant the Act needed to be significantly revised whilst the legislation was being passed.

Therefore, we can expect more legislation to emerge in the near future as AI evolves.

 

What does the AI Act do?

The AI Act marks a milestone as the first legislation of its kind globally. It seeks to establish a common legal framework for the development, marketing and use of AI, ensuring that AI systems used in the EU are safe, transparent, traceable, non-discriminatory and environmentally friendly.

It aims to foster trustworthy AI in the EU by:

  • ensuring that AI systems respect fundamental rights, safety, and ethical principles
  • addressing risks of very powerful and impactful AI models
  • encouraging a single EU market for AI

The Act also stresses that high-risk AI systems shouldn’t rely solely on automation. Instead, they should be overseen by humans to reduce risk and minimise any harmful outcomes.

 

Who is affected by the AI Act?

With some limited exceptions, the AI Act applies to almost all sectors across the EU. It affects all parties involved in the development, use, import, distribution or manufacturing of AI systems in the EU. Organisations based outside the EU must also comply if they intend for their AI systems to be used within the EU.

The Act splits AI systems into four main risk categories. They are:

  1. unacceptable risk – this is the highest level of risk. Examples include subliminal messaging or any system which exploits the vulnerabilities of children. Any AI system that falls into this category will be prohibited under the Act.
  2. high-risk – these systems will be subject to particular legal requirements. Businesses must complete conformity assessments and sign a declaration of conformity before the system is made publicly available. They must register the AI system in a specialised EU database. The provider must also meet specific security, transparency and quality controls including a risk management system, robust data governance processes and human oversight. A high-risk case could be the AI systems used in educational or vocational training. For example, a system that scores exams may determine someone’s access to education or the professional course of their life.
  3. limited risk – these are systems that have a risk of manipulation. For these, the AI Act requires specific transparency obligations. Users must be informed that they are interacting with AI so they can make an informed choice about whether to continue using the system. Examples include chatbots and generative AI that is not considered high-risk.
  4. minimal or no risk – this is the lowest level of risk and includes AI systems such as spam filters or AI-enabled gaming. These systems do not have any obligations or additional restrictions.

 

How can businesses comply with the AI Act?

After the AI Act enters into force, businesses will have between 6 and 24 months to meet its full requirements. The exact time frame depends on the risk level of the AI system.

It’s often more costly and complex to ensure that a system complies with legislation once it has been developed and deployed. Therefore, businesses should have already started to assess their risks and adapt their processes. Alongside seeking comprehensive legal advice, you should:

  • Work out your organisation’s relationship to AI. Obligations will vary depending on whether your company is a provider, deployer, importer, distributor or a party affected by AI systems.
  • Map your current use of AI systems and models to understand your company’s exposure to AI. As part of this process, your business should assess how these systems link to real-world use cases. This will help you to better understand the practical implications of these AI systems.
  • Identify and evaluate the risks associated with your company’s AI systems. Consult the standards recommended by the European AI Office and the European Commission, and design new systems and adapt existing ones in line with the new regulation.
  • Build a framework that ensures each process is regularly reviewed. As AI is still rapidly evolving, it’s highly likely that existing legislation will change and new rules will emerge.
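To make the mapping exercise above more concrete, here is a minimal sketch of an AI inventory helper that maps each system to the obligations of its risk tier. This is purely illustrative: the class names, tiers and obligation strings are paraphrased from this article rather than taken from the legal text, and classifying a real system requires proper legal advice.

```python
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    """The AI Act's four main risk categories."""
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # conformity assessment and registration
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # no additional obligations


# Illustrative obligations per tier, paraphrased from this article.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["prohibited - do not deploy"],
    RiskTier.HIGH: [
        "complete a conformity assessment",
        "sign a declaration of conformity",
        "register the system in the EU database",
        "implement risk management, data governance and human oversight",
    ],
    RiskTier.LIMITED: ["inform users they are interacting with AI"],
    RiskTier.MINIMAL: [],
}


@dataclass
class AISystem:
    name: str
    tier: RiskTier


def compliance_actions(inventory: list[AISystem]) -> dict[str, list[str]]:
    """Map each system in an AI inventory to its outstanding obligations."""
    return {system.name: OBLIGATIONS[system.tier] for system in inventory}


# Example inventory, using the article's own examples of each tier.
inventory = [
    AISystem("exam-scoring model", RiskTier.HIGH),
    AISystem("support chatbot", RiskTier.LIMITED),
    AISystem("spam filter", RiskTier.MINIMAL),
]

for name, actions in compliance_actions(inventory).items():
    print(f"{name}: {actions or ['no additional obligations']}")
```

A review framework could then re-run this mapping whenever the inventory changes or the regulation is updated, flagging any system whose obligations are not yet met.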

 

Who will oversee the AI Act?

The AI Act brings in strict requirements for providers and deployers of AI systems. Non-compliance can result in penalties of up to €35 million or 7% of the global annual turnover (whichever is higher). The figure depends on the size of the company and the severity of the infringement.

The newly created European AI Office will oversee its enforcement and implementation. The Act also calls for the introduction of a European Artificial Intelligence Board. The national regulators designated by each Member State will be represented on the Board, with their main task being to ensure compliance with the regulation. It will also advise on secondary legislation, codes of conduct and technical standards.

 

Adapting to new AI regulations

Florian Chevoppe-Verdier, Public Policy Associate at Yoti said, “Artificial Intelligence is becoming increasingly present in our daily lives, holding tremendous potential to enhance society, provided we identify, understand, and mitigate its potential shortcomings. Although the EU’s AI Act represents a distinct regulatory approach from those of the UK and US, we are likely to see a ‘Brussels effect’, wherein EU regulations could establish a global standard, akin to the GDPR’s reach.

Navigating the assessment of AI systems and adapting to a rapidly evolving regulatory landscape will undoubtedly pose challenges for businesses in the years ahead, but the AI Act also presents them with an opportunity to thrive and contribute positively to societal advancement.”

If you’d like to know more about how we use AI responsibly, please get in touch.
