EU Legislation on AI

Should you prepare your ML model to be compliant?

Posted by Alexander Meinke on April 19, 2022 · 20 mins read

Introduction

On the 21st of April 2021, the European Commission proposed a draft regulation for artificial intelligence systems that are to be used within the EU. Maybe you have even heard about it and have been losing sleep over the question "Does my AI model need to be compliant?", but haven't had the time to go through 130 pages of legal jargon to find out. Luckily, you have come to the right place. I will break down which applications are affected, what the requirements are and what scary things await companies in case of non-compliance.

When is the law coming into effect?

The EU has several ways of influencing the governance of its member states, so we need to get one distinction straight: the difference between directives and regulations. A directive is a rule that each member state must transpose into national law by some fixed deadline. A regulation, on the other hand, is directly legally binding in each member state as soon as it comes into effect at the EU level. The legislation we are talking about is a regulation. This means that, in principle, compliance could become necessary as soon as it passes. In practice, the law will likely pass some time in 2023 and come with a grace period to allow companies to get ready. At the time of writing, however, the exact date is not known.

So what does the law actually say?

Of course, we first need to know what the lawmakers consider to be "AI". The definition in the text covers machine learning, logic- and knowledge-based approaches, and statistical approaches. It's fair to say that this is about as broad as one can make it.

However, rather than putting blanket restrictions on every type of AI system under the sun, the regulation classifies all applications of AI systems into three broad categories based on the risk they pose: unacceptable risk (completely banned), high risk (must satisfy stringent requirements) and low or minimal risk (no or next to no special requirements).

Unacceptable risk

This category captures applications of AI that would clearly be unethical with respect to the values of the European Union. As such, all applications that fall under this umbrella will be completely prohibited within the EU. These unethical uses include the subliminal manipulation of individuals, the exploitation of vulnerable groups (such as children or the elderly) and social scoring systems (like the infamous Chinese social credit system) that assign a measure of trustworthiness to individuals using data aggregated for completely unrelated purposes, or that unjustly discriminate against certain groups. The precise boundaries will likely have to be drawn by the courts. For example, will personalized feeds on social media platforms be deemed subliminal manipulation? Or, according to this law, when is the unfavourable treatment of some group justified and proportionate, and when is it not?

Additionally, there are restrictions on law enforcement using real-time biometric identification (such as facial recognition) in public spaces. Essentially, the police are only allowed to use such systems when they can justify it for a specific and limited purpose, rather than for large-scale surveillance.

High risk

Now we get to the most interesting (and most complicated) category in the proposal. There are two lists of applications that are classified as high-risk. If your application appears on either of them, you will need to satisfy a laundry list of requirements. However, there are exceptions, and those apply if your application is already covered by legislation on yet another list. Let's try to break it down.

The first list consists of a host of existing legislation, and if your AI system is a product (or a safety component of a product) that already needs to pass conformity assessments under that legislation, then it will soon additionally need to comply with the "high-risk AI" requirements. Think industrial machines, safety equipment and devices for medical diagnoses. For the brave readers, the full list, with references to the relevant laws, can be found in Annex II of the proposal. The important caveat is that the application is exempt if it already falls under the scope of any of the eight exceptions in Annex II, Section B, which cover many obvious applications such as automotive and aerospace. For these cases the European Commission intends to amend the existing acts in order to incorporate requirements for AI systems in the individual sectors.

The second list of high-risk applications consists of potentially harmful uses of AI that are not already regulated via third-party conformity assessments. This includes tools for recruitment, university admissions, credit scoring or criminal justice (full details in Annex III).
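
To make the branching a bit more tangible, here is a deliberately simplified decision sketch in Python. The flag names are my own invention, the real assessment requires legal judgment at every step, and this is in no way legal advice:

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "stringent requirements apply"
    LOW_OR_MINIMAL = "little to no special requirements"

def classify_ai_system(prohibited_practice: bool,
                       product_under_existing_conformity_laws: bool,
                       sectoral_exception_applies: bool,
                       on_annex_iii_list: bool) -> RiskTier:
    # Prohibited practices (subliminal manipulation, social scoring, ...)
    # are banned no matter what.
    if prohibited_practice:
        return RiskTier.UNACCEPTABLE
    # Products already subject to third-party conformity assessments become
    # high-risk, unless a sectoral exception (automotive, aerospace, ...)
    # applies; those sectors get their own amended acts instead.
    if product_under_existing_conformity_laws and not sectoral_exception_applies:
        return RiskTier.HIGH
    # Stand-alone use cases enumerated in Annex III are also high-risk.
    if on_annex_iii_list:
        return RiskTier.HIGH
    return RiskTier.LOW_OR_MINIMAL

print(classify_ai_system(False, False, False, True))  # RiskTier.HIGH
```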

So if, after all these complicated rules, you have finally figured out that your application actually is considered high-risk, what exactly does the law ask of you? There are quite a few different requirements, so it is, yet again, time for a list! In essence, the proposal demands:

- A risk management system that runs throughout the system's entire life cycle.
- Data governance: training, validation and test data must be relevant, representative, free of errors and complete.
- Technical documentation that demonstrates compliance.
- Record-keeping: the automatic logging of events while the system is running.
- Transparency: users must receive clear information about the system's capabilities and limitations.
- Effective human oversight.
- Appropriate levels of accuracy, robustness and cybersecurity.
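
To give a feel for what one of these obligations might translate to in practice, here is a minimal sketch of automatic event logging in the spirit of the record-keeping requirement. The record schema is entirely my own assumption; the proposal only demands that events be logged so that the system's operation can be traced:

```python
import json
import time
import uuid

def log_event(logfile, model_version: str, inputs: dict, output) -> None:
    """Append one traceable record per model invocation (toy schema)."""
    record = {
        "event_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "model_version": model_version,
        "inputs": inputs,
        "output": output,
    }
    logfile.write(json.dumps(record) + "\n")

# Example: one audit-trail line per credit-scoring decision.
with open("audit_log.jsonl", "a") as f:
    log_event(f, "credit-scorer-1.3.0", {"income": 42000}, "approved")
```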

As you can see, conformity with these requirements requires some quite elaborate measures, and it certainly requires planning ahead. While the European Commission estimates the costs of compliance to be on the order of a few thousand euros, industry experts are highly sceptical of these numbers. My personal guess is that the requirements on data quality alone are likely more costly than the 6000-7000€ that the Commission has estimated.

That raises the question: what happens if you slip up? The fines can be quite serious. Providing wrong or misleading information about your AI system to the authorities can already cost up to either 10 million euros or 2% of annual revenue, whichever is higher! Outright non-compliance with the requirements can cost twice as much (20 million euros or 4%), and violating the requirements on data management even three times as much (30 million euros or 6%).
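
Because the caps scale with revenue, the flat amounts only matter for smaller companies. A quick sketch of the three tiers (the revenue figure is a made-up example):

```python
def fine_ceiling(annual_revenue_eur: float, flat_cap_eur: float,
                 revenue_share: float) -> float:
    # The cap is the HIGHER of a flat amount and a share of annual revenue.
    return max(flat_cap_eur, revenue_share * annual_revenue_eur)

revenue = 2_000_000_000  # hypothetical company with 2 billion EUR revenue
print(fine_ceiling(revenue, 10_000_000, 0.02))  # misleading information: 40,000,000
print(fine_ceiling(revenue, 20_000_000, 0.04))  # non-compliance: 80,000,000
print(fine_ceiling(revenue, 30_000_000, 0.06))  # data violations: 120,000,000
```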

Low or minimal risk

Finally, we have all other applications of AI. For these there are basically no requirements at all. The only exceptions apply to AI systems that interact with humans (like chatbots), that use emotion recognition methods, or that generate deep fakes. In each of these cases, the human has to be informed that they are interacting with an AI or with AI-generated content.
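
For something like a chatbot, this obligation is mostly a user-interface concern. A tiny sketch of what the disclosure could look like (the wording is my own; the regulation does not prescribe a specific phrasing):

```python
AI_DISCLOSURE = "Note: you are chatting with an automated AI system, not a human."

def start_chat_session(user_name: str) -> None:
    # Surface the disclosure before the conversation starts.
    print(AI_DISCLOSURE)
    print(f"Hello {user_name}! How can I help you today?")

start_chat_session("Alex")
```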

Conclusion

In short, if you suspect that your AI application may fall under the umbrella of "high-risk", now is the time to start preparing for compliance. The legislative proposal is of course full of many more interesting details, like how exactly the member states of the EU are to implement the measures or the special provisions for small-scale providers and start-ups. But, since this article is complicated enough as it is, I will cover these topics in future posts. If you are considering an AI project in high-risk applications and have questions about the current state of affairs, feel free to reach out!