AI WATCH EPISODE #6
Decoding the EU AI Act
By Mikael Munck, Clara Foged Andersen, Niels Christian Ellegaard
The main objective of the EU AI Act is to mitigate the risks of using AI systems. And the way you do that is by enforcing compliance obligations on the whole supply chain.
Welcome to our sixth episode of AI Watch!
In this episode, Clara Andersen interviews guest speaker Niels Christian Ellegaard, a partner at Plesner and head of the law firm’s IT, Telecom, and Outsourcing Team. Together, Clara and Niels delve into the new requirements for businesses and other stakeholders, with emphasis on the AI Act’s risk assessment framework and the compliance obligations for high-risk AI systems.
So if you’re here to learn more about what the AI Act will mean for your own organization, then by all means please read on!
And if you’re in a hurry, no worries—we’ve compiled a brief list of takeaways below:
- The EU AI Act should come into force in January 2026.
- The AI Act will divide AI systems by risk, with different requirements for each category.
- Organizations using AI must adhere to these requirements.
- Organizations should proactively examine their AI systems to gauge the likely risk level.
- Your organization should have a strategy in place to achieve compliance or replace the relevant AI system(s).
The AI Act timeline
Clara: In June, the EU AI Act was approved by the European Parliament. Can you give us a timeline explaining how we got to this point?
Niels: Yes, it’s been a long journey. We got some whitepapers back in the 2010s, and then in 2021 we got the first draft text of the EU AI Act, which has since gone through a number of committees. And, as you mentioned, the newly adopted text was approved by the European Parliament in June.
We expect the final text to be adopted by the Council of the European Union by the end of 2023. Following a two-year implementation period in 2024 and 2025, the EU AI Act will probably enter into force on January 1st, 2026 (assuming we remain on schedule).
Risk frameworks and their requirements
Clara: Could you give us a high-level explanation of the EU AI Act framework?
Niels: Basically, the EU AI Act divides AI systems into three brackets: prohibited systems, high-risk systems, and what we could call “ordinary systems.”
Prohibited systems include, for instance, social scoring and manipulative systems. These systems are entirely prohibited within the EU, whereas high-risk systems are allowed as long as you comply with the requirements, which are quite comprehensive.
Then you have the ordinary systems, which are subject to a limited number of transparency requirements. And when you look at the full supply chain for AI systems, you find that the requirements focus mainly on the providers, requiring, say, conformity assessments and risk management systems.
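To make these brackets concrete, here is a minimal, purely illustrative Python sketch of how an organization might triage its own systems. The tier names follow the episode’s terminology, but the use-case mapping is a hypothetical simplification of ours, not the Act’s legal text.

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative risk brackets, following the episode's terminology."""
    PROHIBITED = "prohibited"  # e.g. social scoring, manipulative systems
    HIGH_RISK = "high_risk"    # allowed, but with comprehensive requirements
    ORDINARY = "ordinary"      # limited transparency requirements only

# Hypothetical mapping of use cases to tiers, loosely based on the
# examples mentioned in this episode; not a legal classification.
EXAMPLE_USE_CASES = {
    "social_scoring": RiskTier.PROHIBITED,
    "credit_scoring": RiskTier.HIGH_RISK,
    "employment_screening": RiskTier.HIGH_RISK,
    "chatbot_faq": RiskTier.ORDINARY,
}

def triage(use_case: str) -> RiskTier:
    """Return the assumed risk tier for a use case, defaulting to ordinary."""
    return EXAMPLE_USE_CASES.get(use_case, RiskTier.ORDINARY)

for case in ("credit_scoring", "chatbot_faq"):
    print(f"{case}: {triage(case).value}")
```

In practice, of course, classification is a legal judgment made against the Act’s annexes, not a dictionary lookup; the sketch only illustrates the three-bracket structure Niels describes.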
Clara: On the subject of high-risk systems, what exactly defines a high-risk system compared to a prohibited or ordinary system?
Niels: The high-risk systems are really a defined set of use cases, and they focus very much on social interaction. Some examples include: education and vocational training, employment and workforce management, access to social benefits, credit scoring and the like, law enforcement, asylum, and also the administration of justice.
You may ask: what about AI systems for medical purposes? These are not covered here, although they will probably be dealt with through other regulations. So you shouldn’t read this list as exhaustive; AI systems outside it may still be addressed by other legislation.
One peculiar thing is that AI systems used exclusively for military purposes are entirely excluded from the scope of the AI Act.
Also, looking at the administration of justice and democratic processes, the Act defines as high-risk those systems used by courts to assess facts and law. That use case does not comprise the use of similar systems by lawyers! That means we could end up in a situation where lawyers can use these systems without their being categorized as high-risk, but judges cannot.
Providers could even avoid the compliance burden altogether by making these systems available only to lawyers. So we could eventually end up in a situation where courts do not have the same tools as lawyers, putting them in a very poor position.
Changes driven by generative AI
Clara: Let’s talk about LLMs and Generative AI. In your opinion, how has this technology changed the EU AI Act?
Niels: It has changed things a lot. If you look at the definition of an AI system, at the outset it was defined through the techniques applied.
But now we have a new definition of what constitutes an AI system, one that is less focused on the techniques. Rather, it’s a more abstract, generic definition focusing on whether a system can act with a certain level of autonomy.
The goals of the AI Act
Clara: Regarding the main objective of the EU AI Act, could you give us some high-level insights into that?
Niels: The main objective of the EU AI Act is to mitigate the risks of using AI systems. And the way you do that is by enforcing compliance obligations on the whole supply chain.
That said, these obligations apply to high-risk systems only. So if you have an ordinary system that isn’t on the list of high-risk systems, you are subject only to a few transparency requirements, basically.
Clara: So maybe you could give some insights into the actual application of the act.
Niels: Yeah, I think the best way to go forward is to say what kind of systems are on the list of high-risk systems. That’s really the objective: to hit those systems with requirements.
A good example is credit scoring, where you have banks and other credit institutions using AI to credit-score natural persons. You need to document that you’ve used the right data and so on.
You also have to apply human oversight. So if you (or I!) are subject to credit scoring, the institution must, in any given situation, be able to make it a human decision rather than a machine decision.
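As a thought experiment, here is a minimal Python sketch of what such a human-override hook might look like. Every name here (score_application, the 0.7 threshold, the reviewer fields) is hypothetical; the snippet is not drawn from the Act or from any real scoring system.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CreditDecision:
    applicant_id: str
    model_score: float  # output of the (hypothetical) scoring model
    approved: bool
    decided_by: str     # "model" or a human reviewer's id

def score_application(applicant_id: str, model_score: float,
                      reviewer_id: Optional[str] = None,
                      human_override: Optional[bool] = None) -> CreditDecision:
    """Approve or reject, but always allow a human to take over the decision."""
    if human_override is not None and reviewer_id is not None:
        # Human-oversight path: a person, not the model, makes the call.
        return CreditDecision(applicant_id, model_score,
                              approved=human_override, decided_by=reviewer_id)
    # Automated path: a simple threshold stands in for the real model logic.
    return CreditDecision(applicant_id, model_score,
                          approved=model_score >= 0.7, decided_by="model")

# An automated rejection can be rerun with a human in the loop:
auto = score_application("A-123", model_score=0.65)
human = score_application("A-123", model_score=0.65,
                          reviewer_id="analyst-7", human_override=True)
print(auto.decided_by, auto.approved)    # model False
print(human.decided_by, human.approved)  # analyst-7 True
```

The design point is simply that the human path must always be available, so an automated outcome never has to be the final word.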
Takeaways for companies
Clara: To wrap this episode up, what would be the three key takeaways for companies to consider?
Niels: I think the first takeaway is to find out what kind of system you’re dealing with. Will it be a prohibited system, a high-risk system, or just an ordinary system? Of course, this may change over time, but you need to find out, where am I now?
The second takeaway, if you’re dealing with a high-risk system, is to consider the obligations that will rest upon you. Will you be able to meet those requirements? Do you need to do something now?
And the third takeaway is, once you’ve figured out what you have to do, to ask whether you can meet those requirements before January 2026, when the AI Act comes into force.
So these would be my focus areas if I were dealing personally with an AI system, just to make sure that I don’t invest in something that is prohibited or else very difficult to account for under the EU AI Act.