AI Insights, FEBRUARY 2024
Exploring the state of regulatory preparedness among EU companies
Our EU AI Act assessment unveils critical insights on AI use, in-house organization, and perceived risk.
In November 2023, 2021.AI released an assessment to gauge how organizations were using AI and preparing for the EU AI Act. The results provide an eye-opening glimpse into the current AI landscape (or at least our slice of it).
What is the EU AI Act assessment?
We crafted this assessment to evaluate companies’ preparedness for the upcoming EU AI Act. Because the Act will require companies to assess and mitigate the risks posed by their AI models, it is vital that providers like 2021.AI understand the current needs of organizations across the AI landscape so we can help them meet this challenge. It is equally vital that organizations themselves understand the next steps in the process.
Who completed the assessment?
We received 156 responses to our assessment. Of these respondents, 39% were “enterprise” businesses; the remainder comprised medium- or small-sized businesses. 81% were based in, or did business in, the European Union.
What’s the data?
Here’s what our respondents had to say:
- 90% use AI in their organization.
- 70% have an in-house AI/ML team.
- 20% estimate their average AI model risk to be “high” (i.e. poses significant risk to health, safety, or fundamental rights).
- 4% estimate their average AI model risk to be “prohibited” (i.e. violates human dignity; will be banned under the EU AI Act).
In short, our respondents are highly engaged with AI on a business level. While most depend on minimal- or limited-risk AI models, nearly a quarter make significant use of high-risk or prohibited-level models. These businesses will face stringent compliance requirements under the EU AI Act.
Critically, a number of respondents expressed uncertainty about the nature of AI in their organization. 25% did not know what level of risk their AI models entailed, and 19% weren’t sure what role their company played in the AI ecosystem (i.e. as a developer, deployer, importer, or distributor).
Assuming that our survey respondents understand AI better than the average EU employee, and/or that our respondent companies are more engaged in Responsible AI practices than the average company, these statistics may understate the real level of employee uncertainty within EU organizations. If that is the case, organizations have a long way to go in communicating their AI use and policies to employees.
What are the takeaways?
Upon entry into force, the EU AI Act will apply strict regulatory requirements to any company using AI within the EU. This presents two paths for companies: take a proactive approach now or a reactive approach later. (There is a right answer—it’s the first one!)
While companies do not need an in-house team to use AI effectively, they must have strategies and guidelines in place to identify, monitor, and mitigate the potential risks arising from AI use.
To that end, we present four actionable steps:
- Take stock of the AI models your company uses now.
- Evaluate these models’ current and future risk levels.
- Craft a compliance strategy that adheres to regulations and works for your company.
- Communicate this strategy throughout the company. Ensure everyone knows what AI is being used and how the company will respond.
If you’re still not sure how to formulate a solid compliance strategy, that’s okay. As specialists in Governance, Risk, and Compliance (GRC) across the European digital landscape, 2021.AI is here to help.
Click here to book your EU AI Act Advisory session.