The EU Rules on Artificial Intelligence – Five actions to consider now!
For a couple of years, the EU Commission has worked on rules, regulations, and incentives around Artificial Intelligence. On April 21st, it released the long-awaited proposal for harmonized regulations on AI. The proposal, to be adopted into national legislation, is a far-reaching set of rules that will impact all organizations, on par with, and perhaps even more comprehensively than, the GDPR. Understanding the basics of the new rules and their organizational impact is a requirement for all senior executives.
The announcement includes a large number of new requirements for organizations, whether they are providers or users of AI. Not all the specific actions for organizations are immediately apparent in the EU Commission’s summary, nor are they clear from the press coverage following the announcement. However, the main points of the announcement are:
Levels of AI Risk
The central and critical element of the announcement is the division of AI solutions and products into four categories according to risk. The proposal defines AI risk as the ability to manipulate humans consciously or unconsciously, violate people’s rights, pose a danger to people, or enable social scoring or segmentation. The risk categories are:
- Unacceptable risk, such as systems that manipulate a human’s free will
- High risk, examples of which are transportation that may pose a threat to humans, scoring of people in credit or employment situations, AI in machinery such as operating robots, or AI in policing situations. The number of high-risk AI usage areas is significant and will require action both from providers of products or services using AI and from users deploying them.
- Limited risk, which can be simple chatbots or simple machinery operations.
- Minimal risk, such as simple tools for non-critical situations and using non-critical data.
A product or service may move between risk categories over time, meaning that its placement in a risk category is not fixed.
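As an illustration only, the four risk tiers and the idea that a product's classification can change over time could be captured in an internal record like the following minimal Python sketch. All class, field, and method names here are hypothetical; the proposal itself prescribes no data model.

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum


class RiskCategory(Enum):
    """The four risk tiers defined in the EU proposal."""
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"


@dataclass
class AIProductRecord:
    """Internal record of one AI product's risk classification over time.

    The classification is not fixed: a product may be reclassified as its
    data, functionality, or usage evolves, so we keep a dated history.
    """
    name: str
    history: list = field(default_factory=list)  # list of (date, RiskCategory)

    def reclassify(self, when: date, category: RiskCategory) -> None:
        """Append a new dated classification to the product's history."""
        self.history.append((when, category))

    @property
    def current_category(self) -> RiskCategory:
        """Return the most recent classification by date."""
        return max(self.history, key=lambda entry: entry[0])[1]
```

A product that starts as a limited-risk chatbot and later incorporates, say, employment screening would simply be reclassified, and the record would show the change.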
Application of AI rules
The new rules apply to any organization, provider, or user offering an AI product or solution, even if the solution runs on a platform outside the EU but is offered virtually within an EU country. The rules also apply to machinery with AI embedded (this has led to corresponding enhancements in the EU rules on machinery products). The EU rules for AI are expected to evolve and will be governed by national and EU-level oversight boards.
Monitoring and tracking
The rules require providers of products and services that incorporate AI to monitor and track the data, functionality, and intent, and to do so over time. In other words, to track the AI risk level. However, the risk categorization of an AI product or solution is not static and may evolve. This may happen as the product or solution changes, either through changes made by the provider or as the user of the product or service incorporates new data sets or AI functionality. Tracking of high-risk AI solutions and products will happen at the national level and be included in a central EU database. As part of this tracking, high-risk AI products and services will be certified through a CE mark, allowing users to understand more clearly the AI risk of the products and solutions they purchase.
Implementation in legislation
The proposal from the Commission will now need to be discussed, possibly amended in a few places, and then adopted and incorporated into national legislation. The timing of this remains somewhat vague, albeit not far into the future. Once that happens, organizations doing business in EU countries will need to comply with the rules, which will put responsibility on leaders in every organization.
The rules carry the potential for significant penalties for non-compliance. The maximum penalty would be 30 million Euro or 6 percent of the previous year’s turnover, whichever is larger.
Questions to consider
A few things will require clarification or will evolve as AI and its usage increase.
- First, the document distinguishes between a provider and a user. However, over time, everyone using AI will become a provider, given that data, algorithms, and models will become the core fabric of every organization. So, the distinction will make less and less sense.
- Secondly, monitoring and tracking will require two things that are not explicit in the rules. The first is clarity on who audits the risk level of a product or solution. The AI rules would benefit from having a third party that continuously audits the AI risk level of all AI models and algorithms across the organization. This is essential given the increasing complexity, breadth, and adaptability of AI in all organizations. The second is a requirement to install technology that tracks all AI models and algorithms; only technology will be able to report on usage continuously. The current version of the rules lacks specificity on oversight and monitoring, and a common EU approach to auditing and monitoring would benefit all organizations.
- Third and last, the devil is always in the detail, and the challenge becomes determining the boundaries, for example, between high-risk and limited-risk AI products and solutions. Ultimately, this is why organizations are required to monitor and track AI products and solutions, so they can deal with disputes. It is equally why the EU proposal, deep in the document, suggests that organizations adopt a voluntary, explicit, and transparent approach to external communication about their usage of AI.
The rules will evolve through the review and implementation process. And they are not perfect. However, they will impact every organization and will require executive teams to take action now. There are five key actions to consider.
Five actions to consider
The board’s involvement
The EU proposal has made AI a question of risk, or the assessment of risk. Ultimately, this becomes the responsibility of the board or, in the case of public sector organizations, the political oversight function. Boards need to determine how to deal with AI risk, ethics, and governance on an ongoing basis, and who at the board level should assume ownership.
Organizations with business in the EU are recommended to place the EU regulation and AI risk on the board agenda this year.
Business leader accountability
AI implementations constantly evolve as more advanced models learn, morph in function, and ultimately support humans; oversight must evolve with them. Business leaders will constantly need to monitor and understand the first- and second-order impacts of AI products and solutions. Furthermore, given the increasing ubiquity of AI in organizations, this responsibility and accountability can’t be centralized but rests with all business leaders.
All organizations implementing AI need to begin educating senior business leaders on the impact of AI and AI risk on their function, and on their responsibilities concerning it.
Particular action is required of the CIO, Chief Legal Officer, and Chief Human Resource Officer (CHRO). They must ensure a corporate-wide effort is initiated where managers are made aware of their ongoing responsibility for monitoring AI products and solutions.
- The Chief Legal Officer will need to become familiar with the legal requirements, likely by involving outside counsel, and inform the CEO and board of them. Equally, he or she is required to ensure that the appropriate oversight processes are in place, likely in collaboration with the CIO and the CHRO.
- The CHRO needs to ensure that accountable managers are informed of their responsibilities and that these responsibilities are built into the organization’s training and development programs.
- Lastly, the CIO is responsible for the technical oversight of all AI models and algorithms in deployment. This can only be solved through a technical implementation that monitors all deployed models and ensures their compliance is recorded. If the organization has a Chief Data Officer, this responsibility may be best placed with that role.
As mentioned, ongoing monitoring becomes necessary immediately upon adoption in national legislation. And given the requirement to maintain an inventory of AI implementations and their compliance, organizations will need to create a centralized repository of AI models and their behavior, ensuring that a record of past activity is kept for high-risk AI products and solutions. There is a complex twist to this: you will need to know whether AI environments that run outside the EU are used inside the EU, and where the test data for those environments originated. For global organizations, this adds complexity to the adoption of AI across their operations. Continuous monitoring and compliance reporting are only achieved through a platform approach to AI models. Ensuring this overall monitoring and compliance requires collaboration between the Chief Legal Officer, the CIO, the head of Supply Chain, and the organization’s product development function.
Organizations need to create a recording system to track and monitor AI products and solutions and their compliance. The senior leadership team will need to address AI regulation this year as an agenda item at a leadership team meeting. And the outcome of the meeting must be to assign accountability within the organization.
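The recording system described above could, at its simplest, be a registry keyed by model, with a dated compliance history per entry. The following Python sketch is purely illustrative and assumes nothing about any particular platform or product; every name in it is hypothetical.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Dict, List


@dataclass
class ComplianceEvent:
    """One audit or monitoring entry for a deployed AI model."""
    timestamp: datetime
    risk_level: str          # e.g. "high", "limited", "minimal"
    compliant: bool
    notes: str = ""


class AIModelRegistry:
    """Central repository of deployed AI models and their compliance history.

    Keeping the full dated history (not just the latest status) matches the
    proposal's requirement to retain records of past activity for
    high-risk products and solutions.
    """

    def __init__(self) -> None:
        self._events: Dict[str, List[ComplianceEvent]] = {}

    def record(self, model_id: str, event: ComplianceEvent) -> None:
        """Append a compliance event to the model's history."""
        self._events.setdefault(model_id, []).append(event)

    def high_risk_models(self) -> List[str]:
        """Return models whose most recent entry is classified high risk."""
        return [
            model_id
            for model_id, events in self._events.items()
            if max(events, key=lambda e: e.timestamp).risk_level == "high"
        ]
```

In practice, such a registry would also need to capture where a model runs and where its test data originated, since the rules reach AI offered within the EU even when hosted outside it.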
The role of purchasing
AI models will eventually be found in all software and data products, and likely in many purchased physical products, such as a car or medical equipment. Consequently, Purchasing becomes the point of record and monitoring for product certification (such as the CE certification of AI-enabled products). The purchasing organization will need to be ready to capture the requirements of the new rules in contracts. It will equally become the point at which AI models and products with embedded AI are recorded in the organization’s asset registry. This is where tracking of AI implementations needs to start.
Empower Purchasing to capture AI products and solutions entering the company and build them into the organization’s asset registry.
Stakeholder communication
Most AI solutions and products will fall into the “limited” or “minimal” risk categories and will not require the same level of oversight as high-risk AI products and solutions. However, as the EU proposal also supports, proactive communication with the different stakeholders about the organization’s AI principles, especially about the actual usage of AI-enabled systems and products, is advisable. As outlined in our note about AI communications, all organizations would be well advised to inform their key stakeholders about their approach to AI. The leadership should create communication for the three essential stakeholder constituencies: customers, employees, and suppliers. Once the regulation is in place across the EU, this should be standard practice for all organizations. For organizations that produce physical products with embedded AI, it will also become essential to ensure that AI ethics and risk are part of the product descriptions.
Implement and communicate an AI charter for each of the three critical stakeholders of the organization: customers, suppliers, and employees.
The EU proposal for AI regulation is complex. The proposal has yet to be completed, and a precise implementation timeline has yet to be communicated. It will have to balance the need for regulation with the requirement to create an AI-competitive Europe; critics may argue that the current document tilts too much towards regulation. However, what is clear is that this will become law in the EU. As a result, no different from the GDPR, it will require organizations to act. Organizations that act now will be better prepared from a regulatory perspective and highly likely to exploit the enormous business opportunities that AI delivers. This is a critical topic of action for CEOs in 2021.
Source: The Sondergaard Group
About the author
Chairman of the Board, 2021.AI
Peter Sondergaard is currently Chairman of the Board at 2021.AI and owner of his executive advisory company, the Sondergaard Group. Before this, Peter worked for 15 years as Executive VP and a member of Gartner’s operating committee. Peter is a well-known and sought-after speaker covering many topics within IT, AI & ML.