
For a couple of years, the EU Commission has worked on rules, regulations, and incentives around Artificial Intelligence. On April 21st, it released its long-awaited proposal for harmonized regulations on AI. The proposal, which is to be adopted into national legislation, is a far-reaching set of rules that will impact all organizations, on par with, and perhaps even more comprehensive than, GDPR. Understanding the basics of the new rules and their organizational impact is a requirement for all senior executives.
The announcement includes a large number of new requirements for organizations, whether they are providers or users of AI. Not all of the specific actions required of organizations are immediately apparent in the EU Commission’s summary, nor are they clear from the press coverage that followed the announcement. However, the main points of the announcement are:
The central and critical element of the announcement is the division of AI solutions and products into four categories according to risk. The proposal defines AI risk as the ability to consciously or unconsciously manipulate humans, violate people’s rights, endanger people, or enable social scoring or segmentation. The risk categories are: unacceptable risk, where the practice is banned outright; high risk, subject to strict obligations and certification; limited risk, subject to transparency obligations; and minimal risk, which is largely unregulated.
A product or service may move between risk categories over time, meaning that its placement in a risk category is not fixed.

Application of AI rules

The new rules apply to any organization, provider or user, offering an AI product or solution, even if the solution runs on a platform outside the EU but is offered virtually within an EU country. The rules also apply to machinery with embedded AI. (This has led to corresponding enhancements of the EU rules applying to machinery products.) The EU rules for AI are expected to evolve and will be governed by national and EU-level oversight boards.

Monitoring and tracking

The rules require providers of products and services that incorporate AI to monitor and track the data, functionality, and intent, and to do so over time. In other words, to track the AI risk level. The risk categorization of an AI product or solution is not static and may evolve. This may happen as the product or solution evolves, either through changes made by the provider or as the user of the product or service incorporates new data sets or AI functionality. Tracking of high-risk AI solutions and products will happen at the national level and be included in a central EU database. As part of this tracking, high-risk AI products and services will be certified through a CE mark, allowing users to understand the AI risk of purchased products and solutions more clearly.

Implementation in legislation

The proposal from the Commission will now need to be discussed, possibly amended in a few places, and then adopted and incorporated into national legislation. The timing of this remains somewhat vague, albeit not far into the future. Once that happens, organizations doing business in EU countries will need to comply with the rules, which will place responsibility on leaders in every organization.

Penalties

The rules carry the potential for significant penalties for non-compliance. The maximum penalty would be 30 million Euro or 6 percent of the previous year’s turnover, whichever is larger.

Questions to consider

A few things will require clarification or will evolve as AI and its usage increase.
The rules will evolve through the review and implementation process, and they are not perfect. However, they will impact every organization and will require executive teams to act now. There are five key actions to consider.
The EU proposal has made AI a question of risk, or the assessment of risk. Ultimately, that becomes the board’s responsibility or, in the case of public sector organizations, the political oversight function’s. Boards need to determine how to deal with AI risk, ethics, and governance on an ongoing basis, and who at the board level should assume ownership.
Action:
Organizations doing business in the EU should put the EU regulation and AI risk on the board agenda this year.
AI implementations constantly evolve as more advanced models learn, morph in function, and ultimately support humans, and oversight must evolve with them. Business leaders will need to continuously monitor and understand the first- and second-order impacts of AI products and solutions. Furthermore, given the increasing ubiquity of AI in organizations, this responsibility and accountability cannot be centralized; it rests with all business leaders.
Action:
All organizations implementing AI need to begin educating senior business leaders about the impact of AI and AI risk on their functions, and about their responsibilities concerning it.
Particular action is required of the CIO, Chief Legal Officer, and Chief Human Resource Officer (CHRO). They must ensure a corporate-wide effort is initiated where managers are made aware of their ongoing responsibility for monitoring AI products and solutions.
As mentioned, ongoing monitoring becomes necessary immediately upon adoption into national legislation. Given the requirement to maintain an inventory of AI implementations and their compliance, organizations will need to create a centralized repository of AI models and their behavior, ensuring that a record of past activity is kept for high-risk AI products and solutions. There is a complex twist to this: you will need to know whether AI environments that run outside the EU are used inside the EU, and where the test data for those environments originated. For global organizations, this will add complexity to the adoption of AI across their operations. Continuous monitoring and compliance reporting can only be achieved through a platform approach to AI models. Ensuring this overall monitoring and compliance requires collaboration between the Chief Legal Officer, the CIO, the head of Supply Chain, and the organization’s product development function.
Action:
Organizations need to create a system of record to track and monitor AI products and solutions and their compliance. The senior leadership team will need to address AI regulation this year as an agenda item at a leadership team meeting, and the outcome of that meeting must be to assign accountability within the organization.
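To make the idea of such a system of record concrete, here is a minimal Python sketch of what one might track: each AI asset's risk tier, its CE-marking status, and an audit trail of reclassifications. The class and field names are illustrative assumptions, not anything mandated by the proposal, and a real system would persist records and feed central compliance reporting.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum


class RiskCategory(Enum):
    """The four risk tiers in the EU proposal."""
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"


@dataclass
class AIAssetRecord:
    """One entry in the organization's AI system of record (illustrative fields)."""
    name: str
    provider: str
    risk_category: RiskCategory
    runs_outside_eu: bool          # hosted on a platform outside the EU?
    offered_in_eu: bool            # offered or used within an EU country?
    ce_marked: bool = False        # CE certification for high-risk products
    history: list = field(default_factory=list)  # audit trail of changes


class AIRegistry:
    """In-memory registry; illustrative only."""

    def __init__(self):
        self._assets = {}

    def register(self, asset: AIAssetRecord):
        self._assets[asset.name] = asset
        self._log(asset, f"registered as {asset.risk_category.value} risk")

    def reclassify(self, name: str, new_category: RiskCategory, reason: str):
        # Risk categories are not static: new data sets or functionality
        # can move a product between tiers, so record every change.
        asset = self._assets[name]
        asset.risk_category = new_category
        self._log(asset, f"reclassified to {new_category.value}: {reason}")

    def compliance_gaps(self):
        # High-risk assets must be CE-marked; flag those that are not.
        return [a.name for a in self._assets.values()
                if a.risk_category is RiskCategory.HIGH and not a.ce_marked]

    @staticmethod
    def _log(asset, event):
        asset.history.append((datetime.now(timezone.utc).isoformat(), event))
```

In use, registering a high-risk product without a CE mark would immediately surface it as a compliance gap, and a later reclassification (for example, after the provider removes a scoring feature) would be recorded in the asset's history.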
AI models will eventually be found in all software and data products, and likely in many physical products, such as cars or medical equipment. Consequently, Purchasing becomes the point of record and of monitoring product certification (such as the CE certification of AI-enabled products). The purchasing organization will need to be ready to capture the requirements of the new rules in contracts. It will equally become the point at which AI models and products with embedded AI are recorded in the organization’s asset registry. This is where tracking of AI implementations needs to start.
Action:
Empower Purchasing to capture AI products and solutions entering the company and to record them in the organization’s asset registry.
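As a sketch of what this could mean in practice, an intake check at the point of purchase might flag whether an item embeds AI (and so must enter the asset registry) and whether a high-risk AI product arrives without its CE marking. The function and field names below are hypothetical, offered only to illustrate the kind of gate Purchasing could operate.

```python
from dataclasses import dataclass


@dataclass
class PurchaseItem:
    """A procured product that may embed AI (fields are illustrative)."""
    description: str
    embeds_ai: bool
    high_risk_ai: bool = False
    ce_marked: bool = False


def intake_check(item: PurchaseItem) -> tuple[bool, list[str]]:
    """Return (must_be_recorded, blocking_issues) for a procured item.

    Tracking of AI implementations starts at the point of purchase, so
    any AI-embedding product must be recorded in the asset registry.
    """
    issues = []
    if item.embeds_ai and item.high_risk_ai and not item.ce_marked:
        issues.append("high-risk AI product lacks CE marking")
    return item.embeds_ai, issues
```

For example, a driver-assist module bought without a CE mark would be held for certification checks, while a product with no embedded AI passes through without a registry entry.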
Most AI solutions and products will fall into the “limited” or “minimal” risk categories and will not require the same level of oversight as high-risk AI products and solutions. However, as the EU proposal also encourages, proactive communication with stakeholders about the organization’s AI principles, especially about the actual usage of AI-enabled systems and products, is advisable. As outlined in our note about AI communications, all organizations would be well advised to inform their key stakeholders about their approach to AI. Leadership should create communication for the three essential stakeholder constituencies: customers, employees, and suppliers. Once the regulation is in place across the EU, this should be standard practice for all organizations. For organizations that produce physical products with embedded AI, it will also become essential to ensure that AI ethics and risk are part of the product descriptions.
Action:
Implement and communicate an AI charter for each of the three critical stakeholder groups of the organization: customers, suppliers, and employees.
The EU proposal for AI regulation is complex. The proposal has yet to be finalized and a precise implementation timeline communicated. It will have to balance the need for regulation with the requirement to create an AI-competitive Europe, and critics of the current document may argue that it tilts too much towards regulation. What is clear, however, is that this will become law in the EU. As a result, and no different from GDPR, it will require organizations to act. Organizations that act now will be better prepared from a regulatory perspective and more likely to exploit the enormous business opportunities that AI delivers. This is a critical topic for CEO action in 2021.