Multinational AI Governance – The threat of "fragmentation of AI"

Governments and multinational institutions are racing to create principles, laws, and regulations for artificial intelligence and machine learning. However, their approaches and focus diverge in multiple areas, and as a consequence, multinational organizations will need to consider how to manage multinational AI governance.

PETER SONDERGAARD | CHAIRMAN OF THE BOARD, 2021.AI | THE SONDERGAARD GROUP | MAY | 2020

The challenge will be the uneven pace of change and the varying balance countries adopt between, first, protecting citizens, organizations, and society; second, the need to foster innovation and competition; and last, national security. The emphasis on these three elements will vary across the globe and will not remain constant, as we iterate asymptotically towards a single global view of AI governance.

We can learn from the evolution of data privacy policies and regulations, and from how data-as-a-business has been implemented and is viewed across the globe. We can generally assume that the different government approaches to data privacy will also influence the path to AI regulation and laws.

The US will be weighted more towards businesses, Europe (and Japan) more towards the protection of citizens and organizations, and China more towards supporting national policy. As noted, different country policies and regulations will tend to move asymptotically towards a single global view of AI governance.

However, what will likely influence all three is national security interests and the assumption that artificial intelligence (together with technologies like quantum computing) constitutes a vital national interest. National security interests may fragment certain aspects of AI and ML, including policy and legislation. They may even lead to the rise of national-only software companies, somewhat reminiscent of the national hardware providers and service bureaus that served national governments in the 1960s and 1970s.

Organizations with a multinational presence must assume that legislation around artificial intelligence is emerging across the globe, but at an uneven pace. In the major economies, the last two years have seen several exploratory efforts around policy and legislation. These are now, as in the EU, turning into actual policy and, soon, legislation.

However, actual policy and regulatory measures will vary substantially. Organizations therefore need to ensure that global accountability for AI governance is in place and that it covers multinational regulatory and legal environments.

How to deal with the regulatory and legal environments

Here are a few recommendations for how multinational organizations can deal with this legal and regulatory uncertainty.

Establish corporate guidelines

AI governance is a board-level issue, and even more urgent in multinational organizations. The board and the CEO need to be active participants in establishing corporate guidelines for AI governance.

Roles and responsibilities

Every organization needs a single point of coordination for AI governance, especially multinational organizations. The Chief Data Officer or the Chief Legal Officer are two roles that can take responsibility for this global coordination.

Global technology platform

The organization will need a global technology platform to manage all algorithms and the associated governance of AI. Without this, internal “fragmentation of AI” becomes likely and disruptive to the organization, as it will lack a consistent position on AI with clients, suppliers, and employees.

Legal functions and capabilities

The legal function needs to upgrade its capabilities to manage regulatory and judicial matters of AI and machine learning globally. This can be done through internal specialization in the legal department or by working with external partners to gain ongoing input on changes in policies and laws.

AI policies and employees

The HR organization needs to understand how AI policies impact employees and their jobs or tasks differently across legislative environments. A central point of expertise in the HR department will ensure consistent, locally aware global policies.

Impact on products, services, and clients

The sales and customer service organizations will need to understand the different legislative environments and AI governance requirements that may impact products and services and, ultimately, the interaction with clients.

As AI and machine learning face a growing and increasingly divergent set of regulations globally, multinational organizations need to increase their preparedness. Without this, internal as well as external challenges around AI governance will persist for the next couple of years.

Source: The Sondergaard Group

About the author

Peter Soendergaard

Chairman of the Board, 2021.AI

Peter Sondergaard is currently Chairman of the Board at 2021.AI and owner of his executive advisory company, the Sondergaard Group. Before this, Peter worked as Executive VP and a member of Gartner’s operating committee for 15 years. Peter is a well-known and sought-after speaker covering many topics within IT, AI, and ML.
