Executive Insights, Updated 2023

AI Governance – What are the KPIs? And who is accountable?



Everywhere you turn in the coverage of AI, you see the term AI Governance. What is interesting is that three simple but critical elements seem to be missing from this coverage. First, the term is rarely defined. While we may all believe we know what the word governance means, what does AI Governance mean? Most of the conversation centers on public policy around AI Governance and, as such, is not fully applicable to businesses. Second, there is very little coverage of which executives or functions in organizations are responsible for which aspects of AI Governance. To execute a policy, you need accountability, meaning you need to be clear on who is responsible. Third, too many aspects of AI governance are not measurable, or no consideration is given to how to measure and factually track AI Governance. Each of these three elements of AI governance needs attention in every organization. Let’s explore each of them.

What is AI Governance?

The word “governance” has, in the last thirty years, become embedded in standard business vocabulary, but it is often used for slightly different things with somewhat different meanings. Generally, governance includes all processes of governing. It is the way rules or actions are structured, maintained, and regulated – and often how accountability is assigned. AI governance fits this description as well. AI governance is about AI being explainable, transparent, and ethical. However, those three words mean different things to different organizations, or to different functions within organizations, which results in slightly different definitions or descriptions of what AI governance is. For example, there is a difference between a technical and a policy (or legal) definition of “transparency”. A technical definition will certainly include details on the software code and on individual data components, which a policy definition may not. Equally, the words “explainability” and “transparency” can include the ability to measure cost, value, auditability, and compliance. Still, most approaches to AI governance leave out at least one of these, or lack specificity on all of them.

And then, lastly, there is how AI governance interacts with everything around it. A governance policy needs to exist within, and interact with, the world or the organization it is intended to serve. Most AI governance descriptions leave this out. Some definitions, like the Singaporean government’s version, make it explicit that an AI solution needs to be human-centric, thereby narrowing the universe the AI governance serves. But that may not solely be the role of AI governance. It also needs to fit the overall corporate governance framework and may, at times, need to be machine-centric rather than optimized around humans.

The key for all organizations is that defining AI governance needs to be done carefully and with significant thought. The responsibility for this has to rest with the CEO. An essential part of that is to ensure that accountability for AI governance is crystal clear and that the key elements are measurable. Let’s look at each of these now, starting with “Who should be responsible for AI governance?”.

Who should be responsible for AI Governance?

That is a complicated question because no single person is solely responsible today, and the answer will vary based on an organization’s definition of AI governance. With data critical to all business functions, customer engagement, products, and supply chains, every leader needs to be knowledgeable about AI. AI leadership is a new skill critical to all leaders (see Seven New Leadership Skills in the Digital Economy), and AI governance is an element of it. So it is clear that AI governance needs to be relevant and applicable to all leaders in an organization. However, there are also functional roles that are important in the enforcement and continued evolution of an organization’s AI governance.

First, the CEO – or the senior leader in a government institution – needs to be ultimately accountable for the AI Governance charter as well as for the clear assignment of responsibility within the organization. Second, if data truly is a competitive asset, then responsibility for control, and possibly audit, will fall on the board of the organization; this responsibility must involve the audit committee. Third, several specific functional roles within an organization must be included in AI governance: the General Counsel, who would be responsible for the legal and risk aspects; the CFO, for the cost and financial risk elements; and, I would strongly argue, a chief data officer (CDO), responsible for maintaining and coordinating the ongoing evolution of the organization’s AI governance. Other functional executive roles could have specific responsibilities around AI governance, but this is where AI, including AI governance, becomes a general leadership skill that every manager in an organization needs to master.

Assigning responsibility for AI governance is essential. Without clear responsibilities, no one is accountable, and if no one is accountable, the approach to AI governance will be mediocre at best; more likely, it will fail. This is particularly true because AI and the underlying data of the organization will always be changing and morphing as the organization becomes more and more successful at using data strategically and AI in all processes and products. CEOs must take ownership of their role in AI governance. Equally, it is plausible that governments will become more involved through regulation and legal requirements, and that audit firms will become the third-party entities involved in the continuous review of AI governance.

How should AI Governance be measured?

In addition to assigning responsibilities, you also need to be able to measure AI governance. As the old saying goes, “What you can’t measure, you can’t manage”. Most coverage of AI governance skirts the issue of measurement, with only passing mentions in documents such as Singapore’s “Model Artificial Intelligence Governance Framework”. Even there, no specific measures of AI Governance are recommended. The lack of such measures – of facts – will be a weakness in any organization, because what is not measured cannot be transferred to and incorporated into processes, systems, and platforms. Deploying AI governance as an element of a software environment will become critical for all organizations.

The question becomes which measures are relevant. Answering it requires an organization to first be clear on the two points covered above: the definition of AI governance specific to the organization, and who in the organization is accountable and what they are responsible for. Some measures or metrics for AI governance will become standard for all organizations, either through regulation or through market forces. Organizations also need to consider other measures that support their strategic direction as well as how the company runs on a day-to-day basis. Some of the facts and data-driven KPIs organizations should consider are:

Measuring AI Governance


  • Data: Measures for the lineage or provenance of the data, as well as measures of data quality.
  • Security: Data feeds on the security of the models and their usage, allowing tampering or improper use of the AI environments to be detected. Ultimately, some of this may be subsumed in future blockchain versions of AI environments.
  • Cost/Value: Ideally, there should be KPIs for the cost of data, or even for the value created by the data and the algorithm.
  • Bias: KPIs that can show selection bias or measurement bias are required. What is certain is that organizations need to monitor bias continuously through direct or derived data. Furthermore, it will also be possible to create KPIs that measure, or at least approximate, information on ethics.
  • Accountability: Clarity of responsibility and thereby of the involvement of individuals – when they used the systems and for which decisions.
  • Audit: Continuous collection of data that, in aggregate, can form the basis for audit trails and periodic reviews of the system, paving the way for third parties, such as audit firms or the software platform itself, to perform audits continuously.
  • Time: Measures of time should be part of all the KPIs, allowing for a better understanding of impact over time.
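To make the KPI list above concrete, here is a minimal sketch of how several of these measures – accountability, data lineage, bias, audit, and time – could be captured in a single logging structure. All names here (GovernanceRecord, GovernanceLog, the field names) are illustrative assumptions for this sketch, not part of any standard or framework mentioned in this article.

```python
from collections import defaultdict
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class GovernanceRecord:
    """One audited model decision: who invoked it, when, on what data, with what outcome."""
    model_id: str     # Accountability: which model produced the decision
    user_id: str      # Accountability: who used the system
    data_source: str  # Data: lineage / provenance of the input
    outcome: int      # 1 = positive decision, 0 = negative decision
    group: str        # Bias: segment or protected attribute to monitor
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)  # Time: when it happened
    )


class GovernanceLog:
    """Append-only log that doubles as an audit trail and a source of governance KPIs."""

    def __init__(self) -> None:
        self.records: list[GovernanceRecord] = []

    def log(self, record: GovernanceRecord) -> None:
        # Audit: every decision is retained for later review
        self.records.append(record)

    def positive_rate_by_group(self) -> dict[str, float]:
        """Bias KPI: share of positive outcomes per monitored group."""
        totals: dict[str, int] = defaultdict(int)
        positives: dict[str, int] = defaultdict(int)
        for r in self.records:
            totals[r.group] += 1
            positives[r.group] += r.outcome
        return {g: positives[g] / totals[g] for g in totals}

    def selection_bias_gap(self) -> float:
        """Largest difference in positive rates between any two groups."""
        rates = list(self.positive_rate_by_group().values())
        return max(rates) - min(rates) if rates else 0.0
```

A structure like this turns the abstract KPIs into queryable facts: the same records that answer "who decided what, and when" for an auditor also feed a continuously updated bias metric, which is the kind of software-embedded governance the article argues for.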

There are many other measures or KPIs for organizations to consider. What is critical is that they start implementing them now and commit to nurturing and evolving the measurement environment. Then, finally, they should build these measures into the software. AI governance should be part of all AI and machine learning environments.

Organizational AI Governance now!

It is paramount that CEOs take responsibility for creating their organization’s AI governance charter, and that this effort includes assigning accountability and creating measures or KPIs for the organization. Without a holistic approach to AI governance, AI will fail in your organization.

Peter Soendergaard



Peter is Chairman of the Board at 2021.AI and the owner of the executive advisory company the Sondergaard Group. Before this, Peter spent 15 years as Executive VP and member of Gartner’s operating committee. Peter is a well-known and sought-after speaker on many topics within IT, AI & ML.
