Executive Insights, October 2020

Fairness in AI

Mikael Munck

Founder & CEO, 2021.AI

Bias in AI is a persistent and growing phenomenon. Overcoming it requires deliberately applying the notion of fairness. This article examines the legal frame, looking into specific nuances of fairness in AI. It is not an exhaustive analysis, but rather an entry point to a complex problem with plenty of opportunities and challenges.

The fairness of AI algorithms is a growing field of research that arises from the general need for decisions to be free from bias and discrimination. Fairness also applies to AI-based decision tools, and here the European White Paper on AI provides a framework within which AI and algorithmic decision-making must be carefully considered.

For the sake of simplicity, let us use a hypothetical case: an AI model used by a bank to predict whether or not an individual will receive a loan based on the risk of default. Some critical elements in the European White Paper on AI play together when this type of AI model assesses an individual, namely: the person’s right not to be subject to an automated decision in the first place, their right to an explanation of the decision, and their right to non-discrimination.

This frame requires AI practitioners to produce models and workflows that – by design – guard against possible discrimination (fairness), are explainable to the user with a high degree of clarity (interpretability), and are reproducible through the whole AI model workflow (transparency). Examples of research efforts and products in this direction can be found at Google and IBM (see references).

In the described scenario, different definitions of fairness must be considered to evaluate the decision, and adequate information about the AI model is needed to analyze it. A set of questions can help highlight some aspects of fairness and its importance without going into the specifics of a particular model.
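As a hypothetical illustration of why the choice of definition matters, the sketch below uses entirely fabricated loan decisions to compute two common fairness criteria from the research literature, demographic parity and equal opportunity, on the same set of decisions:

```python
# Fabricated loan decisions for two groups. Different fairness definitions
# can disagree on whether the same set of decisions is fair.

def rate(pairs, group, repaid=None):
    """Approval rate for a group; optionally restricted to applicants
    who would actually have repaid the loan."""
    rows = [p for p in pairs if p["group"] == group
            and (repaid is None or p["repaid"] == repaid)]
    return sum(r["approved"] for r in rows) / len(rows)

decisions = [
    # group, whether the applicant would have repaid, model decision
    {"group": "A", "repaid": True,  "approved": True},
    {"group": "A", "repaid": True,  "approved": True},
    {"group": "A", "repaid": False, "approved": True},
    {"group": "A", "repaid": False, "approved": False},
    {"group": "B", "repaid": True,  "approved": True},
    {"group": "B", "repaid": True,  "approved": True},
    {"group": "B", "repaid": False, "approved": False},
    {"group": "B", "repaid": False, "approved": False},
]

# Demographic parity: overall approval rates should match across groups.
dp_gap = rate(decisions, "A") - rate(decisions, "B")  # 0.75 - 0.50 = 0.25

# Equal opportunity: approval rates among *creditworthy* applicants
# (those who would repay) should match across groups.
eo_gap = rate(decisions, "A", repaid=True) - rate(decisions, "B", repaid=True)

print(f"demographic parity gap: {dp_gap:.2f}")  # 0.25
print(f"equal opportunity gap:  {eo_gap:.2f}")  # 0.00
```

Here the decisions satisfy equal opportunity (creditworthy applicants in both groups are approved at the same rate) while violating demographic parity, so a verdict of "fair" depends on which definition one adopts.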

Will excluding the individual’s protected attributes avoid discrimination?

An individual’s protected attributes include demographics such as sex, race, ethnic or social origin, genetic features, language, religion or belief, political opinion, disability, age, sexual orientation, and so on. It is straightforward to see that a decision based on only one of those features is indeed discriminatory. However, simply eliminating them from the analysis does not offer a solution. In our bank loan case, assume that these features are excluded, but the person’s home postal code is used instead. In a neighborhood where almost all residents belong to a single group (e.g., a single ethnicity), an algorithm trained with the postal code could make decisions informed by group membership. Additionally, other groups located in the same geographic region (postal code) risk being misestimated if no other characteristic can differentiate them.
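A minimal sketch of this proxy effect, using entirely fabricated applicants, shows how an excluded attribute can be reconstructed from the postal code alone:

```python
# Proxy discrimination sketch: even when the protected attribute (here
# "ethnicity") is excluded from the model's inputs, a proxy such as the
# postal code can recover it almost perfectly in segregated neighborhoods.
# All data below is fabricated for illustration.
from collections import Counter

applicants = (
    [{"postal_code": "1000", "ethnicity": "majority"}] * 9
    + [{"postal_code": "1000", "ethnicity": "minority"}] * 1
    + [{"postal_code": "2000", "ethnicity": "minority"}] * 9
    + [{"postal_code": "2000", "ethnicity": "majority"}] * 1
)

# Group ethnicities by postal code and guess the most common one per code.
by_code = {}
for a in applicants:
    by_code.setdefault(a["postal_code"], []).append(a["ethnicity"])
guess = {code: Counter(vals).most_common(1)[0][0]
         for code, vals in by_code.items()}

# The postal code alone recovers the excluded attribute for 90% of applicants.
accuracy = sum(guess[a["postal_code"]] == a["ethnicity"]
               for a in applicants) / len(applicants)
print(f"protected attribute recovered from postal code: {accuracy:.0%}")
```

Any model free to use the postal code can therefore act on group membership even though the protected attribute was never in its training data.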

Excluding an individual’s protected attributes is therefore not enough to guarantee an equitable decision; moreover, those very attributes may be exactly what is needed to analyze the algorithm for discrimination.

Is model explainability enough?

Now consider the case in which the bank omits personal attributes that differentiate a specific individual or minority with a low risk of default, e.g., a good credit history, and instead uses coarse characteristics that are not traceable to the individual, such as the postal code. In a neighborhood that historically has more unpaid or defaulted loans than other neighborhoods, minorities will likely have their risk overestimated, since they are represented only by the postal code of a high-risk neighborhood. Moreover, this overestimation will not be detectable if the underlying data is unavailable and the variables that could differentiate the individual or minority from the coarse group are absent. A model that is explainable in terms of the features it uses can thus still conceal discrimination introduced by the choice of those features.
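The overestimation can be sketched with fabricated numbers: a model that sees only the neighborhood’s historical default rate assigns the neighborhood average to every resident, including an applicant whose individual risk, given their credit history, would be far lower:

```python
# Sketch of how coarse features overestimate risk for low-risk individuals.
# All rates and names below are fabricated for illustration.

# Historical default rate per postal code (the only feature the model sees).
neighborhood_default_rate = {"1000": 0.05, "2000": 0.30}

def coarse_risk(applicant):
    """Risk estimate based on the postal code alone."""
    return neighborhood_default_rate[applicant["postal_code"]]

# An applicant in the high-risk neighborhood with an excellent credit
# history: their individual risk, if the history were visible, is low.
applicant = {"postal_code": "2000", "credit_history": "excellent",
             "individual_risk": 0.05}

estimated = coarse_risk(applicant)                       # 0.30
overestimate = estimated - applicant["individual_risk"]  # ~0.25
print(f"estimated risk {estimated:.2f}, overestimated by {overestimate:.2f}")
```

Because `credit_history` never enters the model, no explanation of the model’s decision can reveal that this applicant was treated worse than their actual risk warrants.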

Do large amounts of data help guarantee non-discrimination?

AI models are built on data that reflects the biases of past decisions, and the data used for training therefore greatly influences the outcome. Large amounts of data do not, by themselves, remove these biases; typically a case-by-case analysis is required to avoid potential shortcomings.
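A toy sketch, with fabricated approval probabilities, shows why more data alone does not help: a model that learns approval rates from a biased history reproduces the same discriminatory gap regardless of how much of that history it sees:

```python
# Toy illustration: more data does not remove bias when the historical
# labels themselves are biased. A frequency model trained on 100x more of
# the same biased history learns the same discriminatory approval rates.
import random

random.seed(0)

def biased_history(n):
    """Fabricated past decisions: group B approved half as often as A."""
    data = []
    for _ in range(n):
        group = random.choice(["A", "B"])
        p_approve = 0.8 if group == "A" else 0.4
        data.append((group, random.random() < p_approve))
    return data

def learned_rate(history, group):
    """Approval rate the model learns for a group from the history."""
    rows = [approved for g, approved in history if g == group]
    return sum(rows) / len(rows)

small, large = biased_history(1_000), biased_history(100_000)

# The learned gap stays near 0.4 no matter how much data is used.
gap_small = learned_rate(small, "A") - learned_rate(small, "B")
gap_large = learned_rate(large, "A") - learned_rate(large, "B")
print(f"gap with 1k samples:   {gap_small:.2f}")
print(f"gap with 100k samples: {gap_large:.2f}")
```

Scaling up the dataset only makes the model more confident in the biased pattern; it is the labels, not the sample size, that must be examined.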

What actions can I take to ensure fairness in AI models?

All organizations considering AI should start with a holistic view of both data and AI models, in which governance is a key component. To avoid discrimination or bias in outcomes, consider validating your data through an AI platform, which can also provide validated fairness and transparency. It is one thing to say that an AI practice is fair or ethical, and another to prove it. When your transparency is validated, there is a trace of the actual events that occurred, and you can readily answer the question, “how did you do that?”

A platform will monitor AI models in both the development and deployment stages, and only in an AI platform with AI governance support can you fully validate and document model processes to ensure that they meet the requirements under AI regulations.


Fairness is a complex issue that requires work on its definitions and on their impact on the AI models produced. The absence of a one-size-fits-all solution demands a great deal of analysis of the data used to train the model and of its effects. Fairness is highly connected to the AI models’ interpretability and to the transparency of the process by which those models were created. AI governance support through an AI platform is a great way to gain that high level of interpretability and transparency, and to confirm that AI models in production comply with regulations set by the EU. Fairness in AI is currently an active field of research as practitioners question the implications of using AI to aid in the decision-making process.


  1. Bryce Goodman and Seth Flaxman. European Union regulations on algorithmic decision-making and a “right to explanation”. https://arxiv.org/abs/1606.08813
  2. https://ec.europa.eu/info/law/law-topic/data-protection/reform/rights-citizens/my-rights/can-i-be-subject-automated-individual-decision-making-including-profiling_en
  3. https://ec.europa.eu/info/aid-development-cooperation-fundamental-rights/your-rights-eu/know-your-rights/equality/non-discrimination_en
  4. https://developers.google.com/machine-learning/fairness-overview/
  5. https://cloud.google.com/inclusive-ml/
  6. https://www.ibm.com/blogs/research/2018/09/ai-fairness-360/
  7. Sam Corbett-Davies and Sharad Goel. The Measure and Mismeasure of Fairness: A Critical Review of Fair Machine Learning. https://arxiv.org/abs/1808.00023

  1. This case is used in [2] to describe the individual’s rights under GDPR.
  2. General Data Protection Regulation (GDPR) [2]
  3. Article 21 of the Charter of Fundamental Rights [3]
  4. Multiple definitions of fairness and their shortcomings can be found in [7].
  5. The protected attributes of an individual are given in [3], namely: sex, race, color, ethnic or social origin, genetic features, language, religion or belief, political or any other opinion, membership of a national minority, property, birth, disability, age or sexual orientation.
  6. The actual intention behind this hypothetical case is left out of the discussion; it could stem from inexperience or from actual malice aimed at reducing the number of loans. Either way, it is irrelevant to the requirements imposed by the “interpretability” of the decision.
Mikael Munck

Founder & CEO, 2021.AI

Mikael is the founder and CEO of 2021.AI. He has 25+ years of experience in Technology and Financial sectors. Mikael was the Global Head of Technology and Operations at Saxo Bank. He is the Chairman of Copenhagen FinTech, investor and board member of several other technology companies.
