AI Insights, OCTOBER 2020
Disparate impacts in AI implementations
Yina Moe-Lange
Author, The Big Y
Every day, more and more decisions across the enterprise are made by algorithms within AI systems. Humans can unconsciously bring biases into their decision-making, and we must ensure that these biases are not reflected, or further amplified, in our AI decision-making processes. AI also has the potential to help reduce unfair biases and support responsible decision-making that avoids disparate impacts.
What is a disparate impact?
Biases can be both intentional and unintentional, so we need to understand where they may appear. A common form of unintentional bias is disparate impact (also known as indirect discrimination): policies, rules, or outcomes that disproportionately affect a specific group of individuals even though there is no underlying intention to do so. Unintentional bias is particularly important to address, as it can be quite harmful to different groups of people in our society.
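To make the idea concrete, here is a minimal sketch of one widely used way to quantify disparate impact: the ratio of favorable-outcome rates between an unprivileged and a privileged group, often checked against the "four-fifths rule." The data, group labels, and the 0.8 threshold below are illustrative assumptions, not a prescribed method.

```python
import numpy as np

def disparate_impact_ratio(y_pred, group):
    """Ratio of favorable-outcome rates: unprivileged group / privileged group."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    rate_unprivileged = y_pred[group == 0].mean()  # favorable-outcome rate, group 0
    rate_privileged = y_pred[group == 1].mean()    # favorable-outcome rate, group 1
    return rate_unprivileged / rate_privileged

# Illustrative loan decisions (1 = approved) for two groups
y_pred = [1, 0, 1, 1, 0, 1, 1, 1, 1, 0]
group  = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]

ratio = disparate_impact_ratio(y_pred, group)
print(ratio)  # 0.75 -- below the common 0.8 threshold, flagging possible disparate impact
```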
Unintentional proxy discrimination
An example where there has been a long history of bias is the financial services industry, specifically the determination of creditworthiness. Financial institutions must assess consumer risk when pricing their financial services, often looking deep into their customers' histories. Many countries and jurisdictions have laws that prevent financial institutions from basing their pricing policies on certain characteristics (race, gender, age, marital status, etc.). Yet even if a credit-scoring algorithm does not use a gender variable directly in its decision-making, it can still rely on gender proxies. A proxy for gender could be found in a person's purchasing history, for example, the type of deodorant or razors purchased. Unintentional proxy discrimination should therefore be monitored in these systems as well.
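As a rough illustration of how such proxies can be screened for, one can check how strongly each candidate input is associated with the protected attribute itself, even when that attribute is excluded from the model. The feature names and data below are hypothetical.

```python
import pandas as pd

# Hypothetical data: 'gender' is excluded from the credit model,
# but purchase-history features may still act as proxies for it.
df = pd.DataFrame({
    "gender":      [0, 0, 0, 1, 1, 1, 0, 1],   # protected attribute (not a model input)
    "razor_type":  [0, 0, 1, 1, 1, 1, 0, 1],   # candidate proxy from purchasing history
    "income_band": [2, 1, 3, 2, 1, 3, 2, 1],   # another candidate feature
})

# A strong association between a feature and the protected attribute
# flags that feature as a potential proxy worth monitoring.
for feature in ["razor_type", "income_band"]:
    print(feature, round(df[feature].corr(df["gender"]), 2))
```

In this toy data, razor_type shows a far stronger association with gender than income_band does, which would mark it as a feature to monitor.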
AI has the potential to help us detect and correct these biases. Disparate impact can be addressed through disparate impact assessments and fairness assessments. Fairness can be challenging to define, so it is important to decide on appropriate fairness metrics and standards for each AI project implemented.
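As a small, hypothetical illustration of why the choice of metric matters, the sketch below scores the same predictions against two common fairness notions, demographic parity (equal selection rates) and equal opportunity (equal true positive rates), which need not agree.

```python
import numpy as np

y_true = np.array([1, 1, 0, 0, 1, 1, 1, 0])  # actual outcomes
y_pred = np.array([1, 0, 1, 0, 1, 1, 0, 0])  # model decisions
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])  # protected-group membership

# Demographic parity: do both groups receive positive decisions at the same rate?
selection_rate = lambda g: y_pred[group == g].mean()
print("selection rates:", selection_rate(0), selection_rate(1))  # 0.5 vs 0.5

# Equal opportunity: do both groups have the same true positive rate?
tpr = lambda g: y_pred[(group == g) & (y_true == 1)].mean()
print("true positive rates:", tpr(0), tpr(1))                    # 0.5 vs ~0.67
```

Here the same model looks fair under one definition and unfair under another, which is why each project needs to settle on its metrics and thresholds up front.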
Regulatory adherence
Addressing disparate impact and fairness is not only the right thing to do; it is also essential for regulatory adherence. Having a system in place that helps you meet regulatory requirements reduces both reputational and business risk in the short and long term. It is also important to highlight the central role of human involvement in the AI process. Humans are uniquely positioned to understand the nuances of the outcomes, the data collection process, and the other steps in building an AI model. With people working in tandem with the algorithms, there is a lower chance of bias making it through the system.
Tools & impact assessments
There are several top-of-the-line open-source libraries that help assess and evaluate the presence and severity of different biases in your models. At 2021.AI, we have implemented several of the best open-source libraries into our GRACE AI platform, so our clients have options and can pick the one that best fits what they are looking for. We believe you should have access to state-of-the-art tools that are benchmarks in the AI ecosystem. A sample of the libraries we work with on the GRACE AI platform includes:
- IBM’s AI Fairness 360 (AIF360)
- InterpretML
- Microsoft’s Fairlearn
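As a minimal sketch of what calling one of these libraries can look like, the example below uses Fairlearn's metrics module on illustrative data; it is not a depiction of how GRACE wires the library in, and the data and variable names are assumptions.

```python
import numpy as np
from fairlearn.metrics import demographic_parity_difference, equalized_odds_difference

# Illustrative ground truth, model predictions, and a sensitive feature
y_true = np.array([1, 1, 0, 0, 1, 1, 1, 0])
y_pred = np.array([1, 0, 1, 0, 1, 1, 0, 0])
sensitive = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

# A value of 0 means the groups are treated identically under each criterion
print(demographic_parity_difference(y_true, y_pred, sensitive_features=sensitive))
print(equalized_odds_difference(y_true, y_pred, sensitive_features=sensitive))
```

The other libraries expose complementary capabilities (AIF360 for fairness metrics and mitigation, InterpretML for model explanations), so teams can pick whichever fits their project.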
Impact Assessments are one way we, at 2021.AI, are introducing more human oversight into the AI model process. They help structure a process in which the expectations, implications, and potential outcomes are weighed during the development and deployment of an AI model. Asking a different set of questions depending on the AI model and the data used makes the process more transparent and ensures that project leaders assess and check in at each stage of model development and deployment.
Combining the fairness tools with Impact Assessments can give a transparent overview of the AI models and their outcomes. Through informative questions and data pulled directly from the model outcomes, answers can be scaled and weighted to produce a multi-layered score. This combination helps ensure that a model's goals line up with its actual outcomes. Measuring and analyzing the bias in models leads to greater accountability, which is a key element of more responsible decision-making.
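One hypothetical way such a multi-layered score could be assembled is sketched below: questionnaire answers and metric results are scaled to a common range and combined with weights. The items, scores, and weights are purely illustrative.

```python
# Hypothetical scoring: each item holds a scaled answer (0-1) and a weight.
assessment = {
    "data_collection_documented": (1.00, 0.3),  # from impact-assessment questions
    "stakeholders_consulted":     (0.50, 0.2),  # from impact-assessment questions
    "disparate_impact_check":     (0.75, 0.5),  # derived from a fairness metric
}

weighted_total = sum(score * weight for score, weight in assessment.values())
total_weight = sum(weight for _, weight in assessment.values())
print(round(weighted_total / total_weight, 2))  # 0.78 on a 0-1 scale
```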
Being able to quantify and understand the presence of bias in your models and outcomes is the first step toward reducing indirect discrimination. It removes one of the barriers to accountability and scrutiny of algorithms that can be difficult to overcome when working with opaque, “black box” style tools. Meeting regulatory requirements is important, but an unbiased and fair model is also better for business and helps you optimize your products more effectively.