Structural bias in AI models
Twitter recently came under fire over structural bias in the algorithm it uses to crop photos for the Twitter feed. Twitter optimizes cropping so that the thumbnail version of a photo shows what is assumed to be the most interesting part of the picture. Unfortunately, the algorithm consistently focused on white faces over black faces when cropping. This consistent choice shows that bias was built into the algorithm and that Twitter had not yet addressed it.
This is not the first time we’ve seen bias reflected in AI model outcomes, but in this instance the situation was a bit different from usual. The cropping algorithm was not based on facial recognition; rather, Twitter was using a saliency prediction algorithm.
In short, saliency prediction is a model that predicts where on an image your eyes will fixate. A saliency model produces a saliency map: a grey-scale pixel map of the original image that encodes where visual attention is drawn. The more “interesting” areas of an image appear lighter on the scale, and the less “interesting” areas appear darker. Saliency mapping and prediction are useful across many diverse applications, including object detection in robotics and customer-attention marketing.
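To make the idea concrete, here is a minimal sketch of one classic saliency method, the spectral-residual approach, written in plain NumPy. This is an illustrative toy, not Twitter’s model: Twitter’s production algorithm was a learned neural network, and the function name and toy image below are our own assumptions for the example.

```python
import numpy as np

def saliency_map(gray):
    """Toy spectral-residual saliency for a 2-D grey-scale image array.
    Returns a map normalised to [0, 1] where brighter pixels are
    predicted to draw more visual attention (NOT Twitter's model)."""
    # Fourier transform of the image
    f = np.fft.fft2(gray)
    log_amp = np.log(np.abs(f) + 1e-8)
    phase = np.angle(f)
    # Spectral residual: log-amplitude minus its 3x3 local average
    pad = 1
    padded = np.pad(log_amp, pad, mode="edge")
    avg = np.zeros_like(log_amp)
    for dy in range(3):
        for dx in range(3):
            avg += padded[dy:dy + log_amp.shape[0], dx:dx + log_amp.shape[1]]
    avg /= 9.0
    residual = log_amp - avg
    # Back to image space; squared magnitude gives the saliency map
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    sal = (sal - sal.min()) / (sal.max() - sal.min() + 1e-8)
    return sal

# Toy example: a bright square on a dark background
img = np.zeros((64, 64))
img[24:40, 24:40] = 1.0
sal = saliency_map(img)  # grey-scale map, lighter = more salient
```

A cropping pipeline built on such a map would simply centre the thumbnail on the brightest region, which is exactly why any bias in what the model finds “interesting” flows straight through to the crop.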
Now, back to the Twitter dilemma. This isn’t a case of training data that fails to represent the population. Instead, the bias here was a byproduct of engineering, making it a structural and systemic issue. The system clearly wasn’t intended to be biased, but because nobody verified it for bias, it continued to perpetuate the issue, and that is a problem that needs to be addressed.
A key concern is that this kind of structural bias may show up in the results of other algorithms and approaches that we have not yet caught. This issue highlights the importance of testing and monitoring the inputs and outputs of the various models we put into production.
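Such monitoring need not be complicated. As a hedged sketch, assuming a test harness that feeds the model paired-face images and records which face the crop centred on, even a simple one-sample z-test against a 50/50 split can flag a skewed model; the function names and the 72/28 audit figures below are hypothetical.

```python
import math

def crop_preference_rate(choices):
    """Given labels recording which face a cropping model centred on
    in paired-face test images ("A" or "B"), return the fraction of
    crops that favoured group A."""
    return sum(1 for c in choices if c == "A") / len(choices)

def is_skewed(choices, threshold=2.0):
    """Flag the model if its preference rate deviates from the 50/50
    rate expected of an unbiased crop by more than `threshold`
    standard errors (a simple z-test against p = 0.5)."""
    n = len(choices)
    p = crop_preference_rate(choices)
    se = math.sqrt(0.25 / n)  # standard error under the null p = 0.5
    return abs(p - 0.5) / se > threshold

# Hypothetical audit: the model picked group A in 72 of 100 paired images
audit = ["A"] * 72 + ["B"] * 28
rate = crop_preference_rate(audit)  # 0.72
flag = is_skewed(audit)             # True: well outside 2 standard errors
```

Running a check like this continuously on production traffic, rather than once before launch, is what turns “nobody verified it” into an ongoing safeguard.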
No easy fixes
A major challenge with this structural bias is that there is no quick fix.
Twitter has decided to update how it crops photos uploaded to the feed, largely moving away from auto-cropping. Giving users back some control over cropping is good, but it also shows that teaching an ML algorithm to be unbiased is a larger problem still to be solved. AI is not always the right choice, and in this situation the fix for an ML problem was to remove the model and let humans do the job.
It is always important to weigh the decision to implement AI solutions: to ensure that the technology is ready, and that until it is up to par with the standards we expect, the task is better off in human hands. At 2021.AI, Impact Assessments are one way we bring more human oversight into the AI model lifecycle, structuring a process in which the expectations, implications, and potential outcomes are weighed during the development and deployment of an AI algorithm.
For more on how 2021.AI fights bias in AI algorithms, read our last blog on Disparate Impacts.
As with much of the bias that surfaces in AI applications, this Twitter mistake appears to have been unintentional, and it goes to show that we need to be aware of bias throughout the entire lifecycle of model development and deployment. It is easy to blame biased outcomes on biased data, but it is just as important to understand how algorithms themselves can perpetuate bias.
About the author
PRODUCT MANAGER, 2021.AI
Yina is a Product Manager at 2021.AI working to bring Responsible AI to every enterprise. She has experience working with AI platforms and investing in early-stage startups. Yina is also the author of the newsletter, The Big Y, where she focuses on interesting and relevant AI topics.