ABOUT AI BEHAVIOR AND MODEL RETRAINING
As recent AI experiments by Google and MIT demonstrate how easy it is to bias an AI system, should ethics prevail over accuracy when determining the optimal model?
AHMED ZEWAIN | JUNIOR DATA SCIENTIST | OCTOBER 2018
AI algorithms are entering our daily lives more than ever before, from the movie suggestions on Netflix to driving some of our cars. The fact that we are surrounded by this technology underlines how important algorithm maintenance and monitoring have become. In a recent article, Google announced that it had to tweak some of its smart email reply models because they frequently kept suggesting phrases like “I love you” or “sent from my iPhone”.
Google’s Smart Reply allows you to answer incoming emails faster
You see, the algorithms used to generate these suggestions have no sense of social context yet; they merely mimic our own behavior as captured in the training data we show them.
For example, “Smart Reply”, while still very impressive, did not recognize Apple’s clever branding in “sent from my iPhone”, the signature that iPhones append automatically. It treated the phrase as a perfectly normal reply simply because it appears so frequently in the training data. The same applies to “Norman”, a deep neural network created by MIT that turned decidedly creepy after being trained on data from dark pages on Reddit. “Norman” and a standard AI algorithm were both asked to interpret a set of abstract images, and the contrast in their answers was striking. The point of the experiment was to demonstrate how easy it is to bias any AI system if it is trained on biased data.
Abstract image interpretations by AI. MIT’s disturbed AI (left) versus a standard AI (right).
Like raising a child, these smart algorithms must be closely watched, retrained and developed by humans so that they capture context and, perhaps one day, ethics as well. It is also important to understand that these models can improve in three ways, or a combination of them: we can use them more so they capture our actual behavior, train them on unbiased datasets, or manually tweak the mathematics to include, for example, a concept of social context. The subject also highlights the issue of understanding the underlying decisions taken by an AI system.
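To make the “unbiased dataset” route a little more concrete, here is a minimal, purely illustrative sketch of how unwanted boilerplate replies could be stripped from a reply-suggestion training set before retraining. The blocklist, function names and sample data are assumptions for illustration only; this is not Google’s actual Smart Reply pipeline.

```python
# Purely illustrative sketch (not Google's actual Smart Reply pipeline):
# drop unwanted boilerplate replies from a training set before retraining,
# so the model stops treating them as "normal" answers.

# Phrases we do not want the model to learn as replies (assumed blocklist).
BLOCKLIST = {"sent from my iphone", "i love you"}


def is_acceptable(reply: str) -> bool:
    """Return True if a candidate reply contains no blocked phrase."""
    text = reply.lower()
    return not any(phrase in text for phrase in BLOCKLIST)


def clean_training_data(replies: list[str]) -> list[str]:
    """Keep only acceptable replies, so retraining no longer reinforces the rest."""
    return [reply for reply in replies if is_acceptable(reply)]


if __name__ == "__main__":
    raw_replies = [
        "Sounds good, see you then.",
        "Sent from my iPhone",
        "I love you",
        "Thanks, I'll take a look today.",
    ]
    print(clean_training_data(raw_replies))
    # -> ['Sounds good, see you then.', "Thanks, I'll take a look today."]
```

A simple filter like this obviously does not solve bias in general, but it captures the idea: the model only stops mimicking a behavior once that behavior is no longer reinforced by the data it is retrained on.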
MIT’s creepy Norman is nothing more than a thought experiment, and Google’s Smart Reply is a comparatively small issue to solve, but what if you are non-Caucasian and an AI predicts that you will commit a crime because of that fact? As data scientists we constantly work towards improving the accuracy of our models, but in recent years the demand for transparency in automated decision making has pushed us to improve our understanding and skills in model maintenance, and thereby to create more accurate and reliable AI.
Data Scientist, 2021.AI
Ahmed Zewain is a Data Scientist at 2021.AI with an MA in mathematical modeling and computing, and extensive knowledge of several data engineering tools. Ahmed’s skills include building ML POC projects and taking them further into production for a wide variety of clients.