ABOUT AI BEHAVIOR AND MODEL RETRAINING

Recent AI experiments by Google and MIT demonstrate how easy it is to bias an AI system. Should ethics prevail over accuracy when determining the optimal model?

AI algorithms are entering our daily lives more than ever before; today they are found everywhere around us, from the movie suggestions on Netflix to driving some of our cars. Being surrounded by this technology underlines the importance of maintaining and monitoring these algorithms. In a recent article, Google announced that it had to tweak some of its smart email reply models because they frequently suggested phrases like “I love you” or “sent from my iPhone”.

Google’s Smart Reply allows you to answer incoming emails faster.

You see, the algorithms that generate these suggestions have no sense of social context yet; they merely mimic the behavior we showed them in the training data.

Take “Smart Reply”: while still very impressive, it did not capture the clever branding Apple does with “sent from my iPhone”, the signature automatically appended to mail sent from an iPhone. Because the phrase is so frequent in the training data, the model treated it as a perfectly normal reply. The same applies to “Norman”, a deep neural network created by MIT that turned very creepy when trained on data from dark pages on Reddit. “Norman” and a standard AI algorithm were asked to interpret the same abstract images, and the contrast in their answers was striking. The point of the experiment was to demonstrate how easy it is to bias any AI system by training it on biased data.
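
To see why raw frequency alone produces this behavior, consider a deliberately naive reply suggester. This is a toy Python sketch with made-up data, not a description of Google’s actual model:

```python
from collections import Counter

# Hypothetical toy corpus of past replies (a stand-in for real training data).
replies = [
    "Sounds good, thanks!",
    "Sent from my iPhone",
    "Sent from my iPhone",
    "Sent from my iPhone",
    "Sounds good, thanks!",
    "See you tomorrow.",
]

def suggest_replies(corpus, k=3):
    """Naive suggester: rank candidate replies purely by how often they
    appear in the data -- with no notion of social context or authorship."""
    return [reply for reply, _ in Counter(corpus).most_common(k)]

print(suggest_replies(replies))
# ['Sent from my iPhone', 'Sounds good, thanks!', 'See you tomorrow.']
```

Ranked on frequency alone, the auto-appended signature beats every genuine reply, which is exactly the failure mode Google had to tweak away.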

Abstract image interpretations by AI. MIT’s disturbed AI (left) versus a standard AI (right).

Like raising a child, these smart algorithms must be closely watched, retrained, and developed by humans so that they capture context and perhaps, one day, ethics as well. It is also important to understand that these models can improve in three ways, or a combination of them: we can use them more so that they capture our actual behavior, train them on unbiased datasets, or manually tweak the mathematics, for example to include a notion of social context (see the sketch below). This subject also highlights the challenge of understanding the decisions an AI system makes.
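
One way to picture the third lever, tweaking the mathematics, is to add a fairness term directly to the training objective. The following minimal Python sketch is entirely hypothetical: the data, the logistic model, and the demographic-parity penalty are invented for illustration and do not describe any production system:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))          # toy features
group = rng.integers(0, 2, size=200)   # sensitive attribute (0 or 1)
# Biased labels: group membership leaks into the outcome.
y = (X[:, 0] + 0.5 * group + rng.normal(scale=0.1, size=200) > 0).astype(float)

def predict(w, X):
    return 1.0 / (1.0 + np.exp(-X @ w))    # simple logistic model

def loss(w, X, y, group, lam=1.0):
    p = predict(w, X)
    accuracy_term = np.mean((p - y) ** 2)            # fit the data
    fairness_term = (p[group == 0].mean()
                     - p[group == 1].mean()) ** 2    # demographic-parity gap
    return accuracy_term + lam * fairness_term       # lam trades accuracy for fairness

# Crude random hill climbing -- just enough to illustrate the trade-off.
w = np.zeros(3)
for _ in range(500):
    candidate = w + rng.normal(scale=0.05, size=3)
    if loss(candidate, X, y, group) < loss(w, X, y, group):
        w = candidate

p = predict(w, X)
print("fairness gap:", abs(p[group == 0].mean() - p[group == 1].mean()))
```

Raising lam pushes the model toward equal average predictions across the two groups at some cost in raw accuracy: the ethics-versus-accuracy trade-off posed at the top of this article, expressed as a single hyperparameter.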

Conclusions

MIT’s creepy Norman is nothing more than a thought experiment, and Google’s Smart Reply is a relatively small issue to solve. But what if an AI predicts that you will commit a crime simply because you are non-Caucasian? As data scientists, we constantly work to improve the accuracy of our models, but in recent years the demand for transparency in automated decision making has driven us to improve our understanding of, and skills in, model maintenance, thereby creating more accurate and reliable AI.

About the author

Ahmed Zewain

Data Scientist, 2021.AI

Ahmed is a Data Scientist at 2021.AI with an MA in mathematical modeling and computing, and extensive knowledge of several data engineering tools. Ahmed’s skills include building ML POC projects and taking them further into production for a wide variety of clients.

