
When it comes to facial recognition technology and its controversies, it is clear that something needs to be done. Applying facial recognition models to identify and track citizens, and allowing apps that can identify anyone who has ever posted a picture online, is unacceptable. But can we keep this technology in check?
When used for good, facial recognition technology helps us log into our phones, pass through automated passport control at airports, or identify a single terrorist among the feeds of a thousand security cameras in a given city.
Facial recognition technology becomes problematic when biometric profiles are built of every individual and used for mass surveillance, racial profiling, or violations of human rights and freedoms.
In light of the recent protests worldwide, organizations like Amazon, IBM, and Microsoft have stopped selling these technologies to law enforcement, recognizing that citizens are at risk of privacy erosion, bias, and misidentification.
A ban on all facial recognition technology is not the solution. It would only drive the technology underground as Shadow AI and make it impossible to use biometrics for the beneficial purposes mentioned above.
To tackle the issues concerning facial recognition technology, we need laws and regulations governing its use, and we need them fast.
At 2021.AI, we think governments around the world should start applying oversight and regulation to models using facial or other biometric technology, and that this oversight should be detailed and continuous.
2021.AI’s GRACE AI Platform provides an easy way for companies and governments to create tangible metrics for fairness, transparency, and explainable AI. We link these metrics to your impact and risk assessments, measure them continuously, and automatically restrict model behavior when a metric falls out of bounds.
2021.AI effectively delivers a turnkey solution for AI governance and compliance that requires little effort once applied to your project or model.
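GRACE's internals are not described here, but as a rough illustration of the kind of tangible fairness metric such a platform might track, here is a minimal sketch computing the demographic parity difference, i.e. the gap in positive-prediction rates between two groups. All names, data, and the threshold below are hypothetical, not part of the GRACE product.

```python
# Illustrative only: a simple fairness metric a governance platform
# might compute continuously. All data below is hypothetical.

def positive_rate(predictions):
    """Fraction of predictions that are positive (flagged)."""
    return sum(predictions) / len(predictions)

def demographic_parity_difference(preds_a, preds_b):
    """Absolute gap in positive-prediction rates between groups A and B."""
    return abs(positive_rate(preds_a) - positive_rate(preds_b))

# Hypothetical match decisions (1 = flagged) for two demographic groups
group_a = [1, 0, 0, 1, 0, 0, 0, 1]   # 3/8 flagged
group_b = [1, 1, 0, 1, 1, 0, 1, 0]   # 5/8 flagged

gap = demographic_parity_difference(group_a, group_b)
print(f"Demographic parity difference: {gap:.3f}")  # prints 0.250

# A governance process could alert on, or restrict, a model whose gap
# exceeds a pre-agreed threshold (0.1 here is an arbitrary example).
THRESHOLD = 0.1
print("Within threshold" if gap <= THRESHOLD else "Exceeds threshold")
```

Measured continuously against live predictions, a metric like this turns an abstract fairness requirement into a concrete, auditable number.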