AI Insights, July 2020

Facial recognition technology: We must welcome regulations

Rasmus Hauch

CTO, 2021.AI

When it comes to facial recognition technology and its controversies, it is clear that something needs to be done. Applying facial recognition models to identify and track citizens, or allowing apps that can recognize anyone who has ever posted a picture online, is unacceptable. But can we keep this technology in check?

The issues with facial recognition technology

When used for good, facial recognition technology helps us log into our phones, pass through passport control scanners at airports, or identify a single terrorist in footage from the 1,000 security cameras within a given city.

Facial recognition technology becomes problematic when biometric profiles are built of every individual and used for mass surveillance, racial profiling, or violations of human rights and freedoms.

In light of the recent protests worldwide, organizations like Amazon, IBM, and Microsoft have stopped selling these technologies to law enforcement, recognizing that citizens are subject to privacy erosion and bias, or are at risk of being misidentified.

Finding a way forward

A ban on all facial recognition technology is not the solution. A ban would just spawn Shadow AI and make it impossible to use biometric technology for the good purposes mentioned above.

To tackle the issues concerning facial recognition technology, we need laws and regulations governing its use, and we need them fast.

At 2021.AI, we think governments around the world should start applying oversight and regulation to models using facial or other biometric technology, making sure that:

  • Impact and risk assessments are done for each use of such models, ensuring the technology is applied with the proper approvals and oversight.
  • Security and GDPR compliance are maintained and respected in all aspects.
  • Source data is clearly versioned, and data governance is applied.
  • Bias is clearly measured and monitored at each phase of the model development lifecycle.
  • External peer reviews are performed.
  • Proper testing is executed.

All of this should be done on a detailed and continuous basis.
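Measuring bias, for example, can start with a simple group-level metric. The sketch below is purely illustrative (the function name and data are hypothetical, and demographic parity is only one of several possible fairness metrics); it shows how the gap in positive-prediction rates between demographic groups might be quantified as part of ongoing model monitoring:

```python
# Illustrative sketch: quantify bias as the demographic parity difference,
# i.e. the largest gap in positive-prediction rate between any two groups.
# All names and data here are hypothetical examples, not part of any platform.

def demographic_parity_difference(predictions, groups):
    """Largest gap in positive-prediction rate between any two groups.

    predictions: list of 0/1 model outputs (e.g. 1 = "match").
    groups: parallel list of group labels, one per prediction.
    """
    counts = {}
    for pred, group in zip(predictions, groups):
        total, positives = counts.get(group, (0, 0))
        counts[group] = (total + 1, positives + pred)
    rates = [positives / total for total, positives in counts.values()]
    return max(rates) - min(rates)

# Example: a model that flags group "a" far more often than group "b".
preds  = [1, 1, 1, 0, 0, 1, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
gap = demographic_parity_difference(preds, groups)
print(f"demographic parity difference: {gap:.2f}")  # 0.75 vs 0.25 -> 0.50
```

A monitoring pipeline could compute such a metric at each phase of the model lifecycle and raise an alert when the gap exceeds an agreed threshold.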

2021.AI’s GRACE AI Platform provides an easy way for companies or governments to create tangible metrics for fairness, transparency, and explainable AI. We link these metrics to your impact and risk assessments, measure them continuously, and automatically restrict model behavior.

2021.AI effectively creates a turn-key solution for AI governance and compliance that requires little ongoing effort once applied to your project or model.

Rasmus Hauch

CTO, 2021.AI

Rasmus has extensive experience in roles such as Program Manager, Lead Architect, and Senior Consultant for various international financial, energy, and telecom customers. His skills include leadership, mentoring, and enterprise architecture.
