Moving from AI Ethics to AI Policy

In recent years, AI has evolved from science fiction into part of our everyday lives. Emerging technology is on the cusp of revolutionizing global value chains, the shaping of a “new corporate tomorrow” is already underway, and the ethical issues surrounding AI have been aired in countless executive debates. To secure democratic values and a high standard of transparency, we must take AI ethics to the next level: AI policy. History shows that policy and lawmaking are what ultimately protect liberty, democratic values, and human rights.

Kathrine Stampe Andersen | Member of the Board of Directors, 2021.AI | August 2020

The history of mankind is a history of political entities. Throughout the ages, great political thinkers have expanded basic systems of self-rule into monarchies and, eventually, into the democracies we know today that guarantee civil liberties. The ancient Greeks. The Age of Enlightenment. The French Revolution. The American Dream set out in the Declaration of Independence. Again and again, history shows the impact of fighting for rights, transparency, and fact-based decisions.

The connection between AI and policymaking

Although historians primarily emphasize understanding societal issues in their historical context, tech executives know there is a timeless connection between the policymaking of the past and that of the future. In a future AI world fueled by big data, human rights and individual liberties have never been more important principles to defend. We need to govern the hidden impact of AI.

When Thomas Jefferson proposed a philosophy of universal human rights in the Declaration of Independence (1776), he claimed that “all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness.” The Declaration of Independence was a vital piece of democratic policy- and lawmaking because it articulated universal human rights along with the ideals and goals of an entire nation. Policymaking mattered in people’s everyday lives!

Going forward, emerging technologies such as robotics, 5G and 6G, the IoT, and algorithmic systems hold great power to solve some of the world’s most complicated problems, including ones we already face today. Artificial intelligence can already assist in the battle against climate change, for example by improving water predictions and powering smart-home products such as thermostats that regulate energy consumption and conserve water. AI is also helping us fight the COVID-19 pandemic: it helps us understand the virus, accelerate medical research on drugs and treatments, detect and diagnose infections, predict the virus’s evolution, and prevent and slow its spread through surveillance and contact tracing.

New technologies pose new risks

But artificial intelligence also poses new risks to human rights concerning discrimination, surveillance, transparency, privacy, security, freedom of expression, the right to work, and access to public services. AI brings novel ethical challenges to the table that threaten the human rights of its users and non-users alike. We are facing a post-modern chapter of this history.

Hence the countless ethical guidelines and corporate strategies that organizations have adopted to address 1) the transparency of AI algorithms, 2) bias and fairness issues in AI/ML algorithms arising from data and model training, and 3) risk assessment and viable mechanisms for uncovering the ethical issues AI creates.

The big question now is whether corporate ethical guidelines will do the trick. Will high-level reflections and corporate guidelines be the right toolbox to govern the most powerful of technologies, fight the impact of shadow AI, and protect the individual, ensuring that Thomas Jefferson’s promise of civil liberty is reinforced, not eroded, by the AI transformation?

Policy frameworks for the future

My answer is that we need to look at specific policy options if we are to benefit from AI. The World Economic Forum, the UK Guidelines for AI Procurement, the Danish principles for AI, and the Canadian AI assessment tool have all inspired regulatory initiatives. Additionally, the U.S. National Security Commission on AI recommends a Key Considerations Paradigm.

European policymakers, however, need to consider the following policy frameworks, some of which are already on the table. A centralized framework built on cornerstones like these could, for example, address the shadow sides of AI:

  1. Mandatory AI certificate: AI suppliers are required to hold a certificate to qualify for a new governmental “Pre-Qualified AI Supplier List” covering AI products, services, and operations delivered to the public sector.
  2. Mandatory risk assessment: A new governmental AI risk assessment tool that every governmental entity uses to supervise, control, and mitigate potential issues in the deployment of AI, ML, and intelligent uses of data.
  3. Standardized data dashboard for commercial data operators: Companies identified as “data operators” must provide each end user with an assessment of the economic value the operator places on that end user’s data, and disclose the exact types of data being collected.
  4. Audited AI accountability report: All companies deploying AI/ML must address any ethical and human rights concerns their AI/ML raises and explain how those concerns are mitigated.

About the author

Kathrine Stampe Andersen

Member of the Board of Directors, 2021.AI

Kathrine is a Member of the Board of Directors at 2021.AI. She has 10+ years of experience in the tech, telecom, and media industries. Her previous roles include Senior VP at the telecom incumbent TDC Group and Market VP at the Danish Chamber of Commerce.
