
In recent years, AI has evolved from science fiction into part of our everyday lives. Emerging technologies are on the cusp of revolutionizing global value chains, a "new corporate tomorrow" is already taking shape, and the ethical issues surrounding AI have been aired in countless executive debates. To secure democratic values and a high standard of transparency, we must take AI ethics to the next level: AI policy. History shows that policy and lawmaking do the trick when it comes to protecting liberty, democratic values, and human rights.
The history of humankind is a history of the political entities we have created. Throughout that history, great political thinkers have expanded basic self-governing systems into monarchies and, eventually, toward the democracies we know today that guarantee civil liberties. The ancient Greeks. The Age of Enlightenment. The French Revolution. The American Dream as stated in the Declaration of Independence. Again and again, history shows the impact of fighting for rights, transparency, and fact-based decisions.
Although historians mainly stress the importance of reading societal issues in their historical context, tech executives recognize a timeless connection between the policymaking of the past and that of the future. In a future AI world fueled by big data, human rights and individual liberties have never been more important principles to reclaim. We need to govern the hidden impact of AI.
When Thomas Jefferson proposed a philosophy of human rights for all people in the Declaration of Independence (1776), he declared that "all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness." The Declaration of Independence was a vital part of democratic policy- and lawmaking because it set out universal human rights along with the ideals and goals of an entire nation. Policymaking mattered in people's everyday lives!
Going forward, emerging technologies such as robotics, 5G and 6G, IoT, and algorithms hold great potential for solving some of the most complicated problems we face today and will face tomorrow. Artificial intelligence can already assist in the fight against climate change, for example by improving water predictions, powering smart-home products, and regulating energy and water consumption through smart thermostats. AI is also helping us fight the COVID-19 pandemic: it helps us understand the virus, accelerates medical research on drugs and treatments, supports detection and diagnosis, predicts the virus's evolution, and helps prevent and slow its spread through surveillance and contact tracing.
But artificial intelligence also poses new risks to human rights, touching on discrimination, surveillance, transparency, privacy, security, freedom of expression, the right to work, and access to public services. AI brings novel ethical challenges to the table that threaten the rights of users and non-users alike. We are facing a post-modern version of history.
Hence the countless ethical guidelines and corporate strategies within organizations that address issues such as 1) the transparency of AI algorithms, 2) the bias and fairness of AI/ML algorithms resulting from data and model training, and 3) risk assessment and viable mechanisms for uncovering ethical issues caused by AI.
The big question now is whether corporate ethical guidelines will do the trick. Will high-level reflections and corporate guidelines be the right toolbox to govern this most powerful of technologies, counter the impact of Shadow AI, and protect the individual, ensuring that Thomas Jefferson's vision of civil liberty is reinforced alongside the AI transformation?
My answer is that we need to look at specific policy options if we are to benefit from AI. The World Economic Forum, the UK Guidelines for AI Procurement, the Danish principles for AI, and the Canadian AI Assessment Tool have all inspired regulatory initiatives. Additionally, the American National Security Commission on AI recommends a Key Considerations paradigm.
European policymakers, however, need to consider the following policy frameworks, some of which are already on the table. The cornerstones of such a centralized framework could, for example, address the shadows of AI: