August 2020

The EU takes on Trustworthy AI with ALTAI

Mikael Munck
Founder & CEO
AI Compliance
AI Ethics
Responsible AI
AI Governance

On July 17, 2020, the High-Level Expert Group on Artificial Intelligence (AI HLEG) presented the final Assessment List for Trustworthy Artificial Intelligence (ALTAI). 2021.AI is honored to be on the front lines of ethical innovation in the EU, contributing as one of 50 selected companies engaged in an open workstream to provide best practices for the final development of ALTAI.

What is ALTAI?

The ALTAI helps foster responsible and sustainable AI innovation in Europe. It seeks to make ethics a core pillar for developing a unique approach to AI, aiming to benefit, empower, and protect both individual human flourishing and the common good of society. We believe that this will enable Europe and European organizations to position themselves as global leaders in cutting-edge AI worthy of our individual and collective trust.

Assessment List for Trustworthy Artificial Intelligence (ALTAI)

These ambitions fit perfectly with 2021.AI’s company focus and beliefs, which is why we have greatly enjoyed collaborating on the EU’s initiative. The initiative is firmly grounded in the protection of people’s fundamental rights, a term used in the European Union to refer to the human rights enshrined in the EU Treaties, the Charter of Fundamental Rights (the Charter), and international human rights law.

Goal of ALTAI

ALTAI provides a basic process for Trustworthy AI self-assessment. Organizations can draw on the elements of ALTAI relevant to their particular AI system, or add elements to it as they see fit, considering the sector in which they operate. ALTAI helps organizations understand what Trustworthy AI is and, in particular, what risks an AI system might generate. It raises awareness of the potential impact of AI on society, the environment, consumers, workers, and citizens (especially children and people belonging to marginalized groups). It also helps organizations gain insight into whether meaningful and appropriate solutions or processes for meeting the requirements are already in place (through internal guidelines, governance processes, etc.) or still need to be put in place.

Concepts

Trustworthy AI is defined by three complementary concepts: Lawful AI, Ethical AI, and Robust AI. The Guidelines take a human-centric approach to AI and identify four ethical principles and seven key requirements that companies should follow to achieve Trustworthy AI. The document is complemented by a set of questions for each of the seven requirements that aim to operationalize them (the “Assessment List”):

  • Human Agency and Oversight: fundamental rights, human agency, and human oversight.
  • Technical Robustness and Safety: resilience to attack and security, fallback plan and general safety, accuracy, reliability, and reproducibility.
  • Privacy and Data Governance: respect for privacy, quality and integrity of data, and access to data.
  • Transparency: traceability, explainability, communication.
  • Diversity, Non-discrimination and Fairness: avoidance of unfair bias, accessibility and universal design.
  • Societal and Environmental Well-being: sustainability and environmental friendliness, social impact, society and democracy.
  • Accountability: auditability, minimization and reporting of negative impact, trade-offs and redress.
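For teams that want to track their answers in a structured way rather than on paper, the seven requirements above can be modeled as a simple data structure. The sketch below is purely illustrative: the question texts are abbreviated placeholders, and the per-requirement score (share of “yes” answers) is a simplification for demonstration, not the official ALTAI scoring.

```python
# Illustrative sketch only: recording ALTAI-style self-assessment answers
# per requirement. The scoring (share of "yes" answers) is a simplification
# for demonstration and is not the official ALTAI methodology.
from dataclasses import dataclass, field

REQUIREMENTS = [
    "Human Agency and Oversight",
    "Technical Robustness and Safety",
    "Privacy and Data Governance",
    "Transparency",
    "Diversity, Non-discrimination and Fairness",
    "Societal and Environmental Well-being",
    "Accountability",
]

@dataclass
class RequirementAssessment:
    name: str
    answers: dict = field(default_factory=dict)  # question text -> True/False

    def score(self) -> float:
        """Share of questions answered 'yes' (placeholder metric)."""
        if not self.answers:
            return 0.0
        return sum(self.answers.values()) / len(self.answers)

# Hypothetical, abbreviated questions for one requirement:
assessment = {name: RequirementAssessment(name) for name in REQUIREMENTS}
assessment["Transparency"].answers = {
    "Can the AI system's decisions be traced back to its input data?": True,
    "Is an explanation of the model's output available to end users?": False,
}

for name, requirement in assessment.items():
    print(f"{name}: {requirement.score():.0%}")
```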

Working with ALTAI

At 2021.AI, we believe that well-developed assessments like ALTAI are a must on the path towards the responsible use of AI. This is also why we have incorporated ALTAI and most other major local and global impact assessments in our GRACE platform. We offer clients the option to work with these impact assessments individually or in combination. Our fully configurable AI Governance module also enables you to introduce your company’s own ethical criteria and other metrics while measuring and monitoring your data and AI projects. At 2021.AI, part of our mission is to make Ethical AI as flexible and easy as possible for our clients to implement.

So, how do I get started?

Experimenting with the ALTAI tool to build awareness of Trustworthy AI and to weigh different ethical considerations is a good start. If you then go on to fill in the assessment, you and your company are off to a great start. Note, however, that this is not enough in itself; it is only a first step. Next, you need to anchor the process and impact assessment in a robust solution that offers real-time monitoring, oversight, and more.

The ALTAI tool itself is simple and interactive and offers insights into all 7 different requirements.

Interactive ALTAI tool for Trustworthy AI self-assessment (Communication section shown)

Once the assessment is filled in, a score is generated, along with recommendations for each requirement segment:

Please also refer to the EU Ethics Guidelines for Trustworthy AI for a complete and more detailed impact assessment and list of questions.

ALTAI self-assessment score and recommendations

How do I apply these recommendations to my project?

If you are interested in taking the final step towards a robust implementation, the GRACE AI platform is for you. With GRACE, we have implemented a fully configurable impact and risk assessment solution that continuously measures these metrics while automatically reporting on, and potentially restricting, data and model behavior.

2021.AI’s GRACE AI platform provides an easy way for companies, regulators, or governments to create tangible metrics for Fairness, Ethics, Accountability, and Transparency.
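To make the idea of continuously measured metrics more concrete, here is a minimal, hypothetical sketch of how a governance check might gate a model against a configured threshold. It does not use GRACE’s actual API; the metric, threshold, and function names are all assumptions for illustration.

```python
# Hypothetical governance gate: flag or restrict a model whose monitored
# metric breaches a configured threshold. This is NOT GRACE's API; all
# names and values here are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class GovernancePolicy:
    metric_name: str
    threshold: float  # maximum acceptable value for the monitored metric

def check_and_report(policy: GovernancePolicy, measured_value: float) -> bool:
    """Return True if the model stays within the policy, False otherwise."""
    compliant = measured_value <= policy.threshold
    status = "OK" if compliant else "BREACH"
    print(f"[{status}] {policy.metric_name}={measured_value:.3f} "
          f"(threshold {policy.threshold:.3f})")
    return compliant

# Example: a fairness metric monitored against a 0.10 threshold.
policy = GovernancePolicy(metric_name="demographic_parity_difference", threshold=0.10)
if not check_and_report(policy, measured_value=0.14):
    # In a real governance platform, a breach could trigger an alert,
    # require human review, or restrict the model from serving predictions.
    pass
```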

References
  1. https://ec.europa.eu/digital-single-market/en/news/assessment-list-trustworthy-artificial-intelligence-altai-self-assessment
  2. https://altai.insight-centre.org/
  3. https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai
