December 2024

AI Governance – Step 2: Register your AI systems and models

Björn Preuß
Chief Data Scientist

Have you registered all of your AI systems?

Without a clear overview of all artificial intelligence activities within an organization, regulatory compliance may be compromised. Failing to manage these projects collectively can lead to the emergence of “shadow AI” – unauthorized AI initiatives that can pose significant risks.

Upcoming regulations

Effectively controlling the AI systems in an organization starts with maintaining a clear overview. That overview also helps scope the actions required by upcoming regulations such as the EU AI Act.

Maintaining an AI inventory

An essential component of any AI Governance process is maintaining an inventory of all AI models and systems [9, 11]. This registration process is essential for tracking AI systems, monitoring their performance, and ensuring compliance with governance policies. For organizations managing high-risk or mission-critical AI applications, this step is particularly vital for fostering transparency and accountability [11].

What to include in your AI inventory

The inventory includes comprehensive registration of use cases, systems, models, and data, along with the management of this portfolio [9]. This process also involves mapping the registered elements to the necessary frameworks and controls, ensuring that all components align with regulatory and internal requirements. A platform like GRACE AI Platform can facilitate this entire process, providing the tools to effectively manage and oversee the AI compliance landscape.

Key actions:
  • Document the use case: Maintain a registry of each use case with specific details and links to the associated systems and models.
  • Maintain a model registry: Develop a central repository for all AI models, ensuring that each system is properly cataloged along with its purpose, risk level, and operational environment.
  • Document model lineage: Track changes made to AI models over time, including version control, updates, and modifications to ensure traceability.
  • Assess risk levels: Categorize AI models based on their risk level. High-risk models, such as those used in healthcare, finance, or law enforcement, should receive additional scrutiny [1, 5].
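As an illustration only, the key actions above can be sketched as a minimal registry data structure. The schema, field names, and the `claims-triage-llm` entry are hypothetical examples, not part of GRACE or any regulatory requirement:

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum

class RiskLevel(Enum):
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"  # e.g. healthcare, finance, law enforcement

@dataclass
class ModelVersion:
    """One entry in a model's lineage: version, date, and change note."""
    version: str
    released: date
    change_note: str

@dataclass
class RegisteredModel:
    """A cataloged AI model: use case, owner, environment, risk, lineage."""
    name: str
    use_case: str
    owner: str
    environment: str  # e.g. "ideation", "development", "production"
    risk_level: RiskLevel
    lineage: list[ModelVersion] = field(default_factory=list)

# Register a model and record its first version to start the lineage trail
registry: dict[str, RegisteredModel] = {}
model = RegisteredModel(
    name="claims-triage-llm",
    use_case="Automated insurance claims triage",
    owner="Data Science",
    environment="development",
    risk_level=RiskLevel.HIGH,
)
model.lineage.append(ModelVersion("1.0.0", date(2024, 6, 1), "Initial release"))
registry[model.name] = model

# High-risk models can then be filtered out for additional scrutiny
high_risk = [m.name for m in registry.values() if m.risk_level is RiskLevel.HIGH]
print(high_risk)  # ['claims-triage-llm']
```

A real inventory would live in a shared database rather than in memory, but the same fields (use case, owner, environment, risk level, lineage) are what make the portfolio auditable.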
In practice

An insurance company was managing various large language models (LLMs) at different development stages, ranging from ideation to development and production. GRACE provided comprehensive oversight of these systems’ statuses and clarified ownership for each project. This visibility enabled the Chief Information Officer (CIO) to regain control over AI initiatives, facilitating well-documented and well-managed services across the organization. As a result, the company was able to mitigate the risks associated with shadow AI, ensuring that all AI efforts were aligned with organizational goals and governance standards.

References
  • [1] European Commission, “Proposal for a Regulation Laying Down Harmonised Rules on Artificial Intelligence,” Apr. 2021.
  • [5] NIST, “Artificial Intelligence Risk Management Framework,” 2023.
  • [9] Wirtz, Bernd W., et al., “The Dark Sides of Artificial Intelligence: An Integrated AI Governance Framework for Public Administration,” International Journal of Public Administration, vol. 43, no. 9, Apr. 2020, pp. 818–29, doi:10.1080/01900692.2020.1749851.
  • [11] Taeihagh, Araz, “Governance of Artificial Intelligence,” Policy and Society, vol. 40, no. 2, Apr. 2021, pp. 137–57, doi:10.1080/14494035.2021.1928377.
