AI Insights, December 2024
AI Governance – Step 3: Establish and manage AI risks and controls

Björn Preuß
Chief Data Scientist, 2021.AI
Are you losing control of your AI?
Effectively managing the risks associated with AI is essential for ensuring that models operate reliably and ethically. This process involves conducting thorough risk assessments, identifying potential biases or unintended consequences, and implementing controls to mitigate those risks.
Guidance from SR 11-7 and ISO 42001
The SR 11-7 guideline [2] offers a structured approach to model risk management that can be adapted for AI systems, emphasizing the importance of addressing potential biases, errors, and ethical concerns. Similarly, the ISO 42001 standard [6] provides a clear set of questions and requirements.
Facilitating AI risk management
The GRACE AI Platform supports these requirements by facilitating the risk management of AI systems based on 1) use case characteristics, and 2) the types of proposed AI system components or models, in relation to a configurable Risk Taxonomy. This Risk Taxonomy should be closely aligned with an enterprise-wide Risk Framework if one exists, while also allowing for the definition of more granular risks. Some of these risks may be incremental and specific to AI or to particular AI types, enabling organizations to manage their AI-related risks more effectively [5].
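As an illustration of the idea behind a configurable Risk Taxonomy (this is a hypothetical sketch, not the GRACE Platform's actual data model; all trait and risk names are invented), one can model it as a mapping from use-case characteristics and AI component types to the risks they trigger:

```python
# Hypothetical sketch of a configurable risk taxonomy.
# Trait and risk names are illustrative, not from any real platform.
RISK_TAXONOMY = {
    # Risks triggered by use-case characteristics
    "processes_personal_data": ["data_privacy", "regulatory_non_compliance"],
    "affects_individuals": ["algorithmic_bias", "lack_of_explainability"],
    # Risks triggered by the type of AI component or model proposed
    "uses_llm": ["hallucination", "prompt_injection"],
    "uses_black_box_model": ["lack_of_explainability"],
}

def applicable_risks(characteristics):
    """Collect the de-duplicated, sorted set of risks for a proposed AI system."""
    risks = set()
    for trait in characteristics:
        risks.update(RISK_TAXONOMY.get(trait, []))
    return sorted(risks)

# Example: a customer-facing LLM use case that handles personal data
print(applicable_risks(["processes_personal_data", "uses_llm", "affects_individuals"]))
```

Because the taxonomy is just data, it can be extended with enterprise-specific or AI-specific risks without changing the assessment logic, which mirrors the alignment with an enterprise-wide Risk Framework described above.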
Key actions:
- Conduct regular risk assessments: Periodically assess AI models for risks such as algorithmic bias, data privacy issues, and model accuracy. Use a risk-based categorization approach to focus resources on high-risk AI systems [1].
- Implement bias mitigation techniques: Employ fairness algorithms and diverse datasets to detect and reduce biases in AI decision-making processes [1].
- Create AI-specific controls: Develop controls that specifically address the unique risks associated with AI, including explainability and model interpretability, to ensure transparency in decision-making [4].
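To make the bias-related actions above concrete, one commonly used fairness metric is the demographic parity difference: the gap in positive-outcome rates between two groups. A minimal sketch follows; the 0.1 threshold is an illustrative choice, not a regulatory requirement, and real assessments typically use dedicated fairness tooling and multiple metrics:

```python
# Minimal sketch of a metric-based bias check.
# The 0.1 threshold is illustrative only; acceptable gaps depend on
# the use case, applicable regulation, and internal policy.
def positive_rate(outcomes):
    """Share of positive (1) decisions in a list of 0/1 outcomes."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_difference(group_a, group_b):
    """Absolute gap in positive-outcome rates between two groups."""
    return abs(positive_rate(group_a) - positive_rate(group_b))

def bias_control_passes(group_a, group_b, threshold=0.1):
    """A simple pass/fail control that could be attached to a model."""
    return demographic_parity_difference(group_a, group_b) <= threshold

# Example: loan approvals (1 = approved) for two demographic groups
group_a = [1, 1, 0, 1, 0, 1, 1, 0]  # 62.5% approved
group_b = [1, 0, 0, 1, 0, 0, 1, 0]  # 37.5% approved
print(demographic_parity_difference(group_a, group_b))  # 0.25
print(bias_control_passes(group_a, group_b))            # False
```

A check like this can serve double duty, as the Practice example below notes: it acts both as a control mechanism during review and as concrete guidance to developers on what the model must satisfy before deployment.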
Practice
The insurance company faced the challenge of complying with a wide range of regulations as well as internal policies. These legal texts needed to be translated into meaningful controls that could be applied from a risk management perspective. The controls included both procedural and technical measures, such as metric-based assessments. They served not only as a control mechanism but also as guidance for developers on what to implement in the models, including metrics for measuring bias and the mitigation processes needed to address any identified issues.
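The translation step described above, from legal text to procedural and technical controls, can be sketched as simple control records. All identifiers, clause references, and thresholds below are hypothetical, shown only to illustrate the structure:

```python
# Hypothetical control records illustrating how legal requirements can be
# translated into procedural and technical (metric-based) controls.
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Control:
    control_id: str
    source: str        # regulation or internal policy clause (hypothetical)
    description: str
    kind: str          # "procedural" or "technical"
    check: Optional[Callable[[dict], bool]] = None  # technical controls only

controls = [
    Control(
        control_id="CTRL-001",
        source="Internal AI Policy, documentation clause",
        description="Model owner must document intended use before deployment.",
        kind="procedural",
    ),
    Control(
        control_id="CTRL-002",
        source="Fairness provisions (illustrative)",
        description="Demographic parity difference must stay below 0.1.",
        kind="technical",
        check=lambda metrics: metrics["demographic_parity_diff"] < 0.1,
    ),
]

def evaluate_technical_controls(metrics):
    """Run every metric-based control against the latest model metrics."""
    return {c.control_id: c.check(metrics)
            for c in controls if c.kind == "technical"}

print(evaluate_technical_controls({"demographic_parity_diff": 0.25}))
```

Procedural controls have no automated check and are verified by review, while technical controls can be re-evaluated automatically whenever new model metrics arrive.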
References
- [1] European Commission, "Proposal for a Regulation Laying Down Harmonised Rules on Artificial Intelligence," Apr. 2021.
- [2] Federal Reserve Board, "SR 11-7: Guidance on Model Risk Management," Apr. 2011.
- [4] AXA, "Artificial Intelligence: Responsible AI and the Path to Long-Term Growth," 2022.
- [5] NIST, "Artificial Intelligence Risk Management Framework," 2023.
- [6] ISO/IEC 42001, "Information technology — Artificial intelligence — Management system," 2023.
AI Governance guide:
Step 1: Defining your AI compliance landscape
Step 2: Register your AI systems and models
Step 3: Establish and manage AI risks and controls
Step 4: Monitor and validate AI models continuously
Step 5: Audit and reporting