December 2024

AI Governance – Step 3: Establish and manage AI risks and controls

Björn Preuß
Chief Data Scientist
AI Governance
AI Risk Management
GRACE AI Platform

Are you losing control of your AI?

Effectively managing the risks associated with AI is essential for ensuring that models operate reliably and ethically. This process involves conducting thorough risk assessments, identifying potential biases or unintended consequences, and implementing controls to mitigate those risks.

Guidance from SR 11-7 and ISO 42001

The SR 11-7 guideline [2] offers a structured approach to model risk management that can be adapted for AI systems, emphasizing the importance of addressing potential biases, errors, and ethical concerns. Similarly, the ISO 42001 standard [6] sets out a clear list of requirements and questions to work through.

Facilitating AI risk management

The GRACE AI Platform supports these requirements by facilitating the risk management of AI systems based on 1) use case characteristics and 2) the types of proposed AI system components or models, evaluated against a configurable Risk Taxonomy. This Risk Taxonomy should be closely aligned with an enterprise-wide Risk Framework where one exists, while also allowing more granular risks to be defined. Some of these risks may be incremental and specific to AI, or to particular AI types, enabling organizations to manage their AI-related risks more effectively [5].
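The mapping described above, from use case characteristics and model types to entries in a risk taxonomy, can be sketched as a simple lookup. The category and risk names below are illustrative assumptions for the sake of the example, not the platform's actual taxonomy:

```python
# Illustrative sketch of risk identification against a configurable taxonomy.
# All trait, model-type, and risk names here are hypothetical examples.

# Risks triggered by use case characteristics
USE_CASE_RISKS = {
    "processes_personal_data": ["data_privacy"],
    "affects_individuals": ["algorithmic_bias", "fairness"],
    "customer_facing": ["transparency"],
}

# Incremental risks specific to certain AI component or model types
MODEL_TYPE_RISKS = {
    "llm": ["hallucination", "prompt_injection"],
    "deep_learning": ["low_explainability"],
    "rule_based": [],
}

def assess_risks(use_case_traits, model_types):
    """Return the de-duplicated, sorted set of taxonomy risks for a proposed AI system."""
    risks = set()
    for trait in use_case_traits:
        risks.update(USE_CASE_RISKS.get(trait, []))
    for model_type in model_types:
        risks.update(MODEL_TYPE_RISKS.get(model_type, []))
    return sorted(risks)

print(assess_risks({"processes_personal_data", "affects_individuals"}, ["llm"]))
# → ['algorithmic_bias', 'data_privacy', 'fairness', 'hallucination', 'prompt_injection']
```

Keeping the two lookup tables as configuration rather than code is what allows the taxonomy to stay aligned with an enterprise-wide Risk Framework while still capturing AI-specific, incremental risks.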

Key actions:
  • Conduct regular risk assessments: Periodically assess AI models for risks such as algorithmic bias, data privacy issues, and model accuracy. Use a risk-based categorization approach to focus resources on high-risk AI systems [1].
  • Implement bias mitigation techniques: Employ fairness algorithms and diverse datasets to detect and reduce biases in AI decision-making processes [1].
  • Create AI-specific controls: Develop controls that specifically address the unique risks associated with AI, including explainability and model interpretability, to ensure transparency in decision-making [4].
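A metric-based control for the bias mitigation action above could, for example, compute the demographic parity difference, the largest gap in favourable-outcome rates between groups. The metric itself is standard; the 0.1 threshold below is an illustrative assumption, since real thresholds are policy decisions:

```python
# Sketch of a metric-based bias control: demographic parity difference.
# The 0.1 threshold is an illustrative assumption, not a recommended value.

def demographic_parity_difference(outcomes, groups):
    """Largest gap in favourable-outcome rates across groups (0 = perfectly equal)."""
    rates = {}
    for outcome, group in zip(outcomes, groups):
        favourable, total = rates.get(group, (0, 0))
        rates[group] = (favourable + outcome, total + 1)
    positive_rates = [favourable / total for favourable, total in rates.values()]
    return max(positive_rates) - min(positive_rates)

def bias_control_passes(outcomes, groups, threshold=0.1):
    """Control check: flag the model when the parity gap exceeds the threshold."""
    return demographic_parity_difference(outcomes, groups) <= threshold

outcomes = [1, 1, 0, 1, 0, 0, 1, 0]   # 1 = favourable decision
groups   = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_difference(outcomes, groups))  # → 0.5, so the control fails
```

A control like this doubles as developer guidance: it states precisely which metric will be measured and what gap triggers a mitigation process.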

Practice

The insurance company faced the challenge of complying with a wide range of regulations as well as internal policies. These legal texts and policies needed to be translated into meaningful controls that could be applied from a risk management perspective. The controls comprised both procedural and technical measures, such as metric-based assessments, and served not only as a control mechanism but also as guidance for developers on what to implement in the models, including metrics for measuring bias and the mitigation processes needed to address any identified issues.

References
  • [1] European Commission, “Proposal for a Regulation Laying Down Harmonised Rules on Artificial Intelligence,” Apr. 2021.
  • [2] Federal Reserve Board, “SR 11-7: Guidance on Model Risk Management,” Apr. 2011.
  • [4] AXA, “Artificial Intelligence: Responsible AI and the Path to Long-Term Growth,” 2022.
  • [5] NIST, “Artificial Intelligence Risk Management Framework,” 2023.
  • [6] ISO/IEC 42001, “Information Technology — Artificial Intelligence — Management System,” 2023.
