
Effectively managing the risks associated with AI is essential for ensuring that models operate reliably and ethically. This process involves conducting thorough risk assessments, identifying potential biases or unintended consequences, and implementing controls to mitigate those risks.
The SR 11-7 guideline offers a structured approach to model risk management that can be adapted for AI systems, emphasizing the importance of addressing potential biases, errors, and ethical concerns. Similarly, the ISO 42001 standard provides a clear set of questions and requirements.
The GRACE AI Platform supports these requirements by facilitating the risk management of AI systems based on 1) use case characteristics and 2) the types of proposed AI system components or models, in relation to a configurable Risk Taxonomy. This Risk Taxonomy should be closely aligned with an enterprise-wide Risk Framework if one exists, while also allowing more granular risks to be defined. Some of these risks may be incremental and specific to AI, or to particular AI types, enabling organizations to manage their AI-related risks more effectively. A simple sketch of how such a taxonomy could be structured follows below.
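The sketch below illustrates how a configurable risk taxonomy might map use case characteristics and proposed AI component types to specific risks. The class names, risk entries, and matching logic are hypothetical and purely illustrative; they do not represent the GRACE AI Platform's actual data model or API.

```python
# Minimal sketch of a configurable risk taxonomy (illustrative only; the
# names and structure below are assumptions, not the GRACE AI Platform API).
from dataclasses import dataclass, field


@dataclass
class Risk:
    name: str
    category: str                                  # e.g. "bias", "accuracy", "robustness"
    applies_to: set = field(default_factory=set)   # AI component types that trigger this risk


@dataclass
class RiskTaxonomy:
    risks: list

    def assess(self, use_case_tags: set, component_types: set) -> list:
        """Return risks triggered by the use case characteristics and proposed components."""
        triggered = []
        for risk in self.risks:
            if risk.applies_to & component_types or risk.category in use_case_tags:
                triggered.append(risk)
        return triggered


# Example: an underwriting use case flagged for bias that proposes an LLM-based component.
taxonomy = RiskTaxonomy(risks=[
    Risk("Disparate impact on protected groups", "bias", {"classifier", "llm"}),
    Risk("Hallucinated policy terms", "accuracy", {"llm"}),
    Risk("Model drift after deployment", "robustness", {"classifier"}),
])

print([r.name for r in taxonomy.assess(use_case_tags={"bias"}, component_types={"llm"})])
```

In practice, the taxonomy entries would be aligned with the enterprise-wide Risk Framework and extended with AI-specific risks, rather than hard-coded as in this sketch.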
The insurance company faced the challenge of complying with a wide range of regulations as well as internal policies. These texts needed to be translated into meaningful controls that could be applied from a risk management perspective. The controls included both procedural and technical measures, such as metric-based assessments. They served not only as a control mechanism but also as guidance for developers on what to implement in the models, including metrics for measuring bias and the mitigation processes needed to address any issues identified.
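As an illustration of a metric-based control, the sketch below computes the demographic parity difference, one common bias metric, and checks it against a configurable threshold. The function names and the 0.1 threshold are assumptions for the sake of the example, not the company's actual controls or metrics.

```python
# Illustrative metric-based control: demographic parity difference as one
# possible bias metric, with a configurable threshold acting as the control.
import numpy as np


def demographic_parity_difference(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Absolute difference in positive-prediction rates between two groups."""
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return abs(rate_a - rate_b)


def bias_control_check(y_pred, group, threshold=0.1):
    """Return (passed, value) so the result can feed into a risk-control report."""
    value = demographic_parity_difference(np.asarray(y_pred), np.asarray(group))
    return value <= threshold, value


# Example: predictions for applicants split across two demographic groups.
passed, value = bias_control_check(
    y_pred=[1, 0, 1, 1, 0, 1, 0, 0],
    group=[0, 0, 0, 0, 1, 1, 1, 1],
)
print(f"demographic parity difference = {value:.2f}, control passed: {passed}")
```

A failing check of this kind would point developers toward the associated mitigation process, such as rebalancing training data or adjusting decision thresholds, before the model can progress through the risk workflow.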