GRACE Governance for Large Language Models (LLMs)
Address and mitigate the concerns and risks associated with the use of LLMs in your organization
Responsibly enhancing human decision-making
Many organizations use AI in production without proper governance in place. This is more widespread than ever, with LLMs like ChatGPT now part of daily work in many organizations and businesses.
Common concerns when using ChatGPT and other LLMs:
- Intellectual property leaks to third parties
- Usage and privacy-related information leaks to third parties
- Costs related to model usage are not under control
- Usage is hard to monitor and inconsistencies go unreported
The OECD.AI called for trustworthy AI use cases built on tools it has validated, to be included in its global trustworthy AI catalogue. GRACE was validated, and our use case, Enterprise ChatGPT and LLM Governance, was selected for this new catalogue as the first LLM governance solution approved by the OECD.AI.
With GRACE Governance for LLMs you will be able to:
- Onboard LLM users according to conformity assessments & company policies.
- Mask and anonymize sensitive information, such as Personally Identifiable Information (PII), before it reaches the LLM, while still letting users interact with the model.
- Monitor prompts and responses (model input and output) in real time.
- Define guardrails and risks around the LLM and monitor that they are followed in real-time.
- Log and report breaches for follow-up and preventive training of employees.
- Analyze prompts and model responses for biased, discriminatory, or hateful language.
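The PII masking capability above can be illustrated with a minimal sketch. GRACE's actual masking logic is not public; the regex patterns, placeholder labels, and `mask_pii` function below are illustrative assumptions only, showing how sensitive tokens could be replaced before a prompt is forwarded to the model.

```python
import re

# Hypothetical PII patterns (assumptions, not GRACE's real rule set).
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\b\+?\d[\d\s-]{7,}\d\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def mask_pii(prompt: str) -> str:
    """Replace each matched PII span with a typed placeholder."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"<{label}>", prompt)
    return prompt

print(mask_pii("Contact jane.doe@example.com or 555-123-4567."))
```

Users can still interact with the model normally; only the masked prompt ever leaves the organisation's boundary.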
GRACE Governance for LLMs offers you:
Conformity assessments
GRACE can be used to perform conformity assessments around the usage of ChatGPT, ensuring that it aligns with organisational policies and regulatory requirements.
Real-time LLM monitoring
GRACE logs LLM input and output, manages risks, and verifies that guardrail requirements and rules are met.
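Real-time monitoring of this kind can be sketched as a wrapper around the model call. The rule set, the `monitored_call` helper, and the stub model below are illustrative assumptions, not GRACE's implementation: every prompt and response is logged, and prompts that breach a guardrail rule are blocked and reported.

```python
import logging
from typing import Callable

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("llm-monitor")

# Hypothetical guardrail rule set (an assumption for this sketch).
BLOCKED_TERMS = {"internal-roadmap", "customer-list"}

def monitored_call(llm: Callable[[str], str], prompt: str) -> str:
    """Log the prompt, enforce guardrails, then log the response."""
    log.info("prompt: %s", prompt)
    violations = [t for t in BLOCKED_TERMS if t in prompt.lower()]
    if violations:
        # Breach is logged for follow-up and employee training.
        log.warning("guardrail breach: %s", violations)
        return "[blocked by policy]"
    response = llm(prompt)
    log.info("response: %s", response)
    return response

# Usage with a stub model standing in for a real LLM client:
print(monitored_call(lambda p: "Here is a summary.", "Summarize Q3 results"))
```

Because every call flows through one chokepoint, breaches are captured centrally rather than relying on each user to self-report.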
Secure usage
GRACE ensures that ChatGPT is used securely, protecting sensitive information and preventing data breaches.
Transparency
GRACE provides greater transparency into how the LLM is being used by logging input (prompts) and output (responses).
Access control and bias detection
GRACE can limit access to the LLM or restrict certain types of input, and it detects responses containing biased, discriminatory, or hateful language.
Live performance metrics
GRACE computes performance metrics live, enabling organisations to continuously monitor ChatGPT usage and identify areas for improvement.