
In this episode of AI Watch, we’re joined by Malou Lokdam, Product Data Scientist at 2021.AI, to talk about how organizations can extract controls from internal and external policies, lowering the entry barrier to real AI governance.
Learn how the GRACE AI Platform’s new AI agent automatically extracts controls from complex compliance documents, such as the EU AI Act, ISO 42001, and internal policies, making Responsible AI scalable, efficient, and actionable.
✅ No more manual mapping
✅ Instant control suggestions from any policy or standard
✅ Just time-saving, scalable governance
This is how AI governance should work: built into your workflows and ready to scale.
Bjørn: Hello and welcome to this episode of AI Watch. Today I have my colleague Malou with me, and we will be talking about how to go from a policy to controls when we are thinking about AI governance. It's a very exciting area, and one where we are also using AI in the coming GRACE AI Platform releases. Malou, tell us a bit about yourself and what you do at 2021.AI.

Malou: I'm a product data scientist here, primarily developing AI features for our governance platform. That means reading and understanding a lot of governance frameworks and then working out how we can build tools on our platform that help companies implement governance and be compliant with these frameworks.

Bjørn: So you're a big part of the AI governance offering inside the GRACE AI Platform.

Malou: I am.

Bjørn: Could we talk a bit about the challenge of going from a policy, so a regulation, a standard, a framework, to actually mapping it onto controls for an AI system or model, for example?

Malou: Yeah. A lot of the challenge we're seeing is that companies spend a lot of manual effort reading through policies, whether internal or external, as well as guidelines and frameworks, and manually identifying the controls in those documents. It's a very long process.

Bjørn: It also sounds like a process where you have to be very precise about what you're looking for, and you may need to understand both the regulation or framework and the technology, the AI you're trying to govern.

Malou (01:53): Exactly. You need governance teams with both legal and technical experts who are able to understand both sides of it.

Bjørn: So we don't just have the regulation, we also have the standards, and then we have the company policies, all of which, you know, translate into controls if you want to comply with and govern your AI.
How does the overlap between these things look? In my mind, we might ask the same thing and see the same controls recurring across the three levels we just talked about. Is that what you're experiencing as well when you're using your tool?

Malou (02:30): That's one of the challenging parts. In the EU AI Act, we know they talk about a quality management system and establishing accountability for an AI system, and in ISO 42001 they also talk about roles and responsibilities. So they're addressing the same things, but with different terminology; it's a different way of saying the same things. We want to create a set of frameworks where you can actually see where these frameworks overlap in the issues they address, and where they don't. And we want to combine that with the policies a company already has, because many of the companies we deal with have policies that are still very relevant and may already cover a lot of the controls defined in the more external frameworks. So being able to navigate when you have already established controls for the system and when you need to draw on external sources is a really big help.

Bjørn (03:35): So in the scenario where I already comply with some internal standards and ISO 42001, is it easier to go on and comply with the EU AI Act, given this overlap of controls?

Malou (03:47): We can't be completely sure right now how much of the ISO standards will cover the EU AI Act, but we have an idea of which ones are relevant, not just for the EU AI Act but also in general. What is good data governance? You can use some of the standards and frameworks that are already out there, use some of the policies you've already established in your company, and then align with some of the regulations that are coming.
Bjørn (04:13): So everybody can basically get started right now on their own policies and the standards they are already committed to following, and then have an easier transition into the EU AI Act once that is finally formed. That's a good start.

Malou (04:27): Exactly, that's the goal.

Bjørn (04:43): Malou, I know you sit in the product team and I sit in the sales team, so we see things from different angles. What I experience when we start talking to a company about AI governance is that they need to move away from what is often a manual registry and into a solution like ours, where we talk about governance in a very different way. I also see that they think they want to start with the EU AI Act, but once the ball starts rolling, they bring in standards, and then a lot of stakeholders suddenly arrive at the table saying, well, I have this policy, I have this security assessment that also needs to be filled out. Is that also what you're trying to solve with your solution: that everybody can bring whatever policies or controls they have, and your solution will help along that path?

Malou (05:24): We're trying to make a feature that standardizes this process, because we know that policies are spread out across the organization and different teams have different responsibilities. But with a platform where, no matter whether you have a legal background or technical expertise, you can use this feature, you can upload any kind of document and extract controls from it that can be used across teams and across the organization.

Bjørn (05:55): Let's talk a bit about this "upload your policy and extract controls". Is that something where I build a control on my own, or what is it that you're really working on in the product team?
Malou (06:07): With this feature, we've made it possible to upload a document, and the output is simply the controls from that document, inside the GRACE AI Platform. This is an agent working in the backend; you don't have to prompt it, and there is no user interaction. You just upload the document and you get the result from it.

Bjørn (06:28): So I can come with my policy, or the assessment we just talked about that used to live in Excel or wherever, give it to your product, and out come the controls relevant for the AI I'm trying to cover.

Malou (06:42): Yeah. And one part of this is that when we upload all of the controls, we also check whether some of the controls you are uploading are already on the platform. We do a kind of similarity check between the controls, so that if you're uploading a policy document and we see a control that is very similar to something we find in a standard, you don't actually have to upload it. Instead, we can do a mapping, saying that this control is relevant for both this standard and the policy you are about to upload.

Bjørn (07:16): If I understand it correctly, once I bring my policy, you extract the controls, and if somebody else brings another policy with a similar control, you automatically detect it.

Malou (07:27): Yes. If you have a control that is present in multiple frameworks, you don't have to answer the control for each framework. You answer it once, and it spills over to the other frameworks.

Bjørn (07:38): All right, that sounds like a very efficient way of complying with these controls.

Malou (07:43): Definitely. It saves a lot of time for the people who have to sit on the platform, attest to the controls, and describe what they have done to implement them.
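To make the extract-then-deduplicate flow Malou describes more concrete, here is a deliberately minimal sketch. All names (`extract_controls`, `dedupe_controls`, the modal-verb heuristic, the Jaccard score and its 0.6 threshold) are illustrative assumptions, not the GRACE AI Platform's actual implementation, which presumably uses an LLM-based agent and richer semantic similarity:

```python
# Hypothetical sketch: flag obligation sentences as candidate controls,
# then check candidates against controls already on the platform.
MODAL_MARKERS = ("must", "shall", "should", "is required to")

def extract_controls(document: str) -> list[str]:
    """Naively flag sentences with obligation language as candidate controls."""
    sentences = [s.strip() for s in document.split(".") if s.strip()]
    return [s for s in sentences if any(m in s.lower() for m in MODAL_MARKERS)]

def jaccard(a: str, b: str) -> float:
    """Token-overlap similarity between two control descriptions."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def dedupe_controls(candidates, existing, threshold=0.6):
    """Split candidates into genuinely new controls and mappings onto existing ones."""
    added, mapped = [], []
    for cand in candidates:
        best = max(existing, key=lambda e: jaccard(cand, e), default=None)
        if best is not None and jaccard(cand, best) >= threshold:
            mapped.append((cand, best))   # similar control exists: map, don't duplicate
        else:
            added.append(cand)            # genuinely new control
    return added, mapped
```

The point of the sketch is the two-stage shape: extraction produces candidates, and the similarity check decides whether each candidate becomes a new control or a mapping onto an existing one, so a shared control is only ever stored once.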
We also have tests for the controls: activities you need to perform in order to document or prove that you have actually done what the control requires.

Bjørn (08:05): And what if I own one of these frameworks and the control is shared with nine other frameworks? Can I have a view of just my own policy, even though it's a shared control?

Malou (08:16): Yeah, you can always see whether this control has been attested by someone else as well. It doesn't necessarily have to be you doing it; it can be someone on your team who has done it for another framework.

Bjørn (08:28): So I can see it from my own framework's, policy's, or standard's perspective, even though it's a control shared with others. In the scenario we were talking about, where everybody brings their policy or standard, the overlap is gone: you only ask once, you distribute the answers, and I can still see things the way I'm used to seeing them.

Malou (08:47): Exactly. And that also ties together with controls having different domains. Some of the controls are organizational: something your whole organization has to make sure is implemented. That doesn't apply to each of the systems individually, so you don't have to attest it for each system. You do it once for your organization, and it applies to all of the systems you're using it for.

Bjørn (09:11): Let's say I'm looking at my frameworks, all the controls are attested to, I'm happy, but then something new happens: a regulation changes, or we change our internal policy. In the old scenario, I would have to go out to everybody and have them fill out the new controls. How would it work in the governance platform?

Malou (09:29): Well, one thing is that you can use the system: you can always go in and edit the frameworks or the specific controls. If something has changed in a control, a notification is sent to your teammates saying that they have to revisit it.
They have to look at the documentation again and attest to it. And of course, if it's a whole new framework or a new standard, you can always use our control extractor agent to get all of the new controls into the platform.

Bjørn (09:58): And with new controls, it's basically only the ones that aren't covered already.

Malou (10:02): Exactly. When rerunning it, it would again check whether some controls are already on the platform or whether you need to implement the new ones.

Bjørn (10:12): Sounds a bit smarter than running it the old-fashioned way and filling everything out manually. For the companies we engage with, I know a lot of them sit with their policies and their frameworks, but what you're saying is that if they bring them to you, you will extract the controls for them. So this starting point seems much easier for clients. Is that also what you experience?

Malou (10:16): Yes, exactly. We're trying to enable the different teams to actually get started with governance. Even after you've used this feature, you would still need to review the controls; you can go in and change them and check whether they're actually valid, and you need someone who knows whether these controls are actually relevant for the AI system. But we believe this is going to take you 80% of the way in five minutes, and I think that's going to be a huge help.

Bjørn (10:58): Sounds like a good starting point for your AI governance. All right, Malou, thanks for taking us through this policy extraction tool. Is it available for clients already, or how do I get it?

Malou (11:11): This will be part of the relaunch of our platform. We are developing a lot of new features for the new governance module, and the control extractor is just one AI agent we have developed for it.
But with the relaunch, we will also be looking into developing agents that can help with risk assessments based on the AI systems you have registered on the GRACE AI Platform.

Bjørn (11:34): And maybe more AI-enabled features.

Malou (11:36): Yes, definitely more AI-enabled features.

Bjørn (11:39): Sounds very cool. I'm looking forward to sharing this with clients and to easing their way into AI governance. Thank you for watching, and I'll see you next time for a new AI Watch episode.