AI WATCH EPISODE #22
AI Model Registry: The foundation of Responsible AI
Listen on Spotify
Watch on YouTube
Welcome to AI Watch Episode 22!
This time, we’re tackling the crucial topic of AI Governance and the importance of a robust AI registry. Our experts, Björn Preuß and Liam Sapsford, shed light on the challenges organizations face in managing their AI models and the risks associated with inadequate governance. Tune in to learn how a centralized AI registry can ensure compliance with regulations like the EU AI Act and promote Responsible AI development.
VIDEO TRANSCRIPT
AI Model Registry: The foundation of Responsible AI
Björn: Welcome to AI Watch. My name is Björn. I’m the Chief Data Scientist of 2021.AI, and today I’m with my colleague from London, Liam. Liam, welcome.
Liam: Hello Björn, pleasure to be here in Copenhagen. Looking forward to discussing more about, well, I’ll let you introduce the topic, but great to see you.
Björn: Yeah, likewise. Pleasure having you here. Today we’re going to talk about the AI registry. AI registry is quite a hot topic at the moment.
So, looking at the market, the argument is basically that starting with an AI registry is a good way for an organization to begin: to sort out what models it has and to organize that from an organizational perspective. Is that right?
Liam: Absolutely, and I can’t really understand how organizations can operate without this. From my perspective, how can you move forward with any form of governance, compliance, or risk management if you don’t know what’s there? It’s a very simple point: how can you apply any kind of AI governance policy, controls, framework, or risk management if you can’t simply say, this is the AI I have, this is its risk level, and these are the controls I’m going to put on it if we carry on with it. So it’s just the logical first step in the AI governance process. But I’d like to spin it back around to you.
What do you see as the main steps in actually getting an AI registry started up?
Björn: Well, first of all, it starts with a general mapping exercise of the models and systems one has in an organization, right? It’s probably just doing the footwork, going around the organization and getting that done. The other thing, if one is starting with that, is to figure out a setup that works for hosting that registry going forward.
On one side, there’s the one-time exercise of mapping out all the models and finding a way to register them somewhere. Then, going forward, there need to be procedures to ensure that all new models, AI systems, and use cases get registered as well. There’s one particular thing that is very important to remember when we talk about an AI Model Registry, as some people in the market call it: it’s essentially not just the model. It’s the model, of course, but also the systems that utilize these models, and maybe even the use cases around those systems. So we could have one model used in five different systems: one as an internal chatbot, for example, one for customer service, and one more for client interactions in some sort of software deployment. And then we have different use cases around each of those. So that hierarchical structure also needs to be managed, along with, as you already pointed out, the regulations imposed on the different use cases, which might also differ.
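The model-to-systems-to-use-cases hierarchy Björn describes could be sketched roughly like this. This is purely an illustrative data model; the class and field names are assumptions, not any vendor’s actual schema.

```python
from dataclasses import dataclass, field

# Illustrative sketch: one model can be reused by several systems, and each
# system can wrap one or more use cases, each with its own risk level.
# All names here are hypothetical, chosen only to mirror the conversation.

@dataclass
class UseCase:
    name: str
    risk_level: str  # e.g. "minimal", "limited", "high" (EU AI Act tiers)

@dataclass
class AISystem:
    name: str
    use_cases: list[UseCase] = field(default_factory=list)

@dataclass
class Model:
    name: str
    systems: list[AISystem] = field(default_factory=list)

# One model deployed in several systems, as in the example above.
model = Model("support-llm", systems=[
    AISystem("internal-chatbot", [UseCase("employee Q&A", "minimal")]),
    AISystem("customer-service", [UseCase("ticket triage", "limited")]),
    AISystem("client-portal", [UseCase("client interactions", "limited")]),
])

# A registry query: which use cases does this one model ultimately serve?
use_cases = [uc.name for s in model.systems for uc in s.use_cases]
print(use_cases)  # → ['employee Q&A', 'ticket triage', 'client interactions']
```

The point of the hierarchy is that risk and regulation attach to the use case, not the model, so the same model can carry different obligations in different deployments.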
So I think after getting the list as a first step, then the next thing should be to look into appropriate tooling to solve that challenge going forward.
Liam: Yeah, and if you look at that challenge and think about how a system should be designed to perform what you just described, how do you think an AI register should be set out?
Björn: Well, most likely it should be as flexible as possible in terms of being able to connect to different MLOps environments, maybe even procured systems. That obviously might limit the depth of integration that is possible, and the level of automation for getting certain measurements into the system later on, when we talk about model monitoring. But to begin with, it should be fairly straightforward to get all the different models from all the different environments registered. I think that’s very important because, you know, when we look at risk management and governance in an organization, everything is governed by its weakest link. And if the weakest link is, say, a certain procured system that isn’t registered anywhere, then we don’t have any governance coverage on that part of the organization, which would be a failure.
Liam: Absolutely, absolutely. And when we’re talking about design, I think the easy way out for organizations at the moment is to say, yeah, quickly, let’s put it all into Excel.
But there are a million different reasons why that’s not going to scale and why that’s not suitable in the short, medium, or long term. So what are your thoughts on organizations trying to, let’s say, build an AI inventory in Excel?
Björn: The problem with Excel for all these governance processes is obviously that everyone can change it. It’s so easy to change; it’s not immutable. And the way Excel works, with all the different files, you don’t really have a consistent, coherent central registry. So there are a couple of issues when we talk about it as a governance tool.
There are already certain regulatory pushes that do not allow tools like Excel to be used for hosting these kinds of things, because they don’t provide an end-to-end audit trail at the end of the day. And when we see the model registry not only as the starting point for getting an overview of one’s AI systems, but also as the starting point for later collecting other evidence around the models to process certain controls, then definitely, I would say, more appropriate tooling should be chosen.
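The audit-trail argument can be made concrete with a toy append-only log: each entry hashes the previous one, so a silent edit anywhere breaks the chain. This is a minimal sketch of the idea, not how any specific registry product works; real tools handle this internally.

```python
import hashlib
import json
from datetime import datetime, timezone

# Toy append-only registry log with hash chaining, to illustrate why a
# mutable shared spreadsheet cannot provide an end-to-end audit trail.

class RegistryLog:
    def __init__(self):
        self.entries = []

    def append(self, event: dict) -> str:
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        record = {
            "event": event,
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "prev_hash": prev_hash,
        }
        record["hash"] = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(record)
        return record["hash"]

    def verify(self) -> bool:
        # Any edit to an earlier entry invalidates every later hash.
        prev = "0" * 64
        for e in self.entries:
            body = {k: e[k] for k in ("event", "timestamp", "prev_hash")}
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if e["prev_hash"] != prev or e["hash"] != expected:
                return False
            prev = e["hash"]
        return True

log = RegistryLog()
log.append({"action": "register", "model": "support-llm"})
log.append({"action": "risk_assessment", "model": "support-llm", "level": "limited"})
print(log.verify())  # → True

log.entries[0]["event"]["model"] = "tampered"  # a silent, Excel-style edit
print(log.verify())  # → False
```

In a spreadsheet, that last edit would be invisible; in a chained log, verification fails the moment any historical entry is touched.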
Liam: Yeah, definitely. And especially when you think about what you’re trying to achieve with an AI registry. It’s not just the registration, it’s everything that follows: you’re going to have to monitor the models in real time and then take this information and build it into reporting. So it’s vital that you have this extension of capabilities, which simply isn’t there with your typical Excel sheets or traditional legacy risk management systems. You have to think about how AI is going to evolve and how you’re going to evolve with it through the AI registry you have in place.
So if you’re speaking to, let’s take a CIO office or a Head of Legal or whoever’s going to own this, what would you advise them in their next steps when considering building or working with an AI registry?
Björn: Well, first of all, I think it’s good that you raise the question of the CIO office, because they will most certainly be one of the major stakeholders interested in this. Right now we see the development of what we could label shadow AI, where many parts of the organization are starting their own use cases that are not really centrally managed or overseen. So the CIO office has, to some extent, lost control of all that. It might differ a bit from organization to organization, but in general one can probably state that, right? So I think that’s, first of all, a big motivation for them to start with this relatively early on and lightweight.
So what I would probably do, coming back to your question, Liam, is scope out an area of the organization, a group of models and systems, and start implementing there as, you could say, a kind of POC or beta implementation, at a small to medium size. From there, take the learnings and replicate across different divisions or sub-organizations in the company.
So really slice it, instead of having a huge implementation project that takes years. That way we can also adjust to what the organization needs and, together with the CIO office but also the Head of Legal or Compliance, map out the additional work that follows the AI registry. Is it certain compliance checks and controls? Is it certain tests on model metrics that one would like to implement, and things like that? So start small and develop from there.
Liam: This is actually a way that organizations can also start implementing their AI policies or begin their compliance journey with regards to the EU AI Act because it’s very hard to actually achieve anything if you don’t have the oversight.
So I think it’s a clear message to say, okay, start now, start to see what’s there, see what is the risk level and obviously falling in line with the EU AI Act is very important and will guide the majority of organizations out there. But then that gives you actually a foundation to move forward from.
And it also starts the conversation with all the different parties, whether it’s the AI Governance Board or different teams, from Data Science to Legal. It’s a conversation starter that brings people to the table to then kick off the processes needed to actually enforce AI governance or your Responsible AI priorities.
Björn: Yeah, I couldn’t agree more. Of course, if you don’t know where the models are and what kind of models you have, you have no chance of figuring out whether they are high-risk, medium-risk, or no-risk use cases. And subsequently, you don’t have a clue what additional compliance work you need to perform or what you need to measure, because you don’t know what model it is. Looking towards the upcoming regulation, the EU AI Act (obviously everyone has heard of it, and it has now been pushed back a couple of times), it looks as if we have roughly 10 weeks or so to get the first job done: mapping out which models are high-risk or prohibited use cases. If we haven’t mapped out all the different models and use cases, how should we do that, right? So it’s actually really good timing to discuss this now and to get started.
Liam: Ultimately, at the moment, organizations and teams are wasting a lot of time because they’re not able to communicate with each other. And if you think about the time and money spent on not being able to collaborate around the AI projects that are in production, or going to be in production, purely because they don’t have an AI register, it’s a massive waste of time and resources. So just being able to capitalize on efficiency is enough motivation to get started with this.
But yeah, I think we touched on some really great points, Björn. So now I’d like to say thank you, and I’m looking forward to our next discussion, where we can dive a bit deeper into this.
Björn: Yeah, likewise. Thank you very much, Liam. Those are good points to end on. Also, thank you, everyone, for watching this episode of AI Watch. If you want to know more, check out the information below the video. Otherwise, I hope to see you next time for the next episode of AI Watch.