AI WATCH EPISODE #18

AI Risk and Data Science: A crucial partnership

By Björn Preuß, Avishay Gaziel

Listen on Spotify

Watch on YouTube

Welcome to our eighteenth episode of AI Watch!

In this episode, our Chief Data Scientist, Björn Preuß, and Senior AI Risk Advisor, Avishay Gaziel, discuss the crucial role of collaboration between data scientists and risk professionals in developing responsible AI. They emphasize the importance of establishing context, identifying risks, and monitoring models in areas such as bias, performance, safety, and security.

Björn: Welcome to AI Watch. My name is Björn, and I’m the Chief Data Scientist at 2021.AI. Today, I have with me Avishay, who is a professional AI risk advisor. Avishay, welcome to AI Watch.

Avishay: Thank you, Björn.

Björn: With Avishay being a risk management professional and myself being a trained data scientist, those two roles would clash in a lot of organizations, or at least need to be aligned in some cases. Avishay, in your work, when do you need input from data scientists? What is important in the collaboration?

Avishay: I would like to underline that the collaboration and the input from data science are crucial. If we are all expected to develop responsible AI, and the vehicle for that is risk management, then the role of data scientists is crucial. It’s very, very important. Yes, I very much need data scientists.

Björn: That’s very crucial, right? There are probably certain risks where you, as a risk professional, need a bit more input from data science than for others. Are there some specific risks that you would say are quite important to collaborate on?

Avishay: I think there are four or five areas where the involvement of data scientists is crucial. Bias, and everything to do with bias and fairness: both identifying what kind of bias is in a model and where fairness can become an issue. Performance is very, very important, because AI models that perform badly, even in terms of stability, can lead to bad results or bad predictions and generations, so that’s another very important area. Accuracy goes very much hand in hand with performance, I think. Then there is safety, a big, big topic, also on the regulatory agenda, and definitely a precondition for any kind of responsible AI. And of course there’s the security domain, which is not exclusive to data scientists, but data scientists would have a strong influence and say in the security risk assessment of an AI. So, yeah, definitely.
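As a purely illustrative sketch of the statistical measures behind two of these areas, bias and performance, here is how they might be quantified for a simple binary classifier. The metric choice (demographic parity difference), the data, and the numbers are hypothetical examples, not something taken from the episode.

```python
# Hypothetical sketch: quantifying two of the risk areas discussed,
# bias (demographic parity difference) and performance (accuracy).
# All data below is made up for illustration.

def demographic_parity_difference(y_pred, groups):
    """Gap in positive-prediction rates between the best- and
    worst-treated group: 0.0 means equal treatment."""
    rates = {}
    for g in set(groups):
        preds = [p for p, grp in zip(y_pred, groups) if grp == g]
        rates[g] = sum(preds) / len(preds)
    return max(rates.values()) - min(rates.values())

def accuracy(y_true, y_pred):
    """Share of predictions that match the true labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

# Hypothetical predictions for a binary classifier,
# with a protected-group label per row.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 1, 0, 1, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

print(accuracy(y_true, y_pred))                       # 0.875
print(demographic_parity_difference(y_pred, groups))  # 0.25
```

In practice a risk assessment would pick metrics per use case; libraries such as Fairlearn implement this and related fairness measures.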

Björn: I’m happy to hear that, because most of the things you mentioned are statistical measures where data science is, or should be, very swift in responding on how to measure certain things. Coming from my perspective, I could add that just as you would request information from data scientists and would like to understand them a bit more, data scientists would also like more input from risk management on what is actually expected. We have the statistical toolkit for measuring things, but we don’t necessarily have the experience in regulation: what is required of us for which model? I think that could be a very neat way to exchange knowledge, so data scientists know what to actually focus on.

Avishay: If you and I needed to do a risk assessment tomorrow, I would split the process into three, and I would very much like the data science team to be engaged in all three. The first one is establishing the context. That part is really about understanding the use case together. What are the use case limits? Where are there restrictions? I can come with restrictions from the law or from other frameworks and requirements, and there are other restrictions that are technical. Those need to be baked into the context of the risk assessment we’re doing. We have the business context, since we normally operate in some kind of for-profit setup, and then there’s the model context. This is where data scientists surely play a big, big role: what kind of model are we making? That has a tremendous effect on what kinds of risks to expect and, of course, how to measure and monitor them. So the first step would be establishing the context, and then we would go, almost always directly, into identifying the specific risks with this context in mind. Once we know the use case, the business, and what the model is about, we look at where we might meet some challenges: with bias, safety, and so on. With that in mind, the last phase I believe in is the security risk assessment, which is a separate and very, very important risk assessment. We could talk about just that for a whole hour.

Björn: Of course, and when it comes to stage two, there might be different types of models, classification models, LLMs, whatever, and you might measure things very differently, right? That would require quite some more time to go into depth, and I think those will be future discussions we should maybe have. Every risk needs to be measured, and then it needs to be monitored. What is the most efficient way to measure? Is there a technical, automated way to measure a risk, either its likelihood or its impact? And is there a way to monitor it efficiently that doesn’t involve manually poking the shoulder of a data scientist and asking what’s going on with their fairness parameters? Is there a way to get more precise and more efficient on both the measurement and the monitoring of risks?

Avishay: Monitoring specifically is a challenge for many, many risks in technology risk management generally, because it can be very detailed and very technical, and it requires technical skills. AI is no different; it’s a technology, and it’s the same problem, just in a very, very niche domain. The world has learned how to monitor database performance and other things, and there are a lot of tools there, but in AI it’s not as mature and as established as in other technologies.
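To make the monitoring idea concrete, here is a minimal sketch of automated risk monitoring, assuming each risk has an agreed metric and an agreed threshold for the use case, as the conversation suggests. All names, readings, and limits are hypothetical, not a real tool’s interface.

```python
# Hypothetical sketch: periodic, automated checking of risk metrics
# against use-case thresholds, instead of manually asking the data
# science team. Names and limits below are illustrative only.

from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class RiskCheck:
    name: str
    metric: Callable[[], float]  # returns the current metric reading
    max_allowed: float           # threshold agreed for this use case

def run_monitoring(checks: List[RiskCheck]) -> List[Tuple[str, float]]:
    """Return (name, value) for every check above its threshold."""
    alerts = []
    for check in checks:
        value = check.metric()
        if value > check.max_allowed:
            alerts.append((check.name, value))
    return alerts

# Hypothetical current readings for two monitored risks.
checks = [
    RiskCheck("demographic_parity_diff", lambda: 0.25, max_allowed=0.10),
    RiskCheck("error_rate", lambda: 0.04, max_allowed=0.05),
]

for name, value in run_monitoring(checks):
    print(f"ALERT: {name} = {value:.2f} is above its threshold")
```

A production setup would run such checks on a schedule against live model data and route alerts to the risk owner; the point here is only that measurement and escalation can be codified rather than done by hand.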

Björn: Yeah, monitoring is one thing, and the other thing where collaboration is key is finding the right thresholds, like how much bias is acceptable. Bias is in every machine learning model, right? Statistical patterns are how machine learning works. So the question is: which bias do we accept, to what extent, and which bias do we not accept in a given use case? There’s definitely a lot to discuss, and if you are wondering how to do that in more depth, a little teaser: Avishay and I will have a course in the autumn of this year. If you’re interested, check the description below. Otherwise, thank you very much, Avishay, for being here with me today, and thank you for your interesting insights and comments. And to you, our viewers: we look forward to seeing you next time on AI Watch.

Avishay: Thank you for watching.

Björn Preuß

CHIEF DATA SCIENTIST, 2021.AI

Björn is an Assistant Professor at CBS and the Chief Data Scientist at 2021.AI. He is the company’s industry leader in accounting and legal processes and works closely with financial clients.

Avishay Gaziel

SENIOR AI RISK ADVISOR, 2021.AI

Avishay is an AI risk expert with experience in security and risk in the finance and pharma sectors. He has more than 8 years of experience in executive and innovation management and entrepreneurship, and more than 12 years in IT risk/security consulting, corporate innovation, and regtech/fintech startups.

Watch the previous episodes

AI Watch - Episode 17
AI Watch - Episode 16

AI Watch Video Newsletter

Get the latest know-how from those in the know. Sign up for our AI Watch Newsletter and receive the latest insights from AI experts.