Welcome to our seventeenth episode of AI Watch!
In this episode, 2021.AI’s Senior AI risk advisor, Avishay Gaziel, breaks down the EU AI Act and its implications for businesses. He discusses the top challenges in AI governance, including data management and privacy, and emphasizes the critical need for GDPR compliance before implementing AI solutions. Avishay also highlights the growing importance of responsible AI and the skills required for effective AI risk management in today’s heavily regulated environment.
Ditte: Welcome to this AI Watch episode. Today, I have Avishay with us. He's a senior AI risk advisor here at 2021.AI, and today we will be talking about the EU AI Act. We had a really engaging webinar on that subject a few weeks ago, and we decided to bring back some of the points from that webinar.
Avishay: Yeah, indeed. Very, very interesting. We tried to engage our audience in the conversation. So aside from answering the very valid questions from those who attended the webinar, we also launched four polls.
Ditte: Yes, and we got a decent number of responses, I think.
Avishay: I think we need to be honest with ourselves and the audience and say this was not exactly the most meticulous survey, but it raises some very interesting points that are worth discussing.
Ditte: Definitely. It's a good indicator, and with your expert knowledge, you can confirm whether this matches what you see when you go out and meet companies and talk to whoever is trying to work with AI, regulation, and risk management. The first poll asked which of these areas you find most challenging to govern, and 52% answered that data is the most difficult one. Does that surprise you?
Avishay: No, not at all. I mean, AI is a very data-intensive discipline. Organizations need a significant amount of data, and this data needs to be organized, labeled, and captured, and on top of that it needs to be governed: secured, stored, and backed up. In an organization that has, for example, several generations of an IT environment, you would already find five different databases, different data lakes, storage mechanisms, and backup mechanisms. Having all of this under control is a significant challenge. Add to that that the amount of data being generated and consumed is enormous; the problem just expands every day.
Ditte: So is this just a challenge for big enterprises, or is it one for all organizations? Does the size of the company make a difference?
Avishay: I think, again, the more data you have, the more tempted you are to use AI and benefit from the technology. But I don't think it relates to the size of the organization; it relates to the amount of data. There could definitely be a small organization that has a lot of data because they are data processors and handle a lot of transactional data. So I think it goes hand in hand with the amount of data the company has, but it also goes hand in hand with the complexity and legacy of the infrastructure. Where is this data? How is it captured today? How is it maintained, secured, and managed today? That also plays a big role.
Ditte: Another response here is that 23% find privacy and GDPR very difficult to govern as well. That's not surprising either, right?
Avishay: No, not really. When you set off on developing an AI system, you want to put in all the data that you possibly can. Some of this data would be personally identifiable information, and a lot of it would likely fall under GDPR in Europe in one way or another. We all know that organizations have struggled and still struggle with GDPR, and the interplay between GDPR and the EU AI Act is not entirely clear yet. I think we had some questions on that during the webinar as well.
Ditte: Yeah. Is that a typical concern, that you feel you're not even there yet with your GDPR efforts, you're not fully in place with that, and now you're looking into the EU AI Act? I don't know if it's harder, but it's one more thing.
Avishay: I don’t think it’s harder. I think it’s different.
Ditte: Yes. But is there a frustration that, oh, we're not even there yet with GDPR, and now we also have to comply with the EU AI Act? Do they feel it's even possible to begin that journey before they're settled with all their GDPR efforts?
Avishay: I think we need to make it very clear that those who are not yet there with GDPR should probably not bother with AI at the moment. Get that under control and then move on, because otherwise you're amplifying what could already be a significant compliance issue. So if someone doesn't feel very confident about their GDPR compliance, I would definitely go there first, and when that is under control and understood, then I would venture into AI.
Ditte: Jumping to the next question we asked at the webinar: how important is responsible AI for your business? Here we can see that 46% find that it is a license to operate. Does that surprise you?
Avishay: What surprised me was the sheer size of the response. When we designed the survey, I chose a very strong statement in "license to operate", expecting that maybe only a few startups and some technology companies that are very focused on AI would choose it. However, the audience was not only startups, and we still ended up with that data point, which made me raise my eyebrows and think, how come?
Ditte: Yeah, because they could also have answered that they only want to be compliant.
Avishay: And when it comes to legislation and compliance, the general perception is that people just want to comply; they want to do whatever is needed. It's like taxes: everybody wants to pay exactly the taxes they need to pay, and nobody volunteers to pay more. I don't know many people like that. However, responsible AI is perceived as something much more than compliance: a license to operate. I think it becomes very interesting when we take the other two questions we asked into account, because a significant number of participants are experiencing a pretty heavy regulatory burden.
Ditte: Yeah. Maybe we should just quickly mention that question: how would you define your current regulatory burden? And what you're referring to is that 51% answered that it is heavy, right?
Avishay: Yes, and then the next question: are you managing technology risks? Over half of the participants say that, yes, they do, but they need to adapt it to the age of AI. These three data points together make me reflect on how those organizations would react to the regulation and what their actual viewpoint is. Is the regulation a deterrent, given that they already carry so much regulatory burden and see responsible AI as a license to operate? Would that prevent them from actually embarking on an AI journey? Because, you know, nobody wants to lose their license to operate. AI is not compulsory; it's a voluntary choice. And if half of the participants already know how to manage risk and just need to adapt it to AI, does that mean they are ready to do so, or are they still debating whether this is a worthwhile effort and exercise?
Avishay: My gut feeling, or maybe even a wish on my part, tells me that those who feel responsible AI is very, very important for their business are actually the ones under very heavy regulation, which means they want to use this technology for something very influential.
Ditte: But when you go out to companies, do you hear that internal discussion: is this something we really want to do, given that we're so heavily regulated?
Avishay: Definitely, yes, it is a consideration.
Ditte: And is that also because, as you say, they're used to managing risks? But do they have the competencies to manage AI risk?
Avishay: I think this is where we missed a question in the survey. If I had the opportunity to ask one more question, it would be: how are you actually going to manage those AI risks? Are you going to focus on the technical aspects of those risks, or is it something you would integrate into your entire enterprise risk management and handle more holistically? And what's the play? How is this going to play out?
Ditte: Well, time’s up. Thank you for sharing your insights, and thank you for listening.