Listen on Spotify
Watch on YouTube
Welcome to our tenth episode of AI Watch!
In this episode, Avishay Gaziel, 2021.AI’s Senior Risk Advisor, interviews his former team member Hannah Becher about the real risks and opportunities of LLMs. Hannah is a financial technology expert who focuses on AI innovation, risk management, and financial crime prevention. She is also the Fraud Lead and a subject matter expert advisor at the Pleo AI Lab.
If you would like to learn more about the real risks and opportunities of LLMs, don’t miss this episode.
Here is a list of key points that were addressed during the episode:
- There are potential risks and dangers associated with the development and deployment of LLMs.
- With governance, there are ways to make LLMs more transparent and to reduce their risks.
- There are huge opportunities to use AI to solve the big problems of the 21st century, if it is used correctly.
- If AI becomes more intelligent than the average human, we will have a difficult time controlling it.
- In the future, we’ll see much more interaction with the automated machines we already use.
Avishay: Welcome to AI Watch. I have Hannah with me and we’re going to talk about AI risks, specifically on LLMs, but we’re not going to stop at risk. We’re also going to talk about opportunities and everything around that to get Hannah’s views and opinions about this exciting area and its development.
Hannah: Thank you for the invitation Avishay. It’s amazing to be here at 2021.AI.
I’m Hannah, and I work at Pleo, where I lead our fraud and transaction surveillance teams. A bit of my background: I’m German, I somehow landed here in Copenhagen, and I previously worked for Danske Bank in a similar position. Everything from customer risk rating to modeling, a lot of modeling, risk data, and algorithms is exactly what I’m excited and passionate about. I’ve now been at Pleo for over two and a half years, and we’re on an exciting journey as well.
Like all fintechs, we’re asking: where can we use AI? Where should we not use it? What really is it? How do we make it consumable for our own users, but also for our employees?
So it’s a really exciting time, I think, to be in this space.
Avishay: Thank you for introducing yourself. I would like to get right down to it. Are you scared of LLMs?
Hannah: I would say it depends on how you ask this question. To some extent, yes. If you had asked me as a child, because as a kid I was really afraid of mathematics. And if you break down what AI really is, the techniques and the modeling behind it, it’s advanced algebra, right? So as a kid I would have said: what, mathematics? Yes.
Nowadays I am not afraid at all, because I think it’s a huge opportunity for us to use AI to solve some of the really big, pressing problems we have in the 21st century, if we use it in the correct way.
Avishay: And the correct way… Let’s take it on the negative side. What would be an incorrect way to use it?
Hannah: For me, an incorrect way to use it is to just implement it without any decision criteria or success criteria, basically without any governance or framework. Then you’re using something you don’t understand, while assuming it’s fine because it’s widely used, and not only by experts. You’re not aware of the impact it might have, or of what decisions humans might take based on the output of whatever algorithm you use. It can be LLMs; it can also be other models. So I think that can go pretty wrong.
Take a question like: should we put this person in prison, yes or no? For the moment, it’s still a judge who sits there and makes the decision. In the future, who knows?
Avishay: There’s always this discussion about humans in the loop when we’re talking about AI. Everybody is convinced that humans in the loop would help mitigate some of the risks of using AI, especially in decision-making or decision-support systems that go, let’s say, a little bit out of control.
We are today in the era of, let’s say, the 100-IQ LLM. What happens when we reach the 180-IQ LLM? Who would be the human in the loop then? There are not that many humans with Einstein’s IQ.
Hannah: I think the problem starts as soon as we design systems that outsmart us. And this is not my ego talking; I don’t have any problem with anything smarter than me. But to control something that is smarter than you becomes, just logically, very difficult.
So, yes, that would make me a little bit apprehensive, I would say.
Avishay: So is the solution to not even go there?
Hannah: I don’t think that’s a solution, or at least not a very likely outcome. If you look at the probabilities, the chance that an innovation which is clearly possible simply never gets implemented is very low.
Avishay: My concern is the 180-IQ LLM.
Hannah: Why? What’s your biggest concern with that?
Avishay: It’s not the existence of such a machine, or such an entity, let’s call it. My concern is with it landing in the hands of the wrong people and being used for the wrong purpose. I think most people would act responsibly; in the European case, in the American case, at least at the moment, and in several other jurisdictions, most people would obey the law. However, there are always those rogue and bad actors who would not conform, and those actually scare me.
And I’m thinking, how do we control that? How do we make sure that the best technology lands in the hands of the responsible people?
It’s impossible to prevent. You cannot stop this technology from landing in the wrong hands, but you can mitigate and understand what happens if it does.
Hannah: On that front, definitely, that’s also something that really scares me. Preventing it is going to be very difficult, because at the moment, as long as you can set up your own API access and maybe pay the subscription fee for OpenAI’s models, you’re free to use them, right? There’s no other check on you as a person or as a corporation using this technology. That’s already an interesting question: should everyone be allowed to use this?
Avishay: It also becomes very interesting because when you say “LLM” to people, they each have their own favorite LLM in mind. Most people will probably think of ChatGPT in one form or another, as an API, or 3.5, or 4, or Turbo, or whatever. There are plenty more; we all know that. And I see a world where we’ll end up using a few of them.
Do you agree? And if so, how many do you think we’ll land on? What would those use cases be? Where do we meet them?
Hannah: Definitely, it’s becoming an arms race, right? We see it with OpenAI, and it’s so interesting to observe. There was the whole Sam Altman episode, when he was briefly hired by Microsoft, and Microsoft has their own models as well. And of course Apple has their own model too. So I think there’s definitely an arms race going on.
Who will win? I mean, no one can say.
That’s almost impossible to predict, I think. But beyond the wrong people getting their hands on AI, I see another effect: it creates a larger gap between socioeconomic classes. A lot of people hear about AI, but there are not enough resources to learn about it, and the education around what it really is, and how you can use it, isn’t being transmitted. Some people now use ChatGPT to answer a message to their girlfriend or boyfriend on “ABC”, or to handle a work-related question with their boss. You get an output, but how do you use that output in your own decision-making? There is very little education on that. In general, I think humans are not always the best at making informed decisions, right? We need better tools and systems to enable us, as humans, to make decisions.
This also goes back to the question: how do we actually make these decisions? What decision do we need to take here to make sure that AI is used in a responsible manner?
Avishay: In what areas of life do you see these decisions coming, and where do you think you will meet those LLMs? Do you see yourself using them at home, at work? In what scenarios? Do you think it’s going to be the same, or different? How do you see it?
Hannah: Yeah, I think it’s a bit of a dream world, but also a bit of a frightening one, right?
I actually thought about this just this morning, arriving here at the airport. It’s very interesting, because in Denmark, in Copenhagen, you have these automatic cleaning robots that go through the airport; you don’t see them in any other airport so far. They run around, they beep a little bit, and as soon as a person comes too close, they go around them. At the moment, of course, to their sensors a person is just some object. But in the future, maybe you’ll say to them, “hey, sorry, I’m here, get out of the way,” and they’ll just turn around or go somewhere else.
I think we’ll see much more interaction with the automated machines we already use. And when you ask me where we’ll see it: definitely in everyday life, for sure. We also cannot forget that even though we call it new technology, regulation is agnostic to it. That means all the existing regulation already applies, right? All the GDPR principles still apply, and all the other laws and regulations we have apply as well. They’re just maybe not really fit for purpose yet. But it becomes an interesting question, right?
Avishay: Hannah, thank you very much for coming. It was wonderful to have you here. We’ll continue to publish conversations with knowledgeable and pleasant people like Hannah on AI Watch.
So please subscribe if you would like to learn more and participate in these conversations.
Thank you.
Hannah: Thank you for the invitation.