AI WATCH EPISODE #21

Pilot to production: Why most AI projects stall

By Ditte Stage, Bjørn Olesen

Listen on Spotify

Watch on YouTube

Welcome to AI Watch Episode 21!

Uncover the common challenges of AI projects with Bjørn Olesen, who shares insights from over 1,100 client meetings. Discover why many AI projects fail to reach production, the growing importance of AI Governance in development, and how to bridge the gap between legal and technical teams. Learn how to successfully navigate the constantly changing world of AI and drive your projects to success.

VIDEO TRANSCRIPT

Pilot to production: Why most AI projects stall

Ditte: Welcome to this episode of AI Watch. Today I have with me Bjørn Olesen, a sales director here at 2021.AI. In the past four years, you’ve had over 1,100 meetings with clients and potential clients, with about 300 of those being in the last 12 months. That’s impressive! You’ve been busy.

Bjørn: Yes.

Ditte: So I’m thinking, who knows better about what’s happening with companies right now and their AI projects? You do. Today we want to talk about what companies are experiencing with their AI projects, and why, even though they’re motivated to do something with AI, their projects are stalled or blocked. You have some interesting insights on that. We can also talk about the shift that’s happened in the last four years, and how that impacts what our clients are asking for and what’s challenging them, in terms of their AI literacy, their expectations, and the push from their CEO or board to get AI projects up and running. So, Bjørn, in broad strokes, what are you seeing with the clients you’re meeting with? Why are they having problems getting their AI projects done?

Bjørn: I see a lot of clients facing challenges in their AI projects. It’s easy for them to build a pilot. We could almost set up some kind of GPT within this meeting that could act like me. So pilots are easy, but taking those initial good results, moving them into production, and scaling them, that’s the main challenge. One of the blockers is a lack of governance. Governance is a wide term, but in the LLM and AI world, governance means understanding what the AI is doing, because other parts of the business need to understand this. I’m experiencing a gap between legal, risk, and compliance on one side and the people developing the AI on the other. Those two groups need to talk, and they don’t understand each other well. Explaining the foundation of the LLM model that a project is built on, and what regulations it complies with, is hard for a developer or data scientist. On the other hand, the legal, risk, and compliance teams don’t have the technical background; they understand law and risks. So pilots are easy, production is hard. Legislation, regulation, and compliance are blockers. The gap between those two groups is a problem. That’s the main thing I see.

Ditte: There’s a report from Deloitte supporting what you’re saying. We’ll show it on the screen, but a large majority of organizations have deployed less than a third of their GenAI experiments into production. This is exactly what you’re saying: you have all these pilots, but they’re not getting into production. You’re claiming that’s because they haven’t prioritized governance. That’s supported by Deloitte, which says three of the top four things holding organizations back are risk, regulation, like the EU AI Act, and governance issues. The governance talk is interesting, because few people come to us asking for it, though I think larger organizations are starting to, as you’ll tell us in a minute. Instead, we still get the request to get AI projects done, to show that you’re doing something with AI. They’re thinking, once we get that, then we can talk about governance. Am I right? What’s the problem with that?

Bjørn: First, it’s difficult to backtrack AI projects. If you have a product, and somebody has to explain the original dataset, where it came from, what checks were done, how we ensured there was no bias, backtracking all of that is a big task. We all remember when GDPR came. Backtracking through old legacy systems to find what data we had stored was complicated and took a lot of time. It’s the same with AI. If you’re ready to move into production, but have to backtrack to get your governance in order, that’s difficult. It’s much easier to have your controls and governance in place while building, so what you need to comply with is checked continuously. That makes productionizing AI projects easier. You’re right, it’s large organizations that are focused on this, because they know the EU AI Act is coming. There are regulations all over the world, so they need a structure.

Ditte: If you’re in a client meeting, and they want an AI project done, but haven’t heard about governance, is this something—we’re being honest—if you mention the importance of governance from the start, does it scare them off? Do they think their project is getting more expensive, that they can’t handle this?

Bjørn: The short answer is yes. What I’m seeing is companies aren’t proactive, they’re reactive. That’s why they can’t move things into production. They realize late in the game that this isn’t governed correctly, or they’re not willing to take the risk. It’s not proactive. That means it’s hard to sell, because people aren’t proactively building it in. When we do projects, we build the governance underneath. When clients realize it’s important, we show them it was done, but selling on it is difficult.

Ditte: This is interesting. There’s a report by Gartner stating this, saying by 2027, 60% of organizations will fail to realize the value of their AI use cases because of incohesive ethical governance frameworks. That’s exactly what you’re pointing at. This means you can have good ideas for using AI, and get it into production now, but long-term you won’t have success without that governance. How excited—not trying to be salesy—but how excited are clients when you tell them, “Don’t worry, we considered governance?”

Bjørn: They don’t realize yet that this is a problem, and aren’t excited about a solution to a problem they don’t know exists. From a sales perspective, it’s something you need to tread carefully with, because it’s not their main focus. Their focus is the use case. It’s something that will come along the journey. It’s good to be prepared, it’s good to have those things, but people are focused on having AI use cases, typically because their board would like it, or their owners. They want AI use cases. That’s the main focus.

Ditte: We’re talking about these challenges that companies experience with legal, where legal is saying no—of course, with good hearts, protecting the company—but do you think this tendency where projects are showcased to legal and then they get a no has grown in the last few years? Is it increasing?

Bjørn: Yes, because before ChatGPT, companies were running machine learning projects with a structured dataset that they brought in on their own, so they knew what the model was built on. It also had a clear output. It was intended to be used for churn detection, credit scoring, whatever. Now we’re in the LLM space, where somebody else built the AI, trained on internet data, and they would have a hard time explaining what data went into each answer. We’re building on top of something that somebody else brought in, and the use cases aren’t a single output. It’s “Here’s this LLM tool for all workers. You can use it to write posts, generate images, and take meeting notes.” The use cases are in the thousands. That means the risk is higher. AI also used to be something inside the business; now we want AI-enabled chatbots, and we’re presenting it to clients. There is more being blocked, but there are way more use cases, and way more money being poured in. It’s not only a data scientist building AI solutions, it’s you and I building these things on top of LLMs. It’s coming from all sides. The next iterations of LLMs are capable of going deeper into the business, handling client data, HR data, payroll data. The easiest, fastest, safest way is just saying no.

Ditte: What’s the biggest difference between meetings you had three years ago and the meetings you’re having today? What’s made a big difference?

Bjørn: I mentioned before and after ChatGPT; there’s a big difference. It’s like the iPhone moment. Before the iPhone, telephones had buttons or a stylus. Then this came along and everything else seemed old-fashioned. In the AI world, before ChatGPT, meetings were a lot about teaching companies what AI can do, then talking about which projects could be interesting for them. Now everybody knows what AI can do, because it’s been easy for any user to write or update their resume with AI. Meetings are much more about how we scale this, how we move this in front of clients. Governance is starting to move back in there, things that were hard to talk about before ChatGPT. It’s also moved from a single stakeholder in the company, someone who had the idea or was passionate about the technology, to a much broader group.

AI Watch speakers

Bjørn Olesen

SALES DIRECTOR, 2021.AI

Bjørn has over 15 years of experience in IT implementations, serving a diverse range of clients. At 2021.AI, Bjørn specializes in the legal industry, enabling our clients to optimize and improve their operations.

You might also be interested in…

Webinar - AI Security in practice