Eight months after our last talk, Peter Sondergaard returns to AI Watch to explore how far AI has come — and where it’s taking us next. We unpack the evolution of structured innovation, digital resilience, and trust, and look at how agentic AI and governance are shaping the enterprise landscape in 2025.
Hi everybody and welcome to this episode of AI Watch. With me here today is Peter Sondergaard, former Global Head of Research at Gartner and still a very active player in the space, with a hand in many parts of the evolution of technology, and still advising quite a lot of boards and C-level people in this field. I'm personally very excited to be here again because we sat down eight months ago and talked about the development of the AI space, and we're going to pick up on a few of those threads. Happy to have you here. Yes, same here. Thank you. So let's get started on what we talked about last time. One of the things you mentioned was that AI is exciting, but if you really want to get value out of it, you have to approach it with structured innovation. I think that has, at least from our observation, proven true. But another dynamic has also arrived: with generative AI especially, there is a kind of grassroots innovation happening. What's your take on that? Yeah, so you're right. There is always going to be a top-down and a bottom-up approach to technology, and that is usually actually an indication that the technology is relevant and important. And at 2021.AI we've also talked a lot about the bottom-up one, because you could look at this as a Shadow AI activity. Which isn't necessarily negative. People often view Shadow AI as negative; it is not. It's an evolution and usage of technology driven by employees, and that can be very valuable for an organization. So I think over the next couple of years we're going to exist in a world in which we have these two forces: the top-down force driven by the board, the CEO and the executive team, and then the bottom-up force driven by employees. And the challenge for organizations now becomes ensuring that the middle management layer is ready to catch both what comes from the top and what comes from below. 
And that requires development of managers, because historically we've very often seen large IT projects and large digital projects stall because middle management is not ready to do the broad implementation or to deal with employees who are creative in using the technology. So when you talk about the innovation process being centered in the business, it's a question of centering it around the middle management team and ensuring that they have the capabilities and the leadership skills to actually move the company forward, and also that things like their objectives reflect this. Interesting. When we take this up to a board or management level, what are the actions you need to take to get your middle management empowered and enabled, with the right capabilities? You need to make innovation a requirement for middle management as part of their annual objectives. And they also need to view the world not as annual but as continuous, so that they are required to show and demonstrate continuous improvement in what they do at their unit and responsibility level. So you anchor things partly in the right capabilities, partly in the culture, but then also, more tangibly perhaps, in metrics and objectives that they are measured on. So being able to unfold that is not innovation for the sake of innovation; rather: we set out to do this, the use case was X, the expected outcome was Y, and are we actually getting to Y? Correct. And there's nothing new in this. This was true 20 years ago. It's true not just of AI; it's true of any technology-level innovation, and of any innovation in broader terms. Excellent. So that, I guess, is the positive movement: people are actually innovating with this technology. 
Another element we talked about last time was the change in the risk landscape in the world, and that some of these activities were moving from being almost exclusively a cloud agenda into something more sovereign. How has that moved in the last six to eight months? Yeah, I mean, we've obviously seen a substantial evolution of this whole topic of digital resilience and digital sovereignty. There are many words that describe it, but the geopolitical situation, and that's not pointing at any individual region of the world, is such that every company at the board and CEO level has a responsibility right now to think strategically about the resilience of the platforms that they have in place. And that has moved a lot. I think it is clearly an issue that is on the board level everywhere. It varies, obviously, because the risk aspects are different. But the issue of resilience is one that I think most people are very focused on. I think we'll get to a situation in which the consideration of resilience at board level, and therefore also with CIOs, really embodies three or four layers. So imagine you have an onion, and the inner part of that onion is really your hardware infrastructure and therefore the resilience of the hardware, which isn't just the servers; it's also fixed communication equipment and lines. It's the building the computers reside in. And there are organizations that have to think about resilience and digital sovereignty at that level. Which is obviously far more complex. That's when you get to look at air-gapped systems and true containment of your environment. Then there's the whole operational layer. The operational layer is all of the software technology, the security-level technology, the cybersecurity tools that surround your infrastructure, as well as the people who run the systems. And there's a level of digital sovereignty and resilience around that. 
And then there's probably the most important layer for all organizations to consider, which is resilience around the data layer and the data that sits in the organization. Because one could argue that as long as I at least control my data, I have some control over what is strategic for my organization. But there are a lot of considerations there. And then there's a necessity at the fourth level to consider the legislative and regulatory environments. The whole aspect of digital sovereignty is to think through: where is my company, and what is most important for my company? And I believe most companies will end up saying it's the data that's most important for me; I can live with the server not necessarily being in my country. So in many respects, I think we will see a lot of consideration around this. It's going to drive a change in IT investment. It is going to drive a change, and a significant increase, for example, in hardware spend, not just driven by the increase in GPUs and so on, but in actual servers, where people perhaps in some areas retreat a bit from the cloud. Interesting. And in doing so, where does the AI agenda fit in? Because I can protect my data in my environment, but as AI has largely been a cloud agenda, now I need to essentially pull AI back to my data instead of thinking I send my data to AI? Yes, I think there could be a slide back for certain environments, so that I slide back from the cloud into perhaps a more physical location, simply because I need to manage the risk level of my business. AI is an element of this, so it runs through all aspects, because in parallel with this happening, we're also seeing the establishment of almost a new architecture in organizations, an AI-native architecture, that's being implemented. 
And so this aspect has to be thought through not just in isolation as digital sovereignty across the four areas that I spoke about, but also in the context of an evolution of the actual architecture, in which AI plays a role in the entire technology stack. Okay, so that was innovation, then the sovereignty part. Now the last one I wanted to talk about is governance. We talked governance last time, and how governance in the context of AI was moving or transforming from a sort of compliance exercise into something of a leadership exercise. Has that actually unfolded as you anticipated? I'd probably correct ourselves and say it is going to happen, but since we last spoke, the ball has also moved a little bit in terms of what AI is in the perception of large organizations. So now there has to be the consideration of how this fits within an agentic AI architecture, in which of course there's a lot of work going on around the protocols and the workflows that underpin that architecture. And part of that workflow ends up being a governance layer. And to ensure that governance layer works, because we're talking about real-time environments that operate unmonitored and without necessarily having human intervention, we need a very robust understanding of what governance is, which therefore becomes a necessity for leaders to be focused on. So, it is going to be important. Because the market has shifted, I think we don't yet see many senior leaders being very versatile in explaining what they're doing within their business unit around governance, except in the old classical way of risk and compliance. But we will see this expand substantially as we move into a world where many functions don't have any employees that are humans; the employees are machines. The requirement to ensure and monitor this far exceeds what we have today. 
So the sort of complexity that agents will introduce, from, let's say, the classic chatbot to something more functional and agentic, meaning it can act. That's where the governance agenda is likely to really pick up. And that's also why it will be a lot slower than the technology providers say it will be, because this is not just implementing a piece of software. You're reworking tasks and you're doing task decomposition. You're looking at multimodal environments, so you're not just operating with text or audio or video, you're embodying everything in it, and you're changing the overall workflow environment to something that may be vastly different from what you have today. Nobody does that in a year, nobody. True. When you say that, I'm left with this: last time we talked quite a lot about the CEO and the CIO agenda, but what you're describing with agents sounds more and more like a COO-type agenda, because now we're getting into the machine room of the company and beginning to properly rethink processes. Is that also where we can expect the governance demand to unfold? Yeah, that's why I think governance will be a leadership capability among all leaders, including the COO. And this is partially why I think it's important for organizations to really think through this aspect of task decomposition and workflows now, because we have yet to see the benefit of some of this. This is why we've seen surveys recently, like the well-quoted MIT survey in which only 5% of organizations really have workable solutions out there at scale. I've been in the IT industry for too long when I say this, but it reminds me of the 1980s with PCs: we had a problem using the hard disk space available, because you would store segments of code or data, but it wouldn't be stored linearly on the disk, so there would be holes on the disk. And somebody came up with defragging software. 
And defragging software moved the bits around the hard disk so it freed up more hard disk space. I think we need to defrag workflows, because what is happening now is that you're getting AI solutions that benefit a number of employees, but they gain 20 minutes. What do they use those 20 minutes for? Well, some people will do more work. Some people will spend time on Instagram. Some people will shop because they have that extra time. In essence, therefore, AI doesn't deliver any productivity benefit. It doesn't deliver any productivity benefit until we rework the business process and how tasks are composed around jobs. And that's a more complex thing than just implementing Copilot. And that's why we're not seeing the true benefits yet: companies have not reworked their workflows. Defragging of workflows. That's a topic we're gonna pick up in the next episode. Defragging workflows, nice one. Agentic is, I guess, an agenda point we're also gonna revisit. When you talk with the C-suite, what are the most important things that you recommend they consider when they're looking at an agentic future for their company? What are the key topics to consider? Well, apart from this aspect that you have to look at the workflows, because otherwise what you're doing is putting agents on top of your existing old workflow models, your existing CRM workflow, your existing ERP workflow, and that has value, but it's not our end point. Other than that, this has become a bit of a classic technology evolution and implementation challenge. So the issues are not necessarily technology-focused. The issues right now are therefore really more a question of the classical things that are no surprise to anybody listening to this. It's culture, it's budget, it's talent, it's organizational structure, it's your data. And then it's an understanding of risk and governance. 
And those apply to any project that anybody has done over the last 50 years with technology, because that's where we end up when we have to scale. And that's where people are ending up right now. Good point. Another topic underlying all of this: we've seen enterprise software historically, and you even mentioned some of them are beginning to turn agentic, but what is it doing more fundamentally to the software development industry? We're seeing help for developers, but also the architectures themselves becoming AI-first. What is really happening here? I think the really fascinating aspect right now, as we sit here, is the speed at which software development tools are embedding AI into the tool set and therefore into the entire software development lifecycle. That changes the economics of developing software. And I think it's underestimated, and perhaps we don't yet understand what the second- and third-order effects of this are. But take a simple calculation and ask: how many developers are there in your country? You come up with a number, and then you estimate the average gain that a developer delivers to the company on an annual basis. Let's just make up a number, but anybody can try to figure out what it is for their environment, their business, or their country. Let's say it's $100,000, right? That's the economic gain per year. If I then use AI tools in the software development process so that I improve the productivity of the developer by 100%, and say that in your country there are half a million developers, multiply that by $100,000, because that's the net increase from not using AI tools to using them. Then the annualized financial gain for that country, with 500,000 developers, is $50 billion. 
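Peter's back-of-the-envelope calculation can be sketched as a few lines of code. This is purely illustrative: the function name and all inputs beyond the numbers he mentions (500,000 developers, $100,000 annual gain each, a 100% uplift) are assumptions, and real productivity gains vary widely, as he notes later.

```python
def annual_ai_gain(num_developers: int,
                   gain_per_developer: float,
                   productivity_uplift: float) -> float:
    """Extra annual economic value from AI-assisted development.

    num_developers:      developers in the country or company
    gain_per_developer:  annual economic gain each developer delivers today
    productivity_uplift: fractional improvement from AI tools (1.0 = 100%)
    """
    # The "net increase" is the share of each developer's existing
    # annual gain that the productivity uplift adds on top.
    return num_developers * gain_per_developer * productivity_uplift

# The example from the conversation: 500,000 developers,
# $100,000 gain each, 100% productivity improvement.
gain = annual_ai_gain(500_000, 100_000, 1.0)
print(f"${gain / 1e9:.0f} billion per year")  # prints "$50 billion per year"
```

The same formula can be rerun with your own headcount and a more conservative uplift (say 0.2 for 20%) to see how sensitive the total is to that one assumption.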
And if you take the total number of software developers that exist in the world, the economic gain of deploying artificial intelligence in the software development lifecycle is equal to the GDP of France. Wow. That is an underestimated aspect of things, and I think we're not yet there in the tool sets. Productivity gains vary in terms of what companies are seeing. Some are not seeing much, some are seeing significant gains. This will improve as a natural part of the evolution, and that changes the economics of what software does to a company and to the economy. And I guess the question that comes up in my mind is, are you seeing companies take that 100% increase in productivity, if that's what we're looking at, to hire half as many developers or to deliver on their roadmap twice as fast? Yeah, I think we're seeing both, driven by the economics of whatever governs that organization. But the reason why I quote this simple formula as an example is that you can take the number of developers you have, the economic benefit they deliver, and the percentage increase in productivity, and I will then ask you: do you want to get rid of your developers, or would you want to keep them, because the economic benefit of each of those developers goes up by 100%? Or do you want to decrease headcount so you just deliver what you've always delivered? That's a decision somebody can take. I know which side I would be on. I'd keep the developers and maybe even hire more. If I could prove the business case that this is actually true, then having more developers would be better. Good one. So when we look at all of this amazing opportunity that lies ahead of us, we're also relying more and more on a technology here to, in the example before, produce code at the same quality as we saw before, because otherwise you don't get the productivity uplift, or to do the analysis. In all of that lies a high degree of trust in this technology. 
How do you see that having developed since we talked last? Because it was also a topic then. And where is this going? I think there's one scenario we all have to treat as very plausible, which is that we're moving towards a world without trust. So if I can't trust what I see, if I can't trust what I hear, if I can't trust what I read, that becomes a challenge. And so I think it's a very real topic that needs to be on the agenda of any organization, because part of that would then mean: why would I be able to trust your brand? Why would I be able to trust the people in your organization? If we have eroded trust so much that we can't trust anyone, then why can we trust your company? And so I think there's an argument that says maybe we should actually start to track how people trust us, almost like a trust index. I mean, you have NPS, the net promoter score, for tracking what people think about you, but that's not about trust. Maybe we should evolve this a little bit more so that we know that. But clearly, I think, that is a tremendous societal issue. And I think it requires us to go back to those things that make us human, and in fact to focus on making sure that they become more apparent, because at least we can still trust sitting across from a carbon-based version of ourselves, so that we know it actually is a human. And I think this deep-rooted aspect of trust between humans will become even more important, because the environment around us seems less and less trustworthy. And I guess that becomes a challenge, in that even if we get all the growth, if there is no trust, then it's hard to do business. It's hard to get the customers to line up, because there is a natural... 
And this therefore becomes why it is so super important to have that governance layer for AI in place, because part of it is simply to demonstrate that you're actually managing this appropriately as a business, but also perhaps through that we're able to catch the aspects that carry not just a security risk for us, but maybe also a trust risk. So trust is built by proving a process. Absolutely! And then by going back to that age-old human nature that defines us. Trust your gut. Yeah. And trust the people that you know you can trust. Yeah. We're now nearing the end of 2025. What would be your biggest bets on the next six to eight months? What would we be talking about in six to eight months? Yeah, I think people are obviously going to be piloting the agentic AI-based environments. I think they'll probably start doing this mostly in customer service or marketing or sales environments, but they're probably going to start using that. We'll see that bottom-up deployment using some of the now-classic, they're not that old, vibe coding tools that exist, where certain people can start to actually develop their own agents, which is happening now. And so I think we'll have gone through some of that. I think people eight months from now will still be focused on: how do I actually do this at scale when it comes to agentic AI? Whereas when you look at the more classical gen AI implementations, and also the more machine learning-based environments, those we are going to see evolve at scale much faster. I think on the machine learning side, we'll start to actually see machine learning in more robot-oriented environments. 
So we'll see it start to creep into warehousing environments and other areas where in fact you will have semi-autonomous or maybe even autonomous robots that can operate with context in the environment they operate in, which is what we've seen announcements about lately. I think it's highly likely that, you know, five years out, many managers in warehousing environments will manage one human and 10 robots, or thereabouts. I mean, we're definitely heading that way. Interesting. It's a really interesting time ahead, and I almost can't wait to sit down again in a while and see how 2026 is kicking off. Thank you. Thank you very much, Peter. This was a great session, and thank you all for listening. We hope you got as much out of it as we did, and we look forward to picking this up again in the spring of 2026. Thank you.