Executive Insights, JANUARY 2025

Shadow AI Series 3/3: The Shadow AI threat: Are you prepared for unexpected risks?

MIKAEL MUNCK

CEO AND FOUNDER, 2021.AI

In this series on emerging AI risks, I’ve been exploring the hidden implications of AI, often called “Shadow AI.” This term refers to the unauthorized use and implementation of AI systems and tools within an organization, without the knowledge or control of the IT department or the teams responsible for data security.

In the first blog, I explored the rise of Shadow AI and what it means for organizations. The second blog discussed how to manage the growing complexity of AI systems and models.
In this post, I investigate the potential dangers of Shadow AI and propose practical frameworks for managing its emerging risks.

A future with autonomous AI operations and AI Agents

Recent developments in AI and AI Agents have raised significant concerns about increasingly autonomous operations. For example, the deployment of AI chatbots in customer service has already led to instances where these systems provided unauthorized information or made decisions outside their intended scope.

As we look ahead to a future with much more intelligent AI Agents that perform multi-step tasks independently, we must also anticipate a substantial increase in the associated risks of unintended actions and behavior.

Eric Schmidt, the former CEO of Google, warns that within five to ten years, AI could reach a level of autonomy that might pose a serious threat to humanity. Schmidt advocates for an international organization to guide policymakers in navigating the risks and opportunities posed by AI.1

Schmidt’s concerns highlight a critical issue. While AI development continues at breakneck speed – with advances in areas like Large Language Models (LLMs) and autonomous systems occurring almost weekly – the ability to implement proper security and safeguards lags behind.

Troubling signs from industry leaders

Just recently, we learned that OpenAI disbanded its “AGI Readiness” team, which was tasked with advising the company on its ability to handle increasingly powerful AI and the world’s readiness to manage that technology. Miles Brundage, the head of the team, stated in his departure announcement that “neither OpenAI nor any other frontier lab is ready, and the world is also not ready.”2

This news, coupled with the disbanding of OpenAI’s “Superalignment” team earlier this year, raises serious concerns about the prioritization of AI safety at the forefront of AI development. It seems that despite significant financial investment and public commitments, OpenAI may be deprioritizing crucial safety research and oversight in the rush to develop and deploy cutting-edge AI systems.2 This, in my view, is indeed concerning.

Is the AI industry neglecting critical safety concerns?

As someone who’s been deeply involved in this field for some time, I see a pressing need to address the potential risks of unintended use and derived consequences. How do we ensure AI systems remain aligned with human values as they become more complex and autonomous? This is a question we need to grapple with during the planning phase, before we implement our AI projects, and not as an afterthought. In short, we need to make safe AI and not make AI safe – the latter is so much more difficult, risky, and costly.

The rush to implement AI: Move fast, but do NOT break anything!

For sound business and short-term economic reasons, many organizations are rushing to implement AI without fully understanding the risks involved. A recent McKinsey survey of more than 100 organizations with more than $50 million in annual revenue found that 91 percent of respondents doubt their organizations are “very prepared” to implement and scale the technology safely and responsibly.3

This lack of preparedness is a serious concern. The crucial factor isn’t about developing and using AI, but how we implement, control, and manage it. We need to ensure AI is used responsibly to prevent harm to individuals, our environment, and society.

Practical solutions for managing AI risks

The potential threats of Shadow AI are real and must be addressed. Fostering a culture of Responsible AI usage and promoting transparency and accountability are essential to mitigating the risks of Shadow AI. However, this cannot stand alone. Continuous monitoring and evaluation of AI are key to ensuring that the benefits of AI outweigh its risks. A practical approach for managing and controlling AI risks must also include the following points:

  • Establish clear AI Governance across your organization. You need clear structures and practical guidelines in place.
  • Implement efficient AI system registration and continuous monitoring. You must avoid Shadow AI by keeping an eye on your AI.
  • Establish a clear and efficient AI Controls and Risk framework to guide the Responsible use of AI across your organization. Ensure compliance with regulatory frameworks like the EU AI Act, other AI policies, and ethical guidelines.
  • Implement an enterprise AI platform to consolidate your AI (and Responsible AI) efforts. Connect all stakeholders in your AI projects across your organization. Consider establishing an AI Center of Excellence to manage, monitor, and report on all AI activities.
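To make the registration and monitoring point concrete, here is a minimal sketch of what an internal AI system inventory could look like. All names and fields (AISystem, Registry, the risk levels) are illustrative assumptions for this post, not part of any specific governance product or regulatory standard:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AISystem:
    """One registered AI system or tool in use within the organization."""
    name: str
    owner: str              # accountable team or person
    purpose: str            # intended scope of use
    risk_level: str         # e.g. "minimal", "limited", "high"
    approved: bool = False  # has governance sign-off been given?
    registered_on: date = field(default_factory=date.today)

class Registry:
    """A central inventory: every AI system must be registered here."""

    def __init__(self) -> None:
        self._systems: dict[str, AISystem] = {}

    def register(self, system: AISystem) -> None:
        self._systems[system.name] = system

    def unapproved(self) -> list[AISystem]:
        # Surfaces potential Shadow AI: systems in use without sign-off.
        return [s for s in self._systems.values() if not s.approved]

registry = Registry()
registry.register(AISystem("support-chatbot", "CX team",
                           "customer service replies", "limited",
                           approved=True))
registry.register(AISystem("resume-screener", "HR analyst",
                           "CV triage", "high"))

for system in registry.unapproved():
    print(f"Needs governance review: {system.name} (owner: {system.owner})")
```

The point of even a simple registry like this is that Shadow AI becomes visible: any system that is in use but not approved shows up on a review list rather than staying hidden from IT and security teams.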

The time to act is now. Organizations must take concrete steps before Shadow AI becomes an overwhelming challenge as more and more AI creeps in unauthorized from all corners of the organization. Forward-thinking organizations will realize the need to make safe AI and not make AI safe – as mentioned earlier, the latter is much more difficult and costly.

Mikael Munck

FOUNDER AND CEO, 2021.AI

Mikael is the founder and CEO of 2021.AI. He has 25+ years of experience in the technology and financial sectors. Mikael was the Global Head of Technology and Operations at Saxo Bank. He is the Chairman of Copenhagen FinTech and an investor and board member of several other technology companies.

You might also like…


Shadow AI Series 1/3: The rise of Shadow AI and what it means for your organization

AI tools and platforms are becoming increasingly easy to use, empowering individuals to leverage AI solutions for their specific needs…


Shadow AI Series 2/3: How to manage and control AI in your organization

The world of AI is changing fast. We’re not just using more AI, we’re using more complex AI. This creates a challenge referred to as Shadow AI…
