
In this series on emerging AI risks, I’ve been exploring the hidden use of AI in organizations, often called “Shadow AI.” This term refers to AI tools and systems used or implemented within an organization without the knowledge or approval of its designated AI authority.
In the first post, I explored the rise of Shadow AI and what it means for organizations. The second post discussed how to manage the growing complexity of AI systems and models.
In this post, I investigate the potential dangers of Shadow AI and propose practical frameworks for managing its emerging risks.
Recent developments in AI and AI Agents have raised significant concerns about increasingly autonomous operations. The deployment of AI chatbots in customer service, for example, has already led to instances where these systems provided unauthorized information or made decisions outside their intended scope.
As we move toward a future in which far more intelligent AI Agents perform multi-step tasks independently, the associated risk of unintended actions and behavior will increase substantially.
Eric Schmidt, the former CEO of Google, warns that within five to ten years, AI could reach a level of autonomy that might pose a serious threat to humanity. Schmidt advocates for an international organization to guide policymakers in navigating the risks and opportunities posed by AI.1
Schmidt’s concerns highlight a critical issue. While AI development continues at breakneck speed – with advances in areas like Large Language Models (LLMs) and autonomous systems occurring almost weekly – the ability to implement proper security and safeguards lags behind.
Just recently, we learned that OpenAI disbanded its “AGI Readiness” team, which was tasked with advising the company on its ability to handle increasingly powerful AI and the world’s readiness to manage that technology. Miles Brundage, the head of the team, stated in his departure announcement that “neither OpenAI nor any other frontier lab is ready, and the world is also not ready.”2
This news, coupled with the disbanding of OpenAI’s “Superalignment” team earlier this year, raises serious questions about how AI safety is prioritized at the forefront of AI development. Despite significant financial investment and public commitments, OpenAI appears to be deprioritizing crucial safety research and oversight in the rush to develop and deploy cutting-edge AI systems.2 This, in my view, is deeply concerning.
As someone who has been deeply involved in this field for some time, I see a pressing need to address the risks of unintended use and its downstream consequences. How do we ensure AI systems remain aligned with human values as they become more complex and autonomous? This is a question we need to grapple with during the planning phase, before we implement our AI projects, not as an afterthought. In short, we need to make safe AI and not make AI safe – the latter is far more difficult, risky, and costly.
For sound business and short-term economic reasons, many organizations are rushing to implement AI without fully understanding the risks involved. A recent McKinsey survey of more than 100 organizations with more than $50 million in annual revenue found that 91 percent of respondents doubt their organizations are “very prepared” to implement and scale the technology safely and responsibly.3
This lack of preparedness is a serious concern. The crucial question isn’t whether we develop and use AI, but how we implement, control, and manage it. We need to ensure AI is used responsibly to prevent harm to individuals, our environment, and society.
The potential threats of Shadow AI are real and must be addressed. Fostering a culture of Responsible AI usage and promoting transparency and accountability are essential to mitigating the risks of Shadow AI, but these measures cannot stand alone. Continuous monitoring and evaluation of AI are key to ensuring that its benefits outweigh its risks, and any practical approach to managing and controlling AI risks must combine all of these elements.
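To make “continuous monitoring” concrete, here is a minimal sketch of one way to surface potential Shadow AI usage: scanning outbound web-proxy logs for requests to known AI service endpoints. The log format, column names, and domain list below are illustrative assumptions on my part, not part of any particular product or standard; a real deployment would use your own proxy data and a curated, regularly updated endpoint inventory.

```python
"""Minimal sketch: flag outbound requests to known AI API endpoints.

Assumes (hypothetically) a CSV proxy log with columns
timestamp,user,destination_host. Adapt to your environment.
"""

import csv
from collections import Counter

# Illustrative list only; maintain a curated, up-to-date inventory
# of sanctioned and unsanctioned AI endpoints in practice.
AI_API_DOMAINS = {
    "api.openai.com",
    "api.anthropic.com",
    "generativelanguage.googleapis.com",
}

def find_shadow_ai_usage(log_path: str) -> Counter:
    """Count outbound requests per (user, AI domain) pair."""
    hits: Counter = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            host = row["destination_host"].strip().lower()
            if host in AI_API_DOMAINS:
                hits[(row["user"], host)] += 1
    return hits

if __name__ == "__main__":
    # Print the heaviest users of AI endpoints first, for review.
    for (user, host), count in find_shadow_ai_usage("proxy.csv").most_common():
        print(f"{user} -> {host}: {count} requests")
```

A report like this is only a starting point: the goal is not to block employees, but to make unsanctioned AI usage visible so it can be brought under the organization’s governance and accountability processes.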
The time to act is now. Organizations must take concrete steps before Shadow AI becomes an overwhelming challenge, as unauthorized AI creeps in from every corner of the organization. Forward-thinking organizations will recognize the need to make safe AI and not make AI safe – as noted earlier, the latter is far more difficult and costly.