Executive Insights, SEPTEMBER 2024
My perspective on open-source AI
MIKAEL MUNCK
CEO AND FOUNDER, 2021.AI
AI Governance: Regulation and Innovation
A recent article in The Economist expressed concern that regulating potential future harms of new technologies like open-source AI (here with a focus on Large Language Models, LLMs) before those harms materialize will stifle innovation.¹ I respectfully disagree. Regulation, when done thoughtfully, isn't the enemy of innovation; it's the foundation upon which sustainable progress is built. In fact, I'd argue that the right kind of regulatory framework can actually accelerate the open-source AI revolution.
The concern that the EU is over-regulating AI is misplaced. In reality, regulation and innovation are not opposing forces; they can, and should, go hand in hand. Strong AI Governance is the key to sustainable and responsible growth, and neglecting it can lead to disastrous consequences.
By providing clear guidelines and ethical boundaries, regulation creates a stable and predictable environment that encourages Responsible AI development and attracts investment. Additionally, the notion that regulation should only address "known harms" ignores the dynamic and rapidly evolving nature of AI. We must proactively address AI risk and unintended consequences, rather than waiting for harm to occur before acting.
I believe open-source AI can unleash incredible progress, but without proper oversight and regulation, it can also be exploited for harm. The potential for misuse, bias, and unintended consequences is undeniable.
AI Governance is not about stifling innovation. It's a proactive and strategic approach that empowers organizations to leverage the full potential of AI while minimizing its downsides. By integrating ethical considerations, risk management, and transparency into AI development and deployment, we can create a future where AI serves as a force for good across all business sectors.
The EU AI Act’s principle of “human oversight”² echoes this approach. AI should serve us, not the other way around. To ensure this, we need to empower those shaping AI’s future with the right tools and frameworks. It’s about fostering an environment where human judgment guides AI innovation, ensuring its potential is harnessed for the benefit of all.
California’s recent SB 1047³ reinforces the growing recognition of the need for Responsible AI, AI Governance, and thoughtful regulation. It’s a much-needed step in the right direction.
This reminded me of a stark prediction from Gartner that by 2027, 60% of organizations will fail to realize the anticipated value of their AI use cases due to incohesive ethical AI Governance frameworks.⁴ This resonates deeply with me. While AI Governance is often overlooked and underfunded, it’s the absolute key to unlocking AI’s full potential.
Ethics must be a core part of the conversation. The principles need to be translated into practical guidelines that people will actually adopt, ensuring adherence to ethical standards and regulations. I believe that regulation and innovation aren’t mutually exclusive; they can thrive together.
As AI continues to evolve, Responsible AI, AI Governance, and appropriate regulation will only become more critical. We all have a collective responsibility to ensure that AI technologies are developed and used for the benefit of all. By embracing a proactive and collaborative approach, we can foster trust and build a brighter future through the responsible development, implementation, and operation of AI.
Sources: ¹The Economist; ²EU AI Act, Article 14; ³The Verge; ⁴Gartner