
After years of negotiations, the European Commission has finalized a common approach to Responsible AI. Here are the natural next steps.
Global enterprises and mid-market companies are rapidly innovating and leveraging emerging technologies like AI to grow revenues, reduce costs, increase profits, and discover new value in product and service offerings. But young, Silicon Valley-based competitors have disrupted legacy players with a fearless approach to scoping markets, value chains, and customers. Platform value creation and growth-hacking models from “Big Tech,” operating across all industries, have reversed every commercial trend we have ever known: the customer is now the product, and the product is the customer.
Harvard Business School Professor Shoshana Zuboff introduced us to the concept of “surveillance capitalism,” which arose as advertising companies, led by Google’s AdWords, saw the possibilities of using personal data to target consumers more precisely. Surveillance capitalism represents an economic logic that has hijacked the digital medium for its own purposes.
The logic of surveillance capitalism begins with claiming private human experiences as raw material for production and sales. It wants your walk in the park, your online browsing, your hunt for parking, and even your voice at the breakfast table. These experiences are translated into behavioral data. Some of this data may be applied to product or service improvements; the rest, valued for its predictive power, is sold to business customers in markets that trade in human futures.
In consequence, new political megatrends have emerged: privacy, national security, surveillance, nationalism, investor activism, sustainability, and responsible thinking. The number of operational activities with which the CxO and board levels must be familiar is growing rapidly. When you add the exponential development in emerging technologies, innovation, and digital business models, you have the future of board work in a nutshell. And the overall questions become:
How do you maintain your market position? What is required from board executives? And what are the board’s responsibilities?
When it comes to emerging tech and Responsible AI, there are three core political dimensions that I recommend every senior executive in Europe follow at the moment.
In this blog, I will touch upon Dimension #1: “A Europe Fit for the Digital Age”—the overall European strategy addressing Big Tech. Future blogs will follow up on #2 and #3.
The European Commission has laid out a legal framework that ensures the safety of users online, establishes AI governance with human rights at the forefront, and maintains a fair and open online platform environment. The framework’s primary scope concerns “gatekeepers”, defined as platforms with more than 45 million monthly active users in the EU, and is addressed in two packages: the Digital Markets Act (DMA) and the Digital Services Act (DSA).
The logic behind the framework is valuable for every senior executive to understand because that logic reveals the political thinking behind AI regulations, laying out the commercial scope for all of us going forward.
The European Commission made its initial regulatory proposals in December 2020. On 25 March 2022, a political agreement was reached on the Digital Markets Act (DMA), and on 23 April 2022, a political agreement was reached on the Digital Services Act (DSA). Following the adoption of the Digital Services package at first reading by the European Parliament in July 2022, both the DSA and the DMA were adopted by the Council of the European Union, signed by the presidents of both institutions, and published in the Official Journal of the European Union.
The DMA entered into force on 1 November 2022, and the DSA will enter into force on 16 November 2022. The DSA will be directly applicable across the EU and will apply fifteen months after entry into force or from 1 January 2024, whichever comes later, which works out to 17 February 2024.
The two acts are heavy reading, but I’ve listed the highlights below. As you go through them, please keep in mind that the next step is to apply the same regulatory logic to Responsible AI for companies in general.
Safer online space for users
Harmful content and disinformation
Very large online platforms must comply with stricter obligations under the DSA, proportionate to the significant societal risks they pose when spreading illegal and harmful content, including disinformation.
Is your organization ready? If not, you must start to focus on how you will define and apply Responsible AI.
To prepare for a digital age in Europe, you should take care to ensure that your AI projects meet the following principles:
AI projects should be transparent and explainable to all stakeholders, including developers and end-users, at the highest level of explainability possible. The decisions and actions of an AI project should be communicated, including how broader systems leverage AI.
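To make this tangible, here is a minimal Python sketch of one way to give stakeholders a plain-language view of which inputs drive a model’s decisions, using scikit-learn’s permutation importance. The feature names and data are hypothetical, and this is an illustration rather than a complete explainability program:

```python
# Minimal explainability sketch: rank which inputs drive a model's decisions
# and print a plain-language summary for non-technical stakeholders.
# Feature names and data are synthetic and purely illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(42)
feature_names = ["age", "income", "tenure_months"]  # hypothetical features
X = rng.normal(size=(500, 3))
y = (X[:, 1] + 0.5 * X[:, 2] > 0).astype(int)  # outcome driven by two features

model = RandomForestClassifier(random_state=0).fit(X, y)

# Permutation importance: how much does shuffling each feature hurt accuracy?
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in sorted(
    zip(feature_names, result.importances_mean), key=lambda p: -p[1]
):
    print(f"{name}: accuracy drops by {score:.3f} when this input is scrambled")
```

A summary like this can be shared with end-users and management alike, which is exactly the multi-stakeholder communication the principle calls for.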
AI projects should be fair and equitable to all humans. There should be a focus on avoiding and minimizing unfair and unexpected biases while also ensuring accessibility and universal design throughout the AI project lifecycle.
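As an illustration, a first-pass fairness check can be as simple as comparing decision rates across groups. The sketch below, with hypothetical group labels and a hypothetical tolerance, computes a demographic parity gap:

```python
# Minimal fairness check sketch: compare positive-decision rates across groups.
# Group labels, decisions, and the tolerance are hypothetical; a real review
# needs domain context and legal guidance.
import numpy as np

def selection_rates(predictions: np.ndarray, groups: np.ndarray) -> dict:
    """Share of positive decisions per group (demographic parity view)."""
    return {g: float(predictions[groups == g].mean()) for g in np.unique(groups)}

preds = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])            # model decisions
grp = np.array(["A", "A", "A", "B", "B", "B", "A", "B", "B", "A"])

rates = selection_rates(preds, grp)
gap = max(rates.values()) - min(rates.values())
print(rates)
print(f"Demographic parity gap: {gap:.2f}")
if gap > 0.10:  # hypothetical tolerance set by policy, not by this sketch
    print("Gap exceeds tolerance: route model for fairness review")
```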
All AI projects need to be held accountable by humans throughout the AI project lifecycle. Human oversight needs to be applied with clear processes for auditability from end-users, regulators, and management within a company.
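One way to operationalize that oversight, sketched below with an illustrative model version and stand-in scoring logic, is to wrap every model call in an append-only decision log that auditors can replay:

```python
# Minimal auditability sketch: record every model decision with its inputs,
# model version, and timestamp so end-users, regulators, and management can
# trace it later. All names and logic here are illustrative stand-ins.
import functools
import json
import time
import uuid

MODEL_VERSION = "credit-scorer-1.4.2"  # hypothetical version tag

def audited(predict_fn):
    @functools.wraps(predict_fn)
    def wrapper(features: dict) -> dict:
        decision = predict_fn(features)
        record = {
            "audit_id": str(uuid.uuid4()),
            "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
            "model_version": MODEL_VERSION,
            "inputs": features,
            "decision": decision,
        }
        with open("decision_audit.log", "a") as log:  # append-only trail
            log.write(json.dumps(record) + "\n")
        return decision
    return wrapper

@audited
def predict(features: dict) -> dict:
    score = 0.7 if features.get("income", 0) > 50_000 else 0.3  # stand-in logic
    return {"approved": score > 0.5, "score": score}

print(predict({"income": 62_000, "tenure_months": 18}))
```

Pinning the model version in each record is the design choice that matters here: it lets an auditor tie any individual decision back to the exact model that made it.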
AI projects should be respectful of privacy and include thorough data and AI Governance. Embedding security in the design and deployment of AI projects protects users against unintentional and malicious harm.
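A privacy-by-design pipeline might, for example, pseudonymize direct identifiers before records ever reach model training. The sketch below uses a keyed hash, with a placeholder key and hypothetical field names:

```python
# Minimal privacy-by-design sketch: replace direct identifiers with stable,
# non-reversible tokens before records enter an AI pipeline. The key and
# field names are placeholders; key management belongs in a vault, not code.
import hashlib
import hmac

SECRET_KEY = b"rotate-me-in-a-secrets-manager"  # placeholder, never hard-code
DIRECT_IDENTIFIERS = {"name", "email", "phone"}

def pseudonymize(record: dict) -> dict:
    """Swap direct identifiers for keyed-hash tokens; keep other fields."""
    safe = {}
    for field, value in record.items():
        if field in DIRECT_IDENTIFIERS:
            digest = hmac.new(SECRET_KEY, str(value).encode(), hashlib.sha256)
            safe[field] = digest.hexdigest()[:16]
        else:
            safe[field] = value
    return safe

print(pseudonymize({"name": "Ada Lovelace", "email": "ada@example.com", "income": 55_000}))
```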
AI projects should perform reliably and be robust over time. Stability ensures that user trust is not compromised and that disparate impacts are avoided in AI project outcomes. Reliable AI projects should consider the social and environmental impacts of deployment and operation.
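Reliability over time is often monitored by testing live inputs against the training baseline. The sketch below, on synthetic data with a hypothetical alert threshold, flags drift with a two-sample Kolmogorov–Smirnov test:

```python
# Minimal robustness sketch: flag input drift by comparing live feature values
# against the training baseline. Data and the alert threshold are hypothetical;
# production monitoring would track many more signals than one feature.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(7)
training_income = rng.normal(loc=50_000, scale=10_000, size=5_000)  # baseline
live_income = rng.normal(loc=58_000, scale=10_000, size=500)        # shifted

stat, p_value = ks_2samp(training_income, live_income)
print(f"KS statistic={stat:.3f}, p-value={p_value:.4f}")
if p_value < 0.01:  # hypothetical alert threshold
    print("Input distribution has drifted: review or retrain before trusting outputs")
```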