Executive Insights, November 2022

A Europe fit for the digital age

Kathrine Stampe Andersen

Vice Chair of the Board, 2021.AI

After years of negotiations, the European Commission has finalized a common approach to Responsible AI. Here are the natural next steps.

Global enterprises and mid-market companies are rapidly innovating and leveraging emerging technologies like AI to grow revenues, reduce costs, increase profits, and discover new value in product/service offerings. But young, Silicon Valley-based competitors have disrupted legacy players with a fearless approach to scoping markets, value chains, and customers. Platform value creation and growth hacking models from “Big Tech,” operating across all industries, have reversed every commercial trend we have ever known: the customer is now the product, and the product is the customer.

Harvard Business School Professor Shoshana Zuboff introduced us to the concept of “surveillance capitalism” – which arose as advertising companies, led by Google’s AdWords, saw the possibilities of using personal data to target consumers more precisely. Surveillance capitalism represents an economic logic that has hijacked the digital for its own purposes.

The logic of surveillance capitalism begins with claiming private human experiences as raw material for production and sales. It wants your walk in the park, your online browsing, your hunt for parking, and even your voice at the breakfast table. These experiences are translated into behavioral data. Some of this data may be applied to product or service improvements; the rest, valued for its predictive power, is sold to business customers in markets that trade in human futures.

In consequence, new political megatrends have emerged: Privacy – National security – Surveillance – Nationalism – Investor activism – Sustainability – Responsible thinking. The number of operational activities with which CxOs and board directors must be familiar is growing rapidly. When you add the exponential development in emerging technologies, innovation, and digital business models, you have future board work in a nutshell. And the overall questions become:

How do you maintain your market position? What is required from board executives? And what are the board’s responsibilities?

When it comes to emerging tech and Responsible AI, there are three core political dimensions that I recommend every senior executive in Europe follow at the moment:

  1. Politics: A Europe Fit for the Digital Age
  2. Leadership: Algorithmic Leadership
  3. Megatrends: The Global Power Tech Race between the US, China, and Europe

In this blog, I will touch upon Dimension #1: “A Europe Fit for the Digital Age”—the overall European strategy addressing Big Tech. Future blogs will follow up on #2 and #3.

A Europe fit for the digital age

The European Commission has laid out a legal framework that ensures the safety of users online, establishes AI Governance with human rights at the forefront, and maintains a fair and open online platform environment. The framework’s primary scope concerns “gatekeepers”, defined as platforms with over 45 million users, and is addressed in two packages: the Digital Markets Act (DMA) and the Digital Services Act (DSA).

The logic behind the framework is valuable for every senior executive to understand because that logic reveals the political thinking behind AI regulations, laying out the commercial scope for all of us going forward.

The moment to act is NOW

The European Commission made its initial regulatory proposals in December 2020. On 25 March 2022, a political agreement was reached on the Digital Markets Act (DMA), and on 23 April 2022, a political agreement was reached on the Digital Services Act (DSA). Following the adoption of the Digital Services package at first reading by the European Parliament in July 2022, both the DSA and the DMA have been adopted by the Council of the European Union, signed by the presidents of both institutions, and published in the Official Journal of the European Union.

The DMA entered into force on 1 November 2022, and the DSA will enter into force on 16 November 2022. The DSA will be directly applicable across the EU and will apply fifteen months after entry into force or from 1 January 2024, whichever is later.

The DSA and DMA at a glance

The two acts are heavy reading, but I’ve listed the highlights below. As you go through them, please keep in mind that the next step is to apply the same regulatory logic to Responsible AI more broadly within companies.

  • Algorithmic accountability: The European Commission, as well as member states, will have access to the algorithms of very large online platforms.
  • Swift removal of illegal content online, including products and services: A clearer “notification and action procedure” will empower users to report illegal content online and require online platforms to act swiftly.
  • Online protection of fundamental rights: Stronger safeguards will ensure that notices are processed in a non-arbitrary, non-discriminatory manner and with respect for fundamental rights, including freedom of expression and data protection.
  • Responsible online marketplaces: Marketplaces must ensure that consumers can buy safe products or services online by strengthening controls to prove the reliability of the information provided by retailers (the “Know Your Business Customer” principle) and by taking action to prevent illegal content from appearing on their platforms, including through spot checks.
  • Better protection of cyberviolence victims: Non-consensual sharing of intimate images (“revenge” sharing) will be subject to immediate removal.
  • Stricter penalties: Online platforms and search engines can be fined up to 6% of their worldwide turnover. For very large online platforms (those with more than 45 million users), the European Commission will have the exclusive power to demand compliance.
  • Reduced burdens and more adaptation time for SMEs: Extended periods to apply the new rules will support innovation in the digital economy. The Commission will closely monitor the potential economic impact of the new obligations on small businesses.

Safer online space for users

  • New transparency obligations will require platforms to better inform users about how content is recommended to them (recommendation systems) and to offer at least one option that is not based on profiling.
  • Online advertising: Users will have better control over how their personal data is used. Targeted advertising based on sensitive data (e.g., sexual orientation, religion, or ethnicity) will be prohibited.
  • Protection of minors: Platforms accessible to minors must take specific measures to protect them, including by fully prohibiting targeted advertising.
  • Bans on manipulating users’ choices through “dark patterns”: Online platforms and marketplaces cannot push people to use their services by, for example, promoting a particular choice or encouraging the recipient to change choices via disruptive pop-ups. Moreover, canceling a service should be as easy as subscribing to it.
  • Compensation: Recipients of digital services will have the right to seek compensation for damages or losses suffered as a result of platform infringements.

Harmful content and disinformation

Very large online platforms must comply with stricter obligations under the DSA, proportionate to the significant societal risks these platforms pose when spreading illegal and harmful content, including disinformation.

  • Very large online platforms will have to assess and mitigate systemic risks and be subject to independent audits every year. In addition, major platforms that use so-called “recommendation systems” (algorithms that determine what users see) must provide at least one option that is not based on profiling.
  • Specific measures in times of crisis: When a crisis occurs, such as a threat to public security or health, the European Commission may require very large platforms to limit any urgent threats on their platforms. These specific actions are limited to three months.

Is your organization ready? If not, you must start to focus on how you will define and apply Responsible AI.

Responsible AI principles

To prepare for a digital age in Europe, you should take care to ensure that your AI projects are:

Transparent & explainable

AI projects should be transparent and explainable to all stakeholders, including developers and end-users, at the highest level of explainability possible. The decisions and actions of an AI project should be communicated, including how broader systems leverage AI.

Fair & equitable

AI projects should be fair and equitable to all humans. There should be a focus on avoiding and minimizing unfair and unexpected biases while also ensuring accessibility and universal design throughout the AI project lifecycle.

Accountable

All AI projects need to be held accountable by humans throughout the AI project lifecycle. Human oversight needs to be applied, with clear processes for auditability by end-users, regulators, and management within a company.

Respectful of privacy & data governance

AI projects should be respectful of privacy and include thorough data and AI Governance. Embedding security in the design and deployment of AI projects protects users against unintentional and malicious harm.

Robust & reliable

AI projects should perform reliably and be robust over time. Stability ensures that user trust is not compromised and that disparate impacts are avoided in AI project outcomes. Reliable AI projects should consider the social and environmental impacts of deployment and operation.

Kathrine Stampe Andersen

Vice Chair of the Board, 2021.AI

Kathrine is Vice Chair of the Board at 2021.AI. She has 10+ years of experience in the tech, telco, and media industries. Her previous roles include Senior VP at the telecom incumbent TDC Group and Market VP at the Danish Chamber of Commerce.
