Responsible AI can help companies compete

Technology developers cannot predict the future, but they will still have to gaze into the crystal ball to build technology that accounts for long-term social impact. In the Confederation of Danish Industry, we want to help guide this anticipation. To do so, we are developing a labeling scheme for IT security and responsible data use in collaboration with the Danish Chamber of Commerce, SMEdenmark, and the Danish Consumer Council. The purpose is to make ethical AI a Danish position of strength.

Christian Hannibal | CEO of the Danish ICT and Electronics Federation | August 2020

What is fairness?

The concepts of “ethical” and “unethical” are complicated to define and even harder to measure. Unfair bias in algorithms is a well-known example of unethical AI. However, fairness is subjective: it differs from person to person and changes with the situation. Consider AI for decision-making in the healthcare sector, where an application may treat genders differently. Some diagnoses are more likely to occur in women than in men and vice versa, so this type of differentiated decision-making can be fair and ethical. Favoring candidates in a hiring process based on gender, by contrast, is discriminatory and unethical.

Nevertheless, humans are not the gold standard for decision-making. We all carry biases, conscious and unconscious. But we need to hold AI to higher expectations, especially when it supports decisions about human beings.

Another dimension worth examining is the level of impact. We need to distinguish between levels of impact and risk. The healthcare example above can involve both high and low risk: AI for diagnosis and decision support has a high impact on human lives, whereas AI for staff scheduling among healthcare professionals has little direct effect on them. It is therefore important to distinguish between levels of risk rather than between, e.g., sectors.

The Danish labeling scheme for IT security and responsible data use

Why is the Danish labeling scheme for IT security and responsible data use a step forward toward ethical AI? We believe the key to ethical AI is building trustworthy AI, and to do so, we need clear, measurable guidelines for AI developers and buyers.

The labeling scheme will address ethical questions by offering companies a clear framework for ethical AI. The aim is to create trustworthy, human-centric algorithms. Within this framework, providers and developers take responsibility for ensuring transparency and explainability (XAI) as well as high model and data quality.

Human-centric means that data scientists and other AI developers should carry out a human impact assessment involving all relevant stakeholders. Transparency is at the core of how AI applications should be developed: we need to avoid black boxes. AI developers must be able to explain how an AI application was developed, which models and data were used, and the reasons for using them. Finally, the purpose must be clear; there is no need to implement AI just for the sake of it.

The Danish way

The hope is to make trustworthy AI a Danish position of strength. Danish citizens are known for their high trust in authorities and institutions, so it is only natural that we develop the key standard for trustworthy and ethical AI. We should build on this trust and share our experiences and learnings with the rest of Europe. AI applications can deliver remarkable impact and results, but only if we take responsibility and set ethical standards.

In the Confederation of Danish Industry, we believe that a labeling scheme for IT security and responsible data use is an excellent start on the ethical AI journey and a solution to how Denmark, and Europe, can compete with other markets. Using AI for good is not a burden. It is a way of making us competitive commercially as well as ethically.

About the author

Christian Hannibal
CEO of the Danish ICT and Electronics Federation

Christian Hannibal has worked at the intersection of technology and policy for more than 13 years: first as a civil servant in the Ministry of Science, Technology and Innovation, and for the past several years in various roles in the Confederation of Danish Industry.
