Are you up to date with the new EU AI Regulation?
As summer approaches, my mind wanders back five years, to April 2016, when the GDPR was originally adopted by the EU, and to the broad and profound impact that regulation had, not just on the EU, but ultimately on society globally.
When GDPR came into force in 2018, it was a big change for companies and organizations of all sizes around the world, because from then on data privacy, data protection, and the right to be forgotten had to be respected by everyone doing business in the EU. It also led to new job titles (DPOs, CISOs) and new responsibilities within companies, and new rights emerged for citizens of any country. GDPR compliance truly took years of work for many companies and organizations across the world, and the regulation ultimately paved the way for similar legislation in countries and jurisdictions such as Chile, Japan, Brazil, South Korea, Argentina, Kenya, California, and Canada.
Now we are back at it, this time focused not on data protection but on the regulation of AI. The new AI regulation, released by the EU on 21 April 2021, defines the approach the EU will take on AI, and the similarities to the GDPR are striking.
Another interesting aspect is how closely the new AI legislation aligns with how the standardization communities operate, essentially paving the way for possible future ISO/IEEE standards in this area.
Highlights of the new AI regulation
- The legislation is broad and squarely in scope for companies and organizations within finance, insurance, industry, life sciences/healthcare, the public sector, and others.
- It applies to all companies, both inside and outside the EU, provided that their AI systems affect EU citizens.
- High-risk AI is determined using risk assessments, conformity assessments, and a definition of harm, meaning that any company operating in the EU should build proper procedures for handling this for all of its AI.
- Harm can be physical harm, financial harm, systemic political/societal harm, or a violation of human rights.
- Any purchaser or provider of high-risk AI must take into account transparency, data sets, documentation/record keeping, human oversight, robustness, accuracy, and security when placing AI systems on the market.
- Any provider of high-risk AI must ensure that systems and procedures are in place so that conformity assessments, tests, post-market surveillance, technical specifications, communication with all relevant stakeholders, and traceable documentation are carried out and stored.
The legislation also proposes:
- Sandboxing schemes to ensure flexibility and support for SMEs and startups.
- An EU database for high-risk AI, to which providers must report.
- Creation of an EU AI Board handling ongoing policy changes and ensuring that requirements are kept up to date.
Fines of up to €30 million or up to 6% of worldwide annual turnover
Finally, the legislation proposes that companies and organizations can be fined up to €30 million or up to 6% of their total worldwide annual turnover, whichever is greater.
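The "whichever is greater" rule above can be sketched as a small calculation. This is an illustrative example only (the function name and figures are our own, not from the regulation text), assuming the proposed cap of €30 million or 6% of worldwide annual turnover:

```python
# Hypothetical sketch: the proposed fine cap is the greater of
# EUR 30 million and 6% of total worldwide annual turnover.
FLAT_CAP_EUR = 30_000_000
TURNOVER_RATE = 0.06

def max_fine_eur(worldwide_annual_turnover_eur: float) -> float:
    """Return the proposed upper bound of the fine in EUR."""
    return max(FLAT_CAP_EUR, TURNOVER_RATE * worldwide_annual_turnover_eur)

# For a company with EUR 1 billion turnover, 6% (EUR 60M) exceeds EUR 30M:
print(max_fine_eur(1_000_000_000))  # 60000000.0
# For a company with EUR 100 million turnover, the EUR 30M floor applies:
print(max_fine_eur(100_000_000))    # 30000000
```

In other words, the flat €30 million figure acts as a floor for large fines; only companies with turnover above €500 million would hit the percentage-based cap instead.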
At 2021.AI we believe this legislation will enable Europe and European organizations to position themselves as global leaders in the adoption of best practices and cutting-edge AI systems and applications used in a trustworthy and responsible way. This will allow Europe to lead the way in AI regulation, just as it did with the GDPR for data.
About the authors
Rasmus is CTO at 2021.AI. He has extensive experience in roles such as Program Manager, Lead Architect, and Senior Consultant for international financial, energy, and telecom customers. His skills include leadership, mentoring, and enterprise architecture.