Technology
What do the groundbreaking AI regulations in Europe entail?
Highlights
- EU policymakers and lawmakers achieve a historic milestone with the world’s first comprehensive AI regulations, setting a new standard for global AI governance.
- Stricter rules are established for high-risk AI systems, including mandatory fundamental rights impact assessments and compliance obligations for access to the EU market, to mitigate potential impacts on health, safety, and fundamental rights.
- The use of real-time biometric identification systems by law enforcement is permitted in specific cases, such as searching for crime victims and terrorism suspects, striking a balance between security needs and privacy considerations.
- General Purpose AI and foundation models are subject to transparency requirements, ensuring accountability and responsible AI development.
- Explicit bans cover applications including sensitive biometric categorization, untargeted scraping of facial images for facial recognition, emotion recognition in workplaces, and social scoring based on personal traits.
- Enforced sanctions, with fines varying by company size, underscore the EU’s commitment to ensuring compliance and accountability in the development and deployment of AI technologies.
In a historic development, European Union policymakers and lawmakers have reached a deal on the world’s first comprehensive set of regulations governing the use of artificial intelligence (AI). The groundbreaking agreement, expected to enter into force early next year and apply from 2026, positions the EU at the forefront of global AI governance.
Here are the key highlights of the landmark AI regulations:
High-Risk Systems
AI systems categorized as high-risk, with significant potential to affect health, safety, fundamental rights, the environment, democracy, elections, and the rule of law, must comply with stringent requirements. These include undergoing a fundamental rights impact assessment and fulfilling compliance obligations before the systems can be placed on the EU market. AI systems posing only limited risks face lighter transparency obligations, such as disclosure labels indicating AI-generated content.
Use of AI in Law Enforcement
The use of real-time remote biometric identification systems in public spaces by law enforcement is permitted for specific purposes, including identifying victims of crimes such as kidnapping, human trafficking, and sexual exploitation. These systems can also be used to prevent specific and present terrorist threats and to track down individuals suspected of serious crimes.
General Purpose AI Systems (GPAI) and Foundation Models
Transparency requirements are imposed on GPAI and foundation models, including technical documentation, compliance with EU copyright law, and detailed summaries of the content used for training. High-impact GPAI and foundation models with systemic risks face additional scrutiny, including model evaluations, risk assessments, adversarial testing, and reporting of serious incidents to the European Commission.
Prohibited AI Applications
The regulations explicitly prohibit several AI applications: biometric categorization systems based on sensitive characteristics, untargeted scraping of facial images to build facial recognition databases, emotion recognition in workplaces and educational institutions, and social scoring based on personal characteristics. AI systems that manipulate human behaviour to circumvent free will or exploit people’s vulnerabilities are likewise strictly prohibited.
Sanctions for Violations
Fines for violations vary with the size of the company involved, ranging from €7.5 million or 1.5% of global annual turnover up to €35 million or 7% of global annual turnover.
This comprehensive set of regulations reflects the EU’s commitment to responsible and ethical AI practices, setting a precedent for the global community. As companies gear up for compliance, the EU’s groundbreaking approach could shape the future of AI governance on the international stage.