2023 · Regulation

EU AI Act Passed

The European Union reached a political agreement on the AI Act, the world's first comprehensive legal framework for regulating artificial intelligence. The Act classifies AI systems by risk level and imposes strict requirements on high-risk applications in areas like biometrics, critical infrastructure, and law enforcement, setting a global precedent for AI governance.

In December 2023, the European Union reached a political agreement on the AI Act, making it the world's first comprehensive legal framework specifically designed to regulate artificial intelligence. After years of negotiation -- the original proposal was introduced by the European Commission in April 2021 -- the legislation represented a landmark attempt to balance innovation with the protection of fundamental rights.

The Risk-Based Framework

The AI Act's central innovation was its risk-based approach to regulation. AI systems were classified into four risk categories. Unacceptable risk applications -- such as social scoring by governments or real-time biometric surveillance in public spaces -- were banned outright. High-risk applications -- including AI used in hiring, education, law enforcement, and critical infrastructure -- faced strict requirements for transparency, accuracy, human oversight, and documentation. Limited-risk systems like chatbots required transparency disclosures. Minimal-risk applications, which comprised the vast majority of AI systems, faced no specific requirements.
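As a rough illustration, the four-tier structure described above can be sketched in Python. The tier names and the example applications come from the Act's framework as summarized here; the `classify` helper and its lookup table are hypothetical teaching aids, not a legal compliance tool.

```python
# Illustrative mapping of the AI Act's four risk tiers to example
# applications mentioned in the text. Real classification under the Act
# depends on detailed legal criteria, not a simple lookup.
RISK_TIERS = {
    "unacceptable": {"social scoring", "real-time public biometric surveillance"},
    "high": {"hiring", "education", "law enforcement", "critical infrastructure"},
    "limited": {"chatbot"},
}

def classify(application: str) -> str:
    """Return the risk tier for an application; anything not listed
    falls into the minimal-risk tier, which carries no specific duties."""
    for tier, applications in RISK_TIERS.items():
        if application in applications:
            return tier
    return "minimal"
```

The default-to-minimal fallthrough mirrors the Act's design: the vast majority of AI systems face no specific requirements unless they match a higher tier.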

The General-Purpose AI Provisions

The rapid rise of ChatGPT and other foundation models during the negotiation period forced legislators to add provisions for general-purpose AI (GPAI) models. These provisions required developers of GPAI models to provide technical documentation, comply with EU copyright law, and publish training data summaries. Models deemed to pose "systemic risk" -- presumed for those trained with more than 10^25 floating-point operations of compute -- faced additional obligations including adversarial testing, incident reporting, and cybersecurity measures.

The Negotiation Process

The legislation went through extensive negotiations between the European Parliament, the Council of the European Union, and the European Commission. Key sticking points included the scope of biometric surveillance bans (France pushed for law enforcement exemptions), the definition of high-risk AI systems, and the obligations placed on foundation model developers. The final text reflected compromises on all these issues.

Industry Response

Industry reactions were mixed. Some companies welcomed the regulatory clarity, arguing that clear rules would enable confident investment and help European AI companies demonstrate trustworthiness. Others warned that excessive regulation would drive AI development and investment to less regulated markets like the United States and China. The compliance costs, particularly for smaller companies, were a significant concern.

Global Influence

The AI Act was designed to have extraterritorial reach -- it applied to any AI system used in the EU, regardless of where it was developed. This "Brussels effect" meant that global companies would need to comply, potentially raising standards worldwide. Other jurisdictions watched closely; Brazil, Canada, and several Asian countries began developing their own AI regulations, often drawing on the EU's framework.

Enforcement and Penalties

The Act established the European AI Office to oversee enforcement and provided for substantial penalties. Violations could result in fines of up to 35 million euros or 7 percent of global annual turnover, whichever was higher -- similar in scale to GDPR penalties. National authorities were designated as the primary enforcement bodies, with the AI Office coordinating at the EU level.
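The penalty ceiling described above is a simple "whichever is higher" rule, sketched below in Python. The function name is hypothetical; the 35 million euro floor and 7 percent rate are the figures for the most serious violations as stated in the text (lower tiers of fines exist for lesser breaches).

```python
def max_fine_eur(global_annual_turnover_eur: int) -> int:
    """Upper bound on fines for the most serious AI Act violations:
    35 million euros or 7% of global annual turnover, whichever is higher.
    Uses integer arithmetic (whole euros) to avoid floating-point rounding."""
    return max(35_000_000, global_annual_turnover_eur * 7 // 100)
```

For a company with 100 million euros in turnover, the 35 million euro floor dominates; at 1 billion euros in turnover, the 7 percent cap (70 million euros) takes over.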

The Broader Context

The AI Act represented a philosophical statement about the role of regulation in shaping technological development. The EU explicitly chose to prioritize human rights and safety, even at the potential cost of innovation speed. This stood in contrast to the United States' lighter-touch approach and China's state-directed model. Whether the EU's approach would prove wise or counterproductive became one of the central debates in technology policy.

Key Figures

Thierry Breton, Margrethe Vestager, Brando Benifei, Dragos Tudorache

Lasting Impact

The EU AI Act established the world's first comprehensive legal framework for AI regulation, setting a global precedent for how governments can govern AI development and deployment. Its risk-based approach influenced regulatory discussions worldwide.

Related Events

2026 · Regulation
AI Regulation Enforcement Begins

Major provisions of the EU AI Act came into force, with regulators beginning active enforcement including audits, compliance checks, and fines for violations. Other countries followed with their own frameworks, creating a patchwork of global AI regulation that companies had to navigate. The era of self-regulation in AI effectively came to an end.