Understanding the EU’s AI Act: A New Era of Regulation
As of Sunday, regulators in the European Union have the authority to ban AI systems they determine pose an “unacceptable risk” of harm. The change marks a pivotal moment in the bloc’s approach to governing artificial intelligence (AI).
The first compliance deadline for the EU’s AI Act falls on February 2. The comprehensive regulatory framework, which the European Parliament approved last March after years of development, officially came into force on August 1; this first deadline opens the phased rollout of its requirements.
Scope and Risk Levels of the AI Act
The AI Act is designed to cover a wide array of use cases where AI might interact with individuals, from consumer applications to physical environments. Article 5 of the Act spells out the practices that are prohibited outright, which are the focus of this first deadline.
The EU’s approach categorizes AI systems into four broad risk levels:
- Minimal risk: Applications such as email spam filters fall under this category and will face no regulatory oversight.
- Limited risk: This includes systems like customer service chatbots, which will be subject to light-touch regulatory oversight.
- High risk: AI systems used for healthcare recommendations, for example, will face stringent regulatory oversight.
- Unacceptable risk: Applications deemed to pose an unacceptable risk are the focus of the current compliance requirements and will be prohibited entirely.
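Purely as an illustration of these four tiers, the short Python sketch below maps the example applications mentioned above to the level of oversight each tier implies. The tier names and the mapping are illustrative only; real classification depends on the Act’s detailed criteria, not on an application’s label.

```python
from enum import Enum


class RiskTier(Enum):
    """Illustrative labels for the AI Act's four broad risk levels."""
    MINIMAL = "no regulatory oversight"
    LIMITED = "light-touch regulatory oversight"
    HIGH = "stringent regulatory oversight"
    UNACCEPTABLE = "prohibited entirely"


# Hypothetical mapping using the examples cited in the article; actual
# classification turns on the Act's criteria, not on a system's name.
EXAMPLE_CLASSIFICATIONS = {
    "email spam filter": RiskTier.MINIMAL,
    "customer service chatbot": RiskTier.LIMITED,
    "healthcare recommendation system": RiskTier.HIGH,
    "social scoring system": RiskTier.UNACCEPTABLE,
}

for system, tier in EXAMPLE_CLASSIFICATIONS.items():
    print(f"{system}: {tier.name.lower()} risk -> {tier.value}")
```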
Prohibited AI Activities
The AI Act identifies several activities that are considered unacceptable and are therefore prohibited. These include:
- AI systems used for social scoring, such as building risk profiles based on a person’s behavior.
- AI that manipulates a person’s decisions subliminally or deceptively.
- AI that exploits vulnerabilities like age, disability, or socioeconomic status.
- AI that attempts to predict criminal behavior based on appearance.
- AI using biometrics to infer personal characteristics, such as sexual orientation.
- AI collecting “real-time” biometric data in public places for law enforcement purposes.
- AI inferring emotions in workplaces or schools.
- AI creating or expanding facial recognition databases by scraping images online or from security cameras.
Companies found using any of these AI applications within the EU will face significant fines, regardless of their headquarters’ location. The penalties could reach up to €35 million (~$36 million) or 7% of their annual revenue from the previous fiscal year, whichever is greater.
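To make the “whichever is greater” penalty ceiling concrete, here is a minimal Python sketch of that calculation. The function name is invented for illustration, and treating “annual revenue” as prior-year worldwide turnover is an assumption; this is a simplified arithmetic example, not legal guidance.

```python
def max_administrative_fine(prior_year_revenue_eur: float) -> float:
    """Illustrative ceiling for fines over prohibited AI practices:
    the greater of EUR 35 million or 7% of the previous fiscal year's
    revenue (simplified sketch, assumed to mean worldwide turnover)."""
    FLAT_CAP_EUR = 35_000_000
    REVENUE_SHARE = 0.07
    return max(FLAT_CAP_EUR, REVENUE_SHARE * prior_year_revenue_eur)


# Example: a firm with EUR 2 billion in prior-year revenue.
# 7% of revenue (EUR 140 million) exceeds the EUR 35 million floor.
print(max_administrative_fine(2_000_000_000))  # 140000000.0
```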
Compliance and Enforcement Timeline
While organizations are expected to be fully compliant by February 2, the enforcement of fines and other provisions will not commence immediately. Rob Sumroy, head of technology at the British law firm Slaughter and May, highlighted in an interview with TechCrunch that the next significant deadline for companies is in August. By then, the competent authorities will be identified, and the enforcement provisions will take effect.
Preliminary Pledges and Industry Response
The February 2 deadline serves as a formality in some respects. Last September, over 100 companies signed the EU AI Pact, a voluntary commitment to begin applying the principles of the AI Act ahead of its official implementation. Signatories, including Amazon, Google, and OpenAI, pledged to identify AI systems likely to be categorized as high risk under the Act.
Notably, some tech giants, such as Meta and Apple, did not sign the Pact. French AI startup Mistral, a vocal critic of the AI Act, also opted out. However, this does not imply that these companies will fail to meet their obligations, including the ban on unacceptably risky systems. Sumroy noted that most companies are unlikely to engage in the prohibited practices outlined in the Act.
For organizations, a primary concern regarding the EU AI Act is whether clear guidelines, standards, and codes of conduct will be available in time to provide clarity on compliance. Sumroy noted that the working groups have so far been meeting their deadlines on the code of practice for developers.
Possible Exemptions and Future Guidelines
The AI Act does allow for certain exceptions to its prohibitions. For instance, law enforcement agencies may use systems that collect biometrics in public places if these systems aid in performing a “targeted search” for an abduction victim or help prevent a “specific, substantial, and imminent” threat to life. Such use requires authorization from the appropriate governing body, and the Act stipulates that law enforcement cannot make decisions that produce adverse legal effects on individuals solely based on these systems’ outputs.
Additionally, the Act provides exceptions for systems that infer emotions in workplaces and schools where there is a “medical or safety” justification, such as systems designed for therapeutic use.
The European Commission, the EU’s executive branch, announced plans to release additional guidelines in “early 2025” following a consultation with stakeholders in November. However, these guidelines have yet to be published.
Sumroy also pointed out the uncertainty regarding how other existing laws might interact with the AI Act’s prohibitions and related provisions. Clarity on these interactions may not emerge until later in the year as the enforcement window approaches.
It is crucial for organizations to remember that AI regulation does not exist in isolation. Other legal frameworks, such as GDPR, NIS2, and DORA, will interact with the AI Act, potentially creating challenges, particularly around overlapping incident notification requirements. Understanding how these laws fit together will be as important as understanding the AI Act itself.
Originally Written by: Kyle Wiggers