EU AI Act: European Parliament and Council reach provisional agreement

The AI Act was proposed by the European Commission in April 2021 (“the Proposal”). On 8 December 2023, after much debate, the European Parliament and Council reached a provisional agreement on the Act (“the Compromise”). The text is not yet available, but some of the reported changes are described below.

Definition of AI
The Proposal defined an AI System as software that is developed with one or more of the techniques and approaches listed in the Proposal (including machine learning, logic and knowledge-based approaches and statistical approaches) which can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with. The Compromise adopts a definition closer to that adopted by the OECD. The current OECD definition is: “a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment.”

Prohibited AI
The Proposal prohibited certain types of AI, such as: AI that uses subliminal techniques, or exploits vulnerabilities (e.g. due to age or disability), to materially distort a person’s behaviour in a way that causes or is likely to cause harm; social scoring by public authorities leading to detrimental or unfavourable treatment; and real-time remote biometric identification (“RBI”) systems in publicly accessible places for law enforcement purposes (the latter subject to certain exceptions, e.g. to search for victims of crime and to prevent imminent threats, with prior judicial authorisation). The Compromise will also prohibit untargeted scraping of facial images from the internet or CCTV to create facial recognition databases, as well as predictive policing software. It also adds additional safeguards and narrows the exceptions to the use by law enforcement of real-time RBI systems in public spaces.

High-Risk AI
Where AI systems are not prohibited, the Proposal set out obligations on their providers and deployers. If the AI is high risk (determined by reference to a lengthy list of AI applications set out in the Act itself, such as real-time or post RBI systems (a post RBI system being one that is not a real-time RBI system), safety components in critical infrastructure, and AI used in recruitment and education), the Proposal imposed risk management requirements, requirements around the data used to train, test or validate the AI, requirements to ensure the AI is capable of record keeping, and requirements relating to transparency, human oversight and cybersecurity.

According to reports, the Compromise introduces a requirement for a fundamental rights impact assessment for high-risk AI deployed by public sector bodies as well as banks and insurance companies. New types of high-risk AI have also been agreed, including AI used to influence elections, but Parliament’s proposal to include recommender systems of large social media platforms does not appear to have been accepted. Reports also suggest that post RBI will be prohibited, subject to some exceptions for law enforcement use.

Certain types of AI will fall outside these requirements even when used in high-risk situations, e.g. where the AI performs a narrow procedural or preparatory low-risk task.

Non-High-Risk AI
For non-high-risk AI, the Proposal included an obligation to inform users that they are interacting with AI. Similar obligations were proposed for deployers of emotion recognition systems (which identify or infer emotions or intentions based on biometric data) and biometric categorisation systems (which assign natural persons to specific categories, such as sex, age, hair or eye colour, ethnic origin or sexual or political orientation, based on biometric data). The Compromise prohibits emotion recognition in the workplace and educational institutions and biometric categorisation to infer sensitive data (e.g. sexual orientation and religious beliefs).

The Proposal did not impose any transparency obligations in respect of AI-generated content other than “deep fakes”, where it proposed an obligation to disclose that the content had been artificially generated or manipulated. According to reports, the co-legislators have addressed concerns raised about how this provision would apply to computer-generated imagery (e.g. in a video game), but the detail of the agreed compromise is subject to confirmation.

General Purpose AI and Foundation Models
The Compromise adds new provisions on general purpose AI (systems that can be used for many different purposes), foundation models (large systems capable of performing a wide range of distinctive tasks, e.g. generating video, audio, text, images and code) and “high impact” foundation models (to be defined by the computing power used for their training). These would be subject to obligations relating to transparency, preparation of technical documentation, sharing summaries of training data and processes to ensure respect for copyright law (e.g. respecting rightsholders’ opt-outs from the text and data mining exceptions under EU copyright law), with high-impact models being subject to further obligations relating to model evaluations, risk assessments and mitigation, testing, reporting, cybersecurity and energy efficiency.

Fines
The Compromise contains a sliding scale of fines, with the highest fines applying to breach of the rules on prohibited AI: a maximum of €35m or 7% of worldwide annual turnover for the preceding year, whichever is higher (the Proposal set the maximum at €30m or 6%).

Next Steps
Once the wording of the text is finalised, the Compromise will have to be endorsed by Member States’ representatives and then formally adopted by the Parliament and Council. The Act will apply two years from the date it enters into force, save that the Compromise provides for some provisions (e.g. those on prohibited AI) to apply earlier. We will need to wait for the final text to assess the full practical impact of this and other aspects of the Compromise.
