EU AI Act: Commission publishes Guidelines on definition of AI systems

The European Commission has published Guidelines on the definition of an AI system.

Under the EU AI Act, only systems that meet the definition of ‘AI system’ in Article 3(1) fall within its scope. As the Guidelines explain, that definition comprises seven main elements: (1) a machine-based system; (2) that is designed to operate with varying levels of autonomy; (3) that may exhibit adaptiveness after deployment; (4) and that, for explicit or implicit objectives; (5) infers, from the input it receives, how to generate outputs; (6) such as predictions, content, recommendations, or decisions; (7) that can influence physical or virtual environments.

The Guidelines expand upon each of these seven elements with the aim of providing non-binding advice to assist organisations in determining whether their systems fall within the scope of the Act.

  1. Machine-based System

The Guidelines explain that “all AI systems are machine-based, since they require machines to enable their functioning, such as model training, data processing, predictive modelling, and large-scale automated decision making”. They also confirm that ‘machine-based’ covers a wide variety of computational systems, including quantum computing systems.

  2. Autonomy

According to the Guidelines, all systems that are “designed with some reasonable degree of independence of action fulfil the condition of autonomy in the definition of an AI system”. Systems that are designed to operate “solely with full manual human involvement and intervention” (for example, through manual controls or through automated systems-based controls which allow humans to delegate or supervise system operations) will not fall within the scope of the Act. However, if a system is designed to be able to generate an output that has not been “manually controlled, or explicitly and exactly specified by a human”, it will exercise “some degree of independence of action” and therefore be caught.

  3. Adaptiveness

Adaptiveness is closely related to, but distinct from, autonomy. The Guidelines state that adaptiveness refers to “self-learning capabilities, allowing the behaviour of the system to change while in use”. They also make clear that the definition in the Act refers to systems that “may” exhibit adaptiveness, meaning that a system does not necessarily have to possess adaptiveness or self-learning capabilities after deployment to fall within the scope of the Act.

  4. AI System Objective

Recital 12 of the EU AI Act states that “the objectives of the AI system may be different from the intended purpose of the AI system in a specific context”. The Guidelines explain that objectives are “internal to the system, referring to the goals of the task to be performed and their results”, whereas an intended purpose is “externally oriented and includes the context in which the system is designed to be deployed and how it must be operated”. The Guidelines also state that objectives can be ‘explicit’ insofar as they are clearly stated goals which are directly encoded by the developer into the system, or ‘implicit’ and “deduced from the behaviour or underlying assumptions of the system”.

  5. Inferencing how to generate outputs

The Guidelines state that the capability to infer is “a key, indispensable condition that distinguishes AI systems from other types of systems”. According to Recital 12 of the Act, the capability to infer “refers to the process of obtaining the outputs, such as predictions, content, recommendations, or decisions, which can influence physical and virtual environments, and to a capability of AI systems to derive models or algorithms, or both, from inputs or data”. The Guidelines explain in detail the different techniques that enable AI systems to infer how to generate outputs, including (1) machine learning approaches that employ supervised, unsupervised, self-supervised, and reinforcement learning; and (2) deep learning.

The Guidelines also refer to systems that have the capacity to infer but nonetheless fall outside the scope of the definition of an AI system in the Act “because of their limited capacity to analyse patterns and adjust autonomously their output”. These include systems for improving mathematical optimisation, single prediction systems, systems based on classical heuristics, and basic data processing.
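By way of illustration only (this example does not appear in the Guidelines), the short Python sketch below contrasts a classical heuristic, whose decision rule is explicitly and exactly specified by a human, with a simple supervised learning model that derives its decision rule from training data. The spam-filtering scenario, the function names, and the use of scikit-learn are all illustrative assumptions.

```python
# Hypothetical illustration of the distinction drawn in the Guidelines;
# the scenario and names are assumptions, not taken from the Act.

from sklearn.linear_model import LogisticRegression

def heuristic_spam_filter(num_links: int) -> bool:
    """Classical heuristic: the rule is fully hand-written by a human,
    so the system does not infer how to generate its output."""
    return num_links > 5  # fixed, human-specified threshold

# Supervised machine learning: the decision rule is derived from the
# training data rather than encoded by hand, illustrating the
# "capability to infer" described in Recital 12.
X_train = [[0], [1], [2], [8], [9], [10]]  # feature: number of links in an email
y_train = [0, 0, 0, 1, 1, 1]               # label: 0 = not spam, 1 = spam

model = LogisticRegression()
model.fit(X_train, y_train)  # the model learns the decision boundary from data

print(heuristic_spam_filter(7))  # True: output follows a human-written rule
print(model.predict([[7]])[0])   # output inferred from patterns in the data
```

On this simplified view, the heuristic filter would sit among the “systems based on classical heuristics” that the Guidelines place outside the definition, while the trained model exhibits the inferencing capability the definition requires.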

  6. Outputs that can influence physical or virtual environments

According to the Guidelines, AI systems differ from non-AI systems in their ability to generate types of outputs that leverage patterns learned during training or use expert-defined rules which, in turn, can influence physical or virtual environments. The four categories of such outputs are: (1) predictions; (2) content generation; (3) recommendations; and (4) decisions.

  7. Interaction with the environment

Finally, the Guidelines state that this element of the definition “should be understood to emphasise the fact that AI systems are not passive, but actively impact the environments in which they are deployed”.

According to the Commission, the Guidelines will “evolve over time and will be updated as necessary, in particular in light of practical experiences, new questions and use cases that arise”.
