EU AI Act: Commission publishes Guidelines on prohibited AI practices

The European Commission has published Guidelines on prohibited AI practices, as defined by the EU AI Act.

Article 5 of the Act prohibits the “placing on the EU market, putting into service, or use of certain AI systems for manipulative, exploitative, social control or surveillance practices, which by their inherent nature violate fundamental rights and Union values”. The Guidelines unpack in considerable detail what this means in practice, including addressing to whom the prohibition applies, any relevant exclusions, and concrete examples of the sorts of systems that would fall foul of the Act. Like the recently published Guidelines on the Definition of AI Systems (which we commented upon here), these Guidelines are not binding, although the Commission states that they offer valuable insights into its interpretation of the prohibitions.

The eight prohibitions that are identified in Article 5 of the Act are each considered at length in the Guidelines, and are as follows:

  1. Harmful manipulation and deception

This prohibition covers AI systems that “deploy subliminal techniques beyond a person’s consciousness or purposefully manipulative or deceptive techniques, with the objective or with the effect of distorting behaviour, causing or reasonably likely to cause significant harm”.

Examples provided in the Guidelines of such subliminal techniques include visual and auditory subliminal messages that the conscious mind is unable to register but which nonetheless influence attitudes or behaviour, and embedded images that are hidden so as not to be consciously perceived, but “may still be processed by the brain and influence behaviour”.

On the matter of employing manipulative or deceptive techniques with the objective or effect of distorting behaviour, the Guidelines provide examples of AI systems that could be prohibited, including chatbots that impersonate others or that “exploit individual vulnerabilities to adopt unhealthy habits or engage in dangerous activities”.

  2. Harmful exploitation of vulnerabilities

AI systems must not “exploit any of the vulnerabilities of a natural person or a specific group of persons due to their age, disability or a specific social or economic situation, with the objective, or the effect, of materially distorting the behaviour of that person or a person belonging to that group in a manner that causes or is reasonably likely to cause that person or another person significant harm”.

Examples provided in the Guidelines of practices that would be caught by this prohibition include an AI-powered toy designed to interact with children and encourage them to complete increasingly risky challenges, or a game that uses AI to analyse children’s behaviour and creates “personalised and unpredictable rewards through addictive reinforcement schedules and dopamine-like loops to encourage excessive play and compulsive usage”. Similarly, AI systems may be used to target older people with deceptive personalised offers or scams, or chatbots ostensibly designed to provide mental health support might exploit the vulnerability of their users. The Guidelines also give the example of a predictive AI algorithm used to target people in low-income postcodes with predatory financial products.

The Guidelines are clear that, as regards these and other examples, the legislation is intended to target only exploitative practices that are likely to cause serious harm to people, and that it is crucial to distinguish between manipulation on the one hand and lawful persuasion on the other. Where an AI system falls on that line will be influenced by matters such as the degree of transparency, whether the system is designed to undermine autonomy, compliance with other legal and regulatory frameworks, and the consent of the user.

  3. Social scoring

The Guidelines recognise that AI-enabled scoring can bring benefits to society. However, certain ‘social scoring’ practices that “assess or classify individuals or groups based on their social behaviour or personal characteristics and lead to detrimental or unfavourable treatment” are prohibited. This is particularly so where the data comes from multiple unrelated contexts or the treatment is disproportionate to the gravity of the social behaviour. Examples of such systems include those that determine individuals’ creditworthiness or eligibility for state support based on variables that have no apparent connection to the purpose of the evaluation.

  4. Individual criminal offence risk assessment and prediction

AI systems must not be used to “make risk assessments of natural persons in order to assess or predict the risk of a natural person committing a criminal offence, based solely on the profiling of a natural person or on assessing their personality traits and characteristics”. For example, a law enforcement agency using an AI system to predict criminal behaviour based solely on individuals’ age, nationality, address, type of car, and marital status “may be assumed to be prohibited” under the Act.

  5. Untargeted scraping to develop facial recognition databases

AI systems that create or expand facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage are prohibited under the Act. However, the untargeted scraping of biometric data other than facial images (such as voice samples) is not prohibited, nor are facial image databases that are not used for recognition purposes (for example, image databases used for AI model training where individuals are not identified).

  6. Emotion recognition

The Act prohibits the use of AI systems “to infer emotions of a natural person in the areas of workplace and education institutions, except where the use of the system is intended for medical or safety reasons”. For example, an AI system that infers that an employee is unhappy or angry with a customer by analysing body gestures or facial expressions would be prohibited. Similarly, using webcams and voice recognition systems in a call centre to track employees’ emotions is prohibited, unless the system is deployed solely for training purposes and its outputs are not shared with those responsible for HR decisions in a way that could affect the assessment or promotion of the relevant employees.

  7. Biometric categorisation

Biometric categorisation systems that “categorise individually natural persons based on their biometric data to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation” will largely be prohibited. The Guidelines provide an example of an AI system that “categorises persons active on a social media platform according to their assumed sexual orientation by analysing the biometric data from photos shared on that platform and on that basis serves those persons advertisements”.

  8. Real-time remote biometric identification

Finally, the use of real-time remote biometric identification systems in publicly accessible spaces for law enforcement purposes is prohibited unless it is strictly necessary to achieve certain narrow objectives, including searching for specific victims of abduction or missing persons, or preventing a “genuine and present or genuine and foreseeable threat of a terrorist attack”. The Guidelines set out detailed advice on how and when such systems may be used and what those intending to employ them should consider before deciding to do so, including: “the nature of the situation giving rise to the possible use, in particular, the seriousness, probability and scale of the harm for natural persons, society and law enforcement purposes that would be caused if the system were not used, should be assessed against the consequences of the use of the system on the rights and freedoms of the persons concerned, in particular, the seriousness, probability and scale of those consequences”. Law enforcement authorities wishing to deploy these systems must also have conducted a Fundamental Rights Impact Assessment in advance and registered the system in the EU database.
