AI practices forbidden in the Artificial Intelligence Regulation (AIR)
Despite the countless benefits that Artificial Intelligence (AI) can bring to many sectors of society, it can also be used inappropriately or for malicious purposes. It was therefore considered essential to establish a specific regulation prohibiting those practices or uses of AI systems that are particularly harmful and contrary to the fundamental values of the European Union (EU).
In this article, we detail the AI practices prohibited under the Artificial Intelligence Regulation (AIR), which we touched on briefly in our first analysis of the AIR, entitled "An Introduction to the Artificial Intelligence Regulation (AIR)".
The European Artificial Intelligence Regulation (AIR) aims to provide a regulatory framework for addressing the misuse and malicious use of AI.
The decision to prohibit certain AI practices also reflects the need for an approach commensurate with the risks posed by AI systems. The objective, therefore, is to ensure a regulatory framework that accurately addresses potential threats.
Specifically, Article 5 of the AIR, included in Chapter II, lists the prohibited AI practices, which include the following.
1. AI systems that use subliminal or deceptive techniques to substantially alter a person's behavior (Article 5.1.a)
These systems are employed to substantially alter the behavior of a person or group of people, appreciably impairing their ability to make an informed decision and causing them to take a decision they would not otherwise have taken, in a way that causes, or is reasonably likely to cause, substantial harm to that person, another person, or a group of people. The techniques used may rely on auditory or visual stimuli, or on other manipulative methods that are imperceptible to the person.
In this regard, it is relevant to mention Directive 2005/29/EC of May 11, 2005 on unfair business-to-consumer commercial practices, which prohibits unfair commercial practices that may cause economic harm to consumers, understood as:
- Those that are contrary to the requirements of professional diligence.
- Those that distort or may substantially distort the economic behavior of the consumer or group to which the practice is directed.
- Business practices relating to misleading acts or omissions.
- Aggressive commercial practices that substantially impair, or are likely to substantially impair, the consumer's freedom of choice or conduct through harassment, coercion, or undue influence.
✅ These prohibitions should not affect lawful practices in the context of medical treatment, such as the psychological treatment of a mental illness or physical rehabilitation.
2. AI systems that, by exploiting a vulnerability of a natural person or group of people, seek or cause a substantial alteration of that person's behavior (Article 5.1.b)
The vulnerabilities concerned are those arising from a person's or group's age, disability, or specific social or economic situation, where the exploitation of such a vulnerability has the objective or effect of substantially altering that person's behavior in a way that causes, or is reasonably likely to cause, substantial harm to that person or to another person.
3. Social scoring systems (Article 5.1.c)
Systems whose purpose is to assess or rank the trustworthiness of natural persons or groups of people on the basis of their known, inferred, or predicted social behavior or personality characteristics are prohibited where they result in either of the following:
- Detrimental or unfavorable treatment in social contexts unrelated to the context where the data were originally generated or collected.
- Detrimental or unfavorable treatment that is unjustified or disproportionate to the seriousness of their social behavior.
4. Biometric categorization systems based on the biometric data of natural persons for the purpose of inferring or deducing special categories of data (Article 5.1.g)
These systems are intended to group individuals into specific categories based on their biometric data, as defined in Article 4(14) of the General Data Protection Regulation (GDPR), in order to deduce or infer special categories of personal data within the meaning of Article 9 GDPR.
This prohibition does not cover the labeling or filtering of lawfully acquired biometric datasets based on biometric data, or the categorization of biometric data in the field of law enforcement.
The AIR itself recalls the application of Article 9 GDPR to the processing of biometric data for purposes other than law enforcement. Recall that Article 9 GDPR establishes a general prohibition on the processing of special categories of data, which may be lifted only in certain circumstances when specific criteria are met.
5. Real-time remote biometric identification systems in publicly accessible spaces (Article 5.1.h)
These systems are intrusive on the rights and freedoms of individuals and create the feeling of being under constant surveillance. In addition, they may suffer from technical inaccuracies that produce biased results.
However, there are several exceptions to this prohibition:
- When a targeted search is conducted for specific victims of certain crimes (kidnapping, human trafficking, or sexual exploitation of human beings), as well as for missing persons.
- To prevent a specific, substantial, and imminent threat to the life or safety of persons, or a genuine and present or genuine and foreseeable threat of a terrorist attack.
- For the detection, location, identification, or prosecution of a person who has committed or is suspected of having committed any of the offenses listed in Annex II (such as terrorism, trafficking in human beings, kidnapping, illegal detention, or hostage-taking).
These systems must also comply with necessary and proportionate safeguards and conditions of use, in particular temporal, geographical, and personal limitations, and their use is subject to prior authorization. Among these requirements, use will only be authorized if the law enforcement authority has completed a fundamental rights impact assessment.
◾ Regarding this prohibition, part of the scientific community has questioned the breadth of the exceptions, arguing that they risk becoming the rule and turning what is nominally a prohibition into a high-risk use case.
6. Risk assessment and predictive crime profiling systems (Article 5.1.d)
AI systems that conduct risk assessments of natural persons in order to assess or predict the risk that a person will commit a crime, based solely on profiling or on the assessment of personality traits and characteristics, are prohibited.
This does not apply, however, in cases where such systems are used to support the human assessment of a person's involvement in a criminal activity that is already based on objective and verifiable facts directly related to a criminal activity.
✅ An example of such AI systems, close to the concept of social credit systems, can be found in China, where systems intended to predict the future commission of crimes are being implemented.
7. AI systems that create or extend facial recognition databases (Article 5.1.e)
AI systems whose specific purpose is to create or expand facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage are prohibited.
The need to regulate such systems is based, as noted above, on the sense of mass surveillance they can create among individuals, which could result in serious violations of fundamental rights.
8. AI systems used to infer emotions (Article 5.1.f)
The use of AI systems to infer the emotions of a natural person is prohibited in the workplace and in educational institutions. Emotion recognition systems used in other contexts are instead categorized as high-risk systems.
These systems have the potential to be highly invasive of the rights and freedoms of the individuals concerned, which could result in detrimental treatment of certain individuals or groups.
However, AI systems intended for medical or safety reasons are exempted from this prohibition.
Finally, it should be noted that infringement of any of these prohibitions is punishable by administrative fines of up to EUR 35,000,000 or, if the infringer is a company, up to 7% of its total worldwide annual turnover for the preceding financial year, whichever is higher.
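To make the "whichever is higher" rule concrete, here is a minimal Python sketch of how the applicable ceiling could be computed. The function name and the turnover figure are illustrative assumptions, not part of the Regulation's text.

```python
def max_administrative_fine(worldwide_annual_turnover_eur: float) -> float:
    """Illustrative ceiling of the fine for breaching an Article 5
    prohibition: the higher of EUR 35,000,000 or 7% of the company's
    total worldwide annual turnover for the preceding financial year."""
    fixed_cap = 35_000_000.0                              # fixed ceiling in euros
    turnover_cap = 0.07 * worldwide_annual_turnover_eur   # 7% of annual turnover
    return max(fixed_cap, turnover_cap)

# Hypothetical company with EUR 1 billion in worldwide annual turnover:
# 7% of turnover (EUR 70,000,000) exceeds the fixed EUR 35,000,000 ceiling.
print(max_administrative_fine(1_000_000_000))  # 70000000.0
```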
Having analyzed the AI practices prohibited by the AIR, once the text becomes applicable we will examine their practical application and their effectiveness in ensuring respect for the fundamental rights and values of the European Union.
This will require constant vigilance and continuous evaluation of AI policies and regulations, as we will explain in future blog posts.