Biases in AI (III): Classification of discrimination
In the previous chapters of this series we analyzed the concept of bias, related notions, and the classification of biases in relation to AI. In this installment it is appropriate to address the classification of discrimination in order to, from there, understand the risks and be able to deal with them.
1. Right to nondiscrimination
Non-discrimination is articulated as a basic principle of the Universal Declaration of Human Rights, adopted by the United Nations General Assembly in 1948. Within the framework of the United Nations, two important legal instruments adopted in 1966 should also be highlighted: the International Covenant on Economic, Social and Cultural Rights and the International Covenant on Civil and Political Rights, Article 26 of which establishes non-discrimination as an autonomous and general right.
There are other United Nations Conventions whose purpose is to prevent discrimination in various fields, such as discrimination based on race, religion or belief, discrimination against people with disabilities, discrimination in the workplace, or discrimination on the basis of age.
At the international level, some countries recognize other attributes as protected grounds, such as ethnic or social origin, pregnancy, genetic characteristics, language, membership of a national minority, property and birth. Examples include the USA, Canada and Australia.
Article 19 of the Treaty on the Functioning of the European Union lists several protected grounds, including sex, racial or ethnic origin, religion or belief, disability, age, and sexual orientation. Based on these, various EU directives have been adopted that focus on ensuring equal treatment in all Member States.
Article 21 of the Charter of Fundamental Rights of the European Union prohibits discrimination on any grounds, including other grounds such as genetic characteristics, language, membership of a national minority, property and birth.
At the national level, some European countries, such as the Netherlands, have extended their lists of protected grounds to cover more areas than those covered by the Treaty.
In Spain, Article 14 of the 1978 Constitution proclaims the right to equality and non-discrimination, citing birth, race, sex, religion or opinion as particularly objectionable grounds, and prohibiting discrimination on the basis of any other personal or social circumstance.
Law 15/2022 of July 12, 2022, which entered into force on July 14, 2022, details more grounds for possible discrimination in its Article 2, which defines the subjective scope of application: birth, racial or ethnic origin, sex, religion, conviction or opinion, age, disability, sexual orientation or identity, gender expression, disease or health condition, serological status and/or genetic predisposition to suffer pathologies and disorders, language, socioeconomic status, or any other personal or social condition or circumstance.
In its objective scope (article 3.1.) it mentions the areas in which it is applicable, including in its letter “o” the “Artificial Intelligence and massive data management, as well as other areas of analogous significance”.
On the other hand, this law is made up of five titles:
- I. The first title establishes a series of definitions, enshrines the right to equal treatment and non-discrimination, and then deals with situations in different areas of “political, economic, cultural and social life”. The second chapter of this title regulates these rights in specific areas, including artificial intelligence and automated decision-making mechanisms.
- II. The second title establishes measures for the promotion of equal treatment and affirmative action measures.
- III. The third title is devoted exclusively to the creation and establishment of the Independent Authority for Equal Treatment and Non-Discrimination.
- IV. The fourth title establishes the infringements and sanctions in the field of equal treatment.
- V. The fifth and last title establishes a series of measures in relation to care, support and information for victims of discrimination and intolerance.
■ This Law is an important step within the Spanish legal framework since it emphasizes discrimination more clearly, and for the first time within the Spanish legal system, gives greater relevance to age discrimination. In addition, it not only proclaims rights, but also creates mechanisms aimed at protecting the victims of discrimination in its multiple dimensions.
2. Types of discrimination
In order to classify discrimination, we follow the classification made by the aforementioned Law 15/2022 in its Article 6:
- a. Direct discrimination (art. 6. 1º Organic Law 3/2007, of March 22nd, and art. 6. 1º Law 15/2022): occurs when a person receives less favorable treatment due to any of the circumstances regarded as especially suspect grounds of discrimination (race, gender, age, etc.). This will occur both when data on membership of a particularly discriminated group is entered into the AI system and a negative factor is associated with such membership, and when the algorithms and variables are designed to disadvantage these groups.
—For example, an AI recruitment system that directly rejects candidates of a certain ethnicity, race or age without taking the candidate's other characteristics into account.
- b. Indirect discrimination: occurs when an apparently neutral provision, criterion or practice causes, or is likely to cause, one or more persons a particular disadvantage with respect to others.
—Consider, for example, an AI system for granting credit that uses an apparently neutral element, such as the postal code, but indirectly disadvantages people living in neighborhoods or areas where certain ethnic or social groups live.
- c. Discrimination by association: occurs when a person or group is subjected to discriminatory treatment because of his or her relationship with another person who has one of the protected characteristics.
—Imagine, for instance, an AI system used for performance evaluation that penalizes employees who have a family member with a chronic illness.
- d. Discrimination by mistake: discrimination that “is based on an incorrect assessment of the characteristics of the person or persons discriminated against” (art. 6. 2º b Law 15/2022). That is, a protected characteristic is wrongly attributed to the person, who is penalized as a result.
—For example, a facial recognition AI system that denies access to a person because it wrongly associates that person with an ethnicity to which access is denied.
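The patterns above can be checked empirically on a system's outputs. As a minimal, purely illustrative sketch (the data is invented, and the 0.8 "four-fifths" threshold comes from US employment-testing guidance, not from Law 15/2022), the following Python snippet compares the selection rates an AI recruitment filter produces for a protected group and a reference group:

```python
# Illustrative sketch: adverse-impact check on an AI system's decisions.
# The 0.8 ("four-fifths") threshold is a heuristic borrowed from US
# employment-testing guidance; it is used here only as an example and
# has no status under Law 15/2022.

def selection_rate(decisions):
    """Fraction of positive decisions (True = selected/approved)."""
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(protected, reference):
    """Ratio of the protected group's selection rate to the reference group's."""
    return selection_rate(protected) / selection_rate(reference)

def flags_adverse_impact(protected, reference, threshold=0.8):
    """True if the protected group is selected at less than `threshold`
    times the reference group's rate."""
    return disparate_impact_ratio(protected, reference) < threshold

# Hypothetical shortlisting outcomes (True = shortlisted)
group_a = [True, False, False, False, False]   # 20% selected
group_b = [True, True, False, True, False]     # 60% selected

print(disparate_impact_ratio(group_a, group_b))  # ~0.33
print(flags_adverse_impact(group_a, group_b))    # True
```

A ratio well below 1 does not by itself prove discrimination in the legal sense, but it is the kind of signal that should trigger a closer review of the system's inputs and design.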
To these cases of discrimination, we can add what we could call “aggravated” cases, which are as follows:
- i. Harassment: when discrimination is intended to create an intimidating, hostile, degrading, humiliating or offensive environment for the person or group with the protected characteristic in question.
—Imagine a chatbot that, due to bias, makes insulting comments to users of a certain gender.
- ii. Retaliation: when a person suffers adverse treatment for having filed a complaint or having participated in proceedings related to discrimination.
—Consider an AI system used for employee promotion that penalizes, in promotion decisions, employees who have filed complaints or participated in such proceedings.
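The indirect-discrimination example above (a postal code acting as a proxy for a protected group) can also be sketched in code. The following snippet is a simplified illustration with invented data: it measures how unevenly protected-group membership is distributed across the values of an apparently neutral feature, which is one basic signal that the feature may act as a proxy:

```python
# Illustrative sketch: spotting a possible proxy variable.
# If an apparently neutral feature (e.g. postal code) is strongly
# associated with a protected attribute, a model that uses it may
# discriminate indirectly. All records below are hypothetical.

from collections import defaultdict

def group_share_by_feature(records, feature, protected):
    """For each feature value, the share of records in the protected group."""
    totals = defaultdict(int)
    hits = defaultdict(int)
    for r in records:
        totals[r[feature]] += 1
        hits[r[feature]] += r[protected]
    return {value: hits[value] / totals[value] for value in totals}

applicants = [
    {"postal_code": "28001", "minority": 1},
    {"postal_code": "28001", "minority": 1},
    {"postal_code": "28001", "minority": 0},
    {"postal_code": "28045", "minority": 0},
    {"postal_code": "28045", "minority": 0},
    {"postal_code": "28045", "minority": 1},
]

shares = group_share_by_feature(applicants, "postal_code", "minority")
# A large spread between feature values signals a potential proxy.
print(shares)
```

In this toy example the minority share differs markedly between the two postal codes, so a credit model that weights postal code could reproduce that imbalance in its decisions even though no protected attribute is an explicit input.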
Throughout this chapter we have reviewed how the right to equal treatment and non-discrimination fits into the international, regional and national spheres, and then set out a classification based on Article 6 of Law 15/2022: direct discrimination, indirect discrimination, discrimination by association and discrimination by mistake, together with the aggravated cases of harassment and retaliation.
In short, the importance of identifying the types of discrimination that may arise as a result of bias is that it allows us to understand the possible risks and be able to deal with them.
■ This third chapter completes a line of argument that gives us a better understanding of the importance of identifying and classifying the biases that tend to be part of AI systems, and thus of their possible consequences for individuals and society in general.