Bias in Artificial Intelligence (I): The necessary distinction between biases and related concepts
Artificial Intelligence (AI) brings with it many opportunities and also, inevitably, risks that must be managed. One of those risks has to do with biases and the consequences they can have, such as discrimination. In this series of articles we are going to address various issues around bias, but we think it is essential to start by clarifying concepts: we must not only understand the concept of bias, but also distinguish it from other "similar" concepts and relate them all to AI systems.

1. The algorithm as part of an AI system

According to the RAE (Royal Spanish Academy), an algorithm is an "ordered and finite set of operations that allows the solution of a problem to be found". We could say that it is like a recipe, except that a recipe serves one specific purpose (cooking), whereas an algorithm can serve many different purposes. In other words, an algorithm is a set of precise instructions that, starting from inputs and following a well-defined process, generates consistent results (outputs).

In computing, examples of algorithms are everywhere: binary search, which locates an item in a sorted collection; PageRank, designed by Google to rank a web page according to the quantity and quality of the links pointing to it; or the algorithms inside AI systems.
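To make the definition concrete, here is a minimal sketch (ours, purely illustrative) of binary search in Python: an ordered and finite set of operations that always terminates and that, given the same input, always produces the same output.

```python
def binary_search(sorted_items, target):
    """Return the index of `target` in `sorted_items`, or -1 if absent."""
    low, high = 0, len(sorted_items) - 1
    while low <= high:                    # finite: the interval shrinks every step
        mid = (low + high) // 2
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:  # discard the lower half
            low = mid + 1
        else:                             # discard the upper half
            high = mid - 1
    return -1

print(binary_search([2, 5, 8, 12, 21], 12))  # -> 3
```

The same deterministic recipe works for any sorted input, which is precisely what distinguishes an algorithm from a recipe tied to a single purpose.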
Regarding the definition of AI itself, although there were attempts to establish a specific definition before the publication of the Artificial Intelligence Regulation (RIA), the definition finally adopted in Article 3.1 of the RIA is the following: "a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments". For its part, the European Commission has recently published guidelines on the definition of an AI system, explaining the practical application of the legal concepts contained in the RIA.

A third concept, related to but distinct from the AI system, is the AI model. The ISO/IEC 22989 standard (AI concepts and terminology) defines a model as a "physical, mathematical or logical representation of a system, entity, phenomenon, process or data". For its part, the Organization for Economic Co-operation and Development (OECD), in its Artificial Intelligence Paper No. 8, describes AI models as "a central component of an AI system used to make inferences from inputs to produce outputs [that] include, among others, statistical models and various types of input-output functions (such as decision trees and neural networks)".

In summary: all AI systems use algorithms, and algorithms form part of the AI system; however, an AI system also comprises other elements: hardware, data, etc.

2. Concept of biases

Let us now focus on biases, which can be, and often are, part of an AI system and can also be generated by the algorithm itself. To do so, we must address the concept of bias, as well as other related but distinct concepts.

The Royal Spanish Academy links bias to "tendentious" information, and defines the latter as that which "manifests partiality, obeying a certain tendency or idea". For its part, as Carlos B. Fernández indicates in the article Tools to eliminate or reduce bias in automated decision-making systems, the International Organization for Standardization (ISO) defines bias as "the degree to which a reference value deviates from the truth"; in this context, an AI system is said to be biased when it exhibits systematically inaccurate behavior. In turn, ISO/IEC 22989 defines bias as a "systematic difference in the treatment of certain objects, people or groups compared to others", where treatment covers any kind of action, including perception, observation, representation, prediction or decision.

When we see an inaccurate result, it may be the product of either a bias or an error, and the two should not be confused. Biases in Artificial Intelligence are not simple random errors: they follow systematic patterns. As NIST puts it: "Bias is an effect that deprives a statistical result of representativeness by distorting it, as opposed to a random error, which can distort it on any occasion but balances out on average". We can therefore say that where there is predetermination or partiality and the result is distorted, there is bias.
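NIST's distinction is easy to see numerically. The following sketch (an invented measurement scenario, used only to illustrate the point) compares zero-mean random error, which averages out, with a systematic bias, which survives averaging:

```python
import random

random.seed(42)
true_value = 10.0
n = 100_000

# Random error: zero-mean noise that can distort any single
# measurement but balances out on average.
random_error = [true_value + random.gauss(0, 1) for _ in range(n)]

# Systematic bias: a constant +0.5 offset on top of the same noise;
# no amount of averaging removes it.
systematic_bias = [true_value + 0.5 + random.gauss(0, 1) for _ in range(n)]

print(f"true value:                {true_value:.3f}")
print(f"mean with random error:    {sum(random_error) / n:.3f}")     # ~10.0
print(f"mean with systematic bias: {sum(systematic_bias) / n:.3f}")  # ~10.5
```

The first average converges to the truth; the second settles on a value that is systematically wrong, which is exactly the "deviation from the truth" the standards describe.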
3. Discrimination

Just as we should not confuse errors with biases, we should not confuse biases with discrimination, which is only one of the possible consequences of bias. We say one of the possible consequences because some biases go unmentioned precisely because they do not produce discrimination; yet biases can affect decisions in ways that have negative consequences, whether or not those consequences amount to discrimination. The deviation from the truth that bias involves can contribute to different results: harmful or discriminatory, neutral, or even beneficial.

Let's look at an example. In personnel selection, AI systems can bring many benefits, but biases can have negative, positive or neutral consequences. A case of negative bias that could lead to discrimination on grounds of sex could be the following: an AI system trained on biased data (for example, historical data for a role that has mostly been performed by one sex) will reproduce that bias in the inference phase and will therefore produce results that discriminate against the underrepresented sex, since the fact that one sex has historically performed a role does not mean that it will, or should, perform it better in the future. However, the same kind of bias can also generate a positive result: if the AI system has been trained on highly qualified profiles, the result (from that point of view) may be positive because it offers more qualified candidates. Even so, this "positive training bias", by drawing on historical data from a profession that has historically been skewed towards one sex or social class, can still end up producing a bias that is negative and discriminatory.

If we used a statistical context to distinguish bias from error, we must use a legal context to distinguish bias from discrimination. Not all biases are discriminatory or produce injustice, as stated in the practical guide by Rhite: "Bias refers to a systematic difference in the treatment of certain people or groups, without necessarily implying whether this difference is 'right' or 'wrong'. In contrast, discrimination and fairness introduce a value judgment about the results of biased treatment. A biased AI system can produce results that can be considered 'discriminatory' or 'unfair', depending on the context and the values applied".

Unacceptable discriminatory bias is generally defined by the courts either as unequal (disparate) treatment, understood in general terms as a decision that treats an individual less favorably than similarly situated individuals because of a protected characteristic such as race, sex or another trait, or as disparate impact, generally defined as an apparently neutral policy or practice that disproportionately harms a group sharing a protected trait. Examples include Judgments 1/2021, 253/2004 and 181/2000 of the Spanish Constitutional Court (TC), and those collected in the European Court of Human Rights' Guide on Article 14 of the European Convention on Human Rights and on Article 1 of Protocol No. 12 to the Convention.

4. Exclusion

A concept related to, but different from, discrimination is exclusion. If discrimination, as we have seen, implies a situation of disadvantage (for example, assuming that someone with a disability unrelated to the job will be a worse candidate than someone without that disability), exclusion is a form of inequality that prevents a person or group from accessing certain services or resources (think, for example, of an AI system that does not consider vehicles adapted for people with certain disabilities and thereby excludes them). Thus, not all errors are biases, not all biases are negative, not all negative biases are discriminatory, and not all discrimination produces exclusion. Discrimination in the field of AI occurs when an AI system treats certain groups or individuals unfairly, which may be due to bias in the data used, in the algorithm, and/or in the people who program, feed and/or supervise it.

5. Equity

Finally, there is another related but distinct concept: equity, or fairness. As the Rhite guide sets out, after noting that in the context of AI unfairness can be understood as "unjustified differential treatment that preferentially benefits certain groups over others" (ISO/IEC 22989:2022), "fairness, therefore, is the absence of such unjustified differential treatment or prejudice towards any individual or group". Equity does not necessarily mean treating different people or groups identically: it may actually require treating them differently, precisely in order to correct imbalances or misrepresentation that would otherwise constitute an injustice.
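To close, and purely as an illustration of how the disparate impact described in section 3 might be screened for in practice, here is a sketch based on the "four-fifths rule", a heuristic used in US employment law (the decisions and threshold below are hypothetical; a real assessment is a legal question, not a one-line computation):

```python
def selection_rate(decisions):
    """Share of candidates in a group who received a positive decision."""
    return sum(decisions) / len(decisions)

# Hypothetical shortlisting decisions of an AI recruiting system
# (1 = shortlisted, 0 = rejected), split by a protected attribute.
group_a = [1, 1, 1, 0, 1, 1, 0, 1, 1, 1]  # selection rate 0.80
group_b = [1, 0, 0, 1, 0, 0, 1, 0, 0, 0]  # selection rate 0.30

rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
ratio = min(rate_a, rate_b) / max(rate_a, rate_b)

print(f"selection rates {rate_a:.2f} vs {rate_b:.2f} -> ratio {ratio:.2f}")
if ratio < 0.8:  # the four-fifths heuristic
    print("potential disparate impact: the system deserves closer review")
```

An apparently neutral shortlisting rule that yields a ratio well below 0.8, as here, is the kind of disproportionate harm to a protected group that turns a statistical bias into a legal question.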
February 11, 2025

More in this series:
- Biases in AI (II): Classifying biases (February 24, 2025)
- Biases in AI (III): Classification of discrimination (March 12, 2025)
- AI Biases (IV): Risk management and impact (March 25, 2025)
- AI Biases (V): Introduction of risks in the AI system lifecycle, part 1 (April 7, 2025)
- AI Biases (VI): Introduction of risks in the AI system lifecycle, part 2 (April 22, 2025)
- Bias in AI (VII): The human factor (May 7, 2025)