Eduard Chaveli Donet

Head of Consulting Strategy at Govertis Part of Telefónica Tech

Telefónica Tech
Bias in AI (VII): The human factor
As we conclude this series of articles addressing bias in Artificial Intelligence (AI) systems, this final entry explores the risks rooted in human factors—how humans can be both the source of these biases and a key part of the solution. As discussed throughout the series, human input lies at the core of AI bias, yet paradoxically, the “human element” is also essential for reducing the associated risks.

Below we summarise the main types of human error and the oversight mechanisms that can help mitigate them:

Programming errors: Developers may introduce unexpected behaviours. Human oversight can involve peer code reviews and thorough testing before systems go live or are deployed.

Faulty training data: AI systems may learn from incorrect or insufficient datasets, resulting in biased patterns. Expert review can help ensure data quality and adequacy.

Misinterpretation of results: Poor understanding of AI outputs can lead to incorrect decisions. In these cases, comprehensive training for decision-makers and clear documentation of both processes and outcomes can support better-informed, and therefore improved, decision-making.

Let’s look at a medical diagnosis example: —An AI system suggests a 75% probability of a disease. If a doctor treats that figure as certainty, they might prescribe an overly aggressive treatment. To prevent this, specialised training and accessible guidance materials should explain what such a probability actually means. This enables doctors to better understand the AI’s output and make more informed decisions.

Human error can also lead to AI systems collecting or using personal data in ways that breach data protection laws. Here, specialist roles such as Data Protection Officers play a vital part in ensuring compliance. Of course, there are other examples where human error affects AI outcomes—and where human oversight can help manage these impacts.

As we’ve already noted, while the human element is often at the root of bias, it can also be instrumental in reducing risk. This can happen in two key ways:

Through human oversight.
Via the involvement of relevant stakeholders and professionals.

Human oversight

According to Article 14 of the AI Act, human oversight is mandatory for high-risk systems, and the extent of this oversight must be proportionate to the level of risk, system autonomy, and context. To that end, the individuals responsible for oversight must be able, “according to the circumstances and proportionately,” to:

a) Understand the system’s capabilities and limitations
b) Avoid automation bias
c) Correctly interpret outputs
d) Override or dismiss erroneous results
e) Halt the system in case of anomalies

In certain cases, this human oversight is reinforced. For instance, critical identification systems, as mentioned in point 1(a) of Annex III of the AI Act, must have two qualified individuals independently validate any AI-based decision.

Now that we've examined the human factor in general, let’s focus specifically on its role in AI bias, considering the stage of the AI lifecycle in which it arises and the corresponding potential solutions—drawing on guidance from bodies such as NIST and the Rhite framework:

1. Pre-design
Define goals with input from ethics and human rights experts. Avoid abstraction pitfalls (e.g., solutionism).

2. Design and development
Adopt a deliberate and cautious approach in collaboration with experts and users. Monitor for construct, labelling, and algorithmic bias.
3. Verification and validation
Include diverse user groups in usability testing. Train developers and users to recognise and mitigate bias. Establish continuous feedback mechanisms. Use simulations across multiple contexts.

4. Deployment
Ensure the real-world environment aligns with the training context. Monitor for implementation bias.

5. Monitoring and reevaluation
Counter human biases such as sunk cost fallacy or status quo bias. Perform regular validations.

6. Decommissioning
Even at the decommissioning stage, bias can persist, such as historical or legacy bias, depending on how decision-makers handle system phase-out.

Involving experts and stakeholders

The individuals or groups responsible for decision-making in AI systems—especially during the pre-design and design stages—may have limited perspectives. To reduce this risk, it's essential to include a diverse range of stakeholders, considering aspects such as race, gender, age, and physical ability. The AIA includes general guidance on this, such as encouraging member states to foster AI development that enhances accessibility, addresses socio-economic inequalities, and supports environmental sustainability. Achieving this requires interdisciplinary cooperation between:

AI developers.
Experts in inequality and non-discrimination.
Accessibility and consumer rights specialists.
Environmental, digital, and academic professionals.

More concretely, the AIA sets out two specific obligations:

Risk management (Recital 65): Providers must document the chosen mitigation measures based on the state of the art and include, “where appropriate,” the input of external experts and stakeholders.

Fundamental Rights Impact Assessments (FRAIA) (Recital 96): Especially relevant for the public sector, this process should involve representatives of potentially affected groups, independent experts, or civil society organisations—both in the assessment and in designing mitigation actions.

Although the final version of the AIA softens some requirements—for instance, no longer mandating that supervisory authorities be notified or that results be published—maintaining these practices remains a good idea to ensure transparency and build trust.

When focusing on expert and specialised roles, some will always be necessary, while others will depend on the specific AI system in question. Some roles will be subject-matter experts, while others will provide a cross-functional perspective based on their domain knowledge. Participation intensity will also vary: some roles will be involved throughout the AI lifecycle, while others will only intervene at specific points (e.g. accessibility reviews or usability testing).

One strong example of this grounded approach is offered by the FRAIA [2] framework, which recommends involving not just the project lead (an obvious inclusion) and the domain expert (i.e., the business owner of the AI system), but also the legal advisor across all phases. In my view, data scientists should also be involved throughout the entire process, as their understanding of AI’s capabilities and limitations is essential for assessing and managing risks to fundamental rights. Ethical advisors also play a growing role—several companies have now appointed ethics officers or committees and adopted additional ethical guidelines, usually aligned with international principles. There’s no doubt that ethics must be considered, though this is not a formal requirement of the AIA.
It will ultimately depend on how broad a scope is defined for impact assessments—whether limited to human rights or extended to ethical dimensions as well.

Another aspect to consider is whether the AI system is for internal use. In these cases, internal roles and departments must be involved, although external advisers may also be consulted. Stakeholder involvement is always necessary, as noted in Recital 64 [3] of the AI Act, echoing Article 35.9 of the GDPR [4]. In the context of AI, and according to ISO/IEC 42001:2023 and ISO/IEC 22989:2022, stakeholders are defined as “a person or organisation that can affect, be affected by, or perceive themselves to be affected by a decision or activity.”

Lastly, if the AI system is developed as a product for customers, the project team should reflect the nature of the service and its users. For example, an AI-based educational assistant (as in the well-known Hello Barbie case) might require input from educational psychologists to ensure the technology responds appropriately to learning environment needs.

Conclusion

In conclusion, this series has shown that beyond the technical sophistication of AI, it is human commitment, diverse perspectives, and expert-user collaboration that ultimately ensure fairer, safer, and more value-aligned systems. Only through transparent governance, responsible oversight, and inclusive participation can we unlock the full potential of artificial intelligence while upholding the rights and ethical standards that define our society.

______
1. ANNEX III. High-risk AI systems under Article 6(2) are those that fall within any of the following areas: 1. Biometrics, insofar as their use is permitted by applicable Union or national law: a) Remote biometric identification systems. Excluded are AI systems intended solely for biometric verification purposes, whose only aim is to confirm that a specific natural person is who they claim to be.
2. FRAIA identifies a variety of roles depending on the phase. The roles mentioned include: Interest Group, Management, Citizen panel, CISO or CIO, Communications specialist, Data scientist, Data controller or data source owner, Data protection officer, HR staff member, Domain Expert, Legal Advisor, Algorithm developer, Commissioning client, Project leader, Strategic ethics consultant, and Other project team members.
3. Recital 64a of the AIA states that, when identifying the most appropriate risk management measures, the provider shall document and explain the decisions made and, where appropriate, involve external experts and stakeholders.
4. Article 35.9 of the GDPR: “Where appropriate, the controller shall seek the views of data subjects or their representatives on the intended processing, without prejudice to the protection of public or commercial interests or the security of processing operations.”
May 7, 2025
Telefónica Tech
AI Biases (VI): Introduction of risks in the life cycle of AI systems (part 2)
Following on from the previous article, where we explored the initial phases of the AI life cycle, in this second part we address the phases that begin with the implementation of the AI system: deployment, operation, continuous validation, re-evaluation and, finally, its removal.

Phase 4. Deployment or implementation phase

In this phase, those responsible for deployment are already working with this technology as they move from development to implementation in the production environment. Implementation bias occurs if the system is implemented in an environment that does not reflect the training conditions, since it may then behave in a biased manner. For example, a machine translation system trained mainly with formal texts may not work well with colloquial language. Abstraction traps are also typical of this phase.

Implementation bias occurs if the production environment does not reflect training conditions.

Phase 5. Operation and monitoring phase

At this stage, with the systems in production (operating), constant supervision and adjustments to hardware, software, algorithms, and data are required to maintain optimal performance. Systems that use continuous learning, such as virtual assistants and autonomous vehicles, learn and update continuously based on user interactions and new experiences. This constant learning can increase the risk of introducing or amplifying biases compared to systems based on predefined rules that do not learn continuously.

Continuous learning may increase bias risk compared to predefined rules systems.

A critical challenge at this stage is the reinforcement feedback loop that occurs when an AI system is retrained with data containing uncorrected biases, perpetuating and amplifying those biases in future decisions—for example, the automation bias that can have a multiplier effect. To this end, continuous feedback mechanisms must be established to identify potential biases and correct them in real time.

Phase 6. Ongoing validation

'Ongoing validation' consists of regularly evaluating the model with new data to see if it is still accurate. Continuous validation can be carried out even in AI systems where continuous learning does not apply, for example, to “detect deviations of data, of concepts or to detect any technical malfunction” (ISO/IEC 5338), but it is especially relevant with new data, making it fundamental in continuous learning scenarios where retraining exists even if it is not explicit. In systems with continuous learning, the models integrate new data continuously without explicit retraining, so it is essential both to check the consistency of the production data with the initial training data and to update the test data itself (a minimal drift-check sketch appears at the end of this article). The main biases in this phase are thus those of the data, among which the following should be highlighted: representation, selection, measurement, labelling and proxy bias, so special focus will have to be placed on measures to manage them in this phase.

The reinforcement feedback loop perpetuates and amplifies biases in future decisions.

Phase 7. Re-evaluation

Unlike monitoring and continuous validation, which refer to constant adjustments, each with the purpose we have seen, re-evaluation is a deeper and more exhaustive process.
Apart from the biases of evaluation and the traps of abstraction that we already know about, and which in these phases can serve to refine the system with decisions, there are several biases specific to this phase: the sunk cost fallacy (continuing to invest resources in a past decision because of the investments already made, even though abandoning it would be more beneficial); or the status quo bias (preference for maintaining the current situation, avoiding change even when the alternatives might be more favorable). ■ In both cases, it is essential that the stakeholders are aware of this, recognize it and make decisions accordingly.

Phase 8. Withdrawal

Even if it is decided to withdraw the system, which may be for a variety of reasons (it does not serve its purpose, another solution has been found, it is understood that it is not fair, etc.), this can produce a bias known as historical bias, given that the system has been trained with biased historical data that is replicated. —One example is news recommendation algorithms that may be based on the most relevant news, although they may not be the most truthful or verified. Obviously, it will no longer affect the user of that system, but it will affect other users of the AI system who acquire or use it.

In conclusion, we can see the importance of identifying the biases that can be introduced in the different phases of the AI life cycle with the aim of correcting and mitigating them. In this sense, in each phase different types of biases can appear that will be treated specifically according to the phase and its type.
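To make the ongoing-validation idea above more concrete, here is a minimal sketch, assuming a tabular system with numeric features, of the kind of data-drift check that could be run against production data. The feature names, thresholds and the drift_report helper are illustrative assumptions, not a prescribed method.

```python
import numpy as np
from scipy.stats import ks_2samp

def drift_report(train: np.ndarray, prod: np.ndarray, names, alpha=0.01):
    """Compare each feature's production distribution with its training
    distribution using a two-sample Kolmogorov-Smirnov test."""
    flagged = []
    for i, name in enumerate(names):
        stat, p_value = ks_2samp(train[:, i], prod[:, i])
        if p_value < alpha:  # the distribution has likely shifted
            flagged.append((name, stat, p_value))
    return flagged

# Illustrative data: the production 'age' distribution drifts upward.
rng = np.random.default_rng(0)
train = np.column_stack([rng.normal(40, 10, 5000), rng.normal(0, 1, 5000)])
prod = np.column_stack([rng.normal(48, 10, 5000), rng.normal(0, 1, 5000)])

for name, stat, p in drift_report(train, prod, ["age", "score"]):
    print(f"possible data drift in '{name}': KS={stat:.3f}, p={p:.1e}")
```

A check like this only flags statistical deviation; deciding whether a flagged shift actually matters for bias still requires the human review discussed throughout this series.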
April 22, 2025
Telefónica Tech
AI Biases (V): Introduction of risks in the AI system lifecycle (part 1)
In the previous installment of this series, after introducing the concept of bias and its taxonomy—as well as analyzing a number of related or adjacent concepts—we began to address the management of risks associated with bias, starting with its potential impacts. However, when discussing risk management, the first step in any risk management strategy is to identify the risks. In the case of bias, this means identifying the sources of bias, which may be introduced at different stages of an AI system's lifecycle. Therefore, it is crucial to understand how bias enters at various phases of that lifecycle.

As stated by the National Institute of Standards and Technology (NIST), organizations that design and develop AI technologies use the AI lifecycle to monitor their processes and ensure that the technology performs as intended. However, they do not necessarily use it to identify or manage potential risks and harms.

Organizations developing AI technologies use the AI lifecycle to ensure functionality, but not necessarily to identify or manage risks and harms.

Furthermore, in current approaches to bias, classification tends to be by type (statistical, cognitive, etc.) or by use case and sector (hiring, healthcare, etc.). While this categorization is helpful, it may fail to offer the broader perspective needed to manage bias effectively as a context-specific phenomenon. For this reason, the NIST report proposes an approach to managing and reducing the impact of harmful biases across all contexts, by considering critical points within the stages of an AI system’s lifecycle.

Phases of the AI system lifecycle

Identifying bias sources is the first step in any bias mitigation strategy. As noted, these sources may be introduced at various points throughout an AI system’s development. Therefore, we must first understand what those phases are—and, in turn, relate them to the different operators involved. The development of an AI system comprises three core phases, as defined by NIST: pre-design, design and development, and testing and evaluation. Meanwhile, ISO 22989 outlines a more granular sequence: initiation, design and development, verification and validation, deployment/implementation, operation and monitoring, continuous validation, re-evaluation, and retirement. ■ ISO 22989 includes additional phases, but this does not imply that NIST overlooks their importance; rather, they may be implicitly addressed or incorporated into a broader operational framework.

Phase 1. Initiation: pre-design or scope definition

AI systems begin in the pre-design phase, which is foundational for establishing the conditions that will determine the system’s effectiveness and fairness. This phase includes several key milestones:

The first step is to clearly define the problem the AI system is intended to solve and establish its objectives. If the system’s objectives are shaped by biased assumptions, the system will inevitably reflect those biases. —For example, if it is decided that a hiring system should prioritize candidates from so-called prestigious universities, this may exclude equally qualified candidates from other institutions. This kind of bias is known as institutional or systemic bias. To mitigate it, it is advisable to scrutinize such assumptions, applying measures that range from reviewing data sources to assessing potential impacts, which requires the involvement of experts in areas such as ethics and human rights.

Identifying bias sources is the first step in any bias mitigation strategy.
At this stage, functional requirements (what the system should do) and non-functional requirements (how it should behave) are defined. This includes data collection for requirements analysis (determining what data is needed to address the problem) and preliminary data gathering (acquiring initial data to better understand the domain and challenges involved). If the data isn't sufficiently representative of the target population, the model may learn flawed patterns. —For example, training a healthcare system on data from a single region may limit its performance in areas with different characteristics.

This situation can lead to abstraction traps, which occur when real-world complexity is oversimplified in the form of inputs and outputs for an AI system. Key abstraction traps include:

Formalism traps: the assumption that AI models fully capture real-world complexity.
Ripple effect traps: the failure to anticipate how small changes in the system might produce disproportionate downstream consequences.
Solutionism traps: the belief that all problems can be solved through technical means alone.

If the data are not sufficiently representative of the target population, the model may learn flawed patterns.

A technical, economic, and operational feasibility analysis is conducted to determine whether the proposed AI system is viable. Appropriate tools and platforms are selected to support the system development. A detailed plan is prepared, including timelines, resource allocations, and milestone definitions, to ensure efficient and effective project management.

Phase 2. Design and development

This phase includes the following sub-processes: design, data understanding and preparation, and development. At this point, high-impact decisions are made—such as whether to build in-house or buy existing solutions, whether to rely on open-source or proprietary components, and so forth.

Given the role of design in determining outcomes, construct validity bias—where a chosen variable fails to accurately represent the concept it's meant to capture—is particularly relevant here. This is especially problematic when an AI system addresses complex problems. —For instance, if socioeconomic status is narrowly equated with income, ignoring other relevant dimensions like education, wealth, occupation, or prestige, the system may operate on a deeply flawed conceptual model. It is therefore essential to include multiple measures of such complex phenomena and to consider culturally diverse interpretations.

Data understanding and preparation also occur here. The most common and widely discussed bias at this stage is representation bias. This must be addressed by ensuring correct representation, and using techniques such as sampling where appropriate. Other critical biases in this phase include measurement bias, historical bias, labeling bias, and selection bias.

During the development phase, models are built and trained on selected datasets. At the end of the design phase—prior to deployment—a thorough bias mitigation assessment must be carried out to ensure that the system remains within predefined ethical and technical boundaries. The primary type of bias encountered at this stage is algorithmic bias. This bias does not reside in the data itself but in algorithmic logic. For example, a candidate-selection algorithm might assign disproportionate weight to a certain feature—unrelated to actual performance—even when trained on balanced data. There are multiple forms of algorithmic bias.
Among those relevant at this stage are aggregation bias, omitted variables bias, and learning bias. As NIST notes, a comprehensive bias mitigation review at the end of this phase should include: Identified bias sources. Implemented mitigation techniques. Performance evaluations before the model is released for implementation. To address these risks, NIST recommends practices such as the “cultural effective challenge”—an internal practice aimed at fostering an environment where technologists actively challenge and interrogate the modeling and engineering steps, with the goal of rooting out statistical and decision-making biases. While we've situated this here, the practice should ideally be iterative across phases. ■ In our view, the implementation of a formal pause, possibly documented in a report—such as in Data Protection Impact Assessments (DPIAs) under the General Data Protection Regulation (GDPR) or in Algorithmic Impact Assessments (FRAIAs) under the Artificial Intelligence Act (AIA)—would be a valuable mechanism for ensuring such a pause actually occurs and has meaningful weight. If, as a result, bias is identified in the algorithm and its potential impact deemed significant, deployment could—and perhaps should—be halted. Phase 3. Testing and evaluation (verification and validation) Testing and evaluation are continuous processes throughout the AI development lifecycle. At this stage: Post-deployment model performance is monitored, and ongoing maintenance is conducted. If evaluation metrics fail to account for fairness, the model may appear accurate while perpetuating or even amplifying pre-existing biases once in production. —For example, a loan recommendation system trained on historical data could continue to discriminate against certain groups if that data reflects past discriminatory practices. Model updates are carried out using new data, with any necessary improvements implemented. If fairness is not considered in updates, they may introduce or reinforce existing biases. —For instance, updating a product recommendation system with recent purchase data reflecting a temporary trend might skew the system toward those products and reduce diversity in recommendations. The NIST report underscores the need for a "culturally effective challenge” to eliminate decision-making bias and continuously improve models. Another issue that may arise here is evaluation bias, which occurs when evaluation procedures or metrics are misaligned with the real-world deployment context. This can lead to inaccurate conclusions about system performance and fairness. It is therefore necessary to adopt measures such as revisiting and adjusting evaluation metrics, benchmarking model outcomes against real-world data, and involving all stakeholders to ensure previously identified issues are resolved satisfactorily. ■ In this article, we examined how bias can be introduced in the early stages of the AI system lifecycle. In the next article of this series, we will explore the remaining phases in greater depth, focusing on the risks and control mechanisms associated with implementation, operation, and eventual reassessment.
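As a purely illustrative companion to the point above about evaluation metrics that ignore fairness, the following sketch computes two simple group metrics on a held-out evaluation set; the data, group labels and the group_metrics helper are hypothetical and not drawn from any particular framework.

```python
import numpy as np

def group_metrics(y_true, y_pred, group):
    """Per-group selection rate and true positive rate, plus the gaps that
    an accuracy-only evaluation would hide."""
    out = {}
    for g in np.unique(group):
        m = group == g
        sel_rate = y_pred[m].mean()                 # P(prediction=1 | group)
        tpr = y_pred[m & (y_true == 1)].mean()      # P(prediction=1 | y=1, group)
        out[g] = {"selection_rate": sel_rate, "tpr": tpr}
    rates = [v["selection_rate"] for v in out.values()]
    tprs = [v["tpr"] for v in out.values()]
    out["demographic_parity_gap"] = max(rates) - min(rates)
    out["equal_opportunity_gap"] = max(tprs) - min(tprs)
    return out

# Illustrative evaluation set: similar overall accuracy can coexist with large gaps.
y_true = np.array([1, 1, 0, 0, 1, 1, 0, 0])
y_pred = np.array([1, 1, 0, 0, 1, 0, 0, 0])
group = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])
print(group_metrics(y_true, y_pred, group))
```

Reviewing gaps like these alongside accuracy is one concrete way to bring the evaluation metrics closer to the real-world deployment context discussed above.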
April 7, 2025
Telefónica Tech
AI Biases (IV): Risk management and impact
As we have seen in the previous chapters of this series, biases are one of the main risks that AI systems can have. If we take into account that AI-based technology already has, and will have, even more connections and broader impacts on society than traditional software, as it is sure to gain capillarity in almost all sectors and contexts, either directly or in instrumental processes, this implies, from the outset and in general, a multiplying effect of the possible risks arising from AI biases. Having said this, it is essential to turn to the concept of risk and its measurement in order to advance in the relationship between biases and risks. As we know, the way to measure risk is to take into account the probability of a threat materializing combined with the impact it would have if it were to occur. There are therefore two factors in the equation: probability and impact.

In AI systems, biases are one of the main risks.

1. Impacts produced by AI systems and their calculation

Focusing on the calculation of impact, practices based on criteria accepted as global standards should be taken into account for the specificities of AI. In this case ISO 31000:2009 Risk Management - Principles and Guidelines, from which the rest of the ISO standards are inspired and adapted to specific environments, in addition to the AI-specific ISOs, particularly ISO/IEC 23894:2023, guidance on risk management in AI. Particularly relevant is the work developed by the Massachusetts Institute of Technology (MIT) with the AI Risk Repository framework, which offers a dynamic database compiling more than 1,000 AI risks drawn from 56 frameworks.

The concept of impact in the field of AI systems is used on different fronts: for risk analysis in AI systems in Article 9 of the AI Regulation (AIR), for FRAIA in Article 29 bis; and there are also examples such as the pioneering Law 1/2022, of April 13, on Transparency and Good Governance of the Valencian Community which, in addressing what is considered an essential element for whether or not active publicity of AI systems should be made, refers to the fact that “they have an impact on administrative procedures or the provision of public services”.

The impact of AI technology on society is greater than that of traditional software, which amplifies the risks associated with its biases.

Based on the above, the impacts that AI may have on people should be analyzed both when it is used in accordance with its intended purpose and when it is put to a reasonably foreseeable misuse. These impacts can be classified in different ways, such as, for example:

a) Legal, affecting fundamental rights, for example:
Discrimination: biases can perpetuate discrimination in different activities in both the public and private spheres.
Inequality: biased algorithms can aggravate inequality instead of reducing it.
Injustice: legal or significant effects on individuals through denial of subsidies, suspicions of non-compliance with existing regulations, etc.

b) Social, such as:
Loss of self-judgment and self-confidence.
Perpetuation of biases: structural risks.
Loss of confidence in the technology itself.

2. Lack of explainability and its impact on the management of biases

Explainability is the ability to understand, and thus to be able to explain, how an AI system makes its decisions. Therefore, if an AI system is not explainable, it is difficult to identify possible biases and manage them.
We refer to systems such as, for example:

Deep neural networks or deep learning, where the layers of neurons create a black box in which it is difficult to understand how a specific decision has been reached. Think for example of an image recognition model in which the model classifies an animal, but we do not know what specific elements or characteristics it uses to do so. In cases linked to face recognition, this has led to discrimination cases.

Self-learning systems, among which are, for example, reinforcement learning systems (in which the agent learns to make optimal decisions by interacting with the environment and receiving rewards), where the strategies used can be difficult to understand and explain. Think for example of an autonomous car that, in a situation where there is no visible obstacle, decides to stop because its sensors have identified one.

Data classification models, such as those that classify operations as fraudulent or not, in which the patterns used can be complex to understand. In these cases, if the system erroneously classifies as illegitimate the operations of certain groups (for example, taking into account their demographic location or some other element), it may be discriminatory.

The relationship between biases and risks must be addressed in terms of risk and its measurement.

Notwithstanding the above, there are several techniques to address the lack of explainability, such as, for instance:

Interpretable models: using AI models that are inherently easier to interpret, such as decision trees, linear regressions and simple neural networks, which allow understanding of how decisions are made based on the inputs by applying clear rules and constraints to guide their operation and ensure that decisions are understandable and justifiable.

Post-hoc methods: applying techniques that explain the decisions of complex models after they have been made, such as LIME (Local Interpretable Model-agnostic Explanations), which generates local explanations for individual predictions, or SHAP (SHapley Additive exPlanations), which assigns importance values to each input feature based on game theory (a minimal post-hoc sketch appears at the end of this article).

Education and training: training teams in understanding and managing AI decisions. This includes training in the use of explainability tools and interpretation of results.

Audits and evaluations: conducting regular audits and external evaluations to review and validate AI system decisions, ensuring that they are transparent and equitable.

In assessing AI's impact, we must account for both its intended uses and its reasonably foreseeable misuses.

3. Criteria for measuring impact

Standards are not oblivious to the potential impacts, and certain legal provisions already provide criteria for this. Article 22 of the General Data Protection Regulation (GDPR) prohibits automated decisions with significant legal effects (such as credit denials or criminal risk assessments) without relevant human intervention. Algorithmic biases affecting fundamental rights, such as equal access to public services, could be subject to legal challenge. The SCHUFA judgment of the Court of Justice of the EU expanded this concept, adopting a guarantee-oriented criterion, by considering that even the automatic generation of a predictive value (such as a credit score) constitutes an automated decision if it has a decisive influence on the final decision of a third party.
This criterion makes it necessary to re-evaluate systems that combine automatic processing with superficial human review. In this sense, the CJEU expands, through a broad interpretation of Article 22, the scope of the term automated decisions to include the following cases: (i) “semi-automated” decisions based on the automated decision, but with greater or lesser human participation; (ii) predictions of probabilities or profiling that are configured as determinant for the adoption of the decision by a third party. Meanwhile, Article 14 of the EU AI Act requires that human intervention be meaningful, avoiding blind automation. This is not just a superficial review, but a real and substantive assessment of all relevant factors.

In assessing the impact, special consideration must be given, on the one hand, to the purpose of the use of the AI system. It is a good starting point to follow the AIR's “high risk” uses of AI. As we know, the AIR has a risk-based approach and it is already clear that it takes biases into account as elements to be considered. As an example, recital 61 expressly states: “In particular, in order to address the risk of potential biases, errors and opacities, it is appropriate to classify as high risk those AI systems intended to be used by or on behalf of a judicial authority to assist judicial authorities in investigating and interpreting facts and law and in applying the law to specific facts…”.

High-risk systems under the AIR (Article 6) arise in two scenarios:
Either when a security component or product is involved.
Or when it is one of the high-risk AI systems referred to in Annex III.

Also, if an AI system falls within one of the areas of Annex III, it may not be considered high risk if its influence on decision making is not substantial and one or more of the following conditions are met: the system performs a limited procedural task, enhances previous human activities, detects patterns of decision or deviation without replacing human assessment, or performs preparatory tasks. The list of high-risk AI systems is based on criteria such as autonomy and complexity, social and personal impact, and safety in critical infrastructures.

Annex III of the AIR lists the systems that are considered high-risk.

All of the above would also have to be grounded in the use in the public or private sector, where there may be elements that make the risk “weighted” and even recalibrated. In the case of the public sector, the proposal made by Cotino in relation to the criteria and variables to determine the impact, level of risk and legal relevance of public algorithmic systems outlines the following criteria:

That the system produces legal effects on the individual or significantly affects them, a normative criterion that follows, for example, another norm such as Article 22 of the GDPR.

That the system takes individualized decisions regarding individuals, involves the making of internal administrative decisions, or serves the development of policies and their collective impact.

That they are high-risk systems, while remaining critical of the list of systems catalogued as high-risk by the AI Regulation, because the aforementioned author considers it incomplete due to the existence of public AI systems that have a particularly high impact but are not included in that list.
That they are mass uses, which implies that “the danger of a massive error or bias in countless future cases will have to be weighed, as well as the significant benefit of avoiding its replication in thousands or millions of decisions”, and he therefore concludes that for public high-risk AI systems that apply massively it is necessary to “recalibrate” these acceptable thresholds and the applicable safeguards. And he concludes - a view with which I agree - that for such systems “in general we need to be much less tolerant”.

When measuring risk, it is necessary to consider the probability of a threat materializing along with its impact.

In the case of the private sector, some criteria can be applied with corresponding peculiarities:

That they produce significant legal effects: as in the public sphere, private AI systems that produce significant legal effects or considerably affect individuals should be evaluated with greater rigor, as is the case with decisions that may impact fundamental rights, for example access to financial services, employment, or housing.

That the system makes individualized decisions: AI systems that make individualized decisions about individuals, such as in hiring processes, credit evaluation, or personalization of services, must be carefully monitored to ensure transparency and fairness, so that automated decisions are explainable and justifiable.

High-risk systems: in the private sector, high-risk AI systems may refer, for example, to those used in critical sectors such as healthcare, finance, and transport.

MIT is developing a dynamic database with more than 1,000 AI risks called the AI Risk Repository.

In conclusion, biases in Artificial Intelligence constitute a multidimensional challenge that requires a rigorous approach, combining normative, ethical and technical analysis. Explainability not only facilitates the detection of biases, but is also an essential legal requirement. The legal frameworks and standards mentioned above provide clear tools for managing risks, although it should be noted that their success will depend on effective implementation and interdisciplinary collaboration between legal, technical and ethical experts.
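As a minimal illustration of the post-hoc explainability methods mentioned above (LIME and SHAP), the sketch below uses permutation importance from scikit-learn as a lighter-weight, model-agnostic stand-in; the synthetic dataset, the feature names and the model choice are assumptions made for the example only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.inspection import permutation_importance

# Illustrative data: a synthetic credit-style dataset with three features.
rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 3))
# The label depends mostly on feature 0; feature 2 is pure noise.
y = (X[:, 0] + 0.3 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# Post-hoc, model-agnostic check: how much does shuffling each feature
# degrade performance? Large drops indicate features the model relies on.
result = permutation_importance(model, X, y, n_repeats=20, random_state=0)
for name, imp in zip(["income", "tenure", "postcode_noise"], result.importances_mean):
    print(f"{name}: mean importance {imp:.3f}")
```

If a feature that works as a proxy for a protected group turns out to carry most of the importance, that is exactly the kind of finding the audits and evaluations described above should surface and document.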
March 25, 2025
Telefónica Tech
Biases in AI (III): Classification of discrimination
In the previous chapters of this series we have been able to analyze the concept of biases and other related concepts and the classification of biases in relation to AI, and it is appropriate to address in this installment the classification of discrimination in order to, from there, understand the risks and be able to deal with them.

1. Right to non-discrimination

Non-discrimination is articulated as a basic principle of the Universal Declaration of Human Rights, adopted by the United Nations General Assembly in 1948. Within the framework of the United Nations, two important legal instruments adopted in 1966 should also be highlighted: the International Covenant on Economic, Social and Cultural Rights and the International Covenant on Civil and Political Rights, Article 26 of which establishes non-discrimination as an autonomous and general right. There are other United Nations Conventions whose purpose is to prevent discrimination in various fields such as race, religion or belief, discrimination against people with disabilities, discrimination in the workplace or discrimination against persons on the basis of age.

There are countries at the international level that recognize other attributes as protected grounds, such as ethnic or social origin, pregnancy, genetic characteristics, language, belonging to a national minority, property and birth. These are the cases, among others, of the USA, Canada and Australia.

Article 19 of the Treaty on the Functioning of the European Union lists several protected grounds, including sex, racial or ethnic origin, religion or belief, disability, age, and sexual orientation. Based on these, various EU directives have been adopted that focus on ensuring equal treatment in all Member States. Article 21 of the Charter of Fundamental Rights of the European Union prohibits discrimination on any grounds, including other grounds such as genetic characteristics, language, membership of a national minority, property and birth. At the national level, some European countries, such as the Netherlands, have extended their lists of protected grounds to cover more areas than those covered by the Treaty.

In Spain, Article 14 of the 1978 Constitution proclaims the right to equality and non-discrimination, citing birth, race, sex, religion or opinion as particularly objectionable grounds, and prohibiting discrimination on the basis of any other personal or social circumstance. Law 15/2022 of July 12, 2022, which entered into force on July 14, 2022, details more grounds for possible discrimination in its Article 2, which defines the subjective scope of application: birth, racial or ethnic origin, sex, religion, conviction or opinion, age, disability, sexual orientation or identity, gender expression, disease or health condition, serological status and/or genetic predisposition to suffer pathologies and disorders, language, socioeconomic status, or any other personal or social condition or circumstance. In its objective scope (Article 3.1) it mentions the areas in which it is applicable, including in its letter “o” “Artificial Intelligence and massive data management, as well as other areas of analogous significance”.

On the other hand, this law is made up of five titles:

I. The first title establishes a series of definitions and includes the right to equal treatment and non-discrimination and subsequently deals with situations in different areas of “political, economic, cultural and social life”.
The second chapter of this title regulates these rights in specific areas, including artificial intelligence and automated decision-making mechanisms.
II. The second title establishes measures for the promotion of equal treatment and affirmative action measures.
III. The third title is devoted exclusively to the creation and establishment of the Independent Authority for Equal Treatment and Non-Discrimination.
IV. The fourth title establishes the infringements and sanctions in the field of equal treatment.
V. The fifth and last title establishes a series of measures in relation to care, support and information for victims of discrimination and intolerance.

■ This Law is an important step within the Spanish legal framework since it emphasizes discrimination more clearly and, for the first time within the Spanish legal system, gives greater relevance to age discrimination. In addition, it not only proclaims rights, but also creates mechanisms aimed at protecting the victims of discrimination in its multiple dimensions.

2. Types of discrimination

In order to classify discrimination, we follow the classification made by the aforementioned Law 15/2022 in its Article 6:

a. Direct discrimination (art. 6.1º of Organic Law 3/2007, of March 22, and art. 6.1º of Law 15/2022): occurs when a person receives less favorable treatment due to any of the circumstances especially suspected of discrimination (race, gender, age, etc.). This will occur both when data on membership of a particularly discriminated group is entered into the AI system and a negative factor is associated with such membership, and when the algorithms and variables are designed to disadvantage these groups. —If an AI recruitment system, for example, directly rejects candidates of a certain ethnicity, race or age without taking the candidate's other characteristics into account.

b. Indirect discrimination: occurs when an apparently neutral provision, criterion or practice causes or is likely to cause one or more people a particular disadvantage with respect to other people. —Consider, for example, an AI system for granting credit that uses an apparently neutral element, such as the postal code, but indirectly disadvantages people living in certain neighborhoods or areas where certain ethnic groups or other groups live (a minimal screening sketch for this kind of disparity appears at the end of this article).

c. Discrimination by association: this occurs when a person or group, because of his or her relationship with another person with one of the causes of discrimination, is subjected to discriminatory treatment. —Imagine, for instance, an AI system used for performance evaluation that penalizes employees who have a family member with a chronic illness.

d. Discrimination by mistake: that which “is based on an incorrect assessment of the characteristics of the person or persons discriminated against” (art. 6.2º b of Law 15/2022). That is, a protected characteristic is wrongly attributed to the person and penalizes him or her. —For example, a facial recognition AI system that denies access to a person because it wrongly associates him with a certain ethnicity to which it denies access.

To these cases of discrimination, we can add what we could call “aggravated” cases, which are as follows:

i. Harassment: when discrimination is intended to create an intimidating, hostile, degrading, humiliating or offensive environment for the person or group that has such a protected characteristic.
—Imagine a chatbot that, taking into account the gender of users, makes insulting comments to people of that gender, due to bias. ii. Retaliation: when a person suffers retaliation for having filed a complaint or having participated in a process related to discrimination. —Consider an AI system used for employee promotion that penalizes employees who have filed complaints or participated in such processes when they are promoted. Throughout this chapter we have reviewed how the right to equal treatment and non-discrimination fits into the international, regional and local spheres, and then made a classification based on the provisions of Law 15/2022 in Article 6, which includes: direct, indirect, by association, by mistake, multiple or intersectional discrimination. In short, the importance of identifying the types of discrimination that may arise as a result of bias is that it allows us to understand the possible risks and be able to deal with them. ■ This third chapter of this series of articles has completed a line that allows us to have a better understanding of the importance of identifying and classifying the biases that tend to be part of AI systems and thus the possible consequences for individuals and society in general.
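Indirect discrimination of the postal-code kind described above is often screened for in practice with a selection-rate comparison such as the "four-fifths" adverse-impact heuristic. The sketch below is illustrative only: the decisions, group labels and the 0.8 threshold are assumptions, and Law 15/2022 does not prescribe this particular test.

```python
import numpy as np

def adverse_impact_ratio(decisions, group, reference_group):
    """Selection rate of each group divided by the reference group's rate.
    Ratios below roughly 0.8 (the 'four-fifths' heuristic) are a common
    red flag for possible indirect discrimination and warrant closer review."""
    decisions = np.asarray(decisions)
    group = np.asarray(group)
    ref_rate = decisions[group == reference_group].mean()
    return {
        g: decisions[group == g].mean() / ref_rate
        for g in np.unique(group) if g != reference_group
    }

# Illustrative credit decisions keyed by neighbourhood (a proxy variable).
approved = [1, 1, 1, 0, 1, 1, 1, 0, 0, 1, 0, 0]
area = ["north"] * 6 + ["south"] * 6
print(adverse_impact_ratio(approved, area, reference_group="north"))
# {'south': 0.4}  -> well below 0.8, so the apparently neutral criterion
# deserves scrutiny even though no protected attribute is used directly.
```

A low ratio does not by itself establish indirect discrimination in the legal sense; it simply signals a disparity that the legal analysis described in this chapter should then examine.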
March 12, 2025
Telefónica Tech
Biases in AI (II): Classifying biases
We have already seen in the first chapter of this series the concept of biases and other related concepts, such as a particularly relevant one: discrimination. Once these concepts have been analyzed, it is essential to identify and classify them in order to be able to deal with the risks involved. Let's start with biases.

The National Institute of Standards and Technology (NIST)—the U.S. government agency in charge of promoting innovation and industrial competitiveness—published NIST Special Publication 1270 in March 2022, entitled Towards a Standard for Identifying and Managing Bias in Artificial Intelligence, which aims to provide guidance for future standards in identifying and managing biases. This document:

On the one hand, it describes the stakes and challenge of bias in AI and provides examples of how and why it can undermine public trust.
On the other hand, it identifies three categories of bias in AI (systemic, statistical, and human) and describes how and where they contribute to harm.
And finally, it describes three major challenges to mitigating bias: datasets, testing and evaluation, and human factors, and introduces preliminary guidance for addressing them.

As we were saying, and according to NIST, there are three types of biases that affect AI-based systems: computational or statistical biases, human biases, and systemic biases. Source: https://doi.org/10.6028/NIST.SP.1270

We are going to develop these three types of biases indicated by NIST, complementing this classification with more detail and our own points of view.

1. Computational biases

These are the ones that we can measure from a trained Artificial Intelligence model. They are the tip of the iceberg and, although measures can be taken to correct them, they do not constitute all the sources of possible biases involved. Statistical or computational biases arise from errors that result when the sample is not representative of the population. These biases arise from systematic rather than random factors, and the error can occur without prejudice, bias, or discriminatory intent. They are present in AI systems in the data sets and algorithmic processes used in the development of AI applications and often arise when algorithms are trained on one type of data and cannot extrapolate beyond that data. Here we can find several types in turn:

a. Data selection bias: occurs when the data used to train a model does not fairly represent reality. —For example, if a hiring algorithm is trained primarily on male resumes, it could bias decisions toward male candidates.

b. Algorithmic bias: occurs when algorithms favor certain outcomes or certain groups. —For example, a credit system that automatically penalizes low-income people because of their financial history could have an algorithmic bias.

2. Human biases

These are those we each have implicitly, and they affect how we perceive and interpret the information we receive. Human biases reflect systematic errors in human thinking based on a limited number of heuristic principles that reduce complex judgments, such as predicting values, to simpler judgmental operations. They are also known as cognitive biases, as they are related to the way in which human beings process information and how we make decisions based on it. Within human bias we can find several categories:

a. Confirmation bias: occurs when the AI system relies on pre-existing beliefs or assumptions in the data, as there is a tendency to search for, interpret and remember data that reinforces those pre-existing beliefs or opinions.
—For example, if a movie recommendation algorithm only suggests specific genres to a user, it might reinforce their previous preferences.

b. Anchoring bias: occurs when there is an over-reliance on the first piece of information or anchor. —For example, if an online pricing system displays a high initial price, users may perceive any price below that initial anchor as a 'bargain' or cheap price.

c. Halo effect: valuing a person or thing in terms of a salient characteristic. —For example, assuming that a candidate with a prestigious university on his or her resume is automatically more competent.

d. Negativity bias: occurs when more weight is given to negative information than to positive information. —For example, a fraud detection system might be more likely to identify false positives due to this bias, derived from being trained with 'negative' information.

These human biases are introduced into AI systems in several ways:

During their development (e.g., by programming a credit system by introducing existing human biases).
In training (e.g., by using male training data in a selection process).
In labeling (if there are errors or biases in human labeling that introduce such bias into the data).
Through loss and optimization functions (if a function penalizes some errors more than others, it may also penalize certain groups or certain characteristics).

3. Systemic biases

And finally, systemic biases are those that are embedded in society and institutions for historical reasons. They need not be the result of any conscious bias or prejudice, but rather of the majority following existing rules or norms. These biases are present in the data sets used in AI, in institutional norms, practices and processes throughout the AI life cycle, and in broader culture and society. Racism and sexism are the most common examples.

In turn, Rhite's practical guide distinguishes two main categories of bias:

On the one hand, social bias, related to prejudices, stereotypes or inclinations rooted in a culture or society, of which historical bias is an example.
And on the other hand, statistical bias, which involves a systematic difference between an estimated parameter in the data and its actual value in the real world and occurs when the data fail to accurately capture the expected variables or phenomena, leading to faulty AI results; representation bias and measurement bias are examples.

Likewise, the guide adds cognitive biases, which "are systematic errors in thinking that can affect judgment and decision making". Among these, the most common example is confirmation bias, in which people tend to look for or give more weight to data that confirm their pre-existing ideas or hypotheses.

In turn, the RIA, in Article 14.4.b), refers to automation bias, described as a "possible tendency to rely automatically or excessively on the output results generated by an AI system (...)". This type of bias is closely related to the confirmation and anchoring biases, categories belonging to the cognitive/human biases:

Regarding confirmation bias, the tendency to seek information that confirms our prior beliefs may reinforce confidence in the results of AI systems, even when they make mistakes.
Concerning anchoring bias, over-reliance on initial information could lead to a predisposition to automatically trust the AI system's suggestions (which would function as an 'anchor').

Source: Rhite.
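To make the statistical notion above concrete (a sample that is not representative of the population), here is a minimal sketch of a representation check against known population shares; the counts, group names and shares are invented for illustration and the representation_check helper is our own.

```python
import numpy as np
from scipy.stats import chisquare

def representation_check(sample_counts: dict, population_share: dict):
    """Compare the composition of a training sample with known population
    shares; a large chi-square statistic signals selection or representation
    bias rather than random sampling noise."""
    groups = sorted(population_share)
    observed = np.array([sample_counts.get(g, 0) for g in groups])
    expected = np.array([population_share[g] for g in groups]) * observed.sum()
    stat, p_value = chisquare(observed, f_exp=expected)
    return groups, observed / observed.sum(), stat, p_value

# Illustrative hiring dataset: 80% of CVs come from men although the
# reference applicant pool is roughly balanced.
groups, shares, stat, p = representation_check(
    {"men": 800, "women": 200}, {"men": 0.5, "women": 0.5}
)
print(dict(zip(groups, shares)), f"chi2={stat:.1f}", f"p={p:.1e}")
```

A check of this kind only quantifies the imbalance; deciding what the correct reference population is, and how to rebalance the data, is where the human and systemic dimensions discussed above come back into play.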
Conclusion

The identification and classification of the biases that affect AI is of particular importance in order to prevent and manage the risks derived from those biases, which can cause harm and violate the rights and freedoms of individuals. In this sense, in this chapter we have identified the main categories of biases with implications for AI systems: computational biases, which come from errors in the results of AI systems; human biases, which come from the people involved in the processes of an AI system (programming, classification, etc.); and systemic biases, which affect society as a whole. ■ In the next chapter we will delve into the classification of types of discrimination.
February 24, 2025
Telefónica Tech
Bias in Artificial Intelligence (I): The necessary distinction between biases and related concepts
Artificial Intelligence (AI) brings with it many opportunities and also, as is normal, risks that must be managed, and one of them has to do with biases and the possible consequences they can have, such as discrimination. In the following series of chapters, we are going to address various issues around bias, but we think it is essential to start by clarifying concepts. We must not only understand the concept of bias but also distinguish it from other “similar” concepts and relate them to AI systems.

1. The algorithm as part of an AI system

According to the RAE (Royal Spanish Academy), an algorithm is an “ordered and finite set of operations that allows the solution of a problem to be found”. We could say that it is like a recipe, but unlike a recipe, which is applied for a specific purpose (cooking), the algorithm can be used for many different purposes. That is to say: they are precise instructions that, based on inputs and through a general process, generate consistent results (outputs). In computing, it is common to find examples of algorithms, such as the binary search used by search engines, the PageRank algorithm designed by Google to determine the position of a web page based on the quantity and quality of the links that point to it, or those of AI systems.

Regarding the definition of AI, although there were attempts, prior to the publication of the Regulation on Artificial Intelligence (RIA), to establish a specific definition for this concept, the following definition has finally been adopted in Article 3.1 of the RIA: “A machine-based system designed to operate with different levels of autonomy, which can show the ability to adapt after deployment and which, for explicit or implicit objectives, infers from the input information it receives the way to generate output information, such as predictions, content, recommendations or decisions, which can influence physical or virtual environments”. For its part, the European Commission has recently published guidelines on the definition of an AI system, explaining the practical application of the legal concepts contained in the RIA.

A third concept, related to but distinct from the AI system, is that of an AI model. In the ISO/IEC 22989 standards (AI concepts and terminology) a model is defined as the "physical, mathematical or logical representation of a system, entity, phenomenon, process or data”. For its part, the Organization for Economic Co-operation and Development (OECD), in its Artificial Intelligence Paper No. 8, describes AI models as "a central component of an AI system used to make inferences from inputs to produce outputs [that] include, among others, statistical models and various types of input-output functions (such as decision trees and neural networks)". Therefore, and in summary: all AI systems use algorithms, which are part of an AI system. However, an AI system has other elements: hardware, data, etc.

2. Concept of biases

Now we are going to focus on biases, which can be, and often are, part of an AI system and can also be generated by the algorithm itself. To do this, we must now address the concept of biases, as well as other related but distinct concepts. The Royal Spanish Academy defines bias as related to 'tendentious' information, and this in turn as "manifesting partiality, obeying a certain trend or idea”. For its part, and as Carlos B.
2. Concept of bias

Let us now focus on biases, which can be, and often are, part of an AI system and can also be generated by the algorithm itself. To do so, we need to address the concept of bias as well as other related but distinct concepts.

The RAE relates bias to 'tendentious' information, and defines 'tendentious' in turn as that which "manifests partiality, obeying a certain tendency or idea". For its part, as Carlos B. Fernández points out in the article Tools to eliminate or reduce bias in automated decision-making systems, the International Organization for Standardization (ISO) defines bias as "the degree to which a reference value deviates from the truth". "In this context, an AI system is said to be biased when it exhibits systematically inaccurate behaviour." The ISO/IEC 22989 standard, in turn, defines bias as a "systematic difference in the treatment of certain objects, people or groups compared to others", where treatment covers any type of action, including perception, observation, representation, prediction or decision.

When we see an inaccurate result, it may be due either to a bias or to an error. Biases in Artificial Intelligence are not simple random errors; they follow systematic patterns. As NIST puts it: "Bias is an effect that deprives a statistical result of representativeness by distorting it, as opposed to a random error, which can distort it on any occasion, but balances out on average". We can therefore say that where there is predetermination or partiality and the result is distorted, there is bias.
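The statistical distinction drawn by NIST can be seen in a small simulation (a sketch of my own, not part of the cited standards): random error averages out over many measurements, whereas a systematic bias shifts every result in the same direction and therefore survives averaging.

```python
import random

random.seed(42)
TRUE_VALUE = 10.0
N = 100_000

# Random error: zero-mean noise added to each measurement.
random_error = [TRUE_VALUE + random.gauss(0, 1) for _ in range(N)]

# Systematic bias: every measurement is shifted by the same offset.
biased = [TRUE_VALUE + 0.5 + random.gauss(0, 1) for _ in range(N)]

print(sum(random_error) / N)  # ~10.0 -> noise balances out on average
print(sum(biased) / N)        # ~10.5 -> the distortion persists
```

Averaging more measurements shrinks the random error but leaves the systematic shift intact, which is why biases need to be identified and corrected rather than simply "averaged away".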
3. Discrimination

Just as we should not confuse errors with biases, we should not confuse biases with discrimination, which is only one of the possible consequences of bias. We say one of the possible consequences because some biases go unnoticed precisely because they do not produce discrimination, yet biases can still influence decisions and have negative consequences, whether or not those consequences amount to discrimination. The deviation from the truth that characterises bias can contribute to different kinds of result: harmful or discriminatory, neutral, or even beneficial.

Let us look at an example. In the field of personnel selection, AI systems can bring many benefits, but biases can have consequences that are negative, positive or neutral. A case of negative bias that could lead to discrimination based on sex would be the following: an AI system trained on biased data (for example, a search for a profile that has historically been held mostly by one sex) will carry that bias into the inference phase and will therefore produce a result that discriminates against the other sex, since the fact that one sex has historically performed that role does not mean that it will, or should, perform it better in the future. However, it is also possible for a bias to generate a positive result: if the AI system has been trained on highly qualified profiles, the result can be positive (from that point of view) because it offers more qualified candidates. Even so, this "positive training bias", by drawing on historical data from a profession that has historically been skewed towards one sex or social class, can in turn produce a bias that is negative and discriminatory.

If we used a statistical context to distinguish bias from error, we must use a legal context to distinguish bias from discrimination. Not all biases are discriminatory or produce injustice, as stated in the practical guide by Rhite: "Bias refers to a systematic difference in the treatment of certain people or groups, without necessarily implying whether this difference is 'right' or 'wrong'. In contrast, discrimination and fairness introduce a value judgment about the results of biased treatment. A biased AI system can produce results that can be considered 'discriminatory' or 'unfair', depending on the context and the values applied".

Unacceptable discriminatory bias is generally defined by the courts either as unequal treatment, understood in general terms as a decision that treats an individual less favourably than similarly situated individuals because of a protected characteristic such as race, sex or another protected trait, or as disparate impact, generally defined as an apparently neutral policy or practice that disproportionately harms a group sharing a protected trait. Examples include the Judgments of the Spanish Constitutional Court 1/2021, 253/2004 and 181/2000, or those included in the Guide to Article 14 of the European Convention on Human Rights and to Article 1 of Protocol No. 12 to the Convention, published by the European Court of Human Rights.

4. Exclusion

A concept related to, but different from, discrimination is exclusion. If discrimination, as we have seen, implies a situation of disadvantage (for example, treating someone with a disability unrelated to the job as a worse candidate than someone without that disability), exclusion is a form of inequality that prevents a person or group from accessing certain services or resources (consider, for example, an AI system that does not offer options for vehicles adapted to people with certain disabilities and thereby excludes them). Thus, not all errors are biases, not all biases are negative, not all negative biases are discriminatory, and not all discrimination produces exclusion. Discrimination in the field of AI occurs when an AI system treats certain groups or individuals unfairly, which may be due to bias in the data used, in the algorithm, and/or in the people who program, feed and/or supervise it.

5. Equity

Finally, there is another related but distinct concept: equity, or fairness. As set out in the Rhite guide, after noting that in the context of AI injustice can be understood as "unjustified differential treatment that preferentially benefits certain groups over others" (ISO/IEC 22989:2022), "equity, therefore, is the absence of such unjustified differential treatment or prejudice towards any individual or group". Equity does not rule out treating different people or groups differently; indeed, it may require doing so precisely in order to correct imbalances or misrepresentation that would otherwise constitute an injustice.
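To connect these notions with practice, one common way of screening an AI system for disparate impact (offered here purely as an illustration, not something prescribed by the sources cited above) is to compare selection rates between a protected group and a reference group, for example using the "four-fifths" rule often applied in employment contexts:

```python
def selection_rate(outcomes):
    """Fraction of positive decisions (1 = selected, 0 = rejected)."""
    return sum(outcomes) / len(outcomes)


def disparate_impact_ratio(protected, reference):
    """Ratio of selection rates; values below ~0.8 are often treated
    as a warning sign of disparate impact (the 'four-fifths' rule)."""
    return selection_rate(protected) / selection_rate(reference)


# Hypothetical screening results from an AI recruitment tool.
group_a = [1, 0, 0, 1, 0, 0, 0, 1, 0, 0]   # protected group: 30% selected
group_b = [1, 1, 0, 1, 0, 1, 1, 0, 1, 0]   # reference group: 60% selected

ratio = disparate_impact_ratio(group_a, group_b)
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.50 -> worth investigating
```

A low ratio does not by itself prove unlawful discrimination, just as not every bias is discriminatory; it simply flags a difference in treatment that deserves legal and contextual analysis.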
Fundamental rights impact assessments on high-risk AI systems in the RIA
1. Introduction

The Artificial Intelligence Regulation (RIA) states among its purposes "to improve the functioning of the internal market" and "to promote the adoption of human-centred and trustworthy artificial intelligence", adding that this should be achieved "while ensuring a high level of protection of health, safety and fundamental rights as enshrined in the Charter". This is why references to fundamental rights appear constantly throughout its articles.

The purpose of this article is to take a high-level look at one of the obligations the RIA provides for managing the risks that Artificial Intelligence (AI) may pose to fundamental rights: the fundamental rights impact assessment required for certain high-risk AI systems (also known as Algorithmic Impact Assessment, AIA, or fundamental rights and algorithm impact assessment, FRAIA, the abbreviation we will use). A FRAIA is intended to enable the deployer to identify the specific risks to the rights of the individuals or groups who may be affected, and to identify the measures to be taken should those risks materialise. We will not address the various intersections with data protection, which will be the subject of other articles in this series, some of them also concerning FRAIAs and PIAs (Privacy Impact Assessments).

Although the recent emergence of AI has made it necessary to adopt a regulation at European level (the RIA, of which we have already given an introduction in the first article of this series, "An introduction to the Regulation on Artificial Intelligence (RIA)"), human rights have existed for some time and there are precedents for human rights impact assessments (HRIA), social impact assessments (SIA) and ethical impact assessments (EIA), as well as assessments focused on specific rights, such as the widely known example of the protection of personal data. There have also been methodologies and tools specifically applied to AI systems, but the RIA has opted for a specific model in its Article 27: the FRAIA.

The RIA is based on a risk-based approach that consists, in short, of "prohibiting certain unacceptable artificial intelligence practices, establishing requirements for high-risk AI systems and obligations for the relevant operators, and establishing transparency obligations for certain AI systems". The risk management system required for any high-risk AI system (Article 9 RIA) obliges the provider to document the risks, but these will not necessarily be tailored to the specific use case of the company deploying the system; they are identified at the level of what is reasonably foreseeable and updated in line with post-market monitoring. The FRAIA (Article 27), on the other hand, does seem aimed at analysing the risks of the specific use case, although only in connection with the exercise of public functions and the other specific cases indicated below. Having made this introduction, let us focus on the FRAIA.

2. Who is obliged to carry out a FRAIA

As regards those obliged to carry it out, the RIA makes clear that this obligation applies to certain specific deployers. On the one hand, it must be carried out by public law bodies (for which Spanish Laws 39/2015 and 40/2015 are important to bear in mind) in respect of all AI systems that are high risk. On the other hand, it applies to private operators providing public services (again, Laws 39/2015 and 40/2015 are relevant) in respect of the AI systems used for those public services; Recital 96 gives some examples, but these cannot be understood as a numerus clausus.
Finally, and regardless of the public or private nature of the entity, the obligation also applies, by reason of the purpose of the systems, to deployers of the high-risk systems referred to in point 5, letters (b) and (c), of Annex III, namely: AI systems intended to be used to assess the creditworthiness of natural persons or establish their credit score, with the exception of AI systems used for the purpose of detecting financial fraud (Recital 96 gives "banking or insurance institutions" as examples); and AI systems intended to be used for risk assessment and pricing in relation to natural persons in the case of life and health insurance.

3. When should the FRAIA be performed?

Article 27(1) of the RIA states that the FRAIA must be performed "before deploying a high-risk AI system", and Article 27(2) adds that it "applies to the first use of the high-risk AI system". The same paragraph continues: "If, during the use of the high-risk AI system, the deployer considers that any of the factors listed in paragraph 1 change or are no longer up to date, the deployer shall take the necessary steps to update the information".

4. Proposed steps to be taken in a FRAIA

We will now briefly suggest a series of steps for performing a FRAIA, which is in itself a process with several phases arranged in a PDCA cycle (plan-do-check-act). This cycle can in turn be integrated into the action plan of an artificial intelligence management system, which can (and usually will) be integrated into other management systems, for example an information security management system (such as ISO 27001 or the Spanish ENS). Here, however, we refer to the PDCA cycle that constitutes the FRAIA itself.

Preliminary phase: a pre-FRAIA? The need to delimit the scope of the rights potentially affected by the AI system suggests carrying out a pre-FRAIA, much as in data protection we have been carrying out pre-PIAs, not only to delimit the rights affected but even to consider whether the relevance or weight of any of them within the project warrants a specific impact assessment on that matter, separate from the general FRAIA; notwithstanding this, in my opinion we should aim to carry out integrated FRAIAs.

Phase 1: Preliminary analysis of the need for a FRAIA and specification of the systems and rights affected (initial scoping). First of all, we need an inventory of the systems within scope; the systems subject to a FRAIA are the high-risk ones. As for identifying the fundamental rights to focus on, this assessment can be performed on the basis of knowledge of the AI system and the RIA, and at the operational level with a checklist, without much fieldwork or stakeholder involvement.

Phase 2: Context, planning and in-scope detail. Before starting a FRAIA we must have the necessary information about the context of the AI system and determine the team that will carry it out, the methodology and the sources of requirements to be used. In addition, although the RIA does not require it, given the possibility of conducting a DPIA (Data Protection Impact Assessment) in conjunction with a FRAIA, at least a description of the processing of personal data should be available.

Phase 3: Necessity, proportionality and data quality. Unlike the case of high-risk AI systems processing personal data, which entail an analysis of necessity and proportionality, the RIA does not contemplate this obligation for FRAIAs.
However, in my opinion, and as some methodologies already do, for all high-risk AI systems (whether or not they process personal data) there should be a "moment" in which necessity and proportionality are analysed, assessing the factors taken into account in deciding to implement the AI system: what criteria led to the decision to use this high-risk AI system, why this particular AI system rather than another, which non-algorithmic alternatives were considered, and whether a prior assessment was made of the benefits and sacrifices involved (a weighing exercise that differs greatly between the public and private spheres), among other aspects.

Phase 4: Risk management. Risk management is the central part of any impact assessment and therefore also of the FRAIA. Depending on the nature of each risk, safeguards or controls must be adopted to bring the initial risk down to the acceptable risk threshold, so that the conclusion the FRAIA must reach is whether, given the initial risks and applying the appropriate measures or controls, the residual risk can be kept below the acceptable risk (a minimal sketch of this comparison follows at the end of this article).

5. Communication to the authority

Once the FRAIA has been carried out, "the deployer shall notify the market surveillance authority of the results of the assessment, submitting the completed template referred to in paragraph 5 as part of the notification" (Article 27.3).

6. Possible communication to stakeholders and publication

The RIA does not contemplate communicating FRAIA results to stakeholders or publishing them, but if the aim is to achieve trust, transparency and, in the case of the public sector, citizen participation, then publication and communication to stakeholders should perhaps be considered good practice, and could help improve systems, for example by reducing bias. Since some information may be unsuitable for publication for reasons of business confidentiality, intellectual property or security, a summarised or redacted version could be published instead. ◾
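Returning to the risk-management logic described in Phase 4, the comparison between initial risk, controls and the acceptable threshold could be recorded along these lines. This is a purely illustrative sketch: the field names and scoring scale are my own assumptions, not something prescribed by Article 27.

```python
from dataclasses import dataclass, field

@dataclass
class FundamentalRightsRisk:
    right_affected: str
    description: str
    initial_risk: int            # e.g. likelihood x impact on a 1-25 scale
    controls: list[str] = field(default_factory=list)
    residual_risk: int = 0       # risk remaining once controls are applied

ACCEPTABLE_THRESHOLD = 6         # assumed organisational risk appetite

register = [
    FundamentalRightsRisk(
        right_affected="Non-discrimination",
        description="Credit-scoring model penalises a protected group",
        initial_risk=20,
        controls=["Bias testing per release", "Human review of rejections"],
        residual_risk=4,
    ),
    FundamentalRightsRisk(
        right_affected="Data protection",
        description="Training data contains unnecessary personal data",
        initial_risk=12,
        controls=["Data minimisation", "DPIA aligned with the FRAIA"],
        residual_risk=8,
    ),
]

for risk in register:
    verdict = ("acceptable" if risk.residual_risk <= ACCEPTABLE_THRESHOLD
               else "needs further measures")
    print(f"{risk.right_affected}: residual {risk.residual_risk} -> {verdict}")
```

The RIA does not prescribe any particular scale or tooling; the point is simply that the FRAIA should document, for each right affected, the initial risk, the controls applied and whether the residual risk falls within what the organisation considers acceptable.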
Generative AI as part of business strategy and leadership
The arrival of Generative Artificial Intelligence (AI) has been a turning point at every level, not only because of the possibilities it offers but also because of its mass accessibility. Practically all the major technology companies are currently working on Generative AI solutions, and it is already integrated into several products (for example, Bard in Google's ecosystem and ChatGPT-based features in Bing and Office). The defining feature of Generative AI is its ability to create new content: to provide a solution or result for problems or situations it has not been specifically taught or trained on. The question for businesses should not be whether to adopt Generative AI, but how to adopt it.

We have already talked about the paradigm shift that Generative AI represents for both society and industry. A study by KPMG states that 77% of business leaders believe that Generative AI is the emerging technology that will have the biggest impact on business in the next three to five years, ahead of other technological capabilities such as advanced robotics, quantum computing, augmented reality/virtual reality (AR/VR), 5G and Blockchain. The same study shows that, despite all the excitement around Generative AI, most business leaders do not feel ready to embrace the technology or realise its full potential: 69% expect to spend the next 6-12 months focused on improving their understanding of the goals and strategies for generative adoption as a top priority.

In this article we will focus on the issues that businesses need to consider when deciding to use Generative AI-based tools, in order to reap the benefits while managing the risks that come with them.

Guidance for implementing Generative AI-based tools in companies (such as ChatGPT)

The team leading the Govertis Emerging Technologies Competence Centre (part of Telefónica Tech) has observed a significant lack of planning and analysis when organisations decide to incorporate this type of solution into their catalogue of corporate tools. It has therefore designed a roadmap to guide any entity that is considering implementing ChatGPT or other Generative AI-based solutions. This is not a decision that can be taken in isolation: it must be taken in a considered manner, developing a plan and implementing a governance system that allows technical, legal and reputational risks to be managed. Along these lines, PwC's study The power of AI and generative AI: what boards should know (2023) highlights the importance of developing a board-level approach to AI, precisely because there are risks that must be supervised and managed at the level of corporate strategy.

It is also worth mentioning the Directive on measures to ensure a high common level of cybersecurity, repealing Directive (EU) 2016/1148 and known as the NIS 2 Directive, which addresses the shortcomings of the original NIS Directive in the face of the new challenges arising from the digital transformation of society, albeit with its own specific scope of application. Among its new features, it makes the management bodies of essential and important entities responsible for cybersecurity risk management: they must supervise its implementation and can be held liable if their entities fail to comply with these obligations.
It even establishes the obligation for the members of the management bodies of essential and important entities to attend training courses on the subject.

Therefore, when integrating a Generative AI solution in the company, in our opinion the following issues should be addressed:

1. Holistic approach and top-level leadership. AI is not an issue to be addressed by a particular department or area of the company; it must be led at the highest level and addressed on the basis of a corporate strategy that establishes the means to manage the risks arising from its implementation and oversees their management.

2. Implementing the necessary foundations. Adopting a Generative AI tool necessarily involves a series of prior steps that lay the groundwork for its use, at the level of management strategy, of the technological capabilities that can support such a system, and of staff training and equipment configuration. Internal communication is very important, so that all company personnel are aware of and participate in the corporate strategy and the rules of use of these tools.

3. Governance system. The adoption of a Generative AI tool entails multiple risks of different natures, which must be managed periodically and systematically. This involves establishing the necessary human and technological foundations to enable decision-making, and implementing a governance system to manage all the associated risks.

4. Regulatory compliance. Multiple sets of rules converge in a governance system, and complying efficiently with this regulatory map is not an easy task. Clear coordination must therefore be established between the various roles responsible for compliance, such as the Data Protection Officer, the Compliance Officer and the information security and cybersecurity director (CISO), and between their respective areas of action. It is worth highlighting at this point the proposal for a European Regulation on Artificial Intelligence, which establishes different regulatory requirements depending on the AI system adopted; for Generative AI solutions, the current text establishes obligations very similar to those required for high-risk systems.

5. Risks to be managed. The main risks that need to be addressed include, without being exhaustive, the following:

Cyber security: With the exponential increase in security vulnerabilities and their sophistication, cyber security is at the forefront of risk management.

Data protection: It is essential to establish the necessary rules regarding the information that may be fed into the Generative AI system, to ensure that personal data is not included.

Intellectual and industrial property, trade secrets: Beyond data protection compliance, the company's information is a very important asset that must be adequately protected, and its protection must be ensured when using this technology.

Legal liability: The use of Generative AI may raise intellectual property issues due to the unauthorised use of content protected by intellectual property laws. In fact, Microsoft has recently announced the extension of its AI customer commitments to cover intellectual property claims arising from the use of Copilot.

Incorrect or biased information: It is essential to train employees in the use of the Generative AI tools made available to them, through clear rules of use that specify in which cases they can be used and how to use and interpret their results.
Reputational risks: In the event of a problem with a direct impact on the entity's reputation, there must be plans in place to manage the crisis, especially at the communication level.

There is no doubt about the possibilities Generative AI offers at the corporate level, so the question should not be whether to embrace this technological advance, but how to approach it. It is therefore essential that the governing bodies of companies lead and supervise the adoption and implementation of this type of tool, developing the appropriate strategy, providing the necessary resources and relying on the advice and supervision of the relevant internal managers and, where required, external professionals.