Bias in AI (VII): The human factor
As we conclude this series of articles addressing bias in Artificial Intelligence (AI) systems, this final entry explores the risks rooted in human factors—how humans can be both the source of these biases and a key part of the solution.
As discussed throughout the series, human input lies at the core of AI bias, yet paradoxically, the “human element” is also essential for reducing the associated risks.
Below we summarise the main types of human error and the oversight mechanisms that can help mitigate them:
- Programming errors: Developers may inadvertently introduce defects that cause unexpected behaviour. Human oversight can take the form of peer code reviews and thorough testing before systems are deployed.
- Faulty training data: AI systems may learn from incorrect or insufficient datasets, resulting in biased patterns. Expert review can help ensure data quality and adequacy.
- Misinterpretation of results: Poor understanding of AI outputs can lead to incorrect decisions. In these cases, comprehensive training for decision-makers and clear documentation of both processes and outcomes can support better-informed decision-making.
Let’s look at a medical diagnosis example:
An AI system suggests a 75% probability of a disease. If a doctor treats that figure as certainty, they might prescribe an overly aggressive treatment. To prevent this, specialised training and accessible guidance materials should explain what such a probability actually means, enabling doctors to better understand the AI's output and make more informed decisions (a brief illustrative sketch follows after the next point).
- Data protection breaches: Human error can also lead to AI systems collecting or using personal data in ways that breach data protection laws. Here, specialist roles such as Data Protection Officers play a vital part in ensuring compliance.
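Returning to the 75% example, the short Python sketch below is purely illustrative: the wording, threshold, and function name are assumptions, not part of any real clinical tool. It shows one way a decision-support interface could present a model probability as a frequency-based estimate with a suggested next step, rather than as a verdict.

```python
# Illustrative sketch only: hypothetical wording and threshold, not a real clinical tool.

def present_risk(probability: float, review_threshold: float = 0.5) -> str:
    """Turn a raw model probability into a hedged message for the clinician."""
    if not 0.0 <= probability <= 1.0:
        raise ValueError("probability must be between 0 and 1")

    out_of_100 = round(probability * 100)
    message = (
        f"The model estimates a {probability:.0%} probability of the condition. "
        f"This is a statistical estimate, not a confirmed diagnosis: roughly "
        f"{100 - out_of_100} in 100 comparable patients would not have the condition."
    )
    if probability >= review_threshold:
        message += " Suggested next step: confirmatory testing before any treatment decision."
    else:
        message += " Suggested next step: routine follow-up."
    return message

# The 75% case from the example above.
print(present_risk(0.75))
```

Framing the output this way supports the training and documentation measures described above: the clinician sees what the number means, not just the number.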
Of course, there are other examples where human error affects AI outcomes—and where human oversight can help manage these impacts. As we’ve already noted, while the human element is often at the root of bias, it can also be instrumental in reducing risk. This can happen in two key ways:
- Through human oversight.
- Via the involvement of relevant stakeholders and professionals.
Human oversight
According to Article 14 of the AI Act, human oversight is mandatory for high-risk systems, and the extent of this oversight must be proportionate to the level of risk, system autonomy, and context.
To that end, the individuals responsible for oversight must be able, “according to the circumstances and proportionately,” to:
a) Understand the system’s capabilities and limitations
b) Avoid automation bias
c) Correctly interpret outputs
d) Override or dismiss erroneous results
e) Halt the system in case of anomalies
In certain cases, this human oversight is reinforced. For instance, for the remote biometric identification systems listed in point 1(a) of Annex III of the AI Act [1], no action or decision may be taken on the basis of the system's identification unless it has been separately verified and confirmed by at least two qualified individuals.
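As a purely illustrative sketch of how such a "four-eyes" check might be enforced in software (the class and method names below are assumptions, not anything prescribed by the Regulation), an identification could simply be blocked from triggering any action until two distinct reviewers have confirmed it:

```python
# Illustrative sketch only: a simple "four-eyes" gate, not a prescribed implementation.
from dataclasses import dataclass, field

@dataclass
class IdentificationResult:
    subject_id: str
    model_confidence: float
    confirmations: set = field(default_factory=set)  # identifiers of qualified reviewers

    def confirm(self, reviewer_id: str) -> None:
        """Record an independent confirmation from a qualified reviewer."""
        self.confirmations.add(reviewer_id)

    def actionable(self, required_reviewers: int = 2) -> bool:
        """Allow downstream action only once enough distinct reviewers have confirmed."""
        return len(self.confirmations) >= required_reviewers

result = IdentificationResult(subject_id="case-0042", model_confidence=0.91)
result.confirm("reviewer_a")
print(result.actionable())  # False: a single confirmation is not enough
result.confirm("reviewer_b")
print(result.actionable())  # True: two independent confirmations recorded
```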
Now that we've examined the human factor in general, let’s focus specifically on its role in AI bias, considering the stage of the AI lifecycle in which it arises and the corresponding potential solutions—drawing on guidance from bodies such as NIST and the Rhite framework:
1. Pre-design
- Define goals with input from ethics and human rights experts.
- Avoid abstraction pitfalls (e.g., solutionism).
2. Design and development
- Adopt a deliberate and cautious approach in collaboration with experts and users.
- Monitor for construct, labelling, and algorithmic bias.
3. Verification and validation
- Include diverse user groups in usability testing.
- Train developers and users to recognise and mitigate bias.
- Establish continuous feedback mechanisms.
- Use simulations across multiple contexts.
4. Deployment
- Ensure the real-world environment aligns with the training context.
- Monitor for implementation bias.
5. Monitoring and reevaluation
- Counter human biases such as the sunk cost fallacy or status quo bias.
- Perform regular validations (a simple example of such a check is sketched after this list).
6. Decommissioning
- Bias, such as historical or legacy bias, can persist even at this stage, depending on how decision-makers handle the system's phase-out.
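To illustrate what the "regular validations" in stage 5 might look like in practice, the sketch below uses hypothetical data and a deliberately simple metric: it compares favourable-outcome rates across groups and flags any group whose rate falls well below the best-performing one, so a human can review the gap. Real monitoring would of course rely on richer fairness metrics and statistical testing.

```python
# Illustrative sketch only: hypothetical data and a deliberately simple disparity check.
from collections import defaultdict

def selection_rates(records):
    """records: iterable of (group, favourable_outcome) pairs."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, favourable in records:
        totals[group] += 1
        positives[group] += int(favourable)
    return {group: positives[group] / totals[group] for group in totals}

def flag_disparities(rates, threshold=0.8):
    """Flag groups whose rate is below `threshold` times the highest rate
    (a simplified version of the commonly cited 'four-fifths' rule of thumb)."""
    best = max(rates.values())
    return {group: rate / best for group, rate in rates.items() if rate / best < threshold}

# Hypothetical monitoring batch: (group label, did the system produce a favourable outcome?)
batch = [("A", True), ("A", True), ("A", False),
         ("B", True), ("B", False), ("B", False)]
rates = selection_rates(batch)
print(rates)                    # group A ≈ 0.67, group B ≈ 0.33
print(flag_disparities(rates))  # {'B': 0.5} -> group B flagged for human review
```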
Involving experts and stakeholders
The individuals or groups responsible for decision-making in AI systems—especially during the pre-design and design stages—may have limited perspectives. To reduce this risk, it's essential to include a diverse range of stakeholders, considering aspects such as race, gender, age, and physical ability.
The AIA includes general guidance on this, such as encouraging member states to foster AI development that enhances accessibility, addresses socio-economic inequalities, and supports environmental sustainability. Achieving this requires interdisciplinary cooperation between:
- AI developers.
- Experts in inequality and non-discrimination.
- Accessibility and consumer rights specialists.
- Environmental, digital, and academic professionals.
More concretely, the AIA sets out two specific obligations:
- Risk management (Recital 65): Providers must document the chosen mitigation measures based on the state of the art and include, “where appropriate,” the input of external experts and stakeholders.
- Fundamental Rights Impact Assessments (FRAIA) (Recital 96): Especially relevant for the public sector, this process should involve representatives of potentially affected groups, independent experts, or civil society organisations—both in the assessment and in designing mitigation actions.
Although the final version of the AIA softens some requirements—for instance, no longer mandating that supervisory authorities be notified or that results be published—maintaining these practices remains a good idea to ensure transparency and build trust.
When focusing on expert and specialised roles, some will always be necessary, while others will depend on the specific AI system in question.
Some roles will be subject-matter experts, while others will provide a cross-functional perspective based on their domain knowledge.
Participation intensity will also vary: some roles will be involved throughout the AI lifecycle, while others will only intervene at specific points (e.g. accessibility reviews or usability testing). One strong example of this grounded approach is offered by the FRAIA [2] framework, which recommends involving not just the project lead (an obvious inclusion) and the domain expert (i.e., the business owner of the AI system), but also the legal advisor across all phases.
In my view, data scientists should also be involved throughout the entire process, as their understanding of AI’s capabilities and limitations is essential for assessing and managing risks to fundamental rights. Ethical advisors also play a growing role—several companies have now appointed ethics officers or committees and adopted additional ethical guidelines, usually aligned with international principles.
There’s no doubt that ethics must be considered, though this is not a formal requirement of the AIA. It will ultimately depend on how broad a scope is defined for impact assessments—whether limited to human rights or extended to ethical dimensions as well.
Another aspect to consider is whether the AI system is for internal use. In these cases, internal roles and departments must be involved, although external advisers may also be consulted. Stakeholder involvement is always necessary, as noted in Recital 64 [3] of the AI Act, echoing Article 35.9 of the GDPR [4]. In the context of AI, and according to ISO/IEC 42001:2023 and ISO/IEC 22989:2022, stakeholders are defined as “a person or organisation that can affect, be affected by, or perceive themselves to be affected by a decision or activity.”
Lastly, if the AI system is developed as a product for customers, the project team should reflect the nature of the service and its users. For example, an AI-based educational assistant (as in the well-known Hello Barbie case) might require input from educational psychologists to ensure the technology responds appropriately to learning environment needs.
Conclusion
In conclusion, this series has shown that beyond the technical sophistication of AI, it is human commitment, diverse perspectives, and expert-user collaboration that ultimately ensure fairer, safer, and more value-aligned systems. Only through transparent governance, responsible oversight, and inclusive participation can we unlock the full potential of artificial intelligence while upholding the rights and ethical standards that define our society.
______
1. ANNEX III. High-risk AI systems under Article 6(2) are those that fall within any of the following areas:
1. Biometrics, insofar as their use is permitted by applicable Union or national law:
a) Remote biometric identification systems.
Excluded are AI systems intended solely for biometric verification purposes, whose only aim is to confirm that a specific natural person is who they claim to be.
2. FRAIA identifies a variety of roles depending on the phase. The roles mentioned include: Interest Group, Management, Citizen panel, CISO or CIO, Communications specialist, Data scientist, Data controller or data source owner, Data protection officer, HR staff member, Domain Expert, Legal Advisor, Algorithm developer, Commissioning client, Project leader, Strategic ethics consultant, and Other project team members.
3. Recital 64a of the AIA states that, when identifying the most appropriate risk management measures, the provider shall document and explain the decisions made and, where appropriate, involve external experts and stakeholders.
4. Article 35.9 of the GDPR: “Where appropriate, the controller shall seek the views of data subjects or their representatives on the intended processing, without prejudice to the protection of commercial or public interests or the security of processing operations.”