AI Biases (VI): Introduction of risks in the life cycle of AI systems (part 2)

April 22, 2025

Following on from the previous article, where we explored the initial phases of the AI life cycle, in this second part we address the phases that begin with the implementation of the AI system: deployment, operation, continuous validation, re-evaluation and, finally, its withdrawal.

Phase 4. Deployment or implementation phase

In this phase, those responsible for deployment move the system from development into the production environment and begin working with the technology in real conditions.

Implementation bias occurs when the system is deployed in an environment that does not reflect its training conditions, causing it to behave in a biased manner. For example, a machine translation system trained mainly on formal texts may perform poorly on colloquial language. Abstraction traps are also typical of this phase.
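One way to surface this kind of mismatch is to compare the distribution of production inputs against the distribution seen during training. The sketch below, a minimal illustration with invented data and an invented alert threshold, hand-rolls a two-sample Kolmogorov-Smirnov statistic over sentence lengths, following the machine translation example:

```python
# Sketch: detecting a train/production distribution mismatch with a
# hand-rolled two-sample Kolmogorov-Smirnov statistic (stdlib only).
# The samples and the 0.2 alert threshold are illustrative assumptions.

def ks_statistic(sample_a, sample_b):
    """Largest gap between the empirical CDFs of two samples."""
    a, b = sorted(sample_a), sorted(sample_b)
    max_gap = 0.0
    for x in sorted(set(a) | set(b)):
        cdf_a = sum(v <= x for v in a) / len(a)
        cdf_b = sum(v <= x for v in b) / len(b)
        max_gap = max(max_gap, abs(cdf_a - cdf_b))
    return max_gap

# Sentence lengths seen in training (formal texts) vs production (colloquial)
train_lengths = [18, 22, 25, 19, 30, 27, 21, 24]
prod_lengths = [5, 8, 6, 12, 7, 9, 4, 10]

gap = ks_statistic(train_lengths, prod_lengths)
if gap > 0.2:  # illustrative alert threshold
    print(f"Possible implementation bias: distribution gap {gap:.2f}")
```

A large gap does not prove bias by itself, but it is a cheap signal that the production environment differs from the training conditions and that a closer review is warranted.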


Phase 5. Operation and monitoring phase

At this stage, with the system operating in production, constant supervision and adjustment of hardware, software, algorithms and data are required to maintain optimal performance.

Systems that use continuous learning, such as virtual assistants and autonomous vehicles, learn and update continuously based on user interactions and new experiences. This constant learning can increase the risk of introducing or amplifying biases compared with systems based on predefined rules that do not learn continuously.


A critical challenge at this stage is the reinforcement feedback loop: when an AI system is retrained on data containing uncorrected biases, those biases are perpetuated and amplified in future decisions; automation bias, for example, can have a multiplier effect here. Continuous feedback mechanisms must therefore be established to identify potential biases and correct them in real time.
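The amplifying effect of such a loop can be made concrete with a toy simulation. In the sketch below, entirely illustrative in its groups, rates and round count, a model that approves one group slightly more often is retrained only on its own approved cases, so the initial imbalance grows with each cycle:

```python
# Sketch of a reinforcement feedback loop: a model retrained only on its
# own (biased) positive decisions amplifies the initial imbalance round
# after round. Groups, rates and round count are illustrative assumptions.

def retrain_round(share_a, bias=0.1):
    """One retraining cycle: group A is approved slightly more often,
    and only approved cases enter the next training set."""
    approved_a = share_a * (0.5 + bias)        # group A approval mass
    approved_b = (1 - share_a) * (0.5 - bias)  # group B approval mass
    return approved_a / (approved_a + approved_b)  # A's share next round

share = 0.5  # start with a balanced training set
for round_no in range(1, 6):
    share = retrain_round(share)
    print(f"round {round_no}: group A share = {share:.2f}")
```

Even a small initial bias compounds monotonically, which is why the text insists on identifying and correcting biases before each retraining cycle rather than after the fact.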

Phase 6. Ongoing validation

'Ongoing validation' consists of regularly evaluating the model with new data to check whether it is still accurate.

Continuous validation can be carried out even in AI systems where continuous learning does not apply, for example to "detect deviations of data, of concepts or to detect any technical malfunction" (ISO/IEC 5338). It is especially relevant when new data arrives, which makes it fundamental in continuous learning scenarios, where retraining takes place even if it is not explicit.

In systems with continuous learning, the models integrate new data continuously without explicit retraining, so it is essential both to check the consistency of the production data with the initial training data and to keep the test data itself up to date.
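In practice, ongoing validation can be as simple as periodically scoring the deployed model on freshly labelled data and alerting when accuracy falls below a tolerance band around the baseline measured at initial validation. The sketch below is a minimal illustration; the dummy model, the data and the thresholds are all assumptions:

```python
# Sketch of ongoing validation: score the deployed model on fresh
# labelled data and flag possible concept drift when accuracy falls
# below a tolerance band around the initial validation baseline.
# The dummy model, data and thresholds are illustrative assumptions.

BASELINE_ACCURACY = 0.90   # accuracy measured at initial validation
TOLERANCE = 0.05           # allowed degradation before raising an alert

def accuracy(model, batch):
    return sum(model(x) == y for x, y in batch) / len(batch)

def check_batch(model, fresh_batch):
    acc = accuracy(model, fresh_batch)
    drifted = acc < BASELINE_ACCURACY - TOLERANCE
    return acc, drifted

# Dummy model: classifies a score as positive above a fixed cut-off
model = lambda x: int(x >= 10)

# Fresh production data whose relationship to the labels has shifted
fresh_batch = [(12, 1), (8, 0), (11, 0), (9, 1), (13, 1), (7, 0)]
acc, drifted = check_batch(model, fresh_batch)
print(f"accuracy on new data: {acc:.2f}, drift suspected: {drifted}")
```

When the alert fires, the appropriate response depends on the cause: refreshed test data, retraining, or a deeper re-evaluation of the system.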

The main biases in this phase are thus those of the data, among which representation, selection, measurement, labelling and proxy biases stand out, so special attention must be paid to measures for managing them in this phase.


Phase 7. Re-evaluation

Unlike monitoring and continuous validation, which involve constant adjustments with the purposes described above, re-evaluation is a deeper and more exhaustive process.

Apart from the evaluation biases and abstraction traps that we already know about, and which in these phases can help refine the system through decisions, there are several biases specific to this phase: the sunk cost fallacy (continuing to invest resources in a past decision because of the investments already made, even though abandoning it would be more beneficial), and the status quo bias (a preference for maintaining the current situation, avoiding change even when the alternatives might be more favorable).

Phase 8. Withdrawal

Even when it is decided to withdraw the system, which may happen for a variety of reasons (it does not serve its purpose, another solution has been found, it is deemed unfair, etc.), historical bias can persist: the system was trained on biased historical data, and that bias is replicated wherever the system is reused.

One example is news recommendation algorithms that prioritise the most relevant news, even though it may not be the most truthful or verified. Obviously, the bias will no longer affect the users of the withdrawn system, but it will affect other users who acquire or use that AI system.

In conclusion, it is important to identify the biases that can be introduced in the different phases of the AI life cycle in order to correct and mitigate them. Different types of biases can appear in each phase, and each must be addressed specifically according to the phase and the type of bias.