Fundamental rights impact assessments on high-risk AI systems in the RIA

May 20, 2024

1. Introduction

The Artificial Intelligence Regulation (RIA) states among its purposes "to improve the functioning of the internal market" and "to promote the adoption of human-centered and trustworthy artificial intelligence", adding that this should be done "while ensuring a high level of protection of health, safety and fundamental rights as enshrined in the Charter". This is why references to fundamental rights appear throughout the Regulation.

The purpose of this article is to take a high-level look at one of the obligations the RIA provides for managing the risks that Artificial Intelligence (AI) may pose to fundamental rights: the Fundamental Rights Impact Assessment on certain high-risk AI systems (also known as an Algorithmic Impact Assessment, AIA, or a fundamental rights and algorithm impact assessment, FRAIA, the abbreviation we will use).

These are intended to enable the deployer to identify specific risks to the rights of individuals or groups of individuals who may be affected, and to identify measures to be taken in the event that these risks materialize.

We will not address the various intersections with data protection that will be the subject of other articles in this series, some of which also concern FRAIAs and PIAs (Privacy Impact Assessment).

Although the recent rise of AI has made a European-level regulation necessary (the RIA, introduced in the first article of this series, "An introduction to the Regulation on Artificial Intelligence (RIA)"), human rights protection has a much longer history, and there are precedents for impact assessments: human rights impact assessments (HRIA), social impact assessments (SIA), ethical impact assessments (EIA), as well as assessments focused on specific rights, such as the well-known example of the protection of personal data. Methodologies and tools specifically applied to AI systems already exist as well, but the RIA has opted for its own model in Article 27: the FRAIA.

The RIA is based on a risk-based approach that consists, in short, of "prohibiting certain unacceptable artificial intelligence practices, establishing requirements for high-risk AI systems and obligations for the relevant operators, and establishing transparency obligations for certain AI systems". The risk management system required for any high-risk AI system (Article 9 RIA) obliges the provider to document the risks, but these are naturally assessed at the level of what is reasonably foreseeable, rather than the specific use case of the deploying company, and must be kept up to date through post-market monitoring.

The FRAIA (Article 27), on the other hand, does seem to be aimed at analyzing the risks of the specific use case, although only in connection with the exercise of public functions and the other specific cases indicated below.

Having made this introduction, let us focus on the FRAIAs.

2. Who is obliged to carry out FRAIAs?

Regarding those obliged to carry it out, the RIA makes clear that this obligation applies only to certain deployers:

  1. On the one hand, it must be carried out by bodies governed by public law (Spanish Laws 39/2015 and 40/2015 are important to take into account here) in respect of all high-risk AI systems they deploy.
  2. On the other hand, by private operators providing public services (again, Laws 39/2015 and 40/2015 are relevant) in respect of the high-risk AI systems used to provide those public services. Recital 96 gives some examples, but these cannot be understood as a numerus clausus.
  3. Finally (regardless of the public or private nature of the entity), and by reason of the purpose of the systems, by deployers of the high-risk systems referred to in point 5, letters (b) and (c), of Annex III, namely:
    1. AI systems intended to be used to assess the creditworthiness of natural persons or establish their credit score, with the exception of AI systems used for the purpose of detecting financial fraud. Recital 96 gives "banking or insurance institutions" as examples.
    2. AI systems intended to be used for risk assessment and pricing in relation to natural persons in the case of life and health insurance.

3. When should the FRAIA be performed?

Article 27(1) of the RIA states that the FRAIA must be performed "before deploying a high-risk AI system" and Article 27(2) adds that the FRAIA "applies to the first use of the high-risk AI system".

Article 27(2) also adds: "If, during the use of the high-risk AI system, the deployer considers that any of the factors listed in paragraph 1 change or are no longer up to date, the deployer shall take the necessary steps to update the information".

4. Proposed steps to be taken in a FRAIA

We will now comment briefly on a series of suggested steps for performing a FRAIA. The FRAIA is itself a process with several phases arranged in a PDCA cycle (plan-do-check-act), which in turn can be integrated into the action plan of an artificial intelligence management system; that system, in turn, can and usually will be integrated into other management systems, for example an information security management system (such as ISO 27001 or the ENS). Here, however, we focus on the PDCA cycle that constitutes the FRAIA itself.

Preliminary Phase: A PreFRAIA?

The need to delimit the scope of the rights possibly affected by the AI system suggests carrying out a PreFRAIA, much as in data protection we have been carrying out prePIAs: not only to delimit the rights affected, but also to consider whether the relevance or weight of any of them in the project warrants a specific impact assessment on that matter, separate from the general FRAIA. That said, in my opinion we should aim to carry out integrated FRAIAs where possible.

Phase 1: Preliminary analysis of the need for a FRAIA and specification of the systems and rights affected (initial scoping)

First of all, we need an inventory of the systems within scope. The systems subject to a FRAIA are the high-risk ones.

As for the analysis of the fundamental rights to focus on, the assessment can be performed on the basis of knowledge of the AI system and the RIA, and at the operational level with a checklist, without much field work or stakeholder involvement.
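
As a purely illustrative aid, the sketch below shows how such an inventory and initial-scoping checklist might be represented in code. The field names, the sample system and the selection of Charter rights are assumptions made for this example, not terms or lists taken from the RIA.

```python
from dataclasses import dataclass, field

# Illustrative inventory entry and initial-scoping checklist.
# Field names and the sample rights are assumptions for this sketch,
# not terminology drawn from the RIA itself.

@dataclass
class AISystemRecord:
    name: str
    purpose: str
    annex_iii_category: str                 # e.g. "5(b) creditworthiness"
    is_high_risk: bool
    deployer_type: str                      # e.g. "public-law body", "private credit institution"
    rights_potentially_affected: list[str] = field(default_factory=list)

# A minimal, non-exhaustive checklist of Charter rights commonly screened at this stage.
CHARTER_RIGHTS_CHECKLIST = [
    "non-discrimination",
    "protection of personal data",
    "respect for private life",
    "consumer protection",
    "good administration (public sector)",
]

def needs_fraia(system: AISystemRecord) -> bool:
    """Rough initial-scoping filter: only high-risk systems enter the FRAIA pipeline."""
    return system.is_high_risk

inventory = [
    AISystemRecord(
        name="credit-scoring-v2",
        purpose="Assess creditworthiness of natural persons",
        annex_iii_category="5(b)",
        is_high_risk=True,
        deployer_type="private credit institution",
        rights_potentially_affected=["non-discrimination", "protection of personal data"],
    ),
]

for system in filter(needs_fraia, inventory):
    print(f"{system.name}: scope FRAIA over {system.rights_potentially_affected}")
```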

Phase 2: Context, planning and in-scope detail

Before starting a FRAIA we must have the necessary information about the context of the AI system, and determine the team that will carry out the assessment, the methodology and the sources of requirements to be used. In addition, although not required by the RIA, given the possibility of conducting a DPIA (Data Protection Impact Assessment) jointly with a FRAIA, at least a description of the processing of personal data should be available.

Phase 3: Necessity, proportionality, and data quality

For high-risk AI systems that process personal data, an analysis of necessity and proportionality is already required; the RIA, however, does not provide for this obligation in FRAIAs.

However, in my opinion, and as some methodologies already do, all high-risk AI systems (whether or not they process personal data) should include a "moment" in which necessity and proportionality are analyzed, assessing the considerations behind the decision to implement the AI system: the criteria adopted for deciding to use such a high-risk AI system, why that particular AI system was chosen and not another, the non-algorithmic alternatives that were considered, and a prior weighing of the benefits and sacrifices involved (a weighing that differs greatly between the public and private spheres), among others.

Phase 4: Risk management

Risk management is the central part of any impact assessment and, therefore, of the FRAIA as well. Depending on the nature of each risk, safeguards or controls must be adopted that lower the initial risk to the acceptable risk threshold. The conclusion the FRAIA must reach is whether, given the initial risks and applying the appropriate measures or controls, the residual risk can be brought below the acceptable risk.
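
As a minimal sketch of this logic, the example below scores an initial risk, applies controls, and checks whether the residual risk falls below an acceptable threshold. The 1-to-5 likelihood and severity scale, the mitigation values and the acceptable level are illustrative assumptions; the RIA does not prescribe any particular scoring method.

```python
from dataclasses import dataclass

# Sketch of the risk logic described above: an initial risk, controls that mitigate
# it, and a check that the residual risk falls below the acceptable threshold.
# The scoring scale and mitigation values are illustrative assumptions only.

@dataclass
class Risk:
    description: str
    likelihood: int      # 1 (rare) .. 5 (almost certain)
    severity: int        # 1 (negligible) .. 5 (critical)

    @property
    def level(self) -> int:
        return self.likelihood * self.severity

@dataclass
class Control:
    description: str
    likelihood_reduction: int = 0
    severity_reduction: int = 0

def residual_risk(risk: Risk, controls: list[Control]) -> Risk:
    """Apply the controls to obtain the residual risk (scores floored at 1)."""
    likelihood = max(1, risk.likelihood - sum(c.likelihood_reduction for c in controls))
    severity = max(1, risk.severity - sum(c.severity_reduction for c in controls))
    return Risk(risk.description, likelihood, severity)

ACCEPTABLE_RISK_LEVEL = 6   # illustrative threshold set by the deployer

initial = Risk("Discriminatory credit-score outcomes for a protected group", 4, 4)
controls = [
    Control("Bias testing on representative data before first use", likelihood_reduction=2),
    Control("Human review of adverse decisions", severity_reduction=2),
]

residual = residual_risk(initial, controls)
print(f"Initial level: {initial.level}, residual level: {residual.level}")
print("Acceptable" if residual.level <= ACCEPTABLE_RISK_LEVEL else "Further measures needed")
```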

5. Communication to the authority

Once the FRAIA has been carried out, "the person responsible for the deployment shall notify the market surveillance authority of the results of the assessment, submitting the completed template referred to in paragraph 5 as part of the notification" (Article 27(3)).

6. Possible communication to stakeholders and publication

The RIA does not contemplate communication to stakeholders or publication of FRAIA results. However, if the aim is to achieve trust, transparency and, in the case of the public sector, citizen participation, publication and communication to stakeholders should perhaps be considered good practice, as they could contribute to improving the systems, for example by reducing bias.

Since some information may not or should not be published, whether for reasons of business confidentiality, intellectual property or security, a summarized or redacted version could be published instead.

CONTINUING THIS SERIES

The application scope of the Artificial Intelligence Regulation (AIR)
AI practices forbidden in the Artificial Intelligence Regulation (AIR)
High-Risk AI systems in the Artificial Intelligence Regulation (AIR)
