From AI adoption to regulation: 10 keys to understanding the impact of the Artificial Intelligence Act (AI Act) on businesses
Laura Vico, Senior Legal Advisor at Govertis, part of Telefónica Tech, and Deputy at the Privacy and Digital Regulation Competence Center, has been a certified Data Protection Officer since 2019. Passionate about technology and information security, she focuses her work on supporting companies in implementing the Artificial Intelligence Act (AI Act). Here she shares her key legal and practical insights, from its scope of application to its main obligations, risks, and recommendations.
- What is the first thing an organisation starting to use AI in its processes should know?
- What is the scope of application of the AI Act?
- When will compliance with the AI Act be required?
- What are the consequences of non-compliance with the AI Act?
- What is the purpose of the AI Act?
- What type of professionals should be involved in the adoption of AI in an organisation?
- How does the AI Act classify artificial intelligence systems?
- Is there a specific obligation in the AI Act to assess the protection of fundamental rights?
- If we already have a prior DPIA, can we use it to carry out the FRIA?
- What are the recommendations for organisations that are going to use or develop AI to comply with the AI Act?
1. What is the first thing an organisation starting to use AI in its processes should know?
We are increasingly seeing both public and private entities incorporating Artificial Intelligence (AI) systems into their business processes. This represents a major advance, for example, in automating and streamlining procedures that were previously done manually.
This decision, however, must be supported from the outset by appropriate legal and technical advice, as well as a detailed analysis of the compliance requirements, primarily under Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024, laying down harmonised rules on artificial intelligence (hereinafter, ‘AIA’), as well as any other sectoral and/or national regulations, including regional ones, that may apply to the specific case.
The decision to incorporate AI into an organisation's processes must be supported from the outset by appropriate legal and technical advice.
Among other relevant aspects, this analysis should include a correct understanding of the role the entity plays under the AIA, also known as the AI Act. That is, an entity that adopts an AI system developed by a third party into its own processes will most likely have to ensure compliance with the obligations the AIA assigns to deployers:
■ The AI Act (Recital 13, with the definition in Article 3(4)) defines a ‘deployer’ as any natural or legal person, including any public authority, body or agency, that uses an AI system under its own authority, except where the AI system is used in the course of a personal, non-professional activity.
2. What is the scope of application of the AI Act?
The scope of application of the AI Act is regulated in Article 2 and applies mainly to:
- Providers who place AI systems on the market or put them into service, or who place general-purpose AI models on the market in the Union, regardless of whether those providers are established or located in the Union or in a third country.
- Deployers of AI systems who are established or located in the Union.
- Providers and deployers of AI systems established or located in a third country, where the output produced by the AI system is used in the Union.
- Importers and distributors of AI systems.
- Product manufacturers who place an AI system on the market or put it into service together with their product and under their own name or trademark.
- Authorised representatives of providers not established in the Union.
- Affected persons located in the Union.
The AI Act applies to providers who place AI systems or general-purpose AI models on the market or put them into service in the Union.
3. When will compliance with the AI Act be required?
The Artificial Intelligence Regulation (AI Act) was published on 12 July 2024 and, as set out in Article 113, entered into force 20 days after its publication in the Official Journal of the European Union, that is, on 1 August 2024.
However, the AI Act applies gradually over time. The following key milestones may be useful as a reference:
- 2 February 2025: the prohibitions on unacceptable-risk practices and the AI literacy obligations apply.
- 2 August 2025: the obligations for general-purpose AI models, the governance rules and the penalties regime apply.
- 2 August 2026: the Regulation becomes generally applicable, including most obligations for high-risk AI systems.
- 2 August 2027: extended deadline for high-risk AI systems embedded in products regulated under Annex I.
4. What are the consequences of non-compliance with the AI Act?
In the event of non-compliance, and depending on the specific circumstances, administrative fines of up to €35,000,000 or, if higher, up to 7% of total worldwide annual turnover for the preceding financial year may be imposed.
In addition, Article 99 of the AI Act requires Member States to lay down rules on penalties and other enforcement measures, which may also include warnings and non-monetary measures.
Non-compliance with the AI Act may result in administrative penalties of up to €35 million or 7% of global turnover.
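Purely as an illustration of the "whichever is higher" rule (not legal advice; the function name and the example turnover figure are our own), the maximum fine ceiling can be expressed as a simple calculation:

```python
def max_fine_eur(worldwide_annual_turnover_eur: float) -> float:
    """Upper bound of the fine for the most serious infringements:
    up to EUR 35,000,000 or up to 7% of total worldwide annual
    turnover for the preceding financial year, whichever is higher."""
    return max(35_000_000, 0.07 * worldwide_annual_turnover_eur)

# Example: for an undertaking with EUR 1 billion in turnover,
# the 7% ceiling (EUR 70 million) exceeds the fixed EUR 35 million cap.
print(max_fine_eur(1_000_000_000))  # 70000000.0
```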
5. What is the purpose of the AI Act?
The purpose of the AI Act is set out in Article 1(1) of the Regulation: to "improve the functioning of the internal market and promote the uptake of human-centric and trustworthy artificial intelligence (AI), while ensuring a high level of protection of health, safety and fundamental rights enshrined in the Charter, including democracy, the rule of law and environmental protection, against the harmful effects of AI systems in the Union, and supporting innovation."
As this statement of purpose shows, the AI Act is built on an anthropocentric vision: technological advances in AI must always be accompanied by the protection of human beings, both in the development of AI itself and in the context of its use.
6. What type of professionals should be involved in the adoption of AI in an organisation?
Referring back to the previous question regarding the objective of the AI Act, we must remember that the AI Act itself takes a “broad” view, as it seeks to protect three fundamental pillars:
- Health
- Safety
- Fundamental Rights
■ For example: privacy professionals must bear in mind that the AI Act does not seek to protect a single fundamental right, such as the right to data protection (Article 18.4 of the Spanish Constitution and Article 8 of the Charter of Fundamental Rights of the European Union), but rather to take into account the impact on, and possible risks to, any fundamental right that may be affected.
—Let us imagine a case in which AI is used for job offers and the management of job applications. In this scenario, fundamental rights such as the right to equality and non-discrimination, among others, must be guaranteed.
Likewise, cyber security professionals must consider safety and security in a broad sense, including the possible risks and implications for people's health and fundamental rights.
For all these reasons, I consider multidisciplinary teams within the company essential to provide this broad vision of protection and to cover all the technical and legal aspects. It would also be advisable for everyone to be trained in the most cross-cutting aspects (such as the AI Act itself), so as to allow greater understanding between the parties, a shared ‘language’ and common objectives, such as regulatory compliance.
7. How does the AI Act classify artificial intelligence systems?
The AI Act classifies artificial intelligence systems into four levels of risk, following the approach mentioned above based on the potential impact on fundamental rights, safety and human health.
The classification is as follows:
- Unacceptable risk: these are considered particularly harmful and contrary to the values of the European Union and are therefore prohibited by the AI Act itself.
—For example: social scoring systems or systems that enable emotion recognition in the workplace and in educational establishments.
- High risk: these can significantly affect people's fundamental rights, safety and health. Although not prohibited, they are subject to a series of fairly demanding obligations (record keeping, human oversight, technical documentation, etc.).
—For example: AI systems focused on detecting prohibited behaviour during exams.
- Limited risk: these are primarily subject to transparency obligations.
—For example: chatbots or synthetic content generators (such as generative AI).
- Minimal or no risk: these are not subject to specific restrictions under the AI Act. Virtual assistants or anti-spam filters fall into this category.
Finally, it is worth highlighting general-purpose AI (GPAI) models, which are systems that cover a wide variety of potential uses, whether or not these were originally contemplated by the system's creators.
GPAI models can be further divided into:
- Without systemic risk, subject to a series of general obligations.
- With systemic risk, subject to enhanced obligations in addition to the general ones.
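Purely as an illustrative sketch of the tiered logic described above (the tier names and the example mapping are our own shorthand for the categories in this article, not an official taxonomy), the four levels could be modelled like this:

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright by the AI Act"
    HIGH = "demanding obligations: record keeping, human oversight, technical documentation"
    LIMITED = "primarily transparency obligations"
    MINIMAL = "no specific AI Act restrictions"

# Example systems mentioned in this article, mapped to their tier
examples = {
    "social scoring system": RiskTier.UNACCEPTABLE,
    "exam misconduct detection": RiskTier.HIGH,
    "customer service chatbot": RiskTier.LIMITED,
    "anti-spam filter": RiskTier.MINIMAL,
}

for system, tier in examples.items():
    print(f"{system}: {tier.name} -> {tier.value}")
```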
8. Is there a specific obligation in the AI Act to assess the protection of fundamental rights?
Yes, Article 27 of the AI Act regulates the Fundamental Rights Impact Assessment (FRIA) for high-risk AI systems, which is required for certain parties responsible for deployment.
This assessment must be carried out prior to deployment and consists of:
- Describing the processes in which the high-risk AI system will be used in accordance with its intended purpose.
This point highlights the importance of the context in which AI is used, rather than the AI itself: to carry out this assessment properly, the AI system should not be considered in isolation or solely on the basis of the information provided by the provider as its developer.
—AI must be understood within its context, which is where it will actually have a real impact.
- Describing the period of time during which each high-risk AI system is expected to be used and the frequency of its use.
- Identifying the categories of individuals and groups that may be affected by its use in the specific context.
—For example: an AI system used in a medical centre aimed at patients and, specifically, likely to affect minors.
- Identifying the specific risks of harm that may affect the persons mentioned in the previous point.
- Describing the implementation of human oversight measures, in accordance with the instructions for use.
- Describing the measures to be taken if those risks materialise, including internal governance arrangements and complaint mechanisms.
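As a minimal sketch of how a deployer might document these elements (the field names and the filled-in example are our own; Article 27 prescribes the content of the assessment, not a format):

```python
from dataclasses import dataclass

@dataclass
class FRIARecord:
    """Elements Article 27 asks a deployer to document before first use
    of a high-risk AI system. Illustrative structure only."""
    process_description: str        # processes in which the system is used
    period_and_frequency: str       # expected period and frequency of use
    affected_groups: list[str]      # categories of persons and groups affected
    specific_risks: list[str]       # specific risks of harm to those persons
    human_oversight: str            # oversight measures per instructions for use
    mitigation_and_governance: str  # measures if risks materialise, incl. complaints

assessment = FRIARecord(
    process_description="Patient triage support in a medical centre",
    period_and_frequency="Daily, at patient intake",
    affected_groups=["patients", "minors"],
    specific_risks=["misclassification of symptoms affecting minors"],
    human_oversight="A clinician reviews every system recommendation",
    mitigation_and_governance="Escalation procedure and a complaint channel",
)
```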
9. If we already have a prior DPIA, can we use it to carry out the FRIA?
Yes; in fact, Article 27(4) of the AI Act recognises this when it states that, if any of the obligations set out in Article 27 are already met through the Data Protection Impact Assessment (DPIA) carried out under Article 35 GDPR, the FRIA shall complement that DPIA.
The implication is that, if the DPIA was carried out on the processing of personal data that coincides with the business process affected by the high-risk AI system, we are likely to have already defined elements such as the categories of individuals and groups that may be affected, as well as a description of the process itself. We would therefore already have a basis on which to add whatever information is still pending in order to complete the FRIA.
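Continuing the earlier sketch, the reuse described in Article 27(4) could look like this in practice: start from what the DPIA already documents and add only the FRIA-specific elements (the field names and values here are hypothetical):

```python
# Elements already documented in a GDPR Article 35 DPIA for the same process
from_dpia = {
    "process_description": "Recruitment: automated screening of job applications",
    "affected_groups": ["job applicants"],
}

# FRIA-specific elements still pending under Article 27 AI Act
fria_additions = {
    "period_and_frequency": "Continuous, for every published vacancy",
    "specific_risks": ["discriminatory ranking of candidates"],
    "human_oversight": "HR reviews every automated rejection",
    "mitigation_and_governance": "Appeal channel for rejected candidates",
}

# The FRIA complements the DPIA rather than replacing it
fria = {**from_dpia, **fria_additions}
print(sorted(fria))
```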
The AI Act should not be understood in isolation; rather, it is important that, as a first step, the applicable legislative framework for each entity be understood.
In this sense, it is natural that the deployer will in most cases coincide with the data controller and that, in turn, the provider developing the AI system in question will be considered a processor, insofar as it processes personal data on behalf of the controller.
■ Recital (10) of the AI Act expressly refers to the GDPR and the protection of personal data, and Article 10(5)(f) of the AI Act itself (in the context of obligations for high-risk AI systems) requires that records of processing activities include the reasons why the processing of special categories of personal data was strictly necessary to detect and correct biases, and why that objective could not be achieved by processing other data.
10. What are the recommendations for organisations that are going to use or develop AI to comply with the AI Act?
It is advisable to start from a governance framework, drawing on management systems such as ISO/IEC 42001, the first international standard for artificial intelligence (AI) management systems.
It is also important to ensure AI literacy, in line with Recital (20) of the AI Act, so that providers, deployers and affected persons have the knowledge they need to make informed decisions about AI systems.
Finally, it is important to always maintain a risk-based approach, even where there is no legal obligation to do so. This approach allows us to anticipate risk situations that may arise in this changing environment, which is exposed to increasingly sophisticated and complex threats.
An organisation that begins to adopt AI systems must understand its role and responsibilities under the Artificial Intelligence Regulation (AI Act) and ensure that it has specialist advice from the outset to comply with the applicable regulations.