An introduction to the Artificial Intelligence Regulation (AIR)
The European Regulation on Artificial Intelligence (AIR) is on the verge of approval: only the approval of the Council of the EU remains before its publication in the Official Journal of the European Union. This is an introductory article on the AIR; it will be followed by others providing further information on certain aspects of the Regulation.

It has been almost three years since the proposal for a regulation drawn up by the European Commission in April 2021, which shows that the European Union has been clear, from the outset, about the need to establish a new legislative framework to ensure the safety, health, and fundamental rights of individuals in relation to the use of Artificial Intelligence (AI) systems. The AIR is expected to have a global impact, as was the case with the General Data Protection Regulation (GDPR). Govertis, the part of Telefónica Tech dedicated to Governance, Risk and Compliance (GRC) services, has been working for some time on this Regulation and, more generally, on AI and its application in organizations.

The AIR combines several techniques to establish this new single regulatory framework: a risk-based approach and the conformity assessment system derived from European product safety regulations.

Forbidden AI practices

The AIR's classification of AI systems embodies the risk-based approach. Some risks are deemed unacceptable, and the corresponding AI practices are forbidden, such as:

- Systems that allow the manipulation of people.
- Emotion recognition.
- Biometric categorization systems.
- "Real-time" remote biometric identification systems in publicly accessible spaces for law enforcement purposes, except under certain circumstances.
- Systems that enable the untargeted collection of facial images for facial recognition.
- Systems to assess or predict the likelihood of a natural person committing a criminal offense.
High-risk AI systems

The second tier comprises high-risk AI systems: those that may have a greater impact on the health, safety, and fundamental rights of individuals. This group includes AI systems that fall within the scope of harmonized product safety legislation and those that belong to one of the eight areas listed in Annex III:

- Biometrics, where permitted by applicable Union or national law.
- Critical infrastructure.
- Education and vocational training.
- Employment, management of workers and access to self-employment.
- Access to and enjoyment of essential private services and essential public services and benefits.
- Law enforcement, where its use is permitted by applicable Union or national law.
- Migration, asylum, and border control management, where its use is permitted by applicable Union or national law.
- Administration of justice and democratic processes.

It is specifically to high-risk AI systems that the AIR devotes its most extensive regulation, given the high risk they are expected to pose, which must be mitigated by meeting the requirements established in the AIR.

High-risk systems requirements

The AIR establishes requirements for the high-risk systems themselves:

- They must have a risk management system.
- The training, validation and test data sets used must meet certain quality criteria.
- Specific technical documentation must be prepared before they are placed on the market.
- They must technically allow the automatic recording of events ("logs") throughout the life of the system.
- They must provide a level of transparency that allows implementers to interpret the system's results and use them appropriately, and include instructions for use.
- Human supervision must be established.
- They must achieve an adequate level of accuracy, robustness and cybersecurity throughout their life cycle.
Suppliers and implementers

The AIR also establishes specific obligations for suppliers of high-risk systems, who must ensure compliance with all the requirements demanded of the system and set up a quality management system that guarantees compliance with the AIR. Meanwhile, the implementers (those responsible for deployment) must take appropriate technical and organizational measures to ensure that they use such systems in accordance with the instructions for use provided, carry out human supervision and, where they have control over the input data, ensure that the data are relevant and sufficiently representative.

Transparency obligations

Third, the AIR establishes specific transparency obligations for certain AI systems: for instance, people must be informed that they are interacting with an AI system, and general-purpose AI systems that generate synthetic audio, image, video or text content must mark their output so that it can be detected as artificially generated or manipulated.

General-purpose models

Finally, the AIR also establishes obligations for general-purpose AI models.

The approval of the AIR shows that we are at a turning point: organizations must lay the necessary groundwork to integrate AI-based solutions with adequate guarantees. This implies designing a strategy and putting in place the means to implement a governance system led and supervised by the governing or management bodies, with the necessary expert advice.
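The tiered, risk-based structure described above can be pictured with a minimal sketch. This is purely illustrative: the tier names and the example use cases are taken from the practices and Annex III areas mentioned in this article, but the code and its mappings are not part of the Regulation, and real classification requires a legal assessment of each system.

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative risk tiers under the AIR (not a legal taxonomy)."""
    PROHIBITED = "unacceptable risk: forbidden AI practice"
    HIGH_RISK = "high risk: Annex III area or harmonized product legislation"
    TRANSPARENCY = "limited risk: specific transparency obligations"
    MINIMAL = "minimal risk: no specific obligations"

# Example mappings drawn from the practices and areas named in the article.
# Any use case not listed here defaults to MINIMAL in this toy sketch,
# which is NOT how a real assessment works.
EXAMPLE_TIERS = {
    "emotion recognition": RiskTier.PROHIBITED,
    "manipulation of people": RiskTier.PROHIBITED,
    "critical infrastructure": RiskTier.HIGH_RISK,
    "recruitment screening": RiskTier.HIGH_RISK,  # employment area, Annex III
    "chatbot": RiskTier.TRANSPARENCY,  # users must know it is an AI system
}

def classify(use_case: str) -> RiskTier:
    """Return the illustrative tier for a known example, else MINIMAL."""
    return EXAMPLE_TIERS.get(use_case.lower(), RiskTier.MINIMAL)
```

The point of the sketch is the ordering: a system is first checked against the forbidden practices, then against the high-risk areas, then against the transparency obligations, with everything else falling into the minimal-risk remainder.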
April 8, 2024