Eduardo Oliveros

Senior GRC Consultant at Govertis, part of Telefónica Tech.

Cyber Security
AI & Data
General-purpose AI models in the Artificial Intelligence Act
The purpose of an AI system is one of the most important elements under the Artificial Intelligence Act. As we have seen in other articles on this blog, under the Act the purpose of a system can determine whether it is classified as high risk or whether its use is prohibited. Given the special relevance that the AI Act attaches to the purpose of an AI system, the existence of systems that the Act classifies as "general purpose" is striking: systems based on a general-purpose AI model that can serve a multitude of purposes (including integration into other downstream AI systems) without having a specific purpose of their own.

The AI Act classifies as "general purpose" those AI models that can be applied to a wide range of purposes, without being limited to a specific function.

If, according to the AI Act, the specific purpose of an AI system is one of the main elements in determining the risk that system presents, what role do systems that have no specific purpose, but a general one, play in the AI Act? This article seeks to answer this question and examine how the Regulation addresses this specific use of artificial intelligence.

General-purpose systems and models

As mentioned above, in general terms, a general-purpose AI system is a system that covers a wide variety of potential uses, whether or not originally contemplated by the system's creators. Such systems are based on a general-purpose AI model, a type of AI model that can be adapted to a multitude of downstream purposes and applications, including those for which the model was not specifically developed and trained. These models may be essential components of an AI system, but they do not in themselves constitute an AI system.

◾ As an example of these concepts, OpenAI's ChatGPT uses the GPT-4 model, which is a general-purpose AI model. ChatGPT itself would be a general-purpose AI system, since it is an AI system based on a general-purpose AI model.

Systemic risk: the approach to risk in general-purpose models

These general-purpose models have no specific purpose, so confining them to a single risk category on the basis of a single purpose is practically impossible. The AI Act therefore distinguishes between those general-purpose AI models that pose a systemic risk and those that do not, laying down additional obligations for those that do. According to the AI Act, systemic risk can be understood as (among other things):

Actual or reasonably foreseeable adverse effects related to major accidents, disruptions of critical sectors, and serious consequences for public health and safety.
Actual or reasonably foreseeable negative effects on democratic processes, public safety and economic security.
The dissemination of illegal, false or discriminatory content.

Under the AI Act, a general-purpose AI model is considered to present systemic risk when either of the following conditions is met:

a) The model has high-impact capabilities, understood as capabilities that match or exceed those recorded in the most advanced general-purpose AI models. In this regard, the AI Act presumes that high-impact capabilities exist when the cumulative amount of computation used to train the model is greater than 10^25 floating-point operations (FLOPs).

✅ In these cases, the model provider must notify the European Commission within two weeks, although during this period it may present arguments (which the Commission may dismiss) to demonstrate that, exceptionally, the model does not present a systemic risk despite having high-impact capabilities.

b) The European Commission determines, on its own initiative or after receiving a qualified alert, that a general-purpose AI model may have high-impact capabilities.

✅ At least six months after the Commission designates a model as presenting systemic risk, the provider may send a reasoned request to the Commission asking it to reassess whether the model should continue to be considered as presenting systemic risk. If the Commission rejects the request, the provider may not send a new one until a further six months have elapsed.
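To give a sense of the scale of the 10^25 FLOP presumption in condition a), here is a minimal sketch that estimates training compute using the commonly cited approximation of roughly 6 floating-point operations per parameter per training token for dense transformer models. The threshold is the AI Act's; the estimation formula and the model sizes and token counts are illustrative assumptions, not anything the Regulation prescribes.

```python
# Rough training-compute estimate for a dense transformer, using the common
# "6 * parameters * training tokens" approximation (an assumption here, not
# an AI Act rule). The model sizes and token counts below are hypothetical.

AI_ACT_THRESHOLD_FLOPS = 1e25  # presumption of high-impact capabilities

def estimated_training_flops(n_parameters: float, n_training_tokens: float) -> float:
    """Approximate total training FLOPs (forward + backward pass) as 6 * N * D."""
    return 6.0 * n_parameters * n_training_tokens

hypothetical_models = {
    "7e9-parameter model, 2e12 tokens": (7e9, 2e12),
    "1e12-parameter model, 15e12 tokens": (1e12, 15e12),
}

for name, (params, tokens) in hypothetical_models.items():
    flops = estimated_training_flops(params, tokens)
    verdict = "presumed systemic risk" if flops >= AI_ACT_THRESHOLD_FLOPS else "below the threshold"
    print(f"{name}: ~{flops:.1e} FLOPs -> {verdict}")
```

Running this, the smaller hypothetical model lands around 8.4x10^22 FLOPs, well below the presumption, while the larger one lands around 9x10^25 FLOPs, above it. The point is simply that only the very largest current training runs approach the threshold.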
Obligations of general-purpose AI model providers

All providers of general-purpose AI models, whether or not these present systemic risk, must comply with the following obligations:

1. Draw up and keep up to date:
Technical documentation of the model (containing at least the information detailed in Annex XI of the AI Act), so that it can be provided to the AI Office and/or the competent authorities.
Information for AI system providers who intend to integrate the model into their systems, containing at least the elements set out in Annex XII of the AI Act.
2. Put in place a policy to comply with applicable copyright legislation. In particular, they should respect the reservations of rights (opt-out mechanisms) expressed by the owners of copyrighted works.
3. Make publicly available a summary of the content used to train the AI model.
4. Cooperate with the Commission and the relevant authorities.

Obligations of general-purpose AI model providers with systemic risk

In addition to the obligations mentioned above for all general-purpose AI model providers, providers of models with systemic risk must comply with the following obligations:

1. Include in the technical documentation of the model, to be provided to the AI Office and to the national competent authorities upon request, the following additional information:
A detailed description of the evaluation strategies and their results.
Where appropriate, a detailed description of the measures taken for internal or external adversarial testing (such as the use of red teams) and of model adaptations.
Where applicable, a detailed description of the system architecture.
2. Evaluate the model in accordance with standardised protocols and tools that reflect the state of the art.
3. Assess and mitigate possible systemic risks at Union level, including their sources, that may stem from the development, placing on the market or use of general-purpose models with systemic risk.
4. Monitor, document and report without undue delay to the AI Office and, where appropriate, to the national competent authorities, relevant information on serious incidents and possible corrective measures to address them.
5. Ensure an adequate level of cybersecurity protection for the general-purpose model with systemic risk and for the model's physical infrastructure.

Codes of practice for providers of general-purpose AI models

It is clear that providers of general-purpose AI models, especially those with systemic risk, bear a significant burden of obligations.
In order to alleviate this burden, the AI Act provides for the Commission's AI Office to facilitate the drawing up, review and adaptation of codes of practice, taking into account the different perspectives of national competent authorities, the providers of general-purpose AI models themselves, and other experts in the field.

The AI Act dictates that the Commission's AI Office should support the creation, review and modification of codes of practice by integrating diverse points of view.

Once these codes of practice have been approved, the providers of these AI models can adhere to them in order to demonstrate compliance with their AI Act obligations. In general, these codes should be finalised no later than nine months after the entry into force of the AI Act.

Conclusion

General-purpose AI models are one of the most relevant elements of the AI Act. The risk-based approach that underlies the entire AI Act is also reflected in the regulation of this type of model, as shown by the distinction drawn between the obligations of models that present systemic risk and those that do not. In any case, adherence to the codes of practice may make it easier for the providers of these models to comply with the obligations established in the Regulation.
June 24, 2024
Cyber Security
AI & Data
The application scope of the Artificial Intelligence Regulation (AIR)
In this article we continue the series of posts we started last week, in which we address different issues relating to the Artificial Intelligence Regulation (AIR). Once approved, this regulation will create new, differentiated legal obligations for the different parties involved in the value chain of an AI system. The purpose of this post is to examine in detail the AIR's framework of application, identifying the different roles that the subjects bound by the AIR may play and the situations in which the obligations established by this standard apply.

In general terms, the scope of application of the AIR is quite broad, covering a multitude of subjects (who may be located both inside and outside the EU), but certain exceptions are also provided for, such as matters of national security, or in order to support innovation, respect the freedom of science and not undermine research and development activities (Recital 25, AIR).

In general terms the AIR's scope of application is quite broad, but certain exceptions are also foreseen.

What roles can the parties bound by the AIR play?

In order to properly understand the scope of application of the AIR, we should first identify the different roles that the entities regulated by it may adopt; these may be both individuals and legal entities. Among these subjects, we distinguish:

Provider: Any person who develops an AI system or a general-purpose AI model and places it on the EU market under their own name or trademark. As an example, when distributing its AI system ChatGPT, OpenAI acts as a provider under the AIR.
Deployer: Any person who uses an AI system under their own authority in a professional context. Any company that incorporates an AI system into its processes would thus act as the deployer of that system.
Authorised representative: Any person located in the EU who has been designated (provided that they have previously accepted such designation) by a provider located outside the EU to fulfil the obligations and carry out the actions required by the AIR on the provider's behalf. This is, in any case, a concept similar to the one found in other European regulations such as the General Data Protection Regulation.
Importer: A person established within the EU who places on the market or puts into service an AI system bearing the name or trademark of a person established outside European territory.
Distributor: Any person, other than the provider and the importer, who makes an AI system available on the EU market.
Downstream provider: A provider who places on the market or puts into service an AI system that integrates a general-purpose AI model, regardless of whether the model is their own or a third party's.

✅ The term "operator" is generic: it is an umbrella term that encompasses, in addition to the roles defined above, the manufacturer of the AI system or model. In the context of the AIR, any of these subjects may be referred to as an "operator".

In which situations does the AIR apply?

Taking into account the different roles that these parties can play in the value chain of an AI system, the Regulation will apply to:

Providers placing AI systems or general-purpose AI models on the market in the EU, regardless of whether those providers are established or located in the EU or in a third country.
Deployers of AI systems who are located in the EU.
Providers and deployers of AI systems located outside the EU, when the output generated by the AI system is used in the EU.
Importers and distributors of AI systems.
Manufacturers of products who place on the market or put into service an AI system together with their product and under their own name or trademark.
Authorised representatives of providers not established in the Union.
Affected persons who are located in the EU.

As we can see, much like the General Data Protection Regulation, the AIR has an extraterritorial scope of application: it applies to providers and deployers located outside Europe when they place AI systems or general-purpose AI models on the market in European territory, or when the output generated by their AI systems is used in European territory.
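To make the combination of these criteria easier to follow, here is a minimal sketch that expresses the scope test above as decision logic. The Operator class, its field names and the rules are deliberate simplifications assumed for illustration (they omit, for instance, product manufacturers and affected persons) and are in no way a substitute for legal analysis.

```python
# Illustrative sketch of the AIR's territorial-scope test as decision logic.
# All names and rules are simplified assumptions, not the Regulation's text.

from dataclasses import dataclass

@dataclass
class Operator:
    role: str                   # "provider", "deployer", "importer", "distributor", ...
    established_in_eu: bool     # established or located in the EU
    places_on_eu_market: bool   # places an AI system / GPAI model on the EU market
    output_used_in_eu: bool     # the system's output is used within the EU

def air_applies(op: Operator) -> bool:
    """Simplified check of whether an operator falls within the AIR's scope."""
    if op.role == "provider" and op.places_on_eu_market:
        return True  # caught regardless of where the provider is located
    if op.role == "deployer" and op.established_in_eu:
        return True
    if op.role in ("provider", "deployer") and op.output_used_in_eu:
        return True  # extraterritorial reach via output used in the EU
    if op.role in ("importer", "distributor", "authorised representative"):
        return True  # these roles are by definition tied to the EU market
    return False

# Example: a non-EU provider whose system's output is nonetheless used in the EU.
print(air_applies(Operator("provider", False, False, True)))  # True
```

The design point the sketch tries to surface is that, for providers and deployers, their own location matters less than where the system's output ends up being used.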
Are there any exceptions to the scope of the AIR?

The AIR also establishes a number of exceptions providing for specific circumstances in which its obligations will not apply. These exceptions are as follows:

The obligations under the AIR do not apply to Member States' national security competences, or to AI systems used exclusively for military, defence or national security purposes. Nor does the AIR apply when output from AI systems located outside the EU is used for these purposes.
It will also not apply to public authorities of third countries or international organisations using AI systems within the framework of international agreements for law enforcement and judicial cooperation with the EU or its Member States, provided that such authorities or organisations put in place adequate safeguards to protect the fundamental rights and freedoms of individuals.
The AIR will not affect the application of the provisions on the liability of intermediary service providers laid down in the Digital Services Act.
Nor will it affect AI systems or models, including their output, that have been specifically developed and put into service for the sole purpose of scientific research and development.
The AIR will not apply to any research, testing or development activity relating to AI systems or models prior to their being placed on the market or put into service, with the exception of testing in real-world conditions as provided for in the AIR itself.
The AIR's obligations will not apply to deployers who are natural persons using AI systems in the course of purely personal, non-professional activities.
The AIR's obligations will not apply to free and open-source AI systems, unless they are systems prohibited under Chapter II of the AIR, high-risk systems under Chapter III, or among the systems covered by Chapter IV of the AIR (systems that generate deepfakes, emotion recognition or biometric categorisation systems, chatbots).

Free and open-source AI systems are not subject to the AIR obligations, except for the exceptions provided.
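Continuing the illustrative sketch above, these exclusions can be thought of as a filter applied before the scope test. Again, the categories and field names below are simplified assumptions drawn from the list above, not a legal checklist.

```python
# Illustrative sketch of the AIR exclusions as a pre-filter; the categories
# and flags are simplified assumptions based on the list above.

EXCLUDED_PURPOSES = {
    "military", "defence", "national security",
    "scientific research and development",
}

def air_exempt(purpose: str,
               personal_non_professional_use: bool,
               free_and_open_source: bool,
               high_risk_prohibited_or_ch4: bool) -> bool:
    """Simplified check of whether an AI system falls under an AIR exclusion."""
    if purpose in EXCLUDED_PURPOSES:
        return True
    if personal_non_professional_use:
        return True  # natural persons using AI purely privately
    if free_and_open_source and not high_risk_prohibited_or_ch4:
        # Open-source carve-out, lost for prohibited, high-risk or
        # Chapter IV (deepfakes, emotion recognition, chatbots) systems.
        return True
    return False

# Example: an open-source model that is neither prohibited, high-risk nor under Chapter IV.
print(air_exempt("customer support", False, True, False))  # True
```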
What about other regulatory obligations that may also apply?

Since AI systems can serve a multitude of purposes, it is not difficult to imagine that the AIR's obligations will have to be applied in a harmonised way together with other obligations that may also be applicable, which makes this joint application of various regulatory frameworks a challenge. The AIR therefore clarifies that the obligations on data protection, confidentiality of communications, consumer protection and product safety established in EU legislation will apply together with, and without prejudice to, the obligations established in the AIR.

As we can see, if we seek to comply with the obligations set out in the AIR, the first question we should ask ourselves is whether we fall within both its subjective and objective scope of application. However, understanding the entire framework of application of the AIR can be a difficult task, especially considering the broad material and territorial reach of the Regulation and the complexity of the rule itself.
April 16, 2024