Codes of conduct in the EU Artificial Intelligence Regulation
Codes of conduct play a prominent role in the regulatory framework for artificial intelligence (AI) at the intersection of law and technology. This article explores the characteristics and content of these codes, highlighting their importance for providers and deployers of AI systems.

Definition of codes of conduct

Codes of conduct are voluntary self-regulatory mechanisms adopted by entities, institutions and organizations to evidence regulatory compliance in various sectors. The EU Artificial Intelligence Regulation (AIR) does not provide a definition, but the European Data Protection Board (EDPB), in its Guidelines 1/2019, describes them as voluntary accountability tools for data protection. In the context of the AIR, codes of conduct are self-regulatory mechanisms whereby providers of AI systems other than high-risk systems voluntarily apply the mandatory requirements laid down for high-risk systems, and whereby providers and deployers of all AI systems, whether high-risk or not, can adopt additional voluntary commitments in areas such as sustainability, design, accessibility and diversity.

Features of voluntary codes of conduct

Among the main characteristics of voluntary codes of conduct, the following may be mentioned:
- The AI Office and the Member States encourage and facilitate the creation of these codes, paying particular attention to the needs of SMEs and start-ups.
- They can be developed by individual providers, by representative organizations or by both, and their application is voluntary, allowing the adoption of the requirements that are mandatory for high-risk systems, as well as other specific commitments, for systems that are not high-risk.
- Codes should be based on clear objectives and key performance indicators.
- They are developed in an inclusive manner, involving business, civil society, academia, research organizations, trade unions and consumer advocates.
- They may cover one or several AI systems, taking into account the similarity of the intended purpose of the relevant systems.
Content of voluntary codes of conduct

The content of the codes varies depending on whether they are developed by providers of AI systems other than high-risk systems, or by providers and deployers of high-risk AI systems:

1. Providers of AI systems other than high-risk systems
They can create codes of conduct that include voluntary compliance with some or all of the requirements applicable to high-risk systems. They are also encouraged to include specific additional commitments relating, for example, to environmental sustainability, design, accessibility and diversity.

2. Providers and deployers of high-risk AI systems
They may develop codes of conduct that include the voluntary adoption of specific additional commitments. These additional commitments include, among others, the following elements:
- The principles set out in the Ethics Guidelines for Trustworthy AI developed by the independent High-Level Expert Group on AI: human oversight, technical robustness and safety, privacy, transparency, diversity and fairness.
- Environmental sustainability.
- AI literacy.
- Inclusive and diverse design.
- Accessibility for persons with disabilities and assessment of the impact on vulnerable groups.

Providers of high-risk AI systems are obliged to comply with the requirements of Section 2 of Chapter III of the AIR regardless of whether they mention them or include them for information purposes in their codes of conduct.

Assessment and review of codes of conduct

The Commission shall, by 2 August 2028 and every three years thereafter, assess the impact and effectiveness of voluntary codes of conduct in promoting the application of the requirements set out in Chapter III, Section 2 to AI systems other than high-risk AI systems and, where appropriate, of additional requirements applicable to such systems, such as requirements relating to environmental sustainability.
Comparison with the codes of conduct of the General Data Protection Regulation

Criteria and Contents
The General Data Protection Regulation (GDPR) requires that codes respond to specific sectoral needs and establish oversight mechanisms, while the AIR promotes the voluntary implementation of requirements that demonstrate transparency and commitment to the regulation.

Approval and Monitoring
The GDPR requires approval and monitoring by the competent authorities, while codes of conduct under the AIR do not require prior approval. The Commission, although it does not carry out monitoring as such, will periodically assess the impact and effectiveness of the AIR codes of conduct.

Sanctions
The GDPR provides for corrective measures and sanctions in the event of non-compliance, while the AIR does not provide for specific sanctions, leaving the implementation of the sanctions regime to the Member States.

International Code of Conduct for Advanced AI Systems of the Hiroshima Process

On October 30, 2023, the G7 countries reached consensus on guiding principles and a voluntary code of conduct for advanced AI systems as part of the "Hiroshima Process". The Code aims to promote safe, secure and trustworthy AI worldwide and seeks to provide voluntary guidance for the actions of organizations developing the most advanced AI systems, including state-of-the-art foundation models and generative AI systems. The document consists of a non-exhaustive list of actions, based on the OECD AI Principles, that are relevant to the requirements of Art. 95 of the AIR and that could complement the voluntary codes of conduct developed by providers and, to the extent appropriate, by deployers.

Conclusion

Voluntary codes of conduct are key to ensuring the responsible development and use of AI. They provide a flexible framework that enables providers and deployers to demonstrate their commitment to transparency, ethics and sustainability.
By adopting them, providers and deployers build trust with society at large and differentiate themselves in the marketplace.
September 2, 2024