On July 10, 2025, the European Commission (EC) published the final version of the General-Purpose AI Code of Practice (Code). This voluntary instrument provides guidance on how providers of general-purpose AI models (GPAI), including those posing systemic risks (GPAI-SR), can comply with their obligations under the AI Act, which become applicable on August 2, 2025. The Code is structured around three key areas: transparency, copyright, and safety and security. Adherence to the Code is voluntary, but providers who decide not to comply with it may face heightened scrutiny from regulators, as they will be expected to demonstrate AI Act compliance through alternative means.
Background
The EU Artificial Intelligence Act (AI Act) was adopted in 2024, and it is becoming applicable in phases. As of February 2, 2025, certain AI practices are prohibited in the EU (as we explain here). From August 2, 2025, GPAI models placed on the EU market after that date must comply with the AI Act’s regime for GPAI models.
The Code is designed to offer practical ways of complying with the obligations for providers of GPAI models, which are set out only at a high level in the AI Act. A multi-stakeholder group of experts drafted the Code under the supervision of the EU AI Office, incorporating feedback from industry, government bodies, civil society, and academia. Although the Code was initially expected by May 2, 2025, it was finalized only on July 10, 2025, just ahead of the August 2, 2025, deadline when the relevant GPAI obligations under the AI Act take effect.
What Is GPAI and GPAI-SR?
GPAI models are AI models that i) display significant generality, ii) can competently perform a wide range of distinct tasks, and iii) can be integrated into a variety of downstream AI systems or applications. Large-scale generative AI models typically fall within this category. Providers of GPAI models must draw up technical documentation, publish a summary of the data used for training, and implement a policy for compliance with EU copyright rules. GPAI-SR models are the most powerful GPAI models and are subject to additional risk management and reporting requirements under the AI Act. To learn more about the AI Act’s rules for GPAI models, please see our AI Act FAQs.
Key Points in the Code
The Code is split into three separate chapters: transparency, copyright, and safety and security.
Next Steps
The Code has not yet been formally adopted. It will be reviewed by the AI Office and the AI Board and, if deemed adequate, endorsed by the EC. Once endorsed, the Code will obtain general validity in the EU, meaning that adherence to it will be a means of demonstrating compliance with the AI Act. It does not, however, provide a presumption of conformity with the AI Act.
To complement the Code, the EC also published guidelines on key concepts related to GPAI models (see details of the draft guidelines here). The AI Office is also due to publish the template that GPAI providers must use to publish the summary of training data, an obligation that applies in addition to those mentioned above.
The AI Act’s requirements for GPAI and GPAI-SR will start to apply on August 2, 2025. The EC updated its FAQs on the Code, clarifying that companies signing the Code will not be considered to infringe the AI Act if they do not implement the Code’s requirements immediately after signing. The FAQs also mention a one-year grace period for signatories: from August 2, 2026, the Code’s requirements will be fully applicable and enforceable, including through fines, and the AI Office will hold signatories to the standard set out in the Code.
For more information on how to ensure your AI systems and models comply with the EU AI Act, please contact Cédric Burton, Laura De Boel, Yann Padova, or Nikolaos Theodorakis from Wilson Sonsini’s Data, Privacy, and Cybersecurity practice.
Wilson Sonsini’s AI Working Group assists clients with AI-related matters. Please contact Laura De Boel, Maneesha Mithal, Manja Sachet, or Scott McKinney for more information.
Roberto Yunquera Sehwani, Karol Piwonski, and Hattie Watson contributed to the preparation of this alert.