On April 22, 2025, the European Commission's AI Office published draft guidelines clarifying the obligations that the EU AI Act imposes on providers of general-purpose AI models (guidelines). These obligations will apply to AI models placed on the EU market on or after August 2, 2025. The guidelines are currently open for public consultation, and the AI Office has invited stakeholders to provide feedback by May 22, 2025, using this form. The AI Office plans to adopt a final version of the guidelines before August 2025. Below, we present the main takeaways from the guidelines.
Background
The EU Artificial Intelligence Act (AI Act) was adopted last year, and its provisions become applicable in phases. A ban on certain AI practices already became applicable on February 2, 2025 (as we explain here). The next wave of requirements consists of the obligations on general-purpose AI (GPAI) models, which will become applicable on August 2, 2025. In addition, the AI Office has begun issuing guidance to clarify certain provisions of the AI Act, including the definition of AI systems and the prohibited AI practices (which we cover here).
The AI Act imposes specific obligations on GPAI models, which are defined as models that i) display significant generality, ii) are capable of competently performing a wide range of distinct tasks, and iii) can be integrated into a variety of downstream AI systems or applications. Large generative AI models typically qualify as GPAI. Providers of GPAI models must draw up technical documentation, publish a summary of the data used for training, and implement a policy for compliance with EU copyright rules. The AI Act also imposes additional requirements for the most powerful GPAI models, considered to be GPAI with “systemic risk” (or GPAI-SR). To learn more about the AI Act’s rules for GPAI models, please see our AI Act FAQs.
The guidelines clarify the AI Act's provisions on GPAI and GPAI-SR models. They complement the voluntary Code of Practice on GPAI (CoP), whose drafting the AI Office has been coordinating; companies that develop GPAI models can voluntarily adhere to the CoP to demonstrate compliance with the AI Act. The final version of the CoP is expected to be published in May 2025.
Qualification of AI Model as GPAI
The guidelines lay down a rebuttable presumption according to which generative AI models trained using compute greater than 10^22 floating point operations (FLOPs) qualify as GPAI. Because the presumption is rebuttable, a model whose training compute exceeds 10^22 FLOPs could still avoid classification as GPAI if there is evidence that it does not display significant generality or cannot competently perform a wide range of distinct tasks. For example, a model that can only be used for speech transcription would not qualify as GPAI (narrow task), regardless of its training compute. Conversely, a model may qualify as GPAI even where the training compute threshold is not met, if there is evidence that it displays sufficient generality and capabilities.
The 10^22 FLOP presumption for GPAI models appears only in the guidelines, not in the AI Act itself. The AI Act, by contrast, contains a separate presumption for models whose training compute exceeds the higher threshold of 10^25 FLOPs: such models are presumed to qualify as GPAI-SR and are subject to additional obligations (e.g., risk assessment obligations). Please consult the decision tree below, which summarizes these different presumptions:

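To make the two presumptions concrete, the following sketch encodes the classification logic described above in Python. It is a simplified illustration only, not an official tool: the thresholds come from the guidelines (10^22 FLOPs) and the AI Act (10^25 FLOPs), while the function name and the boolean flag standing in for case-by-case generality evidence are our own assumptions.

```python
# Illustrative sketch of the two rebuttable presumptions (not an official tool).
# Thresholds: 10^22 FLOPs (guidelines, GPAI presumption) and
# 10^25 FLOPs (AI Act, GPAI-SR presumption).

GPAI_THRESHOLD_FLOP = 1e22      # presumption of GPAI (guidelines)
GPAI_SR_THRESHOLD_FLOP = 1e25   # presumption of systemic risk (AI Act)

def presumed_classification(training_flop: float,
                            displays_significant_generality: bool) -> str:
    """Return the presumed classification of a model.

    `displays_significant_generality` is a placeholder for the case-by-case
    evidence that can rebut the presumption; for example, a model usable only
    for speech transcription would be False regardless of its compute.
    """
    if not displays_significant_generality:
        return "not GPAI (narrow task; presumption rebutted)"
    if training_flop >= GPAI_SR_THRESHOLD_FLOP:
        return "presumed GPAI with systemic risk (GPAI-SR)"
    if training_flop >= GPAI_THRESHOLD_FLOP:
        return "presumed GPAI"
    return "not presumed GPAI (may still qualify on the evidence)"

print(presumed_classification(3e22, True))   # presumed GPAI
print(presumed_classification(2e25, True))   # presumed GPAI-SR
print(presumed_classification(3e22, False))  # narrow-task model, not GPAI
```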
The guidelines recognize two approaches to estimating training compute: tracking Graphics Processing Unit (GPU) usage (the hardware-based approach) or counting the expected number of FLOPs based on the model's architecture (the architecture-based approach). The choice of the appropriate approach is left to providers. The guidelines indicate that providers should estimate training compute before starting the pre-training run and should notify the AI Office if the estimated compute exceeds the threshold for classifying the model as GPAI-SR (i.e., 10^25 FLOPs).
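As a rough illustration of these two approaches, the sketch below derives a hardware-based estimate from GPU usage and an architecture-based estimate using the widely used approximation of roughly 6 x parameters x training tokens for dense transformer training. The utilization factor and all sample figures are assumptions we introduce purely for illustration.

```python
# Hedged sketch: two ways to estimate training compute (FLOPs), compared
# against the 10^22 / 10^25 thresholds. All figures are illustrative only.

def hardware_based_estimate(gpu_count: int, hours: float,
                            peak_flops_per_gpu: float, utilization: float) -> float:
    """Estimate compute from GPU usage: devices x seconds x peak throughput x utilization."""
    return gpu_count * hours * 3600 * peak_flops_per_gpu * utilization

def architecture_based_estimate(n_parameters: float, n_training_tokens: float) -> float:
    """Estimate compute from model architecture using the common ~6*N*D
    approximation for dense transformer training (forward + backward pass)."""
    return 6 * n_parameters * n_training_tokens

# Hypothetical run: 1,024 GPUs at ~1e15 FLOP/s peak, 30 days, 40% utilization.
hw = hardware_based_estimate(1024, 30 * 24, 1e15, 0.40)
# Hypothetical model: 70B parameters trained on 2 trillion tokens.
arch = architecture_based_estimate(70e9, 2e12)

for label, flops in [("hardware-based", hw), ("architecture-based", arch)]:
    band = ("above 10^25 (GPAI-SR presumption)" if flops >= 1e25
            else "above 10^22 (GPAI presumption)" if flops >= 1e22
            else "below 10^22")
    print(f"{label}: {flops:.2e} FLOPs -> {band}")
```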
Application of GPAI Rules When Fine-Tuning an AI Model
The guidelines clarify the application of the AI Act to the fine-tuning of a model (including a third-party model):
Next Steps
As the date of applicability of the GPAI requirements approaches, we recommend that companies that develop or use AI take steps to map out their exposure to AI technologies and how the AI Act could apply to them. As first steps, companies could consider the following:
If a company develops or uses models that qualify as GPAI, we recommend reviewing the guidelines and considering participating in the public consultation to submit any suggestions before the final version is published.
For more information on how to ensure your AI systems and models comply with the EU AI Act, please contact Cédric Burton, Laura De Boel, Yann Padova, or Nikolaos Theodorakis from Wilson Sonsini’s Data, Privacy, and Cybersecurity practice.
Wilson Sonsini’s AI Working Group assists clients with AI-related matters. Please contact Laura De Boel, Maneesha Mithal, Manja Sachet, or Scott McKinney for more information.
Roberto Yunquera Sehwani and Karol Piwonski contributed to the preparation of this alert.