On February 4, 2025, the European Commission (EC) issued draft guidelines clarifying the AI practices that are prohibited under the European Union's (EU) Artificial Intelligence (AI) Act. While non-binding, the guidelines offer valuable clarifications and practical examples to help businesses navigate their obligations under the AI Act. The EC has approved the draft guidelines but has yet to formally adopt them; formal adoption is expected in the near term.
Background
On February 2, 2025, the AI Act's provisions on prohibited AI practices became effective, along with other provisions on AI literacy.
Article 5 of the AI Act prohibits certain AI practices that are considered to pose unacceptable risks, such as AI systems that manipulate or exploit individuals, perform social scoring, or infer individuals' emotions in the workplace or in education. The ban applies both to companies offering such AI systems and to those using them. The guidelines provide concrete examples of practices that are classified as prohibited, as well as of those that are not.
The AI Act may apply to companies based outside the EU if they make an AI system or general-purpose AI (GPAI) model available on the EU market, or if the output generated by the AI system is used in the EU.
Prohibited AI Practices
Below is an overview of the main prohibitions under the AI Act, as interpreted by the guidelines:

- Harmful manipulation and deception: AI systems that deploy subliminal, purposefully manipulative, or deceptive techniques that materially distort a person's behavior and cause, or are reasonably likely to cause, significant harm.
- Harmful exploitation of vulnerabilities: AI systems that exploit vulnerabilities related to a person's age, disability, or social or economic situation to materially distort behavior and cause significant harm.
- Social scoring: AI systems that evaluate or classify individuals based on their social behavior or personal characteristics, leading to detrimental or disproportionate treatment in unrelated contexts.
- Criminal offense risk assessment: AI systems that predict the risk of an individual committing a criminal offense based solely on profiling or the assessment of personality traits.
- Untargeted scraping: AI systems that create or expand facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage.
- Emotion recognition: AI systems that infer individuals' emotions in the workplace or in educational institutions, except for medical or safety reasons.
- Biometric categorization: AI systems that categorize individuals based on biometric data to deduce or infer sensitive characteristics, such as race, political opinions, religious beliefs, or sexual orientation.
- Real-time remote biometric identification: the use of such systems in publicly accessible spaces for law enforcement purposes, subject to narrow exceptions.
Responsibilities for AI Providers
The guidelines stipulate that providers of AI systems (i.e., those who develop AI systems or have them developed and make them available in the EU) are responsible for not releasing a system that is "reasonably likely" to be used for a prohibited purpose, as well as for adopting safeguards to prevent reasonably foreseeable misuse (e.g., via technical safeguards, user controls, or restrictions on use).
The EC expects providers to clearly exclude the use of their AI systems for prohibited practices in their terms, and to provide clear instructions for use (e.g., guidance on appropriate oversight and on prohibited practices). In practice, this means that AI providers are expected to anticipate potential uses of their AI systems and implement safeguards accordingly. This applies even when those systems are intended for general use (i.e., systems powered by GPAI models).
Providers must ensure continuous compliance, which includes ongoing monitoring of, and updates to, the AI systems they have placed on the market (although this should not amount to general monitoring of deployers' activities). If an AI provider becomes aware that its AI system has been misused for a prohibited purpose, for example because the system is operated through the provider's platform, it is expected to take appropriate measures.
Next Steps
Companies that engage in prohibited AI practices may face significant fines of up to EUR 35 million or seven percent of their global annual turnover, whichever is higher. The first enforcement actions are expected to begin in the second half of 2025, as EU member states finalize their enforcement regimes. Companies offering or using AI in the EU should review their AI systems and terms in light of these guidelines and address any compliance gaps in the first half of 2025.
For more information on how to ensure your AI systems and models comply with the EU AI Act, please contact Cédric Burton, Laura De Boel, Yann Padova, or Nikolaos Theodorakis from Wilson Sonsini's data, privacy, and cybersecurity practice. Please also see our FAQ, 10 Things You Should Know About the EU AI Act, which provides more details on the prohibited uses of AI.
Wilson Sonsini’s AI Working Group assists clients with AI-related matters. Please contact Laura De Boel, Maneesha Mithal, Manja Sachet, or Scott McKinney for more information.
Rossana Fol, Roberto Yunquera Sehwani, and Karol Piwonski contributed to the preparation of this Alert.