Introduction 

On 17 December 2025, the European Commission released the first draft of the Code of Practice on Labelling and Marking AI-generated content, which falls under the scope of Article 50(2) and (4) of the AI Act.

This draft represents the initial step of a collaborative process involving hundreds of participants from industry, academia, and civil society. A second draft is expected in March 2026. The Code provides a framework for both providers and deployers of AI systems to fulfil their transparency obligations, with the broader aim of establishing human-centric, trustworthy AI without stifling innovation. This work, conducted entirely by independent experts, remains distinct from the Commission’s parallel work on developing guidelines on the scope, definitions, and exceptions under Article 50 of the AI Act. Although the first draft is not yet fully detailed, it aims to give stakeholders a clear sense of the final Code’s potential form and content, while deliberations continue on specific, concrete commitments and related measures. For this reason, further commitments and measures may be added, removed, or modified in the future.

This draft reflects one of the main objectives of the AI Act: restoring trust in the information ecosystem. By requiring machine-readable provenance, the Code gives regulators a technical foothold for market surveillance while equipping end users with tools to verify authenticity.

  1. Rules for marking and detection for providers of generative AI systems (Art. 50(2) and (5) AI Act)

The first section of the draft is dedicated to the rules for marking and detection (Article 50(2) and (5) of the AI Act) for providers of generative AI systems. The Code confirms that there is no single active marking technique currently sufficient to meet the legal requirements for effectiveness, interoperability, robustness, and reliability. Therefore, a multi-layered approach is recommended; providers must ensure their technical solutions are “fit-for-purpose”, computationally efficient, and capable of preserving the quality of the generated content. 

Key measures include: 

  • Multi-layered marking: when technically feasible, providers must embed digitally signed provenance information into the metadata of generated content.
  • Detection tools: providers commit to offering free-of-charge interfaces or publicly available detectors that enable third parties to verify content, accompanied by confidence scores.
  • Multimodal synchronization: for outputs combining text, image, or video, marking techniques must be synchronized so that the marks remain recognizable even if only a subset of the modalities is altered.
  • Quality, testing and compliance: providers must ensure that marking solutions meet quality requirements and undergo appropriate testing and verification procedures.
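As an illustration only, the first two measures can be sketched as follows: a provider signs a small provenance record attached to generated content, and a third party verifies it. The field names and the symmetric HMAC scheme are assumptions chosen for a self-contained example, not the Code's prescribed format; real systems would more plausibly rely on public-key signatures and open provenance standards.

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"provider-secret-key"  # hypothetical key, for illustration only


def mark_content(content: bytes, generator: str) -> dict:
    """Attach a digitally signed provenance record to generated content."""
    record = {
        "generator": generator,  # hypothetical field names
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "ai_generated": True,
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record


def verify_content(content: bytes, record: dict) -> bool:
    """Third-party check: signature intact and hash matching the content."""
    claimed = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (
        hmac.compare_digest(expected, record["signature"])
        and claimed["content_sha256"] == hashlib.sha256(content).hexdigest()
    )


image = b"...synthetic image bytes..."
record = mark_content(image, generator="example-model-v1")
assert verify_content(image, record)             # untouched content verifies
assert not verify_content(image + b"x", record)  # altered content fails
```

Note how even this toy scheme captures the robustness concern: any alteration of the content breaks the hash, so a verifier can detect tampering, though unlike a watermark the mark itself is lost if the metadata is stripped.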

  2. Rules for labelling of deepfakes and AI-generated and manipulated text applicable to deployers of AI systems (Art. 50(4) and (5) AI Act)

While section 1 focuses on the technical “marking” for machines, section 2 addresses the “labelling” required to identify deepfakes and certain AI-generated texts for natural persons. Deployers are responsible for disclosing the artificial origin of content at the first interaction or exposure. 

The Code introduces a common taxonomy and icon to support consistent identification. This taxonomy helps signal the granularity of AI involvement, distinguishing between fully AI-generated and AI-assisted content. Until a permanent EU-wide interactive icon is finalized, an interim icon, typically a two-letter acronym (e.g. “AI”, “IA” or “KI”), should be placed in a visible and consistent location appropriate to the context.

Specific considerations include: 

  • Artistic and Creative Works: labelling must be implemented in a proportionate manner that does not interfere with the display or enjoyment of the work.
  • AI-Generated Text: disclosure is mandatory for text intended to inform the public on matters of public interest, unless it has undergone human review or editorial control.
  • Accessibility: all disclosures must comply with accessibility requirements, providing alternative text for screen readers or audio cues for visually impaired users. 
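A minimal sketch of how a deployer might combine the interim icon with an accessible text alternative. The two-letter acronyms come from the draft; the locale mapping, the helper itself, and its output format are illustrative assumptions, not part of the Code.

```python
# Hypothetical interim-icon mapping; acronyms per the draft, mapping assumed.
INTERIM_ICONS = {
    "en": "AI",  # English
    "fr": "IA",  # French: intelligence artificielle
    "de": "KI",  # German: künstliche Intelligenz
}


def disclosure_label(locale: str, fully_generated: bool) -> dict:
    """Return the visible icon text plus a screen-reader alternative,
    distinguishing fully AI-generated from AI-assisted content."""
    icon = INTERIM_ICONS.get(locale, "AI")  # fall back to "AI"
    alt = (
        "This content was generated by artificial intelligence."
        if fully_generated
        else "This content was created with the assistance of artificial intelligence."
    )
    return {"icon": icon, "alt_text": alt}


print(disclosure_label("de", fully_generated=True))
```

The `alt_text` field is the kind of alternative a screen reader would announce in place of the visual icon, addressing the accessibility requirement above.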

 Conclusions and practical implications 

The draft Code of Practice serves as a practical toolbox for navigating the complex AI value chain. It recognises the need for proportionality, offering simplified compliance pathways for SMEs and startups to ensure that transparency does not stifle innovation. Beyond technical requirements, the Code emphasises the importance of training personnel, monitoring mislabelled content, and cooperating with market surveillance authorities. 

As the review process moves forward, stakeholder engagement will be essential to refine commitments and ensure they are practically enforceable. By institutionalising machine-readable provenance and perceptible labelling, the EU is moving toward a more resilient information ecosystem in which the boundaries between human and machine-generated content are transparent and verifiable. 

The Committee welcomes written feedback on this first draft from Code of Practice Plenary participants and observers by 23 January 2026 (22:00 CET).

To read the full document, click on the link.  
