European Commission Publishes Second Draft Code for Marking AI-Generated Content
Original: Commission publishes second draft of Code of Practice on Marking and Labelling of AI-generated content
On March 5, 2026, the European Commission published the second draft of its Code of Practice on the marking and labelling of AI-generated content. The document is meant to help providers and deployers prepare for the transparency obligations in Article 50 of the AI Act. While the code is voluntary, it is a strong signal of how compliance expectations are being translated into operational product requirements.
According to the Commission, the new draft incorporates written feedback from hundreds of participants and observers across industry, academia, civil society, member states, and the European Parliament. The updated version is intended to be more streamlined, more flexible for signatories, and less burdensome from a compliance standpoint. It also promotes the use of open standards and includes illustrative examples of a possible EU icon for labelling.
What changed in the second draft
- The draft is framed as practical guidance for meeting Article 50 AI Act rules on AI-generated content transparency.
- It emphasizes open standards and includes examples of a potential EU icon to make labelling simpler and cheaper for signatories.
- It further defines the treatment of artistic, creative, satirical, and fictional works, as well as text publications under human review or editorial control.
- Feedback on the second draft is open until March 30, 2026, with finalization expected by the beginning of June.
The date that matters most for product teams is August 2, 2026. That is when the transparency rules covering AI-generated content become applicable, according to the Commission. Vendors that generate or distribute synthetic media will likely need to make decisions well before then about visible labels, metadata practices, exception handling, and how those signals appear in user interfaces.
This is why the draft matters beyond Brussels. It pushes AI regulation closer to design-level choices: what gets labeled, when labels are suppressed, which content qualifies for exceptions, and whether interoperability depends on common icons or open standards. For global companies, the practical question may become whether to maintain an EU-specific compliance layer or adapt broader product design to a stricter European baseline.
Source: European Commission publication