
European Commission Publishes Second Draft Code for Marking AI-Generated Content

Original: Commission publishes second draft of Code of Practice on Marking and Labelling of AI-generated content

AI · Mar 12, 2026 · By Insights AI

On March 5, 2026, the European Commission published the second draft of its Code of Practice on the marking and labelling of AI-generated content. The document is meant to help providers and deployers prepare for the transparency obligations in Article 50 of the AI Act. Although the code is voluntary, it is a strong signal of how compliance expectations are being translated into operational product requirements.

According to the Commission, the new draft incorporates written feedback from hundreds of participants and observers across industry, academia, civil society, member states, and the European Parliament. The updated version is intended to be more streamlined, more flexible for signatories, and less burdensome from a compliance standpoint. It also promotes the use of open standards and includes illustrative examples of a possible EU icon for labelling.

What changed in the second draft

  • The draft is framed as practical guidance for meeting Article 50 AI Act rules on AI-generated content transparency.
  • It emphasizes open standards and includes examples of a potential EU icon to make labelling simpler and cheaper for signatories.
  • It further defines the treatment of artistic, creative, satirical, and fictional works, as well as text publications under human review or editorial control.
  • Feedback on the second draft is open until March 30, 2026, with finalization expected by the beginning of June.

The date that matters most for product teams is August 2, 2026. That is when the transparency rules covering AI-generated content become applicable, according to the Commission. Vendors that generate or distribute synthetic media will likely need to make decisions well before then about visible labels, metadata practices, exception handling, and how those signals appear in user interfaces.
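To make the product-side decisions concrete, here is a minimal sketch of the kind of labelling logic a vendor might prototype: given a content item's category, decide whether a visible label and a machine-readable marking apply. The field names, exception handling, and decision rules are illustrative assumptions based on the categories the draft mentions (artistic/satirical works, text under editorial control), not the Code's actual requirements.

```python
from dataclasses import dataclass

@dataclass
class ContentItem:
    ai_generated: bool
    artistic_or_satirical: bool = False   # assumed flag for the draft's artistic/creative/satirical carve-out
    human_editorial_review: bool = False  # assumed flag for text under human review or editorial control

def labelling_decision(item: ContentItem) -> dict:
    """Hypothetical helper: decide whether to apply a visible label and a
    machine-readable marking. The logic here is a sketch, not the Code's rules."""
    if not item.ai_generated:
        # Non-synthetic content: no transparency signals required.
        return {"visible_label": False, "machine_readable_marking": False}
    # Assumption: exception categories may relax the visible label,
    # while a machine-readable marking is still attached.
    exempt_from_visible = item.artistic_or_satirical or item.human_editorial_review
    return {
        "visible_label": not exempt_from_visible,
        "machine_readable_marking": True,
    }
```

A sketch like this makes the policy questions testable early: each exception category becomes an explicit flag that product, legal, and engineering teams can review against the final Code.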

This is why the draft matters beyond Brussels. It pushes AI regulation closer to design-level choices: what gets labeled, when labels are suppressed, which content qualifies for exceptions, and whether interoperability depends on common icons or open standards. For global companies, the practical question may become whether to maintain an EU-specific compliance layer or adapt broader product design to a stricter European baseline.

Source: European Commission publication




© 2026 Insights. All rights reserved.