EU Agrees on AI Act Omnibus: High-Risk AI Deadlines Extended to 2027-2028, 'Nudification' Apps Banned
European Parliament and Council negotiators reached a provisional agreement on the Digital Omnibus regulation at 4:30 a.m. on May 7, 2026, closing six months of negotiations. The package amends the EU AI Act primarily by extending compliance deadlines for high-risk AI systems and adding new prohibitions on non-consensual sexual content generation.
Extended Deadlines for High-Risk AI
High-risk stand-alone AI systems now face a compliance deadline of December 2, 2027, and high-risk AI systems embedded in products must comply by August 2, 2028 — delays of roughly 16 and 12 months from the original deadlines of August 2, 2026 and August 2, 2027, respectively. The rationale: harmonized standards, designated notified bodies, and conformity assessment procedures will not be ready in time.
New Prohibition: Non-Consensual Sexual AI Content
Co-legislators added a new prohibited AI practice covering non-consensual intimate imagery and child sexual abuse material (CSAM) generation. Companies must bring relevant systems into compliance by December 2, 2026 — sooner than most other obligations under the Omnibus.
Other Provisions
The grace period for AI-generated content transparency measures (watermarking) is shortened from six months to three, targeting December 2, 2026. Simplified compliance is extended from SMEs to small mid-caps. The AI Act's risk-based architecture remains intact. Source: EU Council.
Related Articles
US Government's CAISI to Pre-Test Google, Microsoft and xAI Frontier AI Models Before Public Release
NIST's Center for AI Standards and Innovation (CAISI) announced on May 5, 2026 that it signed pre-deployment evaluation agreements with Google DeepMind, Microsoft, and xAI, extending its existing framework from OpenAI and Anthropic to all major US frontier AI developers.
The Center for AI Standards and Innovation (CAISI) secured agreements with Google DeepMind, Microsoft, and xAI to review frontier AI models for national security risks before launch. The policy shift follows alarm over Anthropic's Claude Mythos autonomous cybersecurity capabilities.