The European Union has reached a provisional agreement to amend parts of the EU AI Act, introducing simplification measures for businesses while also expanding restrictions on harmful AI applications, including so-called “nudifier” apps and AI-generated child sexual abuse material.
The agreement, reached early Thursday by negotiators from the European Parliament and the Council, forms part of the EU’s broader “digital omnibus” package aimed at refining the implementation of the bloc’s landmark AI legislation.
The updated proposal seeks to reduce compliance burdens and legal uncertainty for AI providers while maintaining the AI Act’s core risk-based framework.
Lawmakers said the changes are designed to make the rules more practical without weakening safeguards tied to safety, privacy, and fundamental rights.
EU AI Act Deadlines Pushed to Reduce Legal Uncertainty
One of the biggest changes under the proposed amendments is the postponement of several obligations linked to high-risk AI systems.
Under the revised timeline, rules for AI systems classified as high-risk due to their use cases will now apply from 2 December 2027. These systems include AI deployed in biometric identification, critical infrastructure, education, employment, law enforcement, and border management.
Meanwhile, AI systems used as safety components under sector-specific EU product safety laws will face compliance obligations from 2 August 2028.
The agreement also sets watermarking obligations for AI-generated content to apply from 2 December 2026, earlier than the February 2027 implementation date the European Commission had initially proposed. Watermarking tools are intended to help identify and trace AI-generated images, audio, and video content.
Lawmakers said the postponements are necessary to ensure technical standards and implementation guidance are fully in place before the rules become enforceable.
EU Bans Nudifier Apps and AI-Generated Abuse Content
A major part of the agreement focuses on tightening restrictions around harmful AI-generated sexual content.
Negotiators agreed to ban AI systems designed to create child sexual abuse material or generate explicit deepfake content involving identifiable individuals without consent. The restriction covers images, video, and audio content.
The EU AI Act ban specifically applies to companies placing such AI systems on the EU market, to providers that fail to include reasonable safeguards against misuse, and to users who deploy the systems to create illegal or non-consensual explicit material.
The decision directly targets “nudifier” apps, which use AI to digitally remove clothing or generate fake explicit imagery of individuals.
Companies operating such systems will have until 2 December 2026 to comply with the new requirements.
Michael McNamara, co-rapporteur for the Civil Liberties, Justice and Home Affairs committee, said the agreement strengthens the EU’s ability to act against AI systems that threaten human dignity and fundamental rights.
“I’m pleased that this morning we reached an agreement on the AI Omnibus,” McNamara said. “Alongside simplification measures, we are banning nudification apps, a key part of the Parliament’s mandate, and, of course, the creation of child sexual abuse material using AI systems.”
Simplification Measures for AI Providers and SMEs
The amendments also introduce several simplification measures intended to reduce overlapping compliance requirements for companies developing AI technologies.
Under the new framework, machinery products with AI features will no longer need to comply separately with both the EU AI Act and sector-specific safety laws if existing safety rules already provide equivalent protection.
Lawmakers also narrowed the definition of “safety component” within the EU AI Act. This means AI functions designed only to assist users or improve product performance will not automatically be classified as high-risk unless their failure creates health or safety risks.
Another change allows companies to process personal data where strictly necessary to detect and correct bias in AI systems, provided appropriate safeguards are in place.
The agreement further extends certain exemptions previously available only to small and medium-sized enterprises (SMEs) to small mid-cap companies. EU officials said the move is intended to help startups and growing technology firms scale AI innovation more easily within Europe.
Arba Kokalari, co-rapporteur for the Internal Market and Consumer Protection committee, said the revised rules strike a balance between innovation and regulation.
“With this agreement, we show that politics can move just as quickly as technology,” Kokalari said. “We now make the AI rules more workable in practice, remove overlaps and pause the high-risk requirements.”
Next Steps for the EU AI Act Amendments
The provisional agreement still requires formal approval from both the European Parliament and the Council before it can become law.
EU lawmakers are aiming to finalize adoption before 2 August 2026, which marks the scheduled start date for existing high-risk AI system rules under the original AI Act framework.
The negotiations are part of the EU’s continuing effort to shape global standards around artificial intelligence governance while addressing concerns related to safety, transparency, and misuse of generative AI technologies.