The Forest View (TL;DR)
- Trilogue talks broke down on April 28–29, 2026: the EU Parliament, Council, and Commission failed to agree on delaying high-risk AI compliance rules, meaning the original August 2, 2026 deadline may stand.
- High-risk AI systems — including those used in hiring, healthcare, and biometrics — face strict new obligations unless a last-minute reform agreement is struck before mid-May.
- US and global companies operating AI products used by EU customers are directly in scope, regardless of where they are headquartered.
A Law That Won’t Wait
The remaining provisions of the EU Artificial Intelligence Act are set to become applicable on August 2, 2026 — and the clock is running.
What was expected to be a relatively smooth legislative update has turned into a regulatory standoff with major consequences for every company building or deploying AI in Europe. As of early May 2026, three EU institutions cannot agree on how to reform the world’s first comprehensive AI law — and the original, stricter version may now be the one that takes effect.
This is not a distant regulatory concern. It is a live crisis with a 90-day countdown.
What Is the EU AI Act? A Quick Primer
The AI Act assigns applications of AI to risk categories, scaling legal obligations to the potential for harm. The law operates on a risk-tiered framework:
- Unacceptable risk → Banned outright (e.g., social scoring systems)
- High risk → Strict compliance required (e.g., hiring tools, biometric systems, critical infrastructure)
- Limited/minimal risk → Light-touch or no rules
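To make the tiers concrete, here is a minimal sketch in Python of how a deployer might run a first-pass triage of its AI inventory against the Act's categories. The keyword lists and the classify_risk helper are illustrative assumptions, not the Act's legal test, which turns on the detailed Annex definitions.

```python
# Illustrative first-pass triage of AI use cases against the Act's
# risk tiers. The use-case lists are assumptions, not legal definitions.
BANNED_USES = {"social scoring", "subliminal manipulation"}
HIGH_RISK_USES = {
    "hiring", "candidate screening", "worker monitoring",
    "biometric identification", "credit scoring",
    "critical infrastructure", "education admission",
}

def classify_risk(use_case: str) -> str:
    """Return a provisional risk tier for a described use case."""
    if use_case in BANNED_USES:
        return "unacceptable: prohibited outright"
    if use_case in HIGH_RISK_USES:
        return "high: strict compliance obligations apply"
    return "limited/minimal: light-touch or no rules"

for use in ("hiring", "social scoring", "spam filtering"):
    print(f"{use}: {classify_risk(use)}")
```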
The AI Act entered into force on August 1, 2024, and becomes fully applicable two years later, on August 2, 2026, with some exceptions. Prohibited AI practices and AI literacy obligations have applied since February 2, 2025; governance rules and obligations for GPAI models have applied since August 2, 2025.
The Breaking News: Trilogue Talks Fall Apart
Trilogue negotiations on reforming the EU AI Act under the Digital Omnibus stalled after the Parliament and Council disagreed over the Act’s overlap with sectoral regulations. After 12 hours of talks that began on April 28 and ran into the early morning hours of the next day, the institutions agreed to resume negotiations on May 13, ahead of the looming August compliance deadline. Should those talks fail to produce a deal, the high-risk obligations still take effect on August 2, 2026.
The central disagreement? Which products fall under the AI Act versus existing sector-specific safety rules.
A key stumbling block involves high-risk AI systems embedded in products including medical devices and toys. The Parliament pushed for a carve-out, but the Council and Commission did not support these changes.
The political temperature inside those negotiations was high. Dutch MEP Kim van Sparrentak was pointed in her assessment: European companies that care about safety and did their homework now face regulatory chaos.
The Digital Omnibus: What Was Being Proposed
The Digital Omnibus on AI — introduced by the European Commission in November 2025 — was designed to simplify and partially delay the AI Act. The core proposals included:
- Pushing the high-risk AI compliance deadline from August 2, 2026 to December 2, 2027
- Simplifying compliance rules for smaller businesses and industrial sectors
- Clarifying the overlap between the AI Act and product-specific safety laws
The European Parliament had previously voted to delay key compliance deadlines, pushing high-risk AI system requirements to December 2027 and further to August 2028 for sector-specific obligations. In part, the delay had been attributed to pressure from tech companies and the Trump administration.
But that delay has not been confirmed in law. It requires a political agreement in the Council, and that agreement has not yet been reached.
Comparison Table: AI Act Compliance Scenarios
| Scenario | High-Risk Deadline | What It Means for You |
|---|---|---|
| Omnibus passes before Aug 2 | December 2, 2027 | More time to prepare; reduced urgency |
| Talks fail; original law stands | August 2, 2026 | Immediate compliance required; fines active |
| Partial agreement reached | Mixed (Annex III vs. Annex I split) | Complex dual-track obligations; legal uncertainty |
Who Is Affected: High-Risk AI Systems Explained
AI systems used in employment-related decisions are classified as high-risk, including tools used for recruitment, candidate selection, performance evaluation, task allocation, monitoring of workers, and decisions on promotion or termination.
Beyond the workplace, high-risk categories also cover:
- Biometric identification systems
- AI used in critical infrastructure (energy, water, transport)
- AI in education and vocational training
- AI in access to essential services (credit scoring, insurance)
By August 2, 2026, organizations should have completed conformity assessments, finalized technical documentation, affixed CE marking, and completed EU database registration for high-risk systems.
What About General-Purpose AI (GPAI) Models?
On August 2, 2026, the Commission’s enforcement powers over GPAI providers become applicable: from that date it can request information, demand access to models, order model recalls, and impose fines for non-compliance. Providers of GPAI models placed on the market before August 2, 2025 have until August 2, 2027 to comply. Models whose cumulative training compute exceeds 10²⁵ floating point operations (FLOPs), roughly GPT-4-class systems, are presumed to carry systemic risk and face the most rigorous scrutiny.
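For rough intuition on that threshold, the sketch below estimates cumulative training compute using the common rule of thumb of roughly 6 FLOPs per parameter per training token. Both the heuristic and the model size are assumptions for illustration; the Act’s presumption turns on actual cumulative compute, not this estimate.

```python
# Rough estimate of cumulative training compute against the
# AI Act's 10^25 FLOPs systemic-risk presumption.
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def estimate_training_flops(params: float, tokens: float) -> float:
    """Rule-of-thumb estimate: ~6 FLOPs per parameter per token."""
    return 6 * params * tokens

# Hypothetical model: 500B parameters trained on 10T tokens.
flops = estimate_training_flops(params=5e11, tokens=1e13)
print(f"Estimated compute: {flops:.2e} FLOPs")
print("Presumed systemic risk" if flops >= SYSTEMIC_RISK_THRESHOLD_FLOPS
      else "Below the presumption threshold")
```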
The GPAI Code of Practice, published in July 2025, provides a voluntary compliance pathway. For signatories, the Commission will focus enforcement activities on monitoring adherence to the Code, and may take commitments to the Code into account as mitigating factors when fixing the amount of fines.
Does This Apply to US and UK Companies?
Yes — directly.
The EU AI Act follows a jurisdictional model similar to the GDPR’s. It applies to providers placing AI systems or GPAI models on the market in the Union, irrespective of whether those providers are established in the Union or in a third country. If a non-EU company sells, licenses, or otherwise makes an AI product available to EU customers, that product may be considered placed on the EU market.
Companies do not need a European office or European employees to fall within the scope of the EU AI Act. Any SaaS platform, hiring tool, or AI-powered service with EU users is potentially in scope.
The compliance choice facing US and UK companies right now is stark: prepare for August 2, 2026 as if the original deadline stands, or wait on a political deal that may not arrive in time.
The Fines Are Real
Non-compliance is not a theoretical risk. Competent authorities may impose administrative fines of up to €35 million or 7% of global annual turnover (whichever is higher) for infringements relating to prohibited AI practices, up to €15 million or 3% for infringements of certain other obligations, and up to €7.5 million or 1% for supplying incorrect or misleading information to public authorities.
That’s a GDPR-level penalty structure, and all 27 EU member states are standing up national competent authorities to apply it.
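As a back-of-the-envelope illustration of how those ceilings scale with company size, the sketch below computes the maximum exposure per tier for a hypothetical firm. The turnover figure is invented, and real fines are set case by case up to these caps.

```python
# Maximum administrative fine per AI Act tier: a fixed ceiling or a
# percentage of global annual turnover, whichever is higher for
# undertakings (Article 99).
FINE_TIERS = {
    "prohibited_practices": (35_000_000, 0.07),  # €35M or 7%
    "other_obligations":    (15_000_000, 0.03),  # €15M or 3%
    "misleading_info":      (7_500_000,  0.01),  # €7.5M or 1%
}

def max_fine(tier: str, global_turnover_eur: float) -> float:
    """Return the fine ceiling for a given tier and turnover."""
    fixed, pct = FINE_TIERS[tier]
    return max(fixed, pct * global_turnover_eur)

# Hypothetical company with €2B global annual turnover.
for tier in FINE_TIERS:
    print(f"{tier}: up to €{max_fine(tier, 2_000_000_000):,.0f}")
```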
The Human Root: Jobs, Ethics, and the People in the Machine
The AI Act is not merely a technical compliance exercise. Its most politically charged provisions concern people at work.
When an algorithm decides who gets a job interview, who receives a promotion, or whose contract is terminated, the stakes are deeply human. Organizations that deploy AI in employment-related contexts, from recruitment and performance evaluation to task allocation, worker monitoring, promotion, and termination decisions, should continue their compliance preparations in line with the existing deadline of August 2, 2026.
The argument for the delay — championed by the EPP and German industry groups — centers on competitiveness. The argument against it is simpler: by pushing back rules for high-risk systems while keeping the law non-retroactive, the EU could leave some of the most sensitive AI applications permanently outside its oversight.
A chatbot embedded in a children’s toy. An algorithm screening social care applicants. A system flagging workers for underperformance. These are not abstract edge cases. They are already deployed products, in use today.
The deeper ethical question — who is accountable when an AI system causes harm — remains unanswered if the law keeps slipping. Delays create a window in which AI systems enter markets before the legal architecture to govern them is in place. And because the AI Act is not retroactive, those systems may never be brought under full compliance.
The Verdict
The EU AI Act is not failing. It is being tested by the normal friction of democratic lawmaking at continental scale.
What is genuinely at risk is timing. The window between now and August 2 is narrow. If the May 13 trilogue produces agreement, businesses will gain breathing room. If it does not, the most stringent version of the world’s most ambitious AI regulation takes effect — with full enforcement powers, meaningful fines, and no grace period for those who assumed a delay was coming.
The practical advice is clear: do not plan for the delay. Classify your AI systems now. Complete documentation. Understand whether you fall under Annex I or Annex III. Engage your legal team.
The EU built the world’s first comprehensive AI governance framework. Whether you agree with it or not, it is coming — and August 2 may arrive before the politicians do.
FAQs
When does the EU AI Act take full effect?
The remaining provisions of the EU AI Act are set to become applicable on August 2, 2026. This includes the comprehensive compliance framework for high-risk AI systems — covering areas like employment, biometrics, education, and critical infrastructure. A proposed reform (the Digital Omnibus) sought to push this deadline to December 2027, but as of May 2026, that reform has not been formally adopted.
Does the AI Act apply to companies outside the EU?
The AI Act has broad extraterritorial reach. It applies to providers outside the EU that place GPAI models or AI systems on the market in the EU, and to providers and deployers of AI systems outside the EU when the system’s output is used inside the EU. If your product reaches EU users — directly or through resellers — you are likely in scope.
What are the penalties for non-compliance?
Fines can reach up to €35 million or 7% of global annual turnover for the most serious violations involving prohibited AI practices. Smaller infractions carry penalties of up to 3% of global turnover. These figures are comparable to GDPR enforcement, and national authorities across all EU member states have been designated to apply them.
