The EU's AI Act has been delayed again.
Not because it's going away — but because it's still taking shape. Legislators in Brussels failed to agree on whether parts of the regulation should be pushed back; timelines, obligations, and scope are still being debated.
The framework remains in motion.
How we got here
The AI Act did not arrive as a single switch-on moment. It took shape gradually — proposed, reworked, politically agreed, brought into force, and now still being implemented in phases.
EU AI Act timeline
2021
Proposal tabled
The European Commission tables the first draft of a horizontal regulation for AI.
2022
Amendments negotiated
Council and Parliament negotiate amendments through the year.
2023
Political agreement
Trilogue lands a provisional deal between the Council, Parliament, and Commission.
2024
Law enters force
The text becomes law, with obligations switched on in stages over the years that follow.
2026
Delay debated
Legislators in Brussels fail to agree on whether parts of the regulation should be pushed back. Timelines and obligations are still being contested.
Ongoing
Phased implementation
Requirements land over time. Standards are still being defined. Guidance is still being written.
A system designed to take shape gradually — not a fixed rulebook handed down on day one.
The latest signal
Recent discussions in Brussels show the same pattern. The disagreement is not about whether the AI Act applies — it is about how and when it can realistically be enforced. The latest round produced requests to extend compliance timelines and proposals to delay high-risk obligations, not because the regulation has gone away but because the machinery around it is not yet ready to operate. Authorities are still being designated. Conformity assessment bodies are not yet in place. Harmonised standards are still emerging.
As one legal analysis put it, the current changes "introduce a degree of uncertainty, whilst at the same time giving the prospect of additional time."
The framework exists. The machinery does not.
The assumption
Most governance programmes are built on a simple idea: the rules stabilise. You define policy, implement controls, align systems, and then operate.
Where that breaks
The AI Act is not a fixed rulebook. It is a moving system. Interpretations evolve. Guidance is layered in over time. Standards determine how compliance is measured. What is compliant today may not be compliant next year — and what passes review next year may not match the conditions that existed when the original decision was made.
The decision
A system produces an outcome. A loan is declined. A transaction is flagged. A customer is scored.
At that moment, a specific policy exists. A specific model version. A specific threshold. The decision is the consequence of all three meeting at one specific time.
At execution
What is in force, right now:
- Policy: a specific version exists
- Model: a specific version exists
- Threshold: a specific value exists
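As a concrete illustration, here is a minimal sketch of such a record in Python. Every name in it is hypothetical; the point is only that the input, the policy version, the model version, the threshold, the outcome, and the timestamp are bound together at the moment of execution.

```python
# A minimal sketch of a decision record. All names are illustrative,
# not a real API: the record binds everything in force at execution.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)  # frozen: the record cannot be mutated after capture
class DecisionRecord:
    decision_id: str
    input_data: dict      # what the system was asked to decide on
    policy_version: str   # the policy in force at execution
    model_version: str    # the exact model that produced the outcome
    threshold: float      # the decision threshold applied
    outcome: str          # e.g. "declined", "flagged", "scored"
    decided_at: datetime  # when all of the above met

record = DecisionRecord(
    decision_id="d-8841",
    input_data={"applicant_id": "a-102", "amount": 25_000},
    policy_version="credit-policy-v3.2",
    model_version="risk-model-2024-11-07",
    threshold=0.62,
    outcome="declined",
    decided_at=datetime(2025, 3, 14, 9, 41, tzinfo=timezone.utc),
)
```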
Six months later
The regulation has shifted. The interpretation has changed. The policy has been updated. The model has been retrained. The context in which the decision was made no longer exists in the same form.
The question
A regulator asks why this decision happened — not under today's rules, but under the rules that existed then.
What actually happens
The organisation looks back. It finds logs, events, and fragments spread across systems. The inputs can be recovered. The outcomes can be located. But the reasoning is reconstructed — assembled from what remains rather than observed as it was.
The fragmentation problem
This is not a failure of one system. It is structural.
Across AI governance, the data already exists. Risks are catalogued, incidents are recorded, frameworks are published. But they exist side by side, not as a single coherent artefact. Even recent work to map the AI risk landscape underlines this: datasets can be brought together into a shared interface, but the connections between them still have to be drawn by the reader.
The information is there. The decision is not.
The problem
In a system where compliance evolves over time, reconstruction becomes unreliable. You are not proving what happened — you are interpreting it through the present.
The shift
A system built for stable policy breaks here. A system built for moving policy assumes this from the start: policies will change, standards will evolve, interpretations will shift.
So the decision is captured at the moment it is made, not referenced later. The input, the policy, the context, the outcome, the version, and the timestamp are preserved together as they were.
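What that capture might look like, as a minimal sketch building on the hypothetical record above. The sealing scheme here is an assumption for illustration, not MeshQu's implementation: serialise the record canonically and hash it, so the evidence can later be checked against what was actually decided.

```python
# A sketch of sealing a decision record at the moment it is made,
# reusing the hypothetical DecisionRecord above. Illustrative only.
import hashlib
import json
from dataclasses import asdict

def seal(record: DecisionRecord) -> str:
    # Canonical serialisation: stable key order, ISO-8601 timestamp.
    payload = asdict(record)
    payload["decided_at"] = record.decided_at.isoformat()
    canonical = json.dumps(payload, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Store record and digest together, append-only. Any later edit to the
# record no longer matches the digest, so the evidence stays as-was.
evidence = {"record": record, "digest": seal(record)}
```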
What this enables
When the rules change, the decision does not need to be reinterpreted. It can be verified against the policy that existed at the time.
Not reconstructed. Not approximated.
Proven.
So when asked:
Was this decision compliant under the rules in force on 14 March?
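Answering that becomes a lookup, not an investigation. A minimal sketch, reusing the record above with an illustrative policy history (none of this is a real data source): resolve which policy version was in force at the decision's own timestamp and compare it with the version captured in the record.

```python
# A sketch of point-in-time verification, reusing the record above.
# The policy history below is illustrative data, not a real source.
from datetime import datetime, timezone

# (effective_from, policy_version), in chronological order
policy_history = [
    (datetime(2024, 9, 1, tzinfo=timezone.utc), "credit-policy-v3.1"),
    (datetime(2025, 1, 15, tzinfo=timezone.utc), "credit-policy-v3.2"),
    (datetime(2025, 6, 1, tzinfo=timezone.utc), "credit-policy-v4.0"),
]

def policy_in_force(at: datetime) -> str:
    # The latest version whose effective date is not after `at`.
    return max(v for v in policy_history if v[0] <= at)[1]

def was_compliant(record: DecisionRecord) -> bool:
    # Compliant-as-made: decided under the version actually in force
    # at its own timestamp, not under whatever applies today.
    return record.policy_version == policy_in_force(record.decided_at)

print(was_compliant(record))  # True: v3.2 was in force on 14 March 2025
```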
Where this fits
MeshQu is designed for a world where policy is not fixed. It does not replace your systems — it captures what they cannot retain. Not activity, but evidence.
Closing
The AI Act will not arrive as a single, stable regime. It will continue to evolve through standards, guidance, and enforcement.
The rules will move.
The decision will not.
And if you cannot prove it as it was made, no delay will fix that.
Sources & context
- POLITICO: EU legislators fail to clinch deal to delay AI law
- European Commission: AI Act regulatory framework and timeline
- Bruegel: AI Act implementation pressure
- CEPS: How the AI Act evolved through negotiation
- DLA Piper: Phased implementation and compliance preparation
- A&O Shearman: Why obligations are still shifting
- Clifford Chance: Timeline uncertainty and the Digital Omnibus
- Euronews: Implementation delays and enforcement gaps
- OECD: Regulatory uncertainty in AI governance
- MIT AI Risk Initiative: Introducing the AI Risk Navigator