
The EU AI Act requires AI decisions to be transparent and traceable.

Models produce outputs every minute. Decisions are made on them. When those outcomes are questioned, most organisations cannot reproduce how they were made. That is not transparency.
Art. 12 · Art. 13 · Art. 14 · Art. 15 · High-risk obligations from 2 Aug 2026
The obligation

This is what the AI Act looks like in your decisions.

Each model output is a decision. Each decision must be reproducible.
// regulatory evidence surface

You have AI decisions you cannot reproduce.

The Act binds transparency obligations to each output the deployer acts on. The figures below show where model decisions cannot be replayed.

EU AI Act · Articles 12 · 13 · 14 · 15

Transparency, logging, and human oversight of high-risk AI

  • 12 model decisions cannot be reproduced
  • 4 model versions in flight · across 6 production surfaces · oldest 211 days
  • 38 features without snapshots · drift detected in 14 of 38
  • 211 days · oldest stale lineage · first flagged 29 Sep 2025

The AI Act regulates the use of the model, not the model card. This is what happens when use can’t be replayed.

See how it works
  • A loan is declined: output exists, reasoning lost
  • A claim is auto-settled: feature snapshot gone
  • Content is removed: override policy unrecoverable

Where this breaks.

  • Inputs + features bound: exact values, hashed
  • Model version sealed: weights pinned to the call
  • Override traced: human-in-the-loop step recorded

With a Decision Receipt.

Logs capture inputs and outputs. The AI Act asks for the call.

Model versions change. Parameters evolve. Context is lost. When asked to explain or reproduce a decision, systems approximate what happened. The AI Act requires something stronger.

The AI Act attaches to decisions

Transparency is not a report.

It is the ability to show what input was evaluated, which model and version produced the result, what policy or threshold applied, and why that outcome was accepted. Bound together. At the moment of the call.
  • Input evaluated (Art. 12, 13)
  • Model + version produced result (Art. 13, 15)
  • Policy or threshold applied (Art. 13, 15)
  • Outcome accepted (Art. 14, oversight)

Each decision must be reproducible.

MeshQu mapping

A model output becomes a decision.

A decision produces a Decision Receipt — the input, the model version, the policy or threshold applied, why the outcome was accepted. Captured at execution. Not reconstructed later.
Decision Receipt · AI Decision · DR-K7M9-2P4Q
Verified
  • Decision: Approved by Risk Committee
  • Policy: Third-party risk — Tier 1, v7
  • Evidence: 3 attestations, 2 documents
  • Integrity: sha256:0xdead…beef
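A receipt of this shape can be pictured as a plain data record sealed with a content hash at the moment of the call. A minimal sketch in Python; the field names, canonical-JSON encoding, and model identifier are illustrative assumptions for this sketch, not MeshQu's actual format:

```python
import hashlib
import json
from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class DecisionReceipt:
    """Hypothetical receipt shape: binds a decision's context at call time."""
    receipt_id: str
    decision: str       # outcome that was acted on
    model_version: str  # exact model/weights pinned to the call
    policy: str         # policy or threshold that applied
    input_hash: str     # hash of the exact input/feature values
    evidence: list      # attestations and documents referenced

def seal(receipt: DecisionReceipt) -> str:
    """Content hash over a canonical JSON encoding of the receipt."""
    canonical = json.dumps(asdict(receipt), sort_keys=True, separators=(",", ":"))
    return "sha256:" + hashlib.sha256(canonical.encode()).hexdigest()

receipt = DecisionReceipt(
    receipt_id="DR-K7M9-2P4Q",
    decision="Approved by Risk Committee",
    model_version="risk-scorer@2.4.1",  # illustrative version identifier
    policy="Third-party risk, Tier 1, v7",
    input_hash="sha256:" + hashlib.sha256(b'{"tier":1,"score":712}').hexdigest(),
    evidence=["attestation-1", "attestation-2", "attestation-3"],
)
integrity = seal(receipt)
```

Because the hash is computed over a canonical encoding, the same receipt always seals to the same digest, and any later change to any field produces a different one.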
Reproducible by design

The same decision can be run again.

Same input. Same model version. Same policy. Same result. This is what reproducibility looks like in practice.
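That property can be checked mechanically: rerun the pinned model version on the recorded input and compare the result. A hedged sketch, where the registry, scoring function, and threshold policy are stand-ins invented for illustration, not a real MeshQu API:

```python
# Replay check sketch. A pinned model version must behave as a pure
# function of its input for the replay to be meaningful.

def score_v2(features: dict) -> float:
    # Illustrative scoring function standing in for a sealed model version.
    return 0.8 * features["credit_score"] / 850 + 0.2 * features["income_ratio"]

MODEL_REGISTRY = {"risk-scorer@2.0.0": score_v2}
POLICY = {"approve_threshold": 0.7}  # the applied policy is sealed too

def decide(model_version: str, features: dict) -> str:
    score = MODEL_REGISTRY[model_version](features)
    return "approved" if score >= POLICY["approve_threshold"] else "declined"

# Captured at execution time:
recorded = {
    "model_version": "risk-scorer@2.0.0",
    "input": {"credit_score": 700, "income_ratio": 0.4},
    "outcome": "approved",
}

# Replayed later: same input, same model version, same policy, same result.
replayed = decide(recorded["model_version"], recorded["input"])
assert replayed == recorded["outcome"]
```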
Trust posture

Verifiable without trusting MeshQu.

A receipt can be verified independently. No reliance on internal systems. No dependency on MeshQu. Proof stands on its own.
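Independent verification reduces to recomputing the content hash from the receipt body and comparing it with the published integrity value; no call back to the issuer is needed. A sketch under the assumption that the body is canonical JSON hashed with SHA-256:

```python
import hashlib
import json

def verify_receipt(receipt_body: dict, claimed_integrity: str) -> bool:
    """Recompute the digest locally; any tampering changes it."""
    canonical = json.dumps(receipt_body, sort_keys=True, separators=(",", ":"))
    recomputed = "sha256:" + hashlib.sha256(canonical.encode()).hexdigest()
    return recomputed == claimed_integrity

body = {"id": "DR-K7M9-2P4Q", "decision": "Approved by Risk Committee"}
integrity = "sha256:" + hashlib.sha256(
    json.dumps(body, sort_keys=True, separators=(",", ":")).encode()
).hexdigest()

assert verify_receipt(body, integrity)          # untouched receipt verifies
tampered = {**body, "decision": "Declined"}
assert not verify_receipt(tampered, integrity)  # any edit breaks the hash
```

A digital signature over the digest would additionally prove who issued the receipt; the hash alone proves it has not changed since sealing.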
Questions

The AI Act, in practice

Is MeshQu an AI governance platform?
No. MeshQu is a decision assurance layer. It captures AI-assisted decisions and makes them reproducible.
Does MeshQu explain the model?
MeshQu does not replace model explainability. It records the decision context: input, model version, policy, threshold and outcome.
Can this support human oversight?
Yes. Human review and override decisions can produce the same receipt as automated decisions.
The boundary

If you cannot reproduce these decisions, you cannot demonstrate transparency.

See how it works