
Explainability is not proof

By Sam Carter

AI systems are becoming more explainable.

Being explainable is not the same as being provable.


The promise

When an AI model makes a decision, the expectation is simple: explain it.

Tools now exist to answer that question. Feature importance scores. SHAP values. Local explanations.

The system can describe its own behaviour.


The assumption

AI explainability is often treated as the solution to AI governance.

The argument runs: if we can explain the decision, we can defend it. If we can defend it, we can prove it.


Where it breaks

An explanation describes how a model behaves — it does not prove that a decision was made under the right conditions.


The gap

Take the same loan applicant from the AML and underwriting examples in the companion pieces. She submits at 14:02 on a Tuesday in March. The model declines. SHAP returns:

Local explanation · SHAP

Why the model declined

Income           contributed +0.3
Debt             contributed −0.6
Credit history   contributed −0.4

A weighted breakdown of factors. Useful.
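For a linear model, a contribution score of this kind reduces to each feature's weight times its deviation from a baseline value. A minimal sketch; the weights, baselines, and applicant values are illustrative, chosen to reproduce the numbers above:

```python
# Additive feature attributions for a linear model.
# For linear models, a feature's SHAP value is its weight times the
# feature's deviation from the baseline (expected) value.
# All weights, baselines, and inputs below are illustrative.

weights = {"income": 0.010, "debt": -0.012, "credit_history": 0.008}
baseline = {"income": 50.0, "debt": 20.0, "credit_history": 600.0}  # dataset means
applicant = {"income": 80.0, "debt": 70.0, "credit_history": 550.0}

contributions = {
    f: weights[f] * (applicant[f] - baseline[f]) for f in weights
}
for feature, value in contributions.items():
    print(f"{feature:>15} contributed {value:+.1f}")
```

The sketch shows what the breakdown is: arithmetic over the model's internals. Nothing in it records the conditions under which the decision was taken.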

But the breakdown answers a different question. It tells you how the model produced the output. It does not tell you whether the decision should have been made — whether the threshold the model used matched the policy in force at 14:02, whether the analyst's override authority applied, whether the model version running was the one signed off by the model risk committee.


What's missing

A SHAP plot does not capture:

Which policy threshold was in force at the moment of decision.
Whether the analyst's override authority applied.
Whether the model version running was the one signed off.

The plot describes behaviour.

The plot does not establish correctness.


The time problem

Even when an explanation exists, it is tied to the present.

Models are retrained. Features are reweighted. Pipelines change.

Six months later, the same SHAP output cannot be reproduced with certainty against the same input. The explanation drifts.
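The drift is easy to show for a linear model: retraining changes the weights, so recomputing the explanation against the exact same input gives different numbers. All values here are illustrative:

```python
# Same applicant, two model versions. Retraining shifts the weights,
# so the recomputed explanation no longer matches the original.
# All weights, baselines, and inputs are illustrative.

applicant = {"income": 80.0, "debt": 70.0}
baseline = {"income": 50.0, "debt": 20.0}

weights_v1 = {"income": 0.010, "debt": -0.012}  # weights at decision time
weights_v2 = {"income": 0.006, "debt": -0.015}  # weights after retraining

def attributions(weights):
    # Linear-model contribution: weight times deviation from baseline.
    return {f: weights[f] * (applicant[f] - baseline[f]) for f in weights}

print(attributions(weights_v1))  # what the explanation said at decision time
print(attributions(weights_v2))  # what re-running it says six months later
```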


The independence problem

The explanation is generated by the system itself.

The model explains its own output. The infrastructure describes its own behaviour.

This is not independent evidence. What you have is a system narrating its own decisions.


The moment of scrutiny

A regulator does not ask "How does your model work?"

They ask "Why did this decision happen?" About one specific case, at one specific moment, under one specific policy.


The difference

Explanations are descriptive.

Proof is evidential.


The shift

Explanation describes. Proof evidences.

A provable decision does not rely on explanation after the fact. It is captured at the moment it is made.

Not inferred. Not recalculated. Not approximated.

Recorded once.
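One way to picture a record captured at the moment of decision. This is a minimal sketch, not the product's actual format; the field names, versions, and hash scheme are assumptions:

```python
import hashlib
import json
from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class DecisionRecord:
    """Sketch of evidence captured at the moment a decision is made."""
    decision_id: str
    timestamp: str        # when the decision was made
    model_version: str    # the version actually running
    policy_version: str   # the policy in force at that moment
    input_hash: str       # fingerprint of the exact input
    outcome: str

def record_decision(inputs: dict, outcome: str) -> DecisionRecord:
    # Hash the input so the record can be checked later
    # without re-running the model.
    digest = hashlib.sha256(
        json.dumps(inputs, sort_keys=True).encode()
    ).hexdigest()
    return DecisionRecord(
        decision_id="RCP-MODEL-43821",          # illustrative identifier
        timestamp="2024-03-12T14:02:00Z",       # illustrative timestamp
        model_version="credit-risk-7.2",        # illustrative version
        policy_version="lending-policy-2024.1", # illustrative version
        input_hash=digest,
        outcome=outcome,
    )

rec = record_decision({"income": 80.0, "debt": 70.0}, "declined")
print(json.dumps(asdict(rec), indent=2))
```

The point of the sketch: everything needed to answer "why did this happen?" is written down once, at decision time, rather than reconstructed later.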


What changes

When the question comes — why did this happen? — the answer is not generated. It is retrieved.

Asked

Why did the model decline this applicant on 12 March?

RCP-MODEL-43821 · Verified · Resolved in 2.1 seconds

Closing

An explanation helps you understand a model.

It does not help you prove a decision.

When a decision needs to defend itself — in front of a regulator, a court, or a risk committee — understanding is not enough.

You need proof.

