Article 13 Transparency: What the EU AI Act Actually Requires
Article 13 of the EU AI Act requires that high-risk AI systems be "sufficiently transparent". Here is what that means in practice: per-response scoring, deterministic formulas, and exportable evidence.
Article 13 of the EU AI Act (Regulation 2024/1689) requires high-risk AI systems to be "designed and developed in such a way as to ensure that their operation is sufficiently transparent to enable deployers to interpret a system's output and use it appropriately."
That sentence contains three obligations most organisations are not yet meeting.
Three Requirements Embedded in Article 13
1. "Sufficiently transparent" — Not binary. Measurable.
Transparency is not a checkbox. Article 13(1) requires "an appropriate type and degree of transparency", and Article 13(3) requires the accompanying instructions for use to set out the system's capabilities and limitations, including "the level of accuracy, ... robustness and cybersecurity."
This means a deployer needs:
- Quantified accuracy metrics — not "high accuracy" but specific numbers with defined measurement methodology
- Known limitations — documented failure modes, edge cases, and conditions under which performance degrades
- Residual risks — what the system cannot do, even when operating correctly
A vendor claiming "our AI is transparent" without publishing production accuracy numbers violates the spirit of Article 13 before the regulation even applies.
2. "Enable deployers to interpret" — The deployer must understand, not just receive.
Article 13(2) requires that instructions for use include "concise, complete, correct and clear information" that is "relevant, accessible and comprehensible to deployers." This has two implications:
For system-level transparency: The deployer must understand how the AI system produces its outputs. If the governance scoring is a machine learning black box, the deployer cannot interpret it. They are trusting the tool's claims without the ability to verify them.
For response-level transparency: Each AI output must be interpretable. This means per-response evidence — what sources informed the response, what reasoning produced the conclusion, and what confidence level applies.
Formula-based deterministic scoring satisfies Article 13 because the deployer can inspect the formula, verify the inputs, and reproduce the score. ML-based classification does not — because the deployer cannot explain why the model assigned a particular confidence level.
3. "Use it appropriately" — Transparency serves a purpose.
The goal of transparency under Article 13 is not disclosure for its own sake. It is to enable the deployer to make informed decisions about whether and how to act on AI outputs.
This means transparency must be actionable:
- If a response scores low on compliance, the deployer needs to know which framework and which article triggered the low score
- If evidence quality is insufficient, the deployer needs to see which sources were consulted and which were missing
- If the system's confidence is low, the deployer needs graduated signals: not just "pass" or "fail" but tiered confidence levels with specific thresholds, as in the sketch after this list
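To make "graduated signals" concrete, here is a minimal sketch of a threshold-to-tier mapping. The tier names follow the GREEN/BLUE/YELLOW/RED/GRAY scheme used later in this article; the numeric cut-offs and response protocols are hypothetical illustrations, not TraceGov's actual thresholds.

```python
from typing import Optional

# Hypothetical thresholds; illustrative only, not TraceGov's actual cut-offs.
TIER_THRESHOLDS = [
    (0.85, "GREEN"),   # act on the output under standard review
    (0.70, "BLUE"),    # act, but sample periodically
    (0.50, "YELLOW"),  # require human review before acting
    (0.00, "RED"),     # do not act without remediation
]

def confidence_tier(score: Optional[float]) -> str:
    """Map a 0-1 confidence score to a graduated tier.

    GRAY marks the case where no score could be computed (missing inputs),
    which is itself an actionable signal rather than a silent failure.
    """
    if score is None:
        return "GRAY"
    for threshold, tier in TIER_THRESHOLDS:
        if score >= threshold:
            return tier
    return "GRAY"  # defensive default for out-of-range scores
```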
What "Sufficient Transparency" Looks Like in Practice
Per-Response Scoring
System-level transparency ("our system is compliant") is insufficient. Article 13 requires the deployer to interpret each output. This means each AI response must carry governance metadata (a sketch of such a record follows this list):
- Dimension scores — not a single number but disaggregated scores across transparency, reasoning, auditability, compliance, and explainability
- Confidence tier — a clear signal (GREEN, BLUE, YELLOW, RED, GRAY) that maps to organisational response protocols
- Source references — which documents, chunks, and knowledge graph entities informed the response
- Gap attribution — when a score is below threshold, the specific factors that caused the gap (source coverage, assertion density, prior knowledge limitations)
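As an illustration only, such a record might be serialised as structured data alongside each response. The field names and values below are assumptions for the sketch, not TraceGov's published schema.

```python
# Illustrative per-response governance record. Field names and values are
# hypothetical assumptions, not TraceGov's actual schema.
governance_record = {
    "response_id": "resp-2026-0001",
    "timestamp": "2026-08-02T09:30:00Z",
    "dimension_scores": {          # disaggregated, not a single number
        "transparency": 0.63,
        "reasoning": 0.71,
        "auditability": 0.80,
        "compliance": 0.58,
        "explainability": 0.66,
    },
    "confidence_tier": "YELLOW",   # maps to an organisational response protocol
    "source_references": [         # documents, chunks, and graph entities used
        {"document": "policy_v3.pdf", "chunk": 12},
        {"kg_entity": "E-4471"},
    ],
    "gap_attribution": {           # why the compliance score fell below threshold
        "source_coverage": "2 of 4 applicable frameworks cited",
        "assertion_density": "3 assertions without source support",
    },
}
```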
Deterministic Formulas
Article 13's requirement for interpretability implies reproducibility. Given the same inputs, the transparency measurement must produce the same result.
Deterministic scoring means:
- The formula parameters are published (not proprietary weights hidden behind an API)
- An independent auditor can verify any score by re-running the formula with the same inputs
- Changes to the formula are versioned and documented — the deployer knows when and how the methodology changed
This is why TraceGov uses formula-based scoring. Every TRACE score ships with the scoring_parameters that produced it. Any auditor can reproduce the calculation.
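To show what those three properties look like in code, here is a minimal sketch of deterministic, formula-based scoring. The weighted-average formula, parameter names, and values are hypothetical illustrations for this article, not the actual TRACE formula.

```python
# Hypothetical published scoring parameters, versioned so the deployer knows
# when and how the methodology changed. Not the actual TRACE formula.
SCORING_PARAMETERS = {
    "version": "1.2.0",
    "weights": {
        "source_coverage": 0.4,
        "assertion_support": 0.4,
        "citation_density": 0.2,
    },
}

def transparency_score(inputs: dict, params: dict = SCORING_PARAMETERS) -> float:
    """Deterministic weighted average: same inputs + same parameters = same score.

    An auditor can reproduce any score by re-running this function with the
    scoring parameters shipped alongside the score.
    """
    weights = params["weights"]
    assert set(inputs) == set(weights), "inputs must match published parameters"
    return round(sum(weights[k] * inputs[k] for k in weights), 4)

# The same inputs always reproduce the same score, to the last digit.
assert transparency_score(
    {"source_coverage": 0.5, "assertion_support": 0.75, "citation_density": 0.6}
) == 0.62
```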
Exportable Evidence
Transparency that exists only inside a vendor dashboard is not transparent. Article 13 requires the deployer to have the information — not to have access to a system that has the information.
This means governance evidence must be:
- Exportable in standard formats (PDF reports, structured JSON, CSV audit trails)
- Self-contained — a single export must include the score, the formula, the inputs, and the source references
- Timestamped and tamper-evident — hash verification ensures the evidence has not been modified since creation (a sketch follows this list)
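Here is a minimal sketch of the tamper-evidence property, assuming SHA-256 over a canonically serialised record. The export layout is the hypothetical record from earlier, not TraceGov's actual export format.

```python
import hashlib
import json

def export_evidence(record: dict) -> dict:
    """Produce a self-contained, tamper-evident evidence export.

    The hash is computed over a canonical serialisation (sorted keys, fixed
    separators), so any modification after export changes the digest.
    """
    canonical = json.dumps(record, sort_keys=True, separators=(",", ":"))
    return {
        "evidence": record,  # score, formula parameters, inputs, source references
        "sha256": hashlib.sha256(canonical.encode("utf-8")).hexdigest(),
    }

def verify_evidence(export: dict) -> bool:
    """Re-hash the evidence and compare: the check an auditor can run."""
    canonical = json.dumps(export["evidence"], sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest() == export["sha256"]
```

Because the digest is recomputed from the evidence itself, an auditor can verify integrity offline, without access to the vendor's dashboard.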
Common Mistakes in Article 13 Implementation
Mistake 1: System-level documentation only
Providing a "technical documentation" PDF that describes the AI system in general terms meets Article 11 (technical documentation) but not Article 13 (operational transparency). Article 13 applies to the system in operation, which means per-interaction, per-response evidence.
Mistake 2: Using ML-based scoring for compliance measurement
If the tool measuring your AI's compliance is itself a black box, you have not achieved transparency — you have outsourced the compliance claim to a different opaque system. Article 13 requires the deployer to understand the measurement, not just the result.
Mistake 3: Claiming transparency without publishing numbers
"Our AI system is designed for transparency" is marketing. "Our system scores 63% on transparency, measured by [formula], with evidence including [N source references] and [M knowledge graph entities]" is Article 13 compliance.
The Enforcement Reality
Article 13 transparency obligations apply to high-risk AI systems from August 2, 2026. Under Article 99, non-compliance with provider obligations such as Article 13 carries administrative fines of up to EUR 15 million or 3% of global annual turnover; the higher ceiling of EUR 35 million or 7% is reserved for prohibited practices under Article 5.
The EU AI Office has signalled that enforcement will follow GDPR precedent — meaning early, visible enforcement actions against large organisations to establish compliance norms.
Deployers who can demonstrate per-response transparency scoring, deterministic methodology, and exportable evidence will be in the strongest position when regulators make their first inquiries.
How to Start
Three steps for deployers preparing for Article 13:
1. Audit your current AI systems. List every AI system deployed under your authority. For each, ask: "Can I show a regulator how any specific output was produced?"
2. Implement per-response governance. Move from system-level documentation to response-level scoring. Every AI interaction should produce a governance record.
3. Choose deterministic measurement. If you cannot explain to an auditor exactly how a compliance score was calculated, the scoring tool itself fails the transparency test.
TraceGov's TRACE Protocol scores every AI response across five dimensions with a deterministic, formula-based methodology. Every score includes the parameters that produced it. Every audit trail is hash-verified and exportable.
Map your Article 13 obligations in 45 minutes. Start with the Explorer tier: free, EU-hosted, GDPR compliant.