We Publish Our Real Numbers
Our production system scores 60-67% — not the >90% from our research paper. Because in regulated AI, your customers need to trust your numbers. Trust starts with telling the truth about your own system.
Research Assets
Papers, benchmarks, and methodologies that underpin TraceGov's approach to AI governance.
TAMR+ v2.3: Trust-Aware Multi-Signal Document Retrieval
Three-stage retrieval pipeline where 65% of scoring comes from structural signals. 14 contributions, 18 patent claims. Production-validated at 60-67% across 4 regulatory domains.
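A minimal sketch of what "65% of scoring from structural signals" could look like as a weighted combination. The signal names, weights split, and function are illustrative assumptions, not TAMR+'s actual formula:

```python
# Illustrative only: a trust-aware score where structural signals carry
# 65% of the weight and semantic signals the remaining 35%.
STRUCTURAL_WEIGHT = 0.65
SEMANTIC_WEIGHT = 0.35

def score_document(structural_signals, semantic_signals):
    """Combine two signal groups into one retrieval score in [0, 1].

    Each argument is a list of per-signal scores in [0, 1]; each group
    is averaged, then the groups are blended by fixed weights.
    """
    structural = sum(structural_signals) / len(structural_signals)
    semantic = sum(semantic_signals) / len(semantic_signals)
    return STRUCTURAL_WEIGHT * structural + SEMANTIC_WEIGHT * semantic
```

Because the weights are fixed constants rather than learned parameters, the same inputs always produce the same score, which is what makes a pipeline like this auditable.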
EU-RegQA: Open Benchmark for AI Act Compliance
The first open regulatory AI benchmark: EU-RegQA-100, MedRegQA-50, FinRegQA-50, CrimNet-50. Apache 2.0 licensed. 5 difficulty tiers.
Gap Attribution Taxonomy
Five-category taxonomy (SCG, PKC, DLT, ADG, FSC) that explains WHY a compliance score isn't 100%. Not just a number — a diagnosis.
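A sketch of how a gap attribution might be represented in code: a score plus a list of categorized gaps that together explain the missing points. The data structures and rendering are hypothetical; only the five category codes (SCG, PKC, DLT, ADG, FSC) come from the taxonomy, and their full names are omitted here:

```python
from dataclasses import dataclass
from enum import Enum

class GapCategory(Enum):
    # The five category codes from the taxonomy (expansions omitted).
    SCG = "SCG"
    PKC = "PKC"
    DLT = "DLT"
    ADG = "ADG"
    FSC = "FSC"

@dataclass
class GapAttribution:
    category: GapCategory
    points_lost: float   # fraction of the score this gap accounts for
    explanation: str

def diagnose(score: float, gaps: list[GapAttribution]) -> str:
    """Render a score plus its gap breakdown as a short diagnosis."""
    lines = [f"Compliance score: {score:.0%}"]
    for gap in sorted(gaps, key=lambda g: g.points_lost, reverse=True):
        lines.append(f"  -{gap.points_lost:.0%} {gap.category.name}: {gap.explanation}")
    return "\n".join(lines)
```

The point of the structure: a 67% score arrives with an itemized list of where the other 33% went, not just the number.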
TRACE Scoring Protocol
Five-pillar formula-based scoring: Transparency, Reasoning, Auditability, Compliance, Explainability. Mapped to 8 EU AI Act articles. Deterministic — no ML black box.
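Deterministic, formula-based scoring can be sketched as a fixed weighted sum over the five pillars. The equal weights below are an assumption for illustration; TRACE's actual weights and formula are defined by the protocol itself:

```python
# Illustrative equal weights over the five TRACE pillars.
PILLARS = {
    "transparency": 0.2,
    "reasoning": 0.2,
    "auditability": 0.2,
    "compliance": 0.2,
    "explainability": 0.2,
}

def trace_score(pillar_scores: dict) -> float:
    """Deterministic weighted sum over the five pillars.

    No model inference is involved: identical inputs always yield an
    identical score, so every result can be recomputed and audited.
    """
    assert set(pillar_scores) == set(PILLARS), "all five pillars required"
    return sum(PILLARS[p] * pillar_scores[p] for p in PILLARS)
```

"No ML black box" here means exactly this property: the scoring function is a closed-form expression anyone can re-evaluate by hand.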
The "Honest Science" Commitment
"We publish our real production scores — 60-67%, not the >90% from our paper. Because in regulated AI, your customers need to trust your numbers. And trust starts with telling the truth about your own system."