TRACEGov
Research

We Publish Our Real Numbers

Our production system scores 60-67%, not the >90% reported in our research paper. In regulated AI, your customers need to trust your numbers, and trust starts with telling the truth about your own system.

60-67%
Production Accuracy
Real scores, not cherry-picked
$0.005
Cost per Query
2,300x cheaper than alternatives
8/8
EU AI Act Articles
Full deployer coverage
50+
Governance Frameworks
Pre-mapped and cross-referenced

Research Assets

Papers, benchmarks, and methodologies that underpin TraceGov's approach to AI governance.

Paper · Published — SSRN 6359818

TAMR+ v2.3: Trust-Aware Multi-Signal Document Retrieval

Three-stage retrieval pipeline where 65% of scoring comes from structural signals. 14 contributions, 18 patent claims. Production-validated at 60-67% across 4 regulatory domains.
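To make the weighting concrete, here is a minimal sketch of how structural signals could dominate a combined ranking score. Only the 65% structural share is stated above; the other signal names (`semantic`, `trust`) and their weights are illustrative assumptions, not the actual TAMR+ formula.

```python
# Hypothetical multi-signal document scorer.
# Stated: structural signals carry 65% of the score.
STRUCTURAL_WEIGHT = 0.65
SEMANTIC_WEIGHT = 0.25   # assumption, for illustration only
TRUST_WEIGHT = 0.10      # assumption, for illustration only


def combined_score(structural: float, semantic: float, trust: float) -> float:
    """Combine per-document signals (each in [0, 1]) into one ranking score."""
    return (STRUCTURAL_WEIGHT * structural
            + SEMANTIC_WEIGHT * semantic
            + TRUST_WEIGHT * trust)


# A strong structural match outweighs a mediocre semantic match.
print(round(combined_score(0.9, 0.4, 0.5), 3))  # → 0.735
```

The point of the weighting: retrieval quality is driven mostly by document structure, so a weak semantic signal cannot sink a structurally strong candidate.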

Benchmark · Complete — 250 questions, 4 domains

EU-RegQA: Open Benchmark for AI Act Compliance

The first open regulatory AI benchmark: EU-RegQA-100, MedRegQA-50, FinRegQA-50, CrimNet-50. Apache 2.0 licensed. 5 difficulty tiers.

Methodology · Patent Filed — EP26162901.8

Gap Attribution Taxonomy

Five-category taxonomy (SCG, PKC, DLT, ADG, FSC) that explains WHY a compliance score isn't 100%. Not just a number — a diagnosis.
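The idea of "a diagnosis, not just a number" can be sketched as attributing the shortfall from a perfect score to the five categories. The five codes are from the taxonomy above (their expansions are not given here); the `attribute_gap` helper and the consistency check are illustrative assumptions, not the patented method.

```python
from enum import Enum


class GapCategory(Enum):
    """The five gap codes from the taxonomy (expansions not published here)."""
    SCG = "SCG"
    PKC = "PKC"
    DLT = "DLT"
    ADG = "ADG"
    FSC = "FSC"


def attribute_gap(score: float, gaps: dict[GapCategory, float]) -> dict[str, float]:
    """Break the shortfall from a perfect 1.0 score into per-category parts."""
    shortfall = 1.0 - score
    if abs(sum(gaps.values()) - shortfall) > 1e-6:
        raise ValueError("gap contributions must account for the full shortfall")
    return {cat.value: amount for cat, amount in gaps.items()}


# A 0.67 score explained: which categories cost the missing 0.33?
report = attribute_gap(0.67, {
    GapCategory.SCG: 0.15,
    GapCategory.PKC: 0.10,
    GapCategory.DLT: 0.05,
    GapCategory.ADG: 0.02,
    GapCategory.FSC: 0.01,
})
```

The consistency check enforces the taxonomy's promise: every point below 100% is accounted for by exactly one category.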

Specification · Live in Production

TRACE Scoring Protocol

Five-pillar formula-based scoring: Transparency, Reasoning, Auditability, Compliance, Explainability. Mapped to 8 EU AI Act articles. Deterministic — no ML black box.
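A minimal sketch of what "deterministic, no ML black box" means in practice: a fixed formula over the five named pillars. Equal weighting is an assumption for illustration; the actual TRACE formula and weights are not reproduced here.

```python
# The five pillars named in the TRACE protocol.
PILLARS = ("transparency", "reasoning", "auditability",
           "compliance", "explainability")


def trace_score(pillar_scores: dict[str, float]) -> float:
    """Deterministic score: a fixed formula over five pillar inputs in [0, 1].

    Equal weights are an illustrative assumption, not the published formula.
    """
    missing = set(PILLARS) - set(pillar_scores)
    if missing:
        raise ValueError(f"missing pillars: {sorted(missing)}")
    return sum(pillar_scores[p] for p in PILLARS) / len(PILLARS)
```

Because the score is a pure function of its inputs, the same evidence always yields the same number, which is what makes it auditable.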

The "Honest Science" Commitment

"We publish our real production scores — 60-67%, not the >90% from our paper. Because in regulated AI, your customers need to trust your numbers. And trust starts with telling the truth about your own system."