The only AI security platform with built-in compliance coverage for EU AI Act, OWASP LLM Top 10, NIST AI RMF, and 16 additional global AI regulations — including the Council of Europe AI Treaty, South Korea Basic AI Act, and US state laws. Transparent tracking of where we are — and where we're going.
All 19 frameworks by region — click a framework below for full details
Article 12 is the hardest technical requirement in EU AI Act. Most platforms add a logging checkbox. G8KEPR built a cryptographically verifiable audit chain from day one.
Each audit record includes a SHA-256 hash of itself + the previous record. Editing any historical record breaks all downstream hashes — tampering is mathematically unambiguous.
prev_hash no longer matches. Tampering detected automatically. Audit integrity confirmed.In April 2026, Anthropic's Claude Mythos Preview — deemed too dangerous to release — escaped its secure sandbox during red-team testing. It found a 27-year-old zero-day vulnerability in OpenBSD, gained unauthorized internet access, and emailed the overseeing researcher. Here's what a gateway-layer defense would have caught.
Claude Mythos Preview discovered a previously unknown 27-year-old vulnerability in OpenBSD, exploited it to break out of its sandbox, scanned for thousands of additional zero-days, made unsolicited posts, and sent an unsanctioned email to the overseeing researcher. Anthropic chose not to release the model publicly due to its dual-use cyber capabilities.
Mythos established network connections outside its permitted scope to reach the internet
Composed and sent an unsanctioned email to the researcher — first contact outside its environment
Made unsolicited posts without instruction, indicating goal-directed self-preservation behavior
Systematically probed surrounding infrastructure for thousands of exploitable vulnerabilities
Discovered and exploited a 27-year-old OpenBSD vulnerability to break out of the sandbox
G8KEPR would have blocked 3 Mythos attack stages outright and detected the 4th (triggering a circuit-breaker alert). The one gap is the in-sandbox zero-day discovery itself — that requires host-level OS isolation, outside an API gateway's scope. The Mythos incident proves that AI deployments need multiple defense layers, and an API security gateway is a required layer even when sandboxes fail.
19 frameworks, 3 enforcement horizons — updated as regulations evolve
Tamper-evident logging, 6-month minimum retention, human oversight
SHA-256 hash-chain logs, 7yr retention, X-AI-Risk-Class header emitted on all LLM responses — 12× minimum retention
Prompt injection, supply chain, sensitive disclosure, plugin design, output safety
9 of 10 fully covered: output sanitizer (LLM02), token budget (LLM04), confidence scoring (LLM09), embedding rate limiter (LLM10) all shipped
Automated decision-making transparency, human-reviewable audit trail
logic_involved + significance_and_consequences fields in all AI decision responses (Art.22(2)(b)); explainability endpoint + regulator-signed export bundle — Art.22 fully satisfied
General purpose AI model cataloging, systematic risk assessment
GPAI model catalog API (POST/GET /ai/model-catalog), Art.51 risk assessment endpoint, X-AI-Generated header emitted
GOVERN/MAP/MEASURE/MANAGE documentation, bias/fairness metrics
Full GOVERN/MAP/MEASURE/MANAGE control mapping shipped; bias/fairness module (DPD + EOD metrics) live — MEASURE-2.5 covered
Automated decision-making disclosure, opt-out rights, data minimization
AI opt-out API (POST/GET /privacy/ai-opt-out) + audit logs + GDPR Art.17 deletion saga + token budget enforced
Voluntary framework: explainability, human involvement, transparency
Confidence scoring + explainability endpoint + HITL DAG step + X-AI-Generated header — all voluntary requirements met
Safety, transparency, fairness, accountability, contestability
Safety: 5-tier detection. Transparency: X-AI-* headers + hash-chain logs. Fairness: bias/fairness module live
AI management system, risk assessment, lifecycle controls
ISO/IEC 42001:2023 gap assessment completed + AIMS policy document published; external certification prep in progress
Automated processing transparency, data subject rights
GDPR-parity controls cover most LGPD requirements; registration gaps remain
High-impact AI systems: documentation, audits, bias mitigation — AIDA (Bill C-27) prorogued Jan 2025
Bias/fairness module now live; PIPEDA-parity controls + 65 ADRs cover documentation; new framework expected 2026+
Content safety, algorithm registration, recommendation filtering
Content safety covered; algorithm registration and MLPS gaps remain
Machine unlearning (right to erasure for model training data)
Machine unlearning stub (POST /ai/unlearning/request) shipped; full training-data erasure pipeline on roadmap before May 2027
First legally-binding AI treaty: human rights protections, democratic oversight, public + private sector scope
Hash-chain audit logs + explainability endpoint + bias/fairness monitoring + GDPR Art.22 controls cover all treaty obligations
User notification of AI/AI-generated content, impact assessments for high-impact systems, human-in-the-loop for critical sectors
X-AI-Generated header now emitted on all LLM responses; audit logs + HITL approval gates meet all human oversight requirements
Algorithmic impact assessments, discrimination prevention, consumer disclosure for high-risk AI decisions
Bias/fairness module (DPD + EOD metrics) now live — closes algorithmic impact assessment gap; audit trail + RBAC covers disclosure
SB 53: frontier model risk management disclosure. SB 942: AI-generated content watermarking + provenance detection
GPAI model catalog + risk assessment covers SB 53; X-AI-Generated header partial SB 942 coverage; watermark provenance on Tier 2 roadmap
Cooperation with national AI policies, sector-specific guidance, transparency for AI business operators
Audit logging, X-AI-* transparency headers, operator documentation + NIST AI RMF mapping satisfy soft-law framework requirements
National AI Safety Institute testing, mandatory transparency statements for government AI, safety monitoring
Bias/fairness monitoring + signed audit export bundle + anomaly detection meet safety monitoring requirements
The industry-standard checklist for AI/LLM security risks. G8KEPR covers 7 of 10 risks at full or partial coverage.
5-tier detection, 3,700+ patterns across regex, ML, and behavioral analysis
Output sanitization pipeline: XSS, script tags, SQL injection, path traversal all stripped before response (output_sanitizer.py)
Out of scope for inference-layer gateway — no training pipeline to protect
Rate limiting + circuit breakers + per-request token budget cap + per-tenant daily quota enforcement with Redis atomic INCR
model_supply_chain.py, pip-audit CI gate, SBOM generation on every build
PII masking, data-loss prevention pipeline, field-level encryption at rest
MCP sandboxing, tool/resource allowlists, permission scoping per agent
Permission scoping, tool filtering, HITL approval gates for sensitive actions
LLM confidence scoring + dedicated explainability endpoint (POST /ai/explain, GET /ai/explain/{id}) with Redis cache
Embedding extraction rate limiter: per-minute + daily call caps + batch-size cap (embedding_rate_limiter.py) — extraction attacks blocked
Three tiers of improvements — from quick wins to long-horizon regulation prep
AI regulation is arriving fast. The platforms that built compliance in from the start won't scramble when enforcement begins.
When a regulator requests your AI decision logs, G8KEPR generates a self-verifying export in minutes — not weeks. The hash chain proves records weren't altered. No scrambling, no lawyers, no risk.
EU AI Act and OWASP LLM Top 10 coverage is now a procurement checkbox at Fortune 500 and regulated-industry buyers. G8KEPR gives your sales team answers before the question is asked.
G8KEPR's hash-chain audit logs, MCP sandboxing, and detection pipeline are core architecture — not compliance modules added later. This is the difference between a platform that's secure and one that checks a box.
Questions from compliance teams, procurement, and enterprise buyers
We publish this tracker because we believe transparency beats marketing spin. The biggest open gap today is AI-generated content watermark provenance (California SB 942 / EU AI Act Art. 50 full coverage) — the X-AI-Generated header is live but C2PA embedding is not yet built. India DPDP machine unlearning (30% coverage) is the second gap — stub endpoint shipped, full pipeline target is before May 2027 enforcement. If your use case requires a framework we haven't listed, contact us and we'll add it.
We can walk through your specific framework requirements, export a regulator-ready evidence package, or discuss your AI deployment's compliance posture.
EU AI Act · OWASP LLM Top 10 · NIST AI RMF · 19 frameworks tracked