As of August 2, 2026, the EU AI Act is fully in force. The 24-month transition period for high-risk AI systems has closed. If your API feeds into a system that makes automated decisions affecting individuals (credit scoring, medical triage, recruitment screening, security classification), you are now subject to mandatory conformity assessments, transparency requirements, and ongoing monitoring obligations.
Most API security teams have been watching the EU AI Act as a product problem ("does our AI do anything regulated?") rather than an infrastructure problem. That framing misses the point. The Act creates obligations that run through your entire API layer, not just the model outputs.
What Changed on August 2, 2026
- All high-risk AI systems must maintain technical documentation and complete a conformity assessment before deployment
- Explainability requirements (GDPR Article 22, reinforced by the Act's Article 86 right to explanation) apply to any automated decision that "significantly affects" an individual
- Human oversight requirements (Article 14) mandate that high-risk systems allow qualified humans to understand, monitor, and override outputs
- Incident reporting obligations: providers must notify the market surveillance authority of serious incidents no later than 15 days after becoming aware of them
- Ongoing monitoring: systems must automatically record enough events to enable post-hoc review of outputs
What "High Risk" Means in Practice
Annex III of the Act lists the high-risk categories. The practical question for API teams is whether your system feeds into any decision about a natural person in one of these domains: biometric identification, critical infrastructure management, education and vocational training, employment, essential private and public services, law enforcement, migration and asylum, or the administration of justice.
If your API is behind a healthcare triage bot, a credit application processor, a hiring screening tool, or a content moderation system that determines account status — you are almost certainly in scope.
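As a rough scoping aid (not legal advice), that check can be encoded directly against the Annex III categories. In the sketch below, the domain tags and the in_scope helper are hypothetical names for illustration, not an official taxonomy:

```python
# Hypothetical scoping aid: tag each endpoint with the decision domains it
# feeds, then flag anything that touches an Annex III category AND decides
# something about a natural person. The domain tag names are our own shorthand.

ANNEX_III_DOMAINS = {
    "biometric_identification",
    "critical_infrastructure",
    "education_vocational_training",
    "employment",
    "essential_services",
    "law_enforcement",
    "migration_asylum",
    "administration_of_justice",
}

def in_scope(endpoint_domains: set[str], decides_about_person: bool) -> bool:
    # Both conditions must hold: an Annex III domain and a decision about a person
    return decides_about_person and bool(endpoint_domains & ANNEX_III_DOMAINS)

# A healthcare triage endpoint: essential services + decisions about patients
assert in_scope({"essential_services"}, decides_about_person=True)
# An internal log-summarization endpoint: no decision about a natural person
assert not in_scope({"essential_services"}, decides_about_person=False)
```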
The Three Infrastructure Requirements That Matter Most
1. Immutable Audit Logs
Article 12 requires that high-risk AI systems automatically record events throughout their lifetime, so that post-market monitoring and investigation of serious incidents are possible. The regulation does not specify a technical implementation, but the prevailing interpretation is that the logs must be tamper-evident. A mutable database row does not satisfy this; hash-chained logs do.
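To make that concrete, here is a minimal sketch of a hash-chained log, assuming an in-memory store and SHA-256; the AuditLog class and its method names are illustrative, not a reference implementation (a production system would add signed checkpoints and write-once storage):

```python
import hashlib
import json
import time

GENESIS = "0" * 64  # seed hash for the first entry in the chain

class AuditLog:
    """Append-only log where each entry commits to its predecessor's hash."""

    def __init__(self):
        self.entries = []  # in-memory for illustration; use WORM storage in practice

    def append(self, event: dict) -> dict:
        prev_hash = self.entries[-1]["hash"] if self.entries else GENESIS
        body = {"ts": time.time(), "event": event, "prev_hash": prev_hash}
        # The hash covers the serialized body, chaining this entry to the last one
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        entry = {**body, "hash": digest}
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        # Recompute every hash; editing any past row breaks the chain from there on
        prev_hash = GENESIS
        for entry in self.entries:
            body = {k: entry[k] for k in ("ts", "event", "prev_hash")}
            digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
            if entry["prev_hash"] != prev_hash or entry["hash"] != digest:
                return False
            prev_hash = entry["hash"]
        return True
```

Tampering is detectable rather than prevented: flipping one field in any stored entry invalidates every subsequent hash, which is exactly the property an investigator needs.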
2. Explainability on Automated Decisions
GDPR Article 22, which continues to apply alongside the Act, entitles individuals subject to significant automated decisions to "meaningful information about the logic involved," and Article 86 of the Act adds a right to explanation for decisions based on high-risk system output. For LLM-based systems this is genuinely hard: you cannot explain a transformer's weights. The practical implementation is to log the inputs, the model version, the system prompt, and the confidence score alongside every decision, and to expose that bundle as a queryable record.
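One way to shape that bundle, as a sketch; the DecisionRecord fields mirror the list above, and every name here (record_decision, the field names) is illustrative rather than a mandated schema:

```python
from dataclasses import dataclass, asdict
import hashlib
import uuid

@dataclass
class DecisionRecord:
    decision_id: str         # stable ID, so the record is queryable per decision
    inputs: dict             # the request payload the model actually saw
    model_version: str       # pinned model identifier, not "latest"
    system_prompt_hash: str  # SHA-256 of the system prompt in force at decision time
    confidence: float        # model or calibration score attached to the output
    outcome: str             # the automated decision itself

def record_decision(inputs: dict, model_version: str, system_prompt: str,
                    confidence: float, outcome: str) -> DecisionRecord:
    return DecisionRecord(
        decision_id=str(uuid.uuid4()),
        inputs=inputs,
        model_version=model_version,
        system_prompt_hash=hashlib.sha256(system_prompt.encode()).hexdigest(),
        confidence=confidence,
        outcome=outcome,
    )

# Example: a credit decision, serialized for the audit log sketched earlier
rec = record_decision(
    inputs={"applicant_income": 52000, "requested_amount": 15000},
    model_version="credit-scorer-2026-03-01",
    system_prompt="You are a credit risk assessor...",
    confidence=0.87,
    outcome="refer_to_manual_review",
)
print(asdict(rec))
```

Storing the prompt hash rather than the prompt itself keeps the record compact, but it only works if every historical prompt version is retained somewhere it can be looked up by hash.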
3. Human Override Capability
Human oversight under Article 14 is not "a human reviewed the system before deployment." It means qualified individuals must be able to monitor the system in operation, understand its outputs, and intervene, up to and including stopping the system. At the API layer, this means you need a mechanism to halt automated decision pipelines without a code deploy.
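A minimal sketch of that mechanism, assuming the pipeline consults an externally mutable flag on every request; the file-based store, breaker_open, and PipelineHalted are placeholders for whatever shared flag service (Redis, a feature-flag system, a config API) you actually run:

```python
import json
from pathlib import Path

# Hypothetical location; an operator or dashboard writes this file, no deploy needed
FLAG_PATH = Path("/etc/pipeline/circuit_breaker.json")

class PipelineHalted(Exception):
    """Raised instead of returning an automated decision while the breaker is open."""

def breaker_open(pipeline: str) -> bool:
    try:
        flags = json.loads(FLAG_PATH.read_text())
    except FileNotFoundError:
        return False  # no flag file means every breaker is closed
    return flags.get(pipeline, {}).get("halted", False)

def decide(pipeline: str, request: dict, model_call) -> dict:
    # Check the breaker before any model call, so a halt takes effect immediately
    if breaker_open(pipeline):
        raise PipelineHalted(f"{pipeline} halted by operator; route to manual review")
    return model_call(request)
```

The important property is that the flag lives outside the code path: flipping {"triage": {"halted": true}} in the store stops the triage pipeline on the next request, with no build or deployment in between.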
What G8KEPR Provides
G8KEPR's Verification Engine attaches explainability metadata to every AI output that passes through the gateway: model version, system prompt hash, confidence score, and decision path. The audit log is hash-chained. Circuit breakers allow authorized operators to halt processing pipelines via API or dashboard without a deployment. The AI Regulation Tracker monitors 19 major frameworks, including the EU AI Act, GDPR, and NIST AI RMF.
If you are operating a high-risk AI system in the EU and have not conducted a conformity assessment, you are now in violation. The penalty tiers reach beyond GDPR's: up to €15 million or 3% of global annual turnover (whichever is higher) for non-compliance with high-risk obligations, up to €35 million or 7% for prohibited practices, and up to €7.5 million or 1% for supplying incorrect information to authorities.
