The EU AI Act's high-risk system requirements are not soft guidelines: they are legally enforceable obligations, with fines of up to €15 million or 3% of global annual turnover, whichever is higher. If your AI system falls into an EU high-risk category, the August 2026 enforcement date is not theoretical. This checklist covers the engineering work, not the legal posture.
High-risk categories include: biometric identification, critical infrastructure, education/vocational training, employment/HR, essential private/public services, law enforcement, migration/asylum/border control, and administration of justice.
Logging and Record-Keeping (Article 12)
- Automatic logging of operations throughout the AI system's lifecycle: not just errors, but all significant decisions
- Logs must be tamper-proof and retained for at least 6 months, unless applicable Union or national law requires a longer period
- For biometric systems: logs must be retained for 6 months minimum, with the capability to extend retention to 5 years for law enforcement use cases
- Logs must capture: input data characteristics, output decisions with confidence scores, human oversight interventions, and the system version at the time of each decision
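One way to make logs tamper-evident in software is a hash-chained, append-only record of decisions. The sketch below is illustrative, not a certified implementation: the `DecisionLog` class and its field names are hypothetical, and a production system would also need secure, write-once storage behind it. Note that it logs input characteristics, not raw personal data.

```python
import hashlib
import json
import time

class DecisionLog:
    """Append-only decision log with a SHA-256 hash chain for tamper evidence."""

    def __init__(self):
        self.entries = []
        self._prev_hash = "0" * 64  # genesis value for the chain

    def record(self, input_summary, output, confidence, model_version, operator_action=None):
        entry = {
            "timestamp": time.time(),
            "input_summary": input_summary,       # characteristics of the input, not raw PII
            "output": output,
            "confidence": confidence,
            "model_version": model_version,
            "operator_action": operator_action,   # human oversight intervention, if any
            "prev_hash": self._prev_hash,
        }
        digest = hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()
        entry["hash"] = digest
        self._prev_hash = digest
        self.entries.append(entry)
        return digest

    def verify(self):
        """Recompute the chain; editing any entry breaks every later hash."""
        prev = "0" * 64
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            if e["prev_hash"] != prev:
                return False
            if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != e["hash"]:
                return False
            prev = e["hash"]
        return True
```

The chain means an auditor only needs the final hash to detect retroactive edits anywhere in the log.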
Transparency Requirements (Article 13)
- Technical documentation must be created before market placement, not after deployment
- Instructions for use must enable operators to interpret the AI system's output correctly
- Capabilities and limitations must be documented, including foreseeable conditions under which the system may fail or perform with reduced accuracy
- Intended purpose must be precisely defined: general-purpose systems used in high-risk contexts require specific documentation for each use case
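One practical way to enforce "documentation before market placement" is to make the documentation machine-checkable in CI. The schema below is a hypothetical sketch (the Act prescribes documentation content, not this structure); `UseCaseDocumentation` and its field names are invented for illustration, and `validate()` simply refuses empty fields.

```python
from dataclasses import dataclass

@dataclass
class UseCaseDocumentation:
    """One documentation record per high-risk use case (hypothetical schema)."""
    intended_purpose: str            # precisely defined, per use case
    capabilities: list               # what the system can do
    known_limitations: list          # foreseeable failure / reduced-accuracy conditions
    interpretation_guidance: str     # how operators should read the output

    def validate(self):
        """Fail fast if any required field is empty; run this in CI, pre-release."""
        missing = [name for name, value in vars(self).items() if not value]
        if missing:
            raise ValueError(f"documentation incomplete before market placement: {missing}")
        return True
```

Wiring `validate()` into the release pipeline turns a legal deadline into a failing build, which is easier to enforce than a policy document.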
Human Oversight (Article 14)
- Override capability: humans must be able to override or halt the system at any time, and this must be a technical capability, not just a policy
- Interpretation capability: the system must enable humans to correctly interpret its output, providing not just a number but context that enables informed judgment
- Monitoring capability: humans must be able to monitor the system's operation during use
- Attention to automation bias: the system's design must actively counteract the tendency of humans to over-trust or under-monitor automated decisions
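The override and halt requirements can be sketched as a thin wrapper that makes them technical operations rather than policies. `OversightGate` and its method names are hypothetical; the sketch assumes a model exposed as a callable returning an output and a confidence score.

```python
import threading

class OversightGate:
    """Wraps model inference with a technical halt switch and a per-decision override hook."""

    def __init__(self, model_fn):
        self._model_fn = model_fn
        self._halted = threading.Event()

    def halt(self):
        """Operator kill switch: all subsequent calls are refused, not queued."""
        self._halted.set()

    def resume(self):
        self._halted.clear()

    def decide(self, inputs, override=None):
        if self._halted.is_set():
            raise RuntimeError("system halted by human operator")
        if override is not None:
            # A human decision replaces the model output entirely.
            return {"output": override, "source": "human_override"}
        output, confidence = self._model_fn(inputs)
        return {"output": output, "confidence": confidence, "source": "model"}
```

Tagging every result with its `source` also feeds the logging requirement above: human interventions become first-class, auditable events rather than out-of-band notes.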
Accuracy, Robustness, and Cybersecurity (Article 15)
- Accuracy metrics must be measured, documented, and communicated, not just assumed
- Resilience against errors, faults, and inconsistencies must be demonstrated through testing
- Cybersecurity measures must protect against attempts to alter the system's use, outputs, or performance; this explicitly includes adversarial attacks
- Fallback plans must be documented for when the AI system fails or is under attack
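Measured accuracy and a documented fallback path can both be expressed as small, testable functions. This is a minimal sketch assuming a classifier exposed as a plain callable; `measured_accuracy` and `with_fallback` are hypothetical names, not an API mandated by the Act or provided by any library.

```python
def measured_accuracy(predict, eval_set):
    """Accuracy on a held-out evaluation set: a measured, documentable figure,
    not an assumed one. eval_set is a list of (input, expected_label) pairs."""
    correct = sum(1 for x, label in eval_set if predict(x) == label)
    return correct / len(eval_set)

def with_fallback(predict, fallback):
    """Route to a documented, conservative fallback when the model errors out,
    e.g. deferring the decision to a human instead of returning nothing."""
    def guarded(x):
        try:
            return predict(x)
        except Exception:
            return fallback(x)
    return guarded
```

The measured figure, the evaluation set's provenance, and the fallback behavior are exactly the items the technical documentation above has to state.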
Related reading
NIST AI RMF: Your Implementation Roadmap
How the NIST AI Risk Management Framework maps to EU AI Act requirements and what to implement first.
G8KEPR covers the logging and cybersecurity requirements out of the box
Tamper-proof audit logs, adversarial input detection, and compliance documentation are built into the G8KEPR platform — designed specifically for EU AI Act and SOC 2 alignment.