Compliance · 7 min read · March 5, 2026

GDPR Art. 22: What "Meaningful Information About Logic" Means in Code

Article 22 requires that individuals subject to automated decisions receive "meaningful information about the logic involved." For LLM-based systems this is genuinely hard — but it is implementable. Here is an approach that meets what regulators actually expect.

Article 22 of the GDPR grants individuals the right not to be subject to decisions based solely on automated processing when those decisions significantly affect them. When such decisions are made, the data controller must provide "meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing."

For rule-based systems, this was tractable — you could explain decision trees. For LLM-based systems, it is genuinely hard. A transformer's weights cannot be explained in plain language. But the regulation does not require explaining the weights. It requires explaining the logic of the processing.

What Regulators Actually Expect

The Article 29 Working Party (now EDPB) guidance makes clear that "meaningful information" means: the categories of data used, the weightings applied, why those weightings matter, and the likely consequences for the individual. For an LLM, this translates to: which inputs were used (and which were excluded), what objective the model was optimizing for, and what the confidence level was.

The Practical Implementation

For every automated decision made by an AI system subject to Article 22, log and expose: the input features used, the model version and system prompt hash, the output with confidence score, and the decision rule applied to the model output (if any). This bundle is your explainability record.

```json
{
  "decision_id": "dec_01HX8K9BWQR",
  "timestamp": "2026-04-15T14:32:18Z",
  "model": "claude-3-5-sonnet-20241022",
  "system_prompt_hash": "sha256:a3f8e2c1...",
  "inputs_used": ["credit_score", "income_band", "loan_term"],
  "inputs_excluded": ["name", "address", "nationality"],
  "output": "DECLINED",
  "confidence": 0.91,
  "decision_rule": "output=DECLINED AND confidence>0.7 → reject",
  "explanation": "Application declined based on credit score below threshold for requested loan term"
}
```
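Producing this record is a thin wrapper around the model call. Below is a minimal sketch in Python using only the standard library; the helper names, the 0.7 confidence threshold, and the JSONL audit log are illustrative assumptions rather than a fixed schema — adapt them to your own inference client and decision rule.

```python
# Hypothetical sketch: assemble and log the Article 22 explainability record
# around an automated decision. Field names mirror the JSON example above.
import hashlib
import json
import uuid
from datetime import datetime, timezone

CONFIDENCE_THRESHOLD = 0.7  # assumption: below this, route to human review

def build_decision_record(system_prompt: str, inputs_used: dict,
                          inputs_excluded: list, model_name: str,
                          output: str, confidence: float) -> dict:
    """Assemble the per-decision explainability bundle."""
    return {
        "decision_id": f"dec_{uuid.uuid4().hex[:12]}",
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": model_name,
        # Hash rather than store the prompt: the record proves which prompt
        # version ran without copying its contents into the audit log.
        "system_prompt_hash": "sha256:" + hashlib.sha256(system_prompt.encode()).hexdigest(),
        "inputs_used": sorted(inputs_used.keys()),
        "inputs_excluded": inputs_excluded,
        "output": output,
        "confidence": confidence,
        "decision_rule": f"output=DECLINED AND confidence>{CONFIDENCE_THRESHOLD} → reject",
    }

def log_decision(record: dict, path: str = "decisions.jsonl") -> None:
    """Append the record to an append-only JSONL audit log."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")
```

Hashing the system prompt is a deliberate design choice: it lets you demonstrate exactly which prompt version produced a given decision without duplicating the prompt text in every record.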

Machine Unlearning and the Right to Erasure

Article 17 creates a right to erasure. For LLMs trained on personal data, this creates a machine unlearning requirement — the ability to demonstrate that a specific individual's data has been removed from the model's effective knowledge. This is an open research problem for large pre-trained models.

The practical approach for deployed systems is threefold: maintain a record of which training data was used, implement a pipeline that can process data subject access requests (DSARs) and erasure requests, and, for RAG-based systems (far more common in enterprise than fully fine-tuned models), delete the relevant documents from the retrieval corpus. Because the model never memorised that data, removing it from retrieval removes it from the system's effective knowledge, as in the sketch below.
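For the RAG case, erasure reduces to a metadata-filtered delete plus an auditable trace that the request was honoured. A minimal sketch follows, assuming each document carries a subject_id metadata field; the in-memory dict stands in for whatever vector store or document index you actually run, which would receive the equivalent filtered delete.

```python
# Hypothetical sketch: service an Article 17 erasure request against a RAG
# corpus. `corpus` and the `subject_id` metadata key are assumptions.
import json
from datetime import datetime, timezone

# corpus: doc_id -> {"text": ..., "metadata": {"subject_id": ...}}
corpus = {
    "doc_001": {"text": "Loan application notes ...", "metadata": {"subject_id": "cust_42"}},
    "doc_002": {"text": "Product FAQ", "metadata": {"subject_id": None}},
}

def erase_subject(subject_id: str, audit_path: str = "erasure_log.jsonl") -> int:
    """Delete every document linked to the data subject and record the erasure."""
    doomed = [doc_id for doc_id, doc in corpus.items()
              if doc["metadata"].get("subject_id") == subject_id]
    for doc_id in doomed:
        del corpus[doc_id]
    # Keep an auditable trace that the request was honoured (ids only, no content).
    with open(audit_path, "a", encoding="utf-8") as f:
        f.write(json.dumps({
            "subject_id": subject_id,
            "documents_erased": doomed,
            "erased_at": datetime.now(timezone.utc).isoformat(),
        }) + "\n")
    return len(doomed)

erased = erase_subject("cust_42")
print(f"Erased {erased} document(s) from the retrieval corpus")
```

The audit entry records document ids and timestamps only, so the erasure log itself does not become a new store of the personal data you just deleted.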

G8KEPR's Verification Engine attaches explainability metadata to every AI output automatically, generating the decision record required for GDPR Article 22 compliance without changes to your model or application code.


Ready to secure your AI stack?

14-day free trial — full platform access, no credit card required. Early access members get pricing locked in forever.