<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>G8KEPR Blog — AI Security Insights</title>
    <link>https://g8kepr.com/blog</link>
    <description>Practical guides on API security, MCP security, prompt injection, compliance, and AI infrastructure from the team building G8KEPR.</description>
    <language>en-US</language>
    <managingEditor>team@g8kepr.com (G8KEPR Team)</managingEditor>
    <webMaster>team@g8kepr.com (G8KEPR)</webMaster>
    <lastBuildDate>Tue, 05 May 2026 17:09:23 GMT</lastBuildDate>
    <atom:link href="https://g8kepr.com/rss.xml" rel="self" type="application/rss+xml" />
    <image>
      <url>https://g8kepr.com/icon-192.png</url>
      <title>G8KEPR Blog</title>
      <link>https://g8kepr.com/blog</link>
    </image>
    <item>
      <title>The Prompt Injection Patterns We Block Most in 2026: Data From Production</title>
      <link>https://g8kepr.com/blog/threat-patterns-2026-data</link>
      <guid isPermaLink="true">https://g8kepr.com/blog/threat-patterns-2026-data</guid>
      <pubDate>Tue, 05 May 2026 04:00:00 GMT</pubDate>
      <description>Based on traffic across G8KEPR-protected deployments: what attackers actually try, how often they succeed without protection, and which attack categories are growing fastest. Real numbers from real production systems.</description>
      <category>Security</category>
    </item>
    <item>
      <title>The AI API Security Checklist: 40 Controls for Production Deployments</title>
      <link>https://g8kepr.com/blog/ai-security-checklist</link>
      <guid isPermaLink="true">https://g8kepr.com/blog/ai-security-checklist</guid>
      <pubDate>Sun, 03 May 2026 04:00:00 GMT</pubDate>
      <description>A comprehensive checklist for teams deploying AI APIs in production. Covers input validation, output constraints, authentication, rate limiting, audit logging, compliance, and incident response. Use this before your next production launch.</description>
      <category>Security</category>
    </item>
    <item>
      <title>AI Agent Hijacking: When Your MCP Tools Work Against You</title>
      <link>https://g8kepr.com/blog/agent-hijacking-mcp</link>
      <guid isPermaLink="true">https://g8kepr.com/blog/agent-hijacking-mcp</guid>
      <pubDate>Fri, 01 May 2026 04:00:00 GMT</pubDate>
      <description>An AI agent that can be hijacked is not just an AI problem — it is an infrastructure problem. When a model is convinced to misuse a legitimate tool, the damage is real regardless of how the instruction arrived. Here is how hijacking works and how to stop it.</description>
      <category>Security</category>
    </item>
    <item>
      <title>Mythos Zero-Days: What the AI Security Framework Disclosed and Why It Matters</title>
      <link>https://g8kepr.com/blog/mythos-zero-day-analysis</link>
      <guid isPermaLink="true">https://g8kepr.com/blog/mythos-zero-day-analysis</guid>
      <pubDate>Fri, 01 May 2026 04:00:00 GMT</pubDate>
      <description>The Mythos project dropped three coordinated zero-day disclosures in Q1 2026 targeting LLM inference APIs. Here is a full technical breakdown of each vulnerability, the attack patterns, and what defenders need to patch right now.</description>
      <category>Security</category>
    </item>
    <item>
      <title>NIST AI RMF: A Practical Implementation Guide for API Security Teams</title>
      <link>https://g8kepr.com/blog/nist-ai-rmf-guide</link>
      <guid isPermaLink="true">https://g8kepr.com/blog/nist-ai-rmf-guide</guid>
      <pubDate>Thu, 30 Apr 2026 04:00:00 GMT</pubDate>
      <description>The NIST AI Risk Management Framework is the most actionable AI governance document published so far. Unlike the EU AI Act (legal obligations) or ISO 42001 (management system), the AI RMF is an engineering framework. Here is how to implement it for teams running API-exposed AI systems.</description>
      <category>Compliance</category>
    </item>
    <item>
      <title>Tool Poisoning: The MCP Supply Chain Attack You Have Not Heard Of</title>
      <link>https://g8kepr.com/blog/tool-poisoning-attacks</link>
      <guid isPermaLink="true">https://g8kepr.com/blog/tool-poisoning-attacks</guid>
      <pubDate>Tue, 28 Apr 2026 04:00:00 GMT</pubDate>
      <description>Tool poisoning is when a malicious MCP server describes its tools in a way designed to hijack the AI model using them. The attack lives in the tool description, not the tool call. Most teams have no detection for it.</description>
      <category>Security</category>
    </item>
    <item>
      <title>The MCP Design Flaw Affecting 200,000+ Servers</title>
      <link>https://g8kepr.com/blog/mcp-design-flaw-200k-servers</link>
      <guid isPermaLink="true">https://g8kepr.com/blog/mcp-design-flaw-200k-servers</guid>
      <pubDate>Tue, 28 Apr 2026 04:00:00 GMT</pubDate>
      <description>A fundamental flaw in the Model Context Protocol trust model means most MCP server deployments are vulnerable to tool namespace collision attacks. We analyzed 200K+ public MCP configurations and found 67% have no tool signature enforcement.</description>
      <category>Security</category>
    </item>
    <item>
      <title>Why We Publish Our Pentest Results</title>
      <link>https://g8kepr.com/blog/open-security-posture</link>
      <guid isPermaLink="true">https://g8kepr.com/blog/open-security-posture</guid>
      <pubDate>Wed, 22 Apr 2026 04:00:00 GMT</pubDate>
      <description>Most security teams treat their pentest reports as closely guarded secrets. We publish ours. Here is the reasoning, and why we think transparency is a competitive advantage rather than a vulnerability.</description>
      <category>Security</category>
    </item>
    <item>
      <title>mTLS for Service-to-Service Authentication: When the Complexity Is Worth It</title>
      <link>https://g8kepr.com/blog/mtls-service-auth</link>
      <guid isPermaLink="true">https://g8kepr.com/blog/mtls-service-auth</guid>
      <pubDate>Wed, 22 Apr 2026 04:00:00 GMT</pubDate>
      <description>Mutual TLS is the strongest authentication mechanism available for service-to-service calls. It is also the most operationally complex. Here is an honest assessment of when mTLS is the right choice and when a well-implemented API key system is better.</description>
      <category>Architecture</category>
    </item>
    <item>
      <title>DeepSeek Breach Post-Mortem: What Every API Security Team Should Take Away</title>
      <link>https://g8kepr.com/blog/deepseek-breach-lessons-api-security</link>
      <guid isPermaLink="true">https://g8kepr.com/blog/deepseek-breach-lessons-api-security</guid>
      <pubDate>Wed, 22 Apr 2026 04:00:00 GMT</pubDate>
      <description>The DeepSeek data exposure incident revealed how quickly unsecured API endpoints in AI infrastructure can become catastrophic leaks. We break down the attack chain and extract six actionable lessons for API security teams.</description>
      <category>Security</category>
    </item>
    <item>
      <title>MCP Security in 2026: How to Sandbox AI Tool Calls</title>
      <link>https://g8kepr.com/blog/mcp-security-sandbox</link>
      <guid isPermaLink="true">https://g8kepr.com/blog/mcp-security-sandbox</guid>
      <pubDate>Mon, 20 Apr 2026 04:00:00 GMT</pubDate>
      <description>Model Context Protocol is the new attack surface. When Claude or GPT-4 calls a tool, that call can be injected, replayed, or exfiltrated. This post covers how G8KEPR sandboxes tool calls, enforces scope, and gives you full audit trails on every AI action.</description>
      <category>Security</category>
    </item>
    <item>
      <title>ISO 42001: The AI Management System Standard Every Enterprise Will Need</title>
      <link>https://g8kepr.com/blog/iso-42001-explained</link>
      <guid isPermaLink="true">https://g8kepr.com/blog/iso-42001-explained</guid>
      <pubDate>Mon, 20 Apr 2026 04:00:00 GMT</pubDate>
      <description>ISO 42001 was published in December 2023 and is already appearing in enterprise vendor questionnaires. It is the ISO 27001 of AI — a management system standard with certification. Here is what it requires and what it means for teams building and using AI APIs.</description>
      <category>Compliance</category>
    </item>
    <item>
      <title>JWT Attacks in 2026: Algorithm Confusion, None Algorithm, and Key Confusion</title>
      <link>https://g8kepr.com/blog/jwt-attack-patterns</link>
      <guid isPermaLink="true">https://g8kepr.com/blog/jwt-attack-patterns</guid>
      <pubDate>Sat, 18 Apr 2026 04:00:00 GMT</pubDate>
      <description>JSON Web Tokens are everywhere in API authentication — and almost always implemented with at least one exploitable weakness. The attacks have not changed much since 2018, but the blast radius has grown as JWTs now gate LLM access, agent sessions, and multi-tenant data.</description>
      <category>Security</category>
    </item>
    <item>
      <title>LLM Jailbreaking in 2026: 97% Success Rates and What They Actually Mean</title>
      <link>https://g8kepr.com/blog/llm-jailbreaking-2026-attack-landscape</link>
      <guid isPermaLink="true">https://g8kepr.com/blog/llm-jailbreaking-2026-attack-landscape</guid>
      <pubDate>Sat, 18 Apr 2026 04:00:00 GMT</pubDate>
      <description>Research papers are claiming 97% jailbreak success rates against frontier models. Before panicking, understand what these numbers actually measure — and what they mean for teams deploying LLMs in production with user-facing APIs.</description>
      <category>Security</category>
    </item>
    <item>
      <title>G8KEPR Red Team Run 4: What We Found and What We Fixed</title>
      <link>https://g8kepr.com/blog/red-team-run-4</link>
      <guid isPermaLink="true">https://g8kepr.com/blog/red-team-run-4</guid>
      <pubDate>Fri, 17 Apr 2026 04:00:00 GMT</pubDate>
      <description>We hired an external security firm to attack G8KEPR across every surface — API endpoints, WebSocket channels, MCP sandbox, authentication flows, and the AI pipeline. Here is the full breakdown: 0 Critical, 0 High, 3 Medium, 2 Low — all resolved before go-live.</description>
      <category>Security</category>
    </item>
    <item>
      <title>GraphQL Security in 2026: Introspection, Batching, and Depth Attacks</title>
      <link>https://g8kepr.com/blog/graphql-security-2026</link>
      <guid isPermaLink="true">https://g8kepr.com/blog/graphql-security-2026</guid>
      <pubDate>Wed, 15 Apr 2026 04:00:00 GMT</pubDate>
      <description>GraphQL's flexibility is also its attack surface. Introspection exposes your schema. Batching enables amplification. Unbounded depth queries can bring down a server. Here is the complete attack taxonomy and how to defend against each vector.</description>
      <category>Security</category>
    </item>
    <item>
      <title>PCI DSS 4.0 and AI APIs: What Payment API Security Teams Must Change</title>
      <link>https://g8kepr.com/blog/pci-dss-4-ai-apis</link>
      <guid isPermaLink="true">https://g8kepr.com/blog/pci-dss-4-ai-apis</guid>
      <pubDate>Tue, 14 Apr 2026 04:00:00 GMT</pubDate>
      <description>PCI DSS 4.0 became mandatory in March 2024. The updated requirements have direct implications for teams running AI-assisted payment APIs — particularly around web-skimming, script integrity, and the new customized approach. Here is what changed and what you need to do.</description>
      <category>Compliance</category>
    </item>
    <item>
      <title>The Agentic AI Attack Surface: What Changes When Your LLM Can Take Actions</title>
      <link>https://g8kepr.com/blog/agentic-ai-attack-surface</link>
      <guid isPermaLink="true">https://g8kepr.com/blog/agentic-ai-attack-surface</guid>
      <pubDate>Tue, 14 Apr 2026 04:00:00 GMT</pubDate>
      <description>An LLM that reads information is a data risk. An LLM that can take actions — send emails, modify databases, call APIs, execute code — is an operational risk. The attack surface is fundamentally different and most security models have not caught up.</description>
      <category>Security</category>
    </item>
    <item>
      <title>EU AI Act Is Now Enforced: What API Security Teams Must Do</title>
      <link>https://g8kepr.com/blog/eu-ai-act-enforcement-2026</link>
      <guid isPermaLink="true">https://g8kepr.com/blog/eu-ai-act-enforcement-2026</guid>
      <pubDate>Sun, 12 Apr 2026 04:00:00 GMT</pubDate>
      <description>The EU AI Act entered full enforcement April 2026. High-risk AI systems now require conformity assessments, mandatory logging, and explainability on automated decisions. Here is what that means for teams running APIs that feed LLMs.</description>
      <category>Compliance</category>
    </item>
    <item>
      <title>BOLA vs BFLA: The Two Access Control Bugs Responsible for Most API Data Breaches</title>
      <link>https://g8kepr.com/blog/bola-bfla-explained</link>
      <guid isPermaLink="true">https://g8kepr.com/blog/bola-bfla-explained</guid>
      <pubDate>Fri, 10 Apr 2026 04:00:00 GMT</pubDate>
      <description>Broken Object Level Authorization and Broken Function Level Authorization account for more API data breaches than any other vulnerability class. They are also the easiest to introduce and among the hardest to test for comprehensively. Here is how they differ and how to catch them.</description>
      <category>Security</category>
    </item>
    <item>
      <title>Circuit Breakers for AI Pipelines: Preventing Cascade Failures at the LLM Layer</title>
      <link>https://g8kepr.com/blog/circuit-breaker-ai-pipelines</link>
      <guid isPermaLink="true">https://g8kepr.com/blog/circuit-breaker-ai-pipelines</guid>
      <pubDate>Thu, 09 Apr 2026 04:00:00 GMT</pubDate>
      <description>An LLM API that starts timing out at 5% error rate will cascade to 100% failure within minutes if your application does not have circuit breakers. The pattern is well-understood for microservices — here is how to apply it specifically to AI model calls.</description>
      <category>Architecture</category>
    </item>
    <item>
      <title>SOC 2 Type II Prep: The Controls That Actually Matter</title>
      <link>https://g8kepr.com/blog/soc2-controls-that-matter</link>
      <guid isPermaLink="true">https://g8kepr.com/blog/soc2-controls-that-matter</guid>
      <pubDate>Wed, 08 Apr 2026 04:00:00 GMT</pubDate>
      <description>After mapping G8KEPR's own controls against the AICPA Trust Services Criteria, we found most teams waste time on low-impact controls while leaving CC6.1 and CC7.2 under-documented. Here is where to focus your first 90 days.</description>
      <category>Compliance</category>
    </item>
    <item>
      <title>AI Supply Chain Attacks: HuggingFace LoRA Poisoning and What Comes Next</title>
      <link>https://g8kepr.com/blog/ai-supply-chain-huggingface-lora-poisoning</link>
      <guid isPermaLink="true">https://g8kepr.com/blog/ai-supply-chain-huggingface-lora-poisoning</guid>
      <pubDate>Wed, 08 Apr 2026 04:00:00 GMT</pubDate>
      <description>Researchers demonstrated that fine-tuning adapters on HuggingFace can embed backdoors that activate on specific trigger phrases. With 500K+ public adapters available for download, the AI model supply chain has a trust problem that the ecosystem is only beginning to address.</description>
      <category>Security</category>
    </item>
    <item>
      <title>WebSocket Security: The Attack Surface Most API Teams Skip</title>
      <link>https://g8kepr.com/blog/websocket-security</link>
      <guid isPermaLink="true">https://g8kepr.com/blog/websocket-security</guid>
      <pubDate>Sun, 05 Apr 2026 04:00:00 GMT</pubDate>
      <description>WebSocket connections bypass most API gateway controls. They persist across requests, skip per-request authentication, and are often excluded from WAF rule sets. If your application uses WebSockets and your security team treats them like HTTP, you have an unchecked attack surface.</description>
      <category>Security</category>
    </item>
    <item>
      <title>Webhook Security: Signature Verification, Replay Prevention, and Failure Handling</title>
      <link>https://g8kepr.com/blog/webhook-security-guide</link>
      <guid isPermaLink="true">https://g8kepr.com/blog/webhook-security-guide</guid>
      <pubDate>Fri, 03 Apr 2026 04:00:00 GMT</pubDate>
      <description>Webhooks are the most common unsecured integration point in SaaS architectures. An unverified webhook endpoint accepts any POST request from any source. Here is the complete security implementation: signature verification, timestamp validation, replay prevention, and idempotent processing.</description>
      <category>Architecture</category>
    </item>
    <item>
      <title>FlipAttack: How Attackers Bypass LLM Safety Filters by Reversing Text</title>
      <link>https://g8kepr.com/blog/flipattack-detection</link>
      <guid isPermaLink="true">https://g8kepr.com/blog/flipattack-detection</guid>
      <pubDate>Mon, 30 Mar 2026 04:00:00 GMT</pubDate>
      <description>FlipAttack is a prompt injection technique that encodes malicious instructions by reversing words or characters, causing word-level safety classifiers to miss the attack entirely. It works against most commercial safety filters. Here is how it works and how G8KEPR detects it.</description>
      <category>Security</category>
    </item>
    <item>
      <title>HIPAA Technical Safeguards in 2026: What's Non-Negotiable</title>
      <link>https://g8kepr.com/blog/hipaa-technical-safeguards-2026</link>
      <guid isPermaLink="true">https://g8kepr.com/blog/hipaa-technical-safeguards-2026</guid>
      <pubDate>Sat, 28 Mar 2026 04:00:00 GMT</pubDate>
      <description>The HIPAA Security Rule has not changed, but the threat landscape has. In 2026, ePHI travels through AI pipelines, webhook queues, and multi-tenant SaaS APIs that did not exist when the rule was written. Here is what §164.312 actually means for a modern stack.</description>
      <category>Compliance</category>
    </item>
    <item>
      <title>CVE-2025-61260: OpenAI Codex CLI Remote Code Execution — Full Analysis</title>
      <link>https://g8kepr.com/blog/openai-codex-cli-rce-cve-2025-61260</link>
      <guid isPermaLink="true">https://g8kepr.com/blog/openai-codex-cli-rce-cve-2025-61260</guid>
      <pubDate>Sat, 28 Mar 2026 04:00:00 GMT</pubDate>
      <description>A critical RCE vulnerability in the OpenAI Codex CLI allowed malicious repository contents to execute arbitrary commands on the developer's machine. We break down the exploit chain, the patch, and what it means for AI coding tool security.</description>
      <category>Security</category>
    </item>
    <item>
      <title>SOC 2 vs ISO 27001: Which Certification to Pursue First</title>
      <link>https://g8kepr.com/blog/soc2-vs-iso27001</link>
      <guid isPermaLink="true">https://g8kepr.com/blog/soc2-vs-iso27001</guid>
      <pubDate>Fri, 27 Mar 2026 04:00:00 GMT</pubDate>
      <description>Both demonstrate that you take security seriously. SOC 2 is the US enterprise standard; ISO 27001 is the global enterprise standard. The right choice depends on your customer geography, your team size, and whether you're optimizing for sales cycles or supply chain questionnaires.</description>
      <category>Compliance</category>
    </item>
    <item>
      <title>Prompt Injection: The Attack You Cannot Patch With a WAF</title>
      <link>https://g8kepr.com/blog/prompt-injection-attacks</link>
      <guid isPermaLink="true">https://g8kepr.com/blog/prompt-injection-attacks</guid>
      <pubDate>Wed, 25 Mar 2026 04:00:00 GMT</pubDate>
      <description>Prompt injection is not a web vulnerability. It is a semantic attack that exploits the fact that LLMs cannot reliably distinguish between instructions and data. A WAF rule will not help. Here is what actually does.</description>
      <category>Security</category>
    </item>
    <item>
      <title>API Versioning in 2026: How to Break Things Without Breaking Customers</title>
      <link>https://g8kepr.com/blog/api-versioning-strategy</link>
      <guid isPermaLink="true">https://g8kepr.com/blog/api-versioning-strategy</guid>
      <pubDate>Mon, 23 Mar 2026 04:00:00 GMT</pubDate>
      <description>Breaking changes are unavoidable. How you handle them determines whether your API is a competitive advantage or a customer attrition driver. URL versioning, header versioning, query parameter versioning — here is when each is right and what a good sunset process looks like.</description>
      <category>Architecture</category>
    </item>
    <item>
      <title>Zero-Width Character Injection: The Prompt Attack You Cannot See</title>
      <link>https://g8kepr.com/blog/zero-width-character-injection</link>
      <guid isPermaLink="true">https://g8kepr.com/blog/zero-width-character-injection</guid>
      <pubDate>Sun, 22 Mar 2026 04:00:00 GMT</pubDate>
      <description>Zero-width characters (U+200B through U+200F) are invisible in most text editors and browsers but fully visible to LLMs. Attackers use them to embed hidden instructions, evade pattern matching, and break token-level safety classifiers. Here is how the attack works and why it is hard to detect.</description>
      <category>Security</category>
    </item>
    <item>
      <title>EU AI Act August 2026: The Engineering Checklist Every AI Team Needs</title>
      <link>https://g8kepr.com/blog/eu-ai-act-august-2026-engineering-checklist</link>
      <guid isPermaLink="true">https://g8kepr.com/blog/eu-ai-act-august-2026-engineering-checklist</guid>
      <pubDate>Sun, 22 Mar 2026 04:00:00 GMT</pubDate>
      <description>The EU AI Act's August 2026 compliance deadline for high-risk AI systems is three months away. This is the engineering checklist — not the legal summary — covering logging, documentation, human oversight, and accuracy testing requirements.</description>
      <category>Compliance</category>
    </item>
    <item>
      <title>API Security vs AI Gateway: Why You Need Both</title>
      <link>https://g8kepr.com/blog/api-security-vs-ai-gateway</link>
      <guid isPermaLink="true">https://g8kepr.com/blog/api-security-vs-ai-gateway</guid>
      <pubDate>Fri, 20 Mar 2026 04:00:00 GMT</pubDate>
      <description>An API gateway handles routing, rate limiting, and authentication. An AI gateway handles LLM cost routing, prompt injection, output validation, and token budget enforcement. These are not the same problem — and conflating them is how AI security debt accumulates.</description>
      <category>Architecture</category>
    </item>
    <item>
      <title>API Key Security: How Keys Get Leaked and What to Do About It</title>
      <link>https://g8kepr.com/blog/api-key-security</link>
      <guid isPermaLink="true">https://g8kepr.com/blog/api-key-security</guid>
      <pubDate>Wed, 18 Mar 2026 04:00:00 GMT</pubDate>
      <description>API key leakage is the most common initial access vector in API breaches. Keys end up in GitHub commits, in build logs, in client-side JavaScript, and in Slack messages. The problem is not developer carelessness — it is missing controls. Here is the complete playbook.</description>
      <category>Security</category>
    </item>
    <item>
      <title>Idempotency Keys: The API Design Pattern That Prevents Duplicate Charges</title>
      <link>https://g8kepr.com/blog/idempotency-keys</link>
      <guid isPermaLink="true">https://g8kepr.com/blog/idempotency-keys</guid>
      <pubDate>Tue, 17 Mar 2026 04:00:00 GMT</pubDate>
      <description>A network timeout on a payment API leaves you in an unknown state: did the charge succeed or not? Idempotency keys solve this by making any number of retries produce exactly the same result as a single request. Here is how to implement them correctly.</description>
      <category>Architecture</category>
    </item>
    <item>
      <title>Zero Trust for AI Agents: Why Traditional Access Control Falls Short</title>
      <link>https://g8kepr.com/blog/zero-trust-ai-agents</link>
      <guid isPermaLink="true">https://g8kepr.com/blog/zero-trust-ai-agents</guid>
      <pubDate>Sun, 15 Mar 2026 04:00:00 GMT</pubDate>
      <description>Zero trust means "never trust, always verify" — for users and services. AI agents present a new challenge: they are principals that can change their effective permissions based on prompt injection. Traditional access control cannot handle this. Here is the architecture that can.</description>
      <category>Architecture</category>
    </item>
    <item>
      <title>Breach Notification in 2026: GDPR, HIPAA, and State Law Requirements</title>
      <link>https://g8kepr.com/blog/breach-notification-2026</link>
      <guid isPermaLink="true">https://g8kepr.com/blog/breach-notification-2026</guid>
      <pubDate>Sun, 15 Mar 2026 04:00:00 GMT</pubDate>
      <description>A data breach triggers notification obligations across multiple frameworks simultaneously. GDPR gives you 72 hours. HIPAA gives you 60 days. State laws give you anywhere from 30 to 90 days. Here is how to navigate overlapping obligations without missing a deadline.</description>
      <category>Compliance</category>
    </item>
    <item>
      <title>Memory Poisoning in AI Agents: The Persistent Threat to Long-Running Systems</title>
      <link>https://g8kepr.com/blog/memory-poisoning-ai-agents</link>
      <guid isPermaLink="true">https://g8kepr.com/blog/memory-poisoning-ai-agents</guid>
      <pubDate>Sun, 15 Mar 2026 04:00:00 GMT</pubDate>
      <description>AI agents with persistent memory can be compromised through a single malicious interaction that embeds false beliefs into long-term storage. Those beliefs persist across sessions, across resets, and across users — creating a durable foothold that outlasts typical incident response.</description>
      <category>Security</category>
    </item>
    <item>
      <title>Shadow API Discovery: Finding APIs You Forgot You Had</title>
      <link>https://g8kepr.com/blog/shadow-api-discovery</link>
      <guid isPermaLink="true">https://g8kepr.com/blog/shadow-api-discovery</guid>
      <pubDate>Thu, 12 Mar 2026 04:00:00 GMT</pubDate>
      <description>Shadow APIs are endpoints that exist in production but are not in your OpenAPI spec, not covered by your security controls, and not monitored. Every mature codebase has them. Here's how to find them before attackers do.</description>
      <category>Security</category>
    </item>
    <item>
      <title>What Is Model Context Protocol (MCP) and Why Does It Need Security?</title>
      <link>https://g8kepr.com/blog/what-is-mcp</link>
      <guid isPermaLink="true">https://g8kepr.com/blog/what-is-mcp</guid>
      <pubDate>Tue, 10 Mar 2026 04:00:00 GMT</pubDate>
      <description>MCP is Anthropic's open standard for connecting AI models to external tools. It is rapidly becoming the default integration pattern for AI agents — and most teams deploying it have no visibility into what their models are actually calling.</description>
      <category>Security</category>
    </item>
    <item>
      <title>Designing Secure APIs with OpenAPI 3.1: The Spec as Your Security Boundary</title>
      <link>https://g8kepr.com/blog/openapi-security-design</link>
      <guid isPermaLink="true">https://g8kepr.com/blog/openapi-security-design</guid>
      <pubDate>Tue, 10 Mar 2026 04:00:00 GMT</pubDate>
      <description>An OpenAPI spec is not just documentation — it is a machine-readable security boundary. Every field defined in the spec is a validated field; every field not defined is rejected. Here is how to use OpenAPI 3.1 to enforce security properties at design time.</description>
      <category>Architecture</category>
    </item>
    <item>
      <title>What Mythos Means for API Security Teams: A Practitioner's Guide</title>
      <link>https://g8kepr.com/blog/what-mythos-means-api-security</link>
      <guid isPermaLink="true">https://g8kepr.com/blog/what-mythos-means-api-security</guid>
      <pubDate>Tue, 10 Mar 2026 04:00:00 GMT</pubDate>
      <description>Mythos has shifted the conversation about AI security from theoretical risks to demonstrated exploits with CVSS scores. For API security teams, this means the threat model has changed. Here is what to prioritize and what to stop worrying about.</description>
      <category>Security</category>
    </item>
    <item>
      <title>GDPR Art. 22: What "Meaningful Information About Logic" Means in Code</title>
      <link>https://g8kepr.com/blog/gdpr-art22-explainability</link>
      <guid isPermaLink="true">https://g8kepr.com/blog/gdpr-art22-explainability</guid>
      <pubDate>Thu, 05 Mar 2026 05:00:00 GMT</pubDate>
      <description>Article 22 requires that individuals subject to automated decisions receive "meaningful information about the logic involved." For LLM-based systems this is genuinely hard — but it is implementable. Here is the approach that satisfies regulators.</description>
      <category>Compliance</category>
    </item>
    <item>
      <title>HTTP/3 and QUIC: What Changes for API Security When You Move to UDP</title>
      <link>https://g8kepr.com/blog/http3-api-security</link>
      <guid isPermaLink="true">https://g8kepr.com/blog/http3-api-security</guid>
      <pubDate>Thu, 05 Mar 2026 05:00:00 GMT</pubDate>
      <description>HTTP/3 replaces TCP with QUIC — a UDP-based protocol with built-in TLS 1.3. The security implications are mostly positive, but the change also introduces new considerations for rate limiting, traffic inspection, and DDoS mitigation. Here is what security teams need to know.</description>
      <category>Architecture</category>
    </item>
    <item>
      <title>Multi-Agent Cascading Failures: Architecture Patterns That Prevent Meltdowns</title>
      <link>https://g8kepr.com/blog/multi-agent-cascading-failures</link>
      <guid isPermaLink="true">https://g8kepr.com/blog/multi-agent-cascading-failures</guid>
      <pubDate>Thu, 05 Mar 2026 05:00:00 GMT</pubDate>
      <description>When one agent in a multi-agent pipeline fails or is compromised, the failure can propagate through the entire system in seconds. We examine three real-world cascading failure patterns and the architectural controls that contain them.</description>
      <category>Architecture</category>
    </item>
    <item>
      <title>Rate Limiting for AI APIs: Token Bucket vs Sliding Window vs Token Budget</title>
      <link>https://g8kepr.com/blog/rate-limiting-ai-apis</link>
      <guid isPermaLink="true">https://g8kepr.com/blog/rate-limiting-ai-apis</guid>
      <pubDate>Sat, 28 Feb 2026 05:00:00 GMT</pubDate>
      <description>Traditional API rate limiting counts requests. AI APIs need to count tokens. A single malicious request that consumes 100K tokens in one call is not caught by a "100 requests per minute" rule. Here is how to rate limit AI endpoints correctly.</description>
      <category>Architecture</category>
    </item>
    <item>
      <title>Policy Puppetry: How Attackers Use XML Tags to Override Your System Prompt</title>
      <link>https://g8kepr.com/blog/policy-puppetry-attacks</link>
      <guid isPermaLink="true">https://g8kepr.com/blog/policy-puppetry-attacks</guid>
      <pubDate>Sat, 28 Feb 2026 05:00:00 GMT</pubDate>
      <description>Policy puppetry wraps malicious instructions in XML, JSON, or INI config-style wrappers that exploit patterns in LLM pre-training data. The attack makes instructions look like configuration rather than user input — and many models follow configuration more readily than user messages.</description>
      <category>Security</category>
    </item>
    <item>
      <title>Vendor Risk Management for AI APIs: What to Ask Your LLM Provider</title>
      <link>https://g8kepr.com/blog/vendor-risk-llm-providers</link>
      <guid isPermaLink="true">https://g8kepr.com/blog/vendor-risk-llm-providers</guid>
      <pubDate>Wed, 25 Feb 2026 05:00:00 GMT</pubDate>
      <description>Your LLM provider processes your customer data, your system prompts, and your training signals. Their security posture is your security posture. Most vendor security questionnaires were not written with AI providers in mind. Here is what to ask instead.</description>
      <category>Compliance</category>
    </item>
    <item>
      <title>EU AI Act Logging Requirements: What Engineers Need to Build</title>
      <link>https://g8kepr.com/blog/eu-ai-act-logging-requirements</link>
      <guid isPermaLink="true">https://g8kepr.com/blog/eu-ai-act-logging-requirements</guid>
      <pubDate>Wed, 25 Feb 2026 05:00:00 GMT</pubDate>
      <description>Article 12 of the EU AI Act mandates automatic logging of AI system operations. This is not a check-the-box compliance exercise — it requires substantive engineering. Here is exactly what the regulation requires and what to build.</description>
      <category>Compliance</category>
    </item>
  </channel>
</rss>