Plain-language definitions of key concepts in API security, AI security, Model Context Protocol, prompt injection, zero trust, and AI gateways — written for security teams and developers building with AI.
8
Definitions
4
Categories
OWASP
Referenced
API security is the practice of protecting application programming interfaces from attacks, misuse, and unauthorized access. It covers authentication, authorization, input validation, rate limiting, threat detection, and compliance monitoring across REST, GraphQL, and other API protocols.
Zero Trust API Security applies the principle of "never trust, always verify" to API traffic. Every request — regardless of origin — is authenticated, authorized, and validated before being processed, eliminating the concept of a trusted network perimeter.
API rate limiting controls the number of requests a client can make to an API within a defined time window. It protects APIs from abuse, DDoS attacks, and resource exhaustion while ensuring fair usage across all consumers.
Prompt injection is an attack where malicious input manipulates an AI model's instructions, causing it to ignore safety guidelines, reveal confidential data, or take unauthorized actions. It is the OWASP #1 vulnerability for LLM applications.
LLM security encompasses the controls, monitoring, and policies needed to safely deploy large language models in production. It addresses prompt injection, data leakage, model abuse, output validation, and compliance requirements for AI-powered applications.
AI agent security is the set of controls that govern how autonomous AI agents interact with external tools, APIs, and data. As AI agents gain the ability to take real-world actions — browsing the web, writing code, calling APIs — securing their tool access becomes critical.
G8KEPR puts every concept in this glossary into practice — API threat detection, MCP security, AI gateway controls, and compliance documentation in one platform.