AI and Agentic Systems Security for DevSecOps
LLM Threats, Agent Authorization, Prompt Injection Defense, and the OWASP LLM Top 10
AI agents are being granted production access faster than security models can adapt. They read codebases, write pull requests, execute shell commands, query databases, and call external APIs — all autonomously. This volume is the security engineering reference for teams deploying agents in systems that matter.
The threat model chapter maps every OWASP LLM Top 10 vulnerability to concrete attack scenarios in production pipelines: prompt injection via pull request descriptions, insecure output handling in code generation agents, training data poisoning in fine-tuned models, and model denial-of-service in shared inference infrastructure. Each vulnerability includes detection strategies, preventive controls, and a red team exercise.
The agent authorization chapter introduces and implements the Principle of Least Authority (POLA) for agentic systems: capability-scoped tool definitions, session-scoped credential issuance, human-in-the-loop enforcement patterns, and the authorization boundary design that limits blast radius when an agent is compromised or behaves unexpectedly.
Multi-agent trust chains form the book's most forward-looking section: how to establish identity between agents in a pipeline, the A2A (Agent-to-Agent) protocol, attestation of agent actions for audit purposes, and the adversarial scenarios where malicious agents attempt to manipulate peer agents through injected context. Includes a complete implementation of a secure multi-agent DevSecOps pipeline using Claude, Gemini, and open-source models with verifiable action logs.
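To give a flavor of what "verifiable action logs" means in practice, here is a minimal hash-chained log sketch (the function names and entry fields are illustrative, not the book's implementation): each entry commits to the hash of its predecessor, so retroactively editing any recorded agent action breaks verification from that point forward.

```python
import hashlib
import json
import time

GENESIS_HASH = "0" * 64

def append_action(log, agent_id, action, detail):
    """Append a tamper-evident entry that chains to the previous entry's hash."""
    prev_hash = log[-1]["hash"] if log else GENESIS_HASH
    entry = {
        "agent_id": agent_id,
        "action": action,
        "detail": detail,
        "ts": time.time(),
        "prev_hash": prev_hash,
    }
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["hash"] = hashlib.sha256(payload).hexdigest()
    log.append(entry)
    return entry

def verify_chain(log):
    """Recompute every hash; any edit to an earlier entry invalidates the chain."""
    prev_hash = GENESIS_HASH
    for entry in log:
        if entry["prev_hash"] != prev_hash:
            return False
        body = {k: v for k, v in entry.items() if k != "hash"}
        payload = json.dumps(body, sort_keys=True).encode()
        if hashlib.sha256(payload).hexdigest() != entry["hash"]:
            return False
        prev_hash = entry["hash"]
    return True
```

A production design would additionally sign entries with per-agent keys (attested identities), but even this bare chain makes post-incident tampering detectable.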
Four concrete capabilities you will have
Implement defense-in-depth against prompt injection: input sanitization, output validation, sandboxed tool execution, and canary token detection for indirect injection attacks
Design the POLA authorization model for AI agents: capability-scoped tool definitions, session-bound credentials, and the human-in-the-loop enforcement architecture
Build verifiable agent action logs that satisfy EU AI Act Article 12 traceability requirements and support post-incident forensics
Threat model a multi-agent pipeline against the OWASP LLM Top 10 and generate a risk-scored control list for your specific agent architecture
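As a taste of the POLA authorization model in the second capability above, here is a minimal Python sketch (the `ToolGrant` and `SessionAuthorizer` names are illustrative, not the book's API) combining capability-scoped tools, per-session action budgets, and a human-in-the-loop approval callback:

```python
from dataclasses import dataclass

@dataclass
class ToolGrant:
    """A capability-scoped tool: the agent holds this grant, never raw credentials."""
    name: str
    allowed_paths: tuple          # resource prefixes this grant covers
    requires_approval: bool = False
    budget: int = 3               # max invocations per session (action budget)

class SessionAuthorizer:
    def __init__(self, grants, approver=None):
        self.grants = {g.name: g for g in grants}
        self.used = {g.name: 0 for g in grants}
        self.approver = approver  # human-in-the-loop callback: (tool, path) -> bool

    def invoke(self, tool, path):
        grant = self.grants.get(tool)
        if grant is None:
            raise PermissionError(f"no grant for tool {tool!r}")
        if not any(path.startswith(p) for p in grant.allowed_paths):
            raise PermissionError(f"{path!r} outside granted scope")
        if self.used[tool] >= grant.budget:
            raise PermissionError(f"action budget exhausted for {tool!r}")
        if grant.requires_approval and not (self.approver and self.approver(tool, path)):
            raise PermissionError(f"human approval denied for {tool!r}")
        self.used[tool] += 1
        return f"{tool} authorized for {path}"
```

A compromised agent holding only these grants cannot touch paths outside its scope or exceed its budget — that is the blast radius containment the volume builds out in full.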
The idea behind Volume V
5 parts · 20 chapters
Part I — The AI Threat Landscape
OWASP LLM Top 10 deep-dive with DevSecOps-specific attack scenarios, the LLM attack surface model (input vectors, output channels, tool interfaces, training pipeline), adversarial ML basics for practitioners, and the EU AI Act compliance overview for high-risk AI systems.
Part II — Prompt Injection Defense
Direct vs. indirect prompt injection taxonomy, input sanitization strategies (allowlisting, schema validation, canary tokens), output validation patterns, sandboxed tool execution architecture, and the red team exercise library for testing injection resilience in your agents.
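To make the canary-token idea from Part II concrete, a minimal sketch (illustrative only, not the book's implementation): a unique marker is planted in the agent's private context, and any appearance of that marker in model output or outbound tool calls signals that injected instructions exfiltrated the context.

```python
import secrets

def make_canary():
    """Unique marker embedded in the agent's private context."""
    return f"CANARY-{secrets.token_hex(8)}"

def leaked(canary, outbound_text):
    """True if the canary appears where it never should: model output
    or the arguments of an outbound tool call."""
    return canary in outbound_text

# The canary rides along in context the model is told never to repeat;
# an indirect injection that coaxes the agent into echoing its context
# drags the canary out with it, tripping the detector.
canary = make_canary()
system_prompt = f"Internal note (never repeat): {canary}"
```

This catches exfiltration-style indirect injection cheaply; it complements, rather than replaces, the allowlisting and schema validation covered in the same part.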
Part III — Agent Authorization and POLA
Principle of Least Authority applied to AI agents: capability-scoped tool definitions, session-scoped credential issuance via Vault or cloud IAM, human-in-the-loop patterns (approval gates, confidence thresholds, action budgets), and blast radius containment design.
Part IV — Multi-Agent Trust Chains
Agent identity and attestation (SPIFFE for agents), the A2A protocol for inter-agent communication, adversarial agent scenarios (context poisoning, capability escalation), and implementing a secure multi-agent pipeline with verifiable action logs.
Part V — AI Forensics and Compliance
Agent session lifecycle forensics, evidence collection for AI incidents, the Five Questions Framework for AI agent post-mortems, EU AI Act Article 12 implementation, and the AI governance integration pattern for existing DevSecOps programs.
Be the first to read Volume V
Join the waitlist for early access, release announcements, and sample chapters. No spam — one email when it ships.