TechStream
VOLUME V · COMING SOON

AI and Agentic Systems Security for DevSecOps

LLM Threats, Agent Authorization, Prompt Injection Defense, and the OWASP LLM Top 10

Chapters: 20
Parts: 5
Series: DevSecOps
Topics: Prompt Injection · POLA · Agent Forensics · OWASP LLM Top 10 · EU AI Act
What this book solves

AI agents are being granted production access faster than security models can keep up. They read codebases, write pull requests, execute shell commands, query databases, and call external APIs — all autonomously. This volume is the security engineering reference for teams deploying agents in systems that matter.

The threat model chapter maps every OWASP LLM Top 10 vulnerability to concrete attack scenarios in production pipelines: prompt injection via pull request descriptions, insecure output handling in code generation agents, training data poisoning in fine-tuned models, and model denial-of-service in shared inference infrastructure. Each vulnerability includes detection strategies, preventive controls, and a red team exercise.

The agent authorization chapter introduces and implements the Principle of Least Authority (POLA) for agentic systems: capability-scoped tool definitions, session-scoped credential issuance, human-in-the-loop enforcement patterns, and the authorization boundary design that limits blast radius when an agent is compromised or behaves unexpectedly.
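The POLA pattern described above can be sketched in a few lines. The names here are illustrative assumptions, not the book's actual API: each tool declares the capabilities it needs, a session is issued only an explicit subset, and an out-of-scope call fails closed while still being recorded.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Tool:
    name: str
    required_capabilities: frozenset  # e.g. {"fs:read", "proc:exec"}

@dataclass
class AgentSession:
    granted: frozenset            # capabilities issued for this session only
    audit: list = field(default_factory=list)

    def invoke(self, tool: Tool):
        missing = tool.required_capabilities - self.granted
        self.audit.append((tool.name, not missing))  # log allow/deny either way
        if missing:
            # Fail closed: the agent never holds ambient authority
            raise PermissionError(f"{tool.name} needs {sorted(missing)}")
        return f"{tool.name} executed"

read_file = Tool("read_file", frozenset({"fs:read"}))
run_shell = Tool("run_shell", frozenset({"proc:exec"}))

session = AgentSession(granted=frozenset({"fs:read"}))
session.invoke(read_file)      # within scope: allowed
# session.invoke(run_shell)    # out of scope: raises PermissionError
```

The point of the sketch is the shape, not the mechanism: the grant set is fixed at session creation (credential issuance time), so a compromised agent cannot widen its own authority mid-session.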

Multi-agent trust chains are the book's most forward-looking section: how to establish identity between agents in a pipeline, the A2A (Agent-to-Agent) protocol, attestation of agent actions for audit purposes, and the adversarial scenarios where malicious agents attempt to manipulate peer agents through injected context. Includes a complete implementation of a secure multi-agent DevSecOps pipeline using Claude, Gemini, and open-source models with verifiable action logs.
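Verifiable action logs of the kind described above are commonly built as a hash chain with a per-entry signature. Here is a minimal sketch under that assumption; HMAC-SHA256 with an in-memory key stands in for a real KMS/HSM-held signing key or transparency log, and all agent names and actions are made up for illustration.

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key"  # assumption: production would use a KMS/HSM key

def append_entry(log, agent, action):
    # Each entry commits to the previous signature, forming a chain
    prev = log[-1]["sig"] if log else "genesis"
    body = json.dumps({"agent": agent, "action": action, "prev": prev},
                      sort_keys=True)
    sig = hmac.new(SIGNING_KEY, body.encode(), hashlib.sha256).hexdigest()
    log.append({"body": body, "sig": sig})

def verify(log):
    prev = "genesis"
    for entry in log:
        body = json.loads(entry["body"])
        if body["prev"] != prev:
            return False  # chain broken: entry removed or reordered
        expected = hmac.new(SIGNING_KEY, entry["body"].encode(),
                            hashlib.sha256).hexdigest()
        if not hmac.compare_digest(expected, entry["sig"]):
            return False  # entry tampered with after signing
        prev = entry["sig"]
    return True

log = []
append_entry(log, "reviewer-agent", "read pull request diff")
append_entry(log, "tester-agent", "ran integration suite")
assert verify(log)
```

Tampering with any recorded action, or deleting an entry from the middle, breaks the chain and makes `verify` fail, which is what makes the trace usable as audit evidence.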

After reading this volume you will

Four concrete capabilities you will have

1

Implement defense-in-depth against prompt injection: input sanitization, output validation, sandboxed tool execution, and canary token detection for indirect injection attacks

2

Design the POLA authorization model for AI agents: capability-scoped tool definitions, session-bound credentials, and the human-in-the-loop enforcement architecture

3

Build verifiable agent action logs that satisfy EU AI Act Article 12 traceability requirements and support post-incident forensics

4

Threat model a multi-agent pipeline against the OWASP LLM Top 10 and generate a risk-scored control list for your specific agent architecture
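Capability 1 above mentions canary token detection for indirect injection. A minimal sketch of the idea (the token format and placement are assumptions, not the book's implementation): plant a random marker in the system prompt that legitimate output should never contain, then flag any outbound response or tool argument that echoes it, since an echo means hidden prompt content is being exfiltrated.

```python
import secrets

class CanaryGuard:
    """Detects when hidden prompt content leaks into agent output."""

    def __init__(self):
        # Random marker planted in the system prompt; never shown to users
        self.token = f"CANARY-{secrets.token_hex(8)}"

    def system_prompt(self, base: str) -> str:
        # Embed the canary where only the model (or an attacker who has
        # extracted the prompt) can see it
        return f"{base}\n<!-- {self.token} -->"

    def leaked(self, outbound_text: str) -> bool:
        # Any echo of the token in output or tool arguments is a leak signal
        return self.token in outbound_text

guard = CanaryGuard()
prompt = guard.system_prompt("You are a code-review agent.")
guard.leaked("LGTM, merging.")  # benign output: no leak
```

This catches the indirect-injection pattern where attacker-controlled content (a PR description, a web page) persuades the agent to reveal or transmit its instructions.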

Core concept

The idea behind Volume V

[Diagram — AI & Agentic Security: a user prompt passes through injection defense (prompt sanitization, intent classification) into the LLM agent (plan → act → observe, context window). A POLA trust boundary enforces capability scope over file read, bash exec, API call, and memory store. Every tool call is recorded in an audit log as immutable, signed traces, and output guardrails apply PII redaction and content policy filtering. Annotations: model card (provider trust, fine-tuning supply chain, eval benchmarks); OWASP LLM Top 10 (prompt injection, insecure output, model theft); least privilege + observability = safe agentic systems.]
Table of contents

5 parts · 20 chapters

01

Part I — The AI Threat Landscape

OWASP LLM Top 10 deep-dive with DevSecOps-specific attack scenarios, the LLM attack surface model (input vectors, output channels, tool interfaces, training pipeline), adversarial ML basics for practitioners, and the EU AI Act compliance overview for high-risk AI systems.

02

Part II — Prompt Injection Defense

Direct vs. indirect prompt injection taxonomy, input sanitization strategies (allowlisting, schema validation, canary tokens), output validation patterns, sandboxed tool execution architecture, and the red team exercise library for testing injection resilience in your agents.

03

Part III — Agent Authorization and POLA

Principle of Least Authority applied to AI agents: capability-scoped tool definitions, session-scoped credential issuance via Vault or cloud IAM, human-in-the-loop patterns (approval gates, confidence thresholds, action budgets), and blast radius containment design.

04

Part IV — Multi-Agent Trust Chains

Agent identity and attestation (SPIFFE for agents), the A2A protocol for inter-agent communication, adversarial agent scenarios (context poisoning, capability escalation), and implementing a secure multi-agent pipeline with verifiable action logs.

05

Part V — AI Forensics and Compliance

Agent session lifecycle forensics, evidence collection for AI incidents, the Five Questions Framework for AI agent post-mortems, EU AI Act Article 12 implementation, and the AI governance integration pattern for existing DevSecOps programs.

Launching 2026 — Early access available

Be the first to read Volume V

Join the waitlist for early access, release announcements, and sample chapters. No spam — one email when it ships.