AI Red Teaming: Volume 2: Applied Practice and Program Operations (AI Red Teaming - A Practical Guide to Safer AI)
Format:
Hardcover
In stock
1.11 kg
Yes
New
Amazon
USA
AI Red Teaming — Volume 2: Applied Practice & Program Operations

You have the methodology. Now make it work in the real world.

Volume 1 gave you the engagement framework: threat modeling, testing techniques, lifecycle, templates, automation, and metrics. This volume answers the questions that surface the moment you try to operationalize that framework inside an actual organization. What does a RAG data leakage incident look like end to end? How do you staff a team that can execute AI red teaming continuously? When the CFO asks how much an unpatched finding could cost, what number do you give? And when the incident happens anyway, who says what to whom?

Part III opens with eight detailed hypothetical case studies, each built from publicly documented incident patterns and recurring vulnerability classes. They span the full attack surface: retrieval-augmented generation leaking documents across tenant boundaries, an agent escalating privileges through chained tool calls, supply chain poisoning degrading a predictive model over months, hallucination-driven liability in a financial advisory system, memory features quietly accumulating personal data, and indirect injection manipulating procurement decisions. Every case study follows the same structure (scenario, attack chain, root cause, controls, regression tests, and detection signals), so you can use them as templates for your own tabletop exercises and post-incident reviews.

Chapter 12 translates findings into defensive action. The attack-to-control mapping shows which controls address which attack classes, organized by enforcement point rather than vendor product. Implementation playbooks cover the controls that matter most, with baseline defensive profiles for common architectures. The remediation workflow prevents the most common failure mode: findings that are triaged but never fixed.

Chapter 13 maps emerging trends (agentic AI, multi-model orchestration, fine-tuning risks, synthetic data supply chains, and regulatory momentum) and translates each into concrete testing priorities for the next twelve months.

Chapter 14 provides hands-on exercises that produce reusable artifacts: annotated architecture diagrams, canary-based regression suites, calibrated severity rubrics, incident playbooks validated through tabletop simulation, and automation skeletons wired into CI/CD.

Part IV shifts to program operations. Chapter 15 covers team structure: required skillsets, operating models, internal versus external teaming, training pathways, and integrating AI red teaming into AppSec, SOC, GRC, and engineering workflows without creating process friction.

Chapter 16 is quantitative risk assessment: scenario-based models that translate red team evidence into expected annual loss ranges, risk reduction ROI, and sensitivity analyses. It exists so the conversation with the CFO ends with a budget decision, not "further analysis."

Chapter 17 addresses vendor and third-party AI testing, the reality that most organizations consume AI through systems they do not control: contractual frameworks, constraint mapping, black-box and gray-box approaches, and evidence requirements for vendor risk governance.

Chapter 18 provides crisis communication playbooks for AI incidents, where the evidence is probabilistic, the blast radius is uncertain, and disclosure obligations may span multiple jurisdictions.

Nine appendices complete the reference: engagement checklists, a controls quick reference, regulatory mappings across the EU AI Act, NIST AI RMF, and ISO 42001, a full glossary, a data leakage surface catalog, cloud platform security, international legal frameworks, a proposed AI vulnerability taxonomy, and cross-industry implementation guides.
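The scenario-based risk models described for Chapter 16 can be sketched in a few lines. The example below is illustrative and not taken from the book: it assumes incidents per year follow a Poisson distribution and per-incident loss is uniform within a range, and all figures (0.4 incidents/year, USD 50k-250k per incident, a USD 20k/year control that halves frequency) are invented for the sake of the sketch.

```python
import math
import random

def _poisson(rng, lam):
    """Knuth's method for sampling an incident count ~ Poisson(lam)."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p < threshold:
            return k
        k += 1

def expected_annual_loss(freq_per_year, loss_low, loss_high,
                         trials=100_000, seed=7):
    """Monte Carlo estimate of expected annual loss for one risk scenario.

    Assumes incidents/year ~ Poisson(freq_per_year) and uniform per-incident
    loss in [loss_low, loss_high]; a real model would calibrate both
    distributions from red team evidence.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        n = _poisson(rng, freq_per_year)
        total += sum(rng.uniform(loss_low, loss_high) for _ in range(n))
    return total / trials

# Hypothetical scenario: RAG tenant-boundary leak, ~0.4 incidents/year,
# USD 50k-250k loss per incident.
ale_before = expected_annual_loss(0.4, 50_000, 250_000)
# Hypothetical control halves the frequency and costs USD 20k/year to run.
ale_after = expected_annual_loss(0.2, 50_000, 250_000)
control_cost = 20_000
roi = (ale_before - ale_after - control_cost) / control_cost
print(f"ALE before: ~${ale_before:,.0f}  after: ~${ale_after:,.0f}  ROI: {roi:.2f}x")
```

The analytic expected loss here is simply frequency times mean loss (0.4 × 150k ≈ 60k before the control), so the Monte Carlo machinery only earns its keep once the distributions become less tidy; sensitivity analysis then falls out of re-running the estimate while varying one input at a time.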
IMPORT EASILY
By buying this product you can deduct the VAT using your RUT number.
DOES NOT COUNT AGAINST YOUR DUTY-FREE ALLOWANCE
If your cart contains only books or CDs, it does not count against your duty-free allowance and you can buy up to US$ 1,000 per year.