new
Academy
Enterprise
Defensive
Master AI Privacy and Defense with two new Academy modules
The
AI Privacy
module introduces you to one of the most critical privacy threats in machine learning: the ability to determine whether a specific individual’s data was included in a model’s training set. You’ll explore how overfitting creates detectable behavioral signals, implement real membership inference attacks using the shadow model methodology, and apply industry‑recommended defenses such as differential privacy.Key learning outcomes:
- Implementing shadow models and attack classifiers to detect membership based on prediction confidence patterns
- Understanding differential privacy and applying DP-SGD to train privacy‑preserving models
- Using PATE to achieve privacy through architectural separation
- Evaluating and mitigating privacy leakage across machine learning systems

The
AI Defense
module introduces you to the strategies and techniques for protecting AI applications from attacks explored in the AI Red Teamer path. You’ll explore how to proactively harden models through adversarial training and tuning, as well as implement LLM guardrails to enforce safety and reliability at the application layer.Key learning outcomes:
- Understanding adversarial tuning and applying it to refine model behavior against evolving threats
- Learning the basic concepts of LLM guardrails and implementing them at the application layer
- Building multi-layered defenses that combine model-level and application-level safeguards
- Designing AI applications that maintain security, reliability, and user trust against a variety of attack vectors
