New · Academy · Enterprise · Offensive
Explore AI evasion tactics with a new Academy module
The AI Evasion - Foundations module covers inference-time evasion attacks against AI models, showing you how to manipulate inputs to bypass classifiers and force targeted misclassifications in both white-box and black-box settings. Designed for anyone securing deployed AI models, the module anchors these concepts through hands-on practice with the GoodWords attack, which highlights a fundamental approach to adversarial manipulation.
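The GoodWords attack exploits the fact that many text classifiers score documents word by word: appending enough tokens the model associates with the benign class can flip its decision without removing the malicious payload. As a rough illustration only (the corpus, model, and word-discovery step below are assumptions, not course material), this sketch trains a toy Naive Bayes spam filter and greedily appends ham-indicative words to a spam message until it evades detection:

```python
# A minimal, self-contained sketch of a GoodWords-style evasion attack
# against a toy bag-of-words spam classifier. The corpus, word discovery,
# and model are illustrative assumptions, not material from the module.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Hypothetical training corpus: 1 = spam, 0 = ham.
messages = [
    "win free money now claim prize",
    "free prize click now winner",
    "cheap meds win cash instantly",
    "meeting agenda for the project tomorrow",
    "please review the attached project report",
    "lunch plans and meeting notes attached",
]
labels = [1, 1, 1, 0, 0, 0]

vectorizer = CountVectorizer()
clf = MultinomialNB().fit(vectorizer.fit_transform(messages), labels)

attack = "win free money now"
print(clf.predict(vectorizer.transform([attack])))  # flagged as spam: [1]

# GoodWords: append the tokens the classifier associates most strongly
# with ham until the decision flips. A black-box attacker would discover
# such words by probing the model; here we read them off the weights.
vocab = np.array(vectorizer.get_feature_names_out())
ham_bias = clf.feature_log_prob_[0] - clf.feature_log_prob_[1]  # classes_ == [0, 1]
for word in vocab[np.argsort(ham_bias)[::-1]]:
    attack += " " + word
    if clf.predict(vectorizer.transform([attack]))[0] == 0:
        print("evaded after appending good words:", attack)
        break
```

In a true black-box setting the attacker cannot read the model's weights and must infer good words from query responses alone, which is part of what distinguishes the white-box and black-box scenarios the module covers.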
Key learning outcomes include:
- Impacts of data poisoning attacks on AI models
- Differences between training-time attacks and inference-time manipulation
- Label manipulation techniques to alter ground truth (see the sketch after this list)
- Effects of adversarial directives when used in tandem with prompt injection
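To ground the first and third outcomes, the sketch below shows label-flipping data poisoning: an attacker who can alter training labels corrupts the ground truth, and the model trained on the poisoned data degrades relative to a clean baseline. The dataset, model, and 30% flip rate are hypothetical choices for illustration, not content from the module:

```python
# A minimal sketch of label-flipping data poisoning (all parameters here
# are illustrative assumptions, not content from the Academy module).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Clean baseline: train on the true labels.
clean = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("clean accuracy:   ", clean.score(X_test, y_test))

# Training-time attack: flip the labels of 30% of the training set,
# silently altering the ground truth the model learns from.
rng = np.random.default_rng(0)
flip = rng.choice(len(y_train), size=int(0.3 * len(y_train)), replace=False)
y_poisoned = y_train.copy()
y_poisoned[flip] = 1 - y_poisoned[flip]

poisoned = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
print("poisoned accuracy:", poisoned.score(X_test, y_test))
```

Contrast this with the GoodWords sketch above: poisoning happens at training time and requires access to the data pipeline, while evasion happens at inference time against a fixed, already-trained model.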
