New · Academy · Enterprise · Offensive
Discover gradient-based adversarial attacks in new Academy Module
The AI Evasion - First Order Attacks module provides a deep exploration of the mechanics of gradient-based evasion techniques that target neural network classifiers. Through hands-on exercises, you’ll learn how to implement and analyze key evasion techniques, including the Fast Gradient Sign Method (FGSM), Iterative FGSM (I-FGSM), and DeepFool, to develop a better understanding of how neural network classifiers can be manipulated and how to defend against such attacks. Key learning outcomes include (see the illustrative sketches after the list):
- Evaluating model robustness and applying defensive strategies such as adversarial training and input filtering
- Understanding the mechanics of first-order evasion attacks, including how gradients are used to manipulate model predictions
- Applying input perturbation norms to measure the size and impact of adversarial changes
- Assessing the gap between a model's accuracy on clean inputs and its true robustness under attack
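
To make the attack mechanics concrete, here is a minimal PyTorch sketch of FGSM. It is not taken from the module itself: `model` (a classifier), `x` (a batch of inputs scaled to [0, 1]), `y` (true labels), and `epsilon` are all assumed placeholders.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon):
    """One-step Fast Gradient Sign Method (untargeted).

    Nudges each input feature by +/- epsilon in the direction that
    increases the classification loss, so the resulting perturbation
    has L-infinity norm exactly epsilon.
    """
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step along the sign of the loss gradient w.r.t. the input.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    # Keep the result a valid input in [0, 1].
    return x_adv.clamp(0.0, 1.0).detach()
```

The perturbation-norm outcome can be checked directly on the result: `(x_adv - x).abs().amax()` gives the L-infinity size of the change, while `(x_adv - x).flatten(1).norm(dim=1)` gives the per-sample L2 size.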

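The iterative variant can be sketched in the same style: repeated small FGSM steps, with the running perturbation projected back into the epsilon-ball around the original input after each step. This continues the sketch above under the same assumptions, with `alpha` (per-step size) and `steps` (iteration count) as hypothetical parameters.

```python
def ifgsm_attack(model, x, y, epsilon, alpha, steps):
    """Iterative FGSM: several steps of size alpha, re-projected into
    the L-infinity epsilon-ball around the original input x."""
    x = x.clone().detach()
    x_adv = x.clone()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        # Small signed step, then project back into the epsilon-ball
        # and the valid [0, 1] input range.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x + (x_adv - x).clamp(-epsilon, epsilon)
        x_adv = x_adv.clamp(0.0, 1.0)
    return x_adv
```

The projection step is what separates I-FGSM from simply running FGSM repeatedly: it bounds the total perturbation by epsilon regardless of how many steps are taken.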