The AI Evasion - Sparsity Attacks module explores sparsity-constrained adversarial attacks that minimize the number of modified input features. It shows how to craft targeted misclassifications by changing only the most impactful pixels, using L0-focused optimization and saliency-guided feature selection. Complete with hands-on exercises, the module gives you a comprehensive overview of techniques for generating adversarial examples under strict sparsity constraints.
Key learning outcomes include:
  • Mathematical foundations of sparsity-constrained optimization (a representative objective is sketched after this list)
  • FISTA optimization for solving the non-smooth ElasticNet objective with proximal gradient descent and momentum acceleration (see the FISTA sketch below)
  • Single-pixel and pairwise JSMA variants (see the saliency-map sketch below)
  • ElasticNet (EAD) and Jacobian-based Saliency Map Attack (JSMA) paths to sparse perturbations
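For orientation, here is the standard elastic-net (EAD) attack objective as it appears in the literature; this is background context, not necessarily the exact formulation used in the module's exercises:

\min_{x'} \; c \cdot f(x', t) + \lVert x' - x \rVert_2^2 + \beta \lVert x' - x \rVert_1 \quad \text{subject to } x' \in [0, 1]^n

Here x is the original input, x' the adversarial example, f(x', t) a loss that is low when x' is classified as the target class t, β weights the sparsity-promoting L1 term, and c trades off attack success against perturbation size.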
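As a rough illustration of how FISTA handles the non-smooth L1 term in that objective, the sketch below runs a proximal gradient loop with soft-thresholding and Nesterov-style momentum. The grad_smooth_loss helper and all parameter names are assumptions for illustration, not the module's code.

```python
import numpy as np

def soft_threshold(z, thresh):
    # Proximal operator of thresh * ||.||_1: shrink each coordinate toward zero.
    return np.sign(z) * np.maximum(np.abs(z) - thresh, 0.0)

def fista_attack(x, grad_smooth_loss, steps=100, lr=0.01, beta=1e-3):
    """Minimal FISTA sketch for an elastic-net style attack objective.

    grad_smooth_loss(x_adv) is a hypothetical callback returning the gradient
    of the smooth part (classification loss + L2 term) with respect to x_adv.
    """
    x_adv = x.copy()   # current iterate
    y = x.copy()       # momentum ("look-ahead") iterate
    t = 1.0            # momentum coefficient
    for _ in range(steps):
        # Gradient step on the smooth part, taken at the momentum point.
        z = y - lr * grad_smooth_loss(y)
        # Proximal step handles the non-smooth L1 penalty on the perturbation.
        x_next = x + soft_threshold(z - x, lr * beta)
        x_next = np.clip(x_next, 0.0, 1.0)  # keep pixels in a valid range
        # Nesterov-style momentum update.
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_next + ((t - 1.0) / t_next) * (x_next - x_adv)
        x_adv, t = x_next, t_next
    return x_adv
```

The soft-thresholding step is what drives many perturbation coordinates exactly to zero, which is where the sparsity comes from.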
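Similarly, a minimal sketch of the saliency-map idea behind single-pixel JSMA: rank features by how strongly they push the target logit up while pushing the other logits down, then perturb only the top-ranked pixel. The jacobian argument and function names are illustrative assumptions; the pairwise variant scores feature pairs instead of single features.

```python
import numpy as np

def saliency_map(jacobian, target):
    """JSMA-style saliency score per input feature.

    jacobian: array of shape (num_classes, num_features), d logits / d input.
    target: index of the class the attacker wants the model to predict.
    """
    d_target = jacobian[target]                 # gradient of the target logit
    d_others = jacobian.sum(axis=0) - d_target  # combined gradient of all other logits
    # A feature is useful only if increasing it raises the target logit
    # and lowers the combined other logits.
    useful = (d_target > 0) & (d_others < 0)
    return np.where(useful, d_target * np.abs(d_others), 0.0)

def jsma_step(x, jacobian, target, theta=1.0):
    """Perturb the single most salient feature by theta, clipped to [0, 1]."""
    scores = saliency_map(jacobian, target)
    idx = int(np.argmax(scores))
    x_adv = x.copy()
    x_adv[idx] = np.clip(x_adv[idx] + theta, 0.0, 1.0)
    return x_adv, idx
```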