Adversarial Machine Learning

Deep dive into the science of adversarial attacks and defenses for machine learning. Learn the mathematics and implementation of FGSM, PGD, and C&W attacks, understand data poisoning and model inversion, build robust defenses with adversarial training and certified robustness, and use the Adversarial Robustness Toolbox (ART) for practical experimentation.

7 Lessons · 40+ Examples · ~4hr Total Time · 📈 Research-Grade

What You'll Learn

This course provides research-level coverage of adversarial ML, from foundational attacks to state-of-the-art defenses.

Attack Techniques

Master FGSM (Fast Gradient Sign Method), PGD (Projected Gradient Descent), C&W (Carlini-Wagner), and other evasion attacks. Understand gradient-based, score-based, and decision-based attack paradigms.
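As a flavor of what the attack lessons cover, here is a minimal NumPy sketch of FGSM and PGD against a binary logistic-regression model. The model choice and the helper names (`input_grad`, `fgsm`, `pgd`) are illustrative assumptions, not part of the course materials:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def input_grad(x, y, w, b):
    """Gradient of the binary cross-entropy loss w.r.t. the input x."""
    p = sigmoid(np.dot(w, x) + b)
    return (p - y) * w

def fgsm(x, y, w, b, eps):
    """Fast Gradient Sign Method: one signed gradient step of size eps."""
    return x + eps * np.sign(input_grad(x, y, w, b))

def pgd(x, y, w, b, eps, alpha=0.05, steps=10):
    """Projected Gradient Descent: iterated FGSM steps, each projected
    back into the L-infinity ball of radius eps around the clean input."""
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = x_adv + alpha * np.sign(input_grad(x_adv, y, w, b))
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project into the eps-ball
    return x_adv
```

The same structure carries over to deep networks; the only change is that `input_grad` is computed by backpropagation instead of in closed form.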

💣 Data Poisoning

Learn backdoor attacks, label flipping, and clean-label poisoning techniques that corrupt models during training.
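Of these, label flipping is the simplest to sketch. The function below is an illustrative helper (not from the course materials) that corrupts a chosen fraction of training labels, guaranteeing each flipped label differs from the original:

```python
import numpy as np

def label_flip_poison(y, flip_frac, num_classes, rng=None):
    """Label-flipping poisoning: reassign a random fraction of the
    training labels to a different (wrong) class."""
    rng = np.random.default_rng(rng)
    y_poisoned = y.copy()
    n_flip = int(len(y) * flip_frac)
    idx = rng.choice(len(y), size=n_flip, replace=False)
    # Shift each chosen label by a random nonzero offset mod num_classes,
    # so the new label always differs from the old one.
    offsets = rng.integers(1, num_classes, size=n_flip)
    y_poisoned[idx] = (y_poisoned[idx] + offsets) % num_classes
    return y_poisoned
```

Backdoor and clean-label attacks are subtler: they leave labels plausible and instead embed a trigger pattern in the inputs, which the lessons cover in detail.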

🛡 Defense Strategies

Implement adversarial training, defensive distillation, input preprocessing, and certified robustness defenses.
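Adversarial training, the first of these defenses, trains on worst-case perturbed inputs instead of clean ones. Below is a minimal sketch for logistic regression, using FGSM as the inner attack; the function name and hyperparameters are illustrative assumptions, not the course's implementation:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def adversarial_train(X, y, eps=0.1, lr=0.5, epochs=100):
    """Adversarial training for logistic regression: each epoch, craft
    FGSM perturbations against the current weights, then take the
    gradient step on those worst-case inputs instead of the clean ones."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = sigmoid(X @ w + b)
        # FGSM on every example: grad of the loss w.r.t. x is (p - y) * w
        X_adv = X + eps * np.sign((p - y)[:, None] * w)
        p_adv = sigmoid(X_adv @ w + b)
        # Standard logistic-regression gradient step, on adversarial inputs
        w -= lr * X_adv.T @ (p_adv - y) / len(y)
        b -= lr * np.mean(p_adv - y)
    return w, b
```

For deep networks the inner attack is usually multi-step PGD rather than single-step FGSM, which is substantially more expensive but yields stronger robustness.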

🛠 ART Library

Hands-on practice with IBM's Adversarial Robustness Toolbox for implementing attacks and defenses.

Course Lessons

Follow the lessons in order for a comprehensive understanding of adversarial machine learning.

Prerequisites

What you need before starting this course.

Before You Begin:
  • Solid understanding of deep learning (neural networks, backpropagation, loss functions)
  • Proficiency in Python and PyTorch or TensorFlow
  • Basic knowledge of calculus and linear algebra
  • Familiarity with image classification tasks