Enhancing Resilience Against Adversarial Attacks of Deep Neural Networks Using Efficient Two-Step Adversarial Defense
Abstract
In recent years, deep neural networks have demonstrated outstanding performance in many machine
learning tasks. However, researchers have discovered that these state-of-the-art models are
vulnerable to adversarial examples: legitimate examples altered by small perturbations that are
imperceptible to human eyes. Adversarial training, which augments the training data with adversarial
examples during the training process, is a well-known defense for improving the robustness of
a model against adversarial attacks. However, this robustness is only effective against the same attack
method used during adversarial training. Madry et al. have suggested that iterative multi-step
adversarial attacks, and in particular projected gradient descent (PGD), may be considered the
universal first-order adversary, and that adversarial training with PGD therefore confers resistance
against many other first-order attacks. However, the computational cost of adversarial training
with PGD and other multi-step adversarial examples is much higher than that of adversarial
training with simpler attack techniques. In this work, we show how strong adversarial examples
can be generated at a cost similar to that of only two runs of the fast gradient sign method
(FGSM), enabling a defense against adversarial attacks with a robustness level comparable to that
of adversarial training with multi-step adversarial examples. We empirically demonstrate the
effectiveness of the proposed two-step defense approach against different attack methods and its
improvements over existing defense strategies.
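To make the cost comparison concrete, the sketch below shows a generic two-step gradient-sign attack within an L-infinity budget, written in PyTorch. It is a minimal illustration under assumed details (the function name `two_step_attack`, the step size `alpha`, and the projection/clipping choices are not taken from the thesis) and is not the thesis's exact method.

```python
# Hypothetical sketch: a two-step gradient-sign attack (illustration only).
import torch
import torch.nn.functional as F

def two_step_attack(model, x, y, eps, alpha):
    """Craft adversarial examples with two gradient-sign steps.

    eps   : overall L-infinity perturbation budget
    alpha : per-step size (assumed here, e.g. eps / 2)
    """
    x_adv = x.clone().detach()
    for _ in range(2):  # two steps ~ cost of two FGSM runs
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        # Project back into the eps-ball around the clean input and the valid pixel range.
        x_adv = torch.max(torch.min(x_adv, x + eps), x - eps).clamp(0.0, 1.0)
    return x_adv.detach()
```

Each iteration costs roughly one forward and one backward pass, so the total cost matches the stated budget of about two FGSM runs while still producing multi-step (iteratively refined) perturbations.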
Citation
Chang, Ting-Jui (2018). Enhancing Resilience Against Adversarial Attacks of Deep Neural Networks Using Efficient Two-Step Adversarial Defense. Master's thesis, Texas A&M University. Available electronically from https://hdl.handle.net/1969.1/174573.