Simple item record

dc.contributor.advisor          Li, Peng
dc.creator                      Chang, Ting-Jui
dc.date.accessioned             2019-01-23T21:22:06Z
dc.date.available               2020-12-01T07:32:06Z
dc.date.created                 2018-12
dc.date.issued                  2018-11-19
dc.date.submitted               December 2018
dc.identifier.uri               https://hdl.handle.net/1969.1/174573
dc.description.abstract         In recent years, deep neural networks have demonstrated outstanding performance on many machine learning tasks. However, researchers have discovered that these state-of-the-art models are vulnerable to adversarial examples: legitimate inputs altered by small perturbations that are imperceptible to the human eye. Adversarial training, which augments the training data with adversarial examples during the training process, is a well-known defense for improving a model's robustness against adversarial attacks. However, this robustness is effective only against the same attack method used during adversarial training. Madry et al. have demonstrated the effectiveness of iterative multi-step adversarial attacks, and in particular suggested that projected gradient descent (PGD) may be considered the universal first-order adversary, so that adversarial training with PGD implies resistance against many other first-order attacks. However, the computational cost of adversarial training with PGD and other multi-step adversarial examples is much higher than that of adversarial training with simpler attack techniques. In this work, we show how strong adversarial examples can be generated at a cost similar to that of only two runs of the fast gradient sign method (FGSM), enabling a defense against adversarial attacks with a robustness level comparable to that of adversarial training with multi-step adversarial examples. We empirically demonstrate the effectiveness of the proposed two-step defense approach against different attack methods and its improvements over existing defense strategies.  en
dc.format.mimetype              application/pdf
dc.language.iso                 en
dc.subject                      Adversarial attack  en
dc.subject                      white box  en
dc.subject                      iterative attack methods  en
dc.title                        Enhancing Resilience Against Adversarial Attacks of Deep Neural Networks Using Efficient Two-Step Adversarial Defense  en
dc.type                         Thesis  en
thesis.degree.department        Electrical and Computer Engineering  en
thesis.degree.discipline        Computer Engineering  en
thesis.degree.grantor           Texas A & M University  en
thesis.degree.name              Master of Science  en
thesis.degree.level             Masters  en
dc.contributor.committeeMember  Gratz, Paul
dc.contributor.committeeMember  Wang, Zhangyang
dc.type.material                text  en
dc.date.updated                 2019-01-23T21:22:07Z
local.embargo.terms             2020-12-01
local.etdauthor.orcid           0000-0002-0037-6564
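
As background for the abstract above, here is a minimal PyTorch sketch of the kind of two-step attack described: two chained gradient-sign (FGSM-style) steps, so generating each adversarial example costs roughly two forward/backward passes. The function names, the equal split of the perturbation budget eps, and the final projection step are illustrative assumptions, not the thesis's exact algorithm.

```python
import torch
import torch.nn.functional as F

def gradient_sign_step(model, x, y, step_size):
    # One FGSM-style step: perturb x in the sign of the loss gradient.
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + step_size * x.grad.sign()).detach()

def two_step_example(model, x, y, eps):
    # Two chained gradient-sign steps: roughly the cost of two FGSM runs,
    # far cheaper than a full multi-step PGD attack. (Hypothetical sketch;
    # the thesis's second step may differ from a plain repeated FGSM step.)
    x_adv = gradient_sign_step(model, x, y, eps / 2)
    x_adv = gradient_sign_step(model, x_adv, y, eps / 2)
    # Keep the perturbation inside the L-infinity eps-ball around the clean
    # input and within the valid input range [0, 1].
    x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)
    return x_adv.clamp(0.0, 1.0)
```

In adversarial training, such examples would be generated on the fly from each minibatch and mixed into the training loss, which is how a two-step defense can keep the per-example cost near that of two FGSM runs while approaching the robustness of multi-step adversarial training.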

