
Deep Learning in Adversarial Context

Abstract : This thesis studies adversarial attacks and defenses in deep learning. We propose to improve adversarial attacks in terms of speed, magnitude of distortion, and invisibility. We contribute by defining invisibility through smoothness and integrating it into the optimization that produces adversarial examples, yielding smooth adversarial perturbations with lower distortion. To generate adversarial examples more efficiently, we propose an optimization algorithm, the Boundary Projection (BP) attack, built on the structure of the adversarial problem: when the current solution is not adversarial, BP searches against the gradient of the network to induce misclassification; when the current solution is adversarial, BP searches along the decision boundary to minimize the distortion. BP efficiently generates adversarial examples with low distortion. We also study defenses: we apply patch replacement to both images and features, which removes adversarial effects by replacing input patches with the most similar patches from the training data. Experiments show that patch replacement is cheap and robust against adversarial attacks.
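The two-stage alternation described above can be sketched on a toy model. This is a hypothetical illustration of the idea only, not the thesis's actual BP algorithm: it uses a linear two-class classifier (`logits = W @ x`) so that the gradient and decision boundary are exact, and the step sizes and names (`bp_attack_sketch`, `lr`) are assumptions.

```python
import numpy as np

def bp_attack_sketch(x0, W, steps=200, lr=0.1):
    """Hypothetical sketch of the two-stage BP idea for a toy
    linear two-class model (logits = W @ x):
      - not yet adversarial -> step against the margin gradient;
      - adversarial -> shrink distortion while staying adversarial,
        projecting the step along the decision boundary if needed."""
    x = x0.copy()
    true_label = int(np.argmax(W @ x0))
    # unit normal of the decision boundary for this 2-class linear model
    g = W[true_label] - W[1 - true_label]
    g = g / (np.linalg.norm(g) + 1e-12)
    for _ in range(steps):
        pred = int(np.argmax(W @ x))
        if pred == true_label:
            # Stage 1: search against the gradient toward misclassification
            x = x - lr * g
        else:
            # Stage 2: move back toward x0 to reduce distortion
            d = x0 - x
            x_try = x + lr * d
            if int(np.argmax(W @ x_try)) != true_label:
                x = x_try  # still adversarial: accept the shorter perturbation
            else:
                # would cross back: keep only the component along the boundary
                x = x + lr * (d - (d @ g) * g)
    return x
```

For example, with `W = np.eye(2)` and `x0 = np.array([2.0, 1.0])` (predicted class 0), the sketch first crosses the boundary `x[0] = x[1]` and then walks back toward `x0`, ending near the closest misclassified point.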
Contributor : ABES STAR
Submitted on : Wednesday, November 24, 2021 - 5:04:13 PM
Last modification on : Friday, August 5, 2022 - 2:54:52 PM
Long-term archiving on : Friday, February 25, 2022 - 7:45:23 PM


Version validated by the jury (STAR)


  • HAL Id : tel-03447254, version 1


Hanwei Zhang. Deep Learning in Adversarial Context. Neural and Evolutionary Computing [cs.NE]. École normale supérieure de Rennes, 2021. English. ⟨NNT : 2021ENSR0028⟩. ⟨tel-03447254⟩


