Journal article: IEEE Access, 2019

DDSA: a Defense against Adversarial Attacks using Deep Denoising Sparse Autoencoder

Abstract

Given their outstanding performance, Deep Neural Network (DNN) models have been deployed in many real-world applications. However, recent studies have demonstrated that they are vulnerable to small, carefully crafted perturbations, i.e., adversarial examples, which considerably decrease their performance and can lead to devastating consequences, especially in safety-critical applications such as autonomous vehicles, healthcare and face recognition. It is therefore of paramount importance to offer defense solutions that increase the robustness of DNNs against adversarial attacks. In this paper, we propose a novel defense based on a Deep Denoising Sparse Autoencoder (DDSA). The proposed method is applied as a pre-processing step that removes the adversarial noise from input samples before they are fed to the classifier. This pre-processing defense block can be combined with any classifier, without any change to its architecture or training procedure. In addition, the proposed method is a universal defense: it requires no knowledge of the attack, making it usable against any type of attack. Experimental results on the MNIST and CIFAR-10 datasets show that the proposed DDSA defense provides high robustness against a set of prominent attacks under white-, gray- and black-box settings, and outperforms state-of-the-art defense methods.
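The abstract's core idea, an autoencoder trained to reconstruct clean inputs from noisy ones, used as a drop-in denoising block in front of an unmodified classifier, can be sketched as follows. This is a minimal illustrative PyTorch sketch, not the paper's exact architecture: the layer sizes, the L1 sparsity penalty, and the sign-noise perturbation (FGSM-like) are all assumptions made here for brevity.

```python
import torch
import torch.nn as nn

class DenoisingSparseAE(nn.Module):
    """Hypothetical minimal stand-in for the DDSA pre-processing block."""

    def __init__(self, dim=784, hidden=256, sparsity_weight=1e-3):
        super().__init__()
        # One hidden layer for brevity; the paper uses a deeper network.
        self.encoder = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU())
        self.decoder = nn.Sequential(nn.Linear(hidden, dim), nn.Sigmoid())
        self.sparsity_weight = sparsity_weight

    def forward(self, x):
        h = self.encoder(x)
        return self.decoder(h), h

    def loss(self, x_noisy, x_clean):
        recon, h = self.forward(x_noisy)
        # Reconstruction term plus an L1 sparsity penalty on the
        # hidden activations (one common way to enforce sparsity).
        return (nn.functional.mse_loss(recon, x_clean)
                + self.sparsity_weight * h.abs().mean())

# One illustrative training step on random data standing in for MNIST.
ae = DenoisingSparseAE()
opt = torch.optim.Adam(ae.parameters(), lr=1e-3)
x_clean = torch.rand(8, 784)
# Crude FGSM-like perturbation as a placeholder adversarial example.
x_adv = (x_clean + 0.3 * torch.sign(torch.randn_like(x_clean))).clamp(0, 1)
loss = ae.loss(x_adv, x_clean)
opt.zero_grad()
loss.backward()
opt.step()

# At inference time, the autoencoder denoises inputs before they reach
# the (unchanged) classifier: logits = classifier(ae(x_adv)[0]).
denoised, _ = ae(x_adv)
```

Because the block only maps inputs to denoised inputs of the same shape, it can be prepended to any classifier without retraining it, which is what makes the defense classifier-agnostic and attack-agnostic.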
Main file: hal-02349625.pdf (2.05 MB). Origin: publisher files allowed on an open archive.

Dates and versions

hal-02349625, version 1 (07-07-2020)

Cite

Yassine Bakhti, Sid Ahmed Fezza, Wassim Hamidouche, Olivier Déforges. DDSA: a Defense against Adversarial Attacks using Deep Denoising Sparse Autoencoder. IEEE Access, 2019, 7, pp.160397-160407. ⟨10.1109/ACCESS.2019.2951526⟩. ⟨hal-02349625⟩