I. J. Goodfellow, J. Shlens, and C. Szegedy, Explaining and harnessing adversarial examples, 2014.

N. Papernot, P. McDaniel, S. Jha, M. Fredrikson, Z. B. Celik et al., The limitations of deep learning in adversarial settings, Proc. IEEE Eur. Symp. Secur. Privacy (EuroS&P), pp.372-387, 2016.

M. Cisse, Y. Adi, N. Neverova, and J. Keshet, Houdini: Fooling deep structured prediction models, 2017.

M. Melis, A. Demontis, B. Biggio, G. Brown, G. Fumera et al., Is deep learning safe for robot vision? Adversarial examples against the iCub humanoid, Proc. IEEE Int. Conf. Comput. Vis. Workshops (ICCVW), pp.751-759, 2017.

Y. Liu, X. Chen, C. Liu, and D. Song, Delving into transferable adversarial examples and black-box attacks, 2016.

S. A. Fezza, Y. Bakhti, W. Hamidouche, and O. Déforges, Perceptual evaluation of adversarial attacks for CNN-based image classification, Proc. 11th IEEE Int. Conf. Qual. Multimedia Exper. (QoMEX), pp.1-6, 2019. URL: https://hal.archives-ouvertes.fr/hal-02302604

N. Papernot, P. McDaniel, and I. Goodfellow, Transferability in machine learning: From phenomena to black-box attacks using adversarial samples, 2016.

A. Kerckhoffs, La cryptographie militaire, J. des Sci. Militaires, vol.9, pp.5-38, 1883.

C. E. Shannon, Communication theory of secrecy systems, Bell Syst. Tech. J., vol.28, no.4, pp.656-715, 1949.

C. Guo, M. Rana, M. Cisse, and L. van der Maaten, Countering adversarial images using input transformations, 2017.

F. Liao, M. Liang, Y. Dong, T. Pang, X. Hu et al., Defense against adversarial attacks using high-level representation guided denoiser, Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp.1778-1787, 2018.

N. Carlini and D. Wagner, Towards evaluating the robustness of neural networks, Proc. IEEE Symp. Secur. Privacy (SP), pp.39-57, 2017.

C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan et al., Intriguing properties of neural networks, 2013.

X. Yuan, P. He, Q. Zhu, and X. Li, Adversarial examples: Attacks and defenses for deep learning, IEEE Trans. Neural Netw. Learn. Syst., vol.30, no.9, pp.2805-2824, 2019.

N. Akhtar and A. Mian, Threat of adversarial attacks on deep learning in computer vision: A survey, IEEE Access, vol.6, pp.14410-14430, 2018.

S. Qiu, Q. Liu, S. Zhou, and C. Wu, Review of artificial intelligence adversarial attack and defense technologies, Appl. Sci., vol.9, no.5, p.909, 2019.

Y. Dong, F. Liao, T. Pang, H. Su, J. Zhu et al., Boosting adversarial attacks with momentum, Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp.9185-9193, 2018.

A. Madry, A. Makelov, L. Schmidt, D. Tsipras, and A. Vladu, Towards deep learning models resistant to adversarial attacks, 2017.

F. Tramèr, A. Kurakin, N. Papernot, I. Goodfellow, D. Boneh et al., Ensemble adversarial training: Attacks and defenses, 2017.

S.-M. Moosavi-Dezfooli, A. Fawzi, and P. Frossard, DeepFool: A simple and accurate method to fool deep neural networks, Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp.2574-2582, 2016.

N. Papernot, P. McDaniel, X. Wu, S. Jha, and A. Swami, Distillation as a defense to adversarial perturbations against deep neural networks, Proc. IEEE Symp. Secur. Privacy (SP), pp.582-597, 2016.

G. Hinton, O. Vinyals, and J. Dean, Distilling the knowledge in a neural network, 2015.

D. Meng and H. Chen, MagNet: A two-pronged defense against adversarial examples, Proc. ACM SIGSAC Conf. Comput. Commun. Secur. (CCS), pp.135-147, 2017.

Y. Bengio, L. Yao, G. Alain, and P. Vincent, Generalized denoising auto-encoders as generative models, Proc. Adv. Neural Inf. Process. Syst. (NIPS), pp.899-907, 2013.

N. Carlini, G. Katz, C. Barrett, and D. L. Dill, Provably minimally-distorted adversarial examples, 2018.

Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, Gradient-based learning applied to document recognition, Proc. IEEE, vol.86, no.11, pp.2278-2324, 1998.

A. Krizhevsky and G. Hinton, Learning multiple layers of features from tiny images, Univ. of Toronto, Tech. Rep., 2009.

N. Papernot et al., Technical report on the CleverHans v2.1.0 adversarial examples library, 2016.

M. Abadi et al., TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. URL: https://www.tensorflow.org

P. Samangouei, M. Kabkab, and R. Chellappa, Defense-GAN: Protecting classifiers against adversarial attacks using generative models, 2018.

N. Carlini, A. Athalye, N. Papernot, W. Brendel, J. Rauber et al., On evaluating adversarial robustness, 2019.

N. Papernot, P. McDaniel, I. Goodfellow, S. Jha, Z. B. Celik et al., Practical black-box attacks against machine learning, Proc. ACM Asia Conf. Comput. Commun. Secur. (ASIA CCS), pp.506-519, 2017.

V. Srinivasan, A. Marban, K.-R. Müller, W. Samek, and S. Nakajima, Robustifying models against adversarial attacks by Langevin dynamics, 2018.

S. Song, Y. Chen, N.-M. Cheung, and C.-C. J. Kuo, Defense against adversarial attacks with Saak transform, 2018.