Smoothness Analysis of Loss Functions of Adversarial Training.

Authors: Sekitoshi Kanai, Yasutoshi Ida, Yuki Yamanaka, Hiroshi Takahashi, Masanori Yamada

DOI:

Keywords:

Abstract: Deep neural networks are vulnerable to adversarial attacks. Recent studies on adversarial robustness focus on the loss landscape in the parameter space, since it is related to optimization performance. These studies conclude that the loss function of adversarial training is hard to optimize with respect to the parameters because it is not smooth: i.e., its gradient is not Lipschitz continuous. However, this analysis ignores the dependence of adversarial attacks on the parameters. Since the worst-case adversarial noise is determined by the model, it should be treated as a function of the model parameters. In this study, we analyze the smoothness of the loss function of adversarial training for binary linear classification while taking this dependence into account. We reveal that the Lipschitz continuity depends on the type of constraint placed on the adversarial attacks in this case. Specifically, under L2 constraints, the adversarial loss is smooth except at zero.
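The L2 case described in the abstract can be illustrated with a minimal sketch. Assuming a logistic loss for binary linear classification (the function name and setup are illustrative, not taken from the paper), the inner maximization over an L2-bounded perturbation has a closed form: the worst perturbation shifts the margin y·w·x down by eps·||w||, so the adversarial loss depends on w only through the margin and the norm ||w||, which is smooth everywhere except at w = 0.

```python
import numpy as np

def adversarial_logistic_loss(w, x, y, eps):
    """Closed-form adversarial logistic loss for binary linear classification.

    For a perturbation delta with ||delta||_2 <= eps, the worst case is
    delta* = -eps * y * w / ||w||, which reduces the margin y * <w, x>
    by eps * ||w||. The resulting loss is smooth in w except at w = 0,
    where ||w|| is not differentiable.
    """
    margin = y * np.dot(w, x) - eps * np.linalg.norm(w)
    return np.log1p(np.exp(-margin))

# Sanity check: the closed form matches evaluating the clean loss at the
# explicitly constructed worst-case perturbation.
w = np.array([1.0, -2.0])
x = np.array([0.5, 0.3])
y, eps = 1.0, 0.1
delta_star = -eps * y * w / np.linalg.norm(w)
direct = np.log1p(np.exp(-y * np.dot(w, x + delta_star)))
closed = adversarial_logistic_loss(w, x, y, eps)
```

The non-smoothness at zero comes entirely from the eps·||w|| term, which is the point the abstract highlights: once the attack's dependence on the parameters is modeled, the L2-constrained loss is otherwise smooth.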

References (25)
Pratik Chaudhari, Anna Choromanska, Stefano Soatto, Yann LeCun, Carlo Baldassi, Christian Borgs, Jennifer Chayes, Levent Sagun, Riccardo Zecchina. Entropy-SGD: biasing gradient descent into wide valleys. Journal of Statistical Mechanics: Theory and Experiment, vol. 2019, pp. 124018, 2019. DOI: 10.1088/1742-5468/AB39D9
Alexey Kurakin, Ian Goodfellow, Samy Bengio. Adversarial Machine Learning at Scale. arXiv, 2016.
Laurent Dinh, Razvan Pascanu, Samy Bengio, Yoshua Bengio. Sharp minima can generalize for deep nets. International Conference on Machine Learning, pp. 1019-1028, 2017.
Stanislaw Jastrzebski, Zachary Kenton, Devansh Arpit, Nicolas Ballas, Asja Fischer, Yoshua Bengio, Amos J. Storkey. Three Factors Influencing Minima in SGD. arXiv, 2017.
Dimitri Bertsekas. Nonlinear Programming. 1995.
Logan Engstrom, Andrew Ilyas, Anish Athalye. Evaluating and Understanding the Robustness of Adversarial Logit Pairing. arXiv, 2018.
Jeremy M. Cohen, Elan Rosenfeld, J. Zico Kolter. Certified Adversarial Robustness via Randomized Smoothing. International Conference on Machine Learning, pp. 1310-1320, 2019.
Ian J. Goodfellow, Jonathon Shlens, Christian Szegedy. Explaining and Harnessing Adversarial Examples. International Conference on Learning Representations, 2015.
Yusuke Tsuzuku, Issei Sato, Masashi Sugiyama. Lipschitz-Margin Training: Scalable Certification of Perturbation Invariance for Deep Neural Networks. Neural Information Processing Systems, vol. 31, pp. 6541-6550, 2018.
Behnam Neyshabur, Srinadh Bhojanapalli, David McAllester, Nathan Srebro. Exploring Generalization in Deep Learning. Neural Information Processing Systems, vol. 30, pp. 5947-5956, 2017.