Authors: Sekitoshi Kanai, Yasutoshi Ida, Yuki Yamanaka, Hiroshi Takahashi, Masanori Yamada
DOI:
Keywords:
Abstract: Deep neural networks are vulnerable to adversarial attacks. Recent studies of adversarial robustness focus on the loss landscape in the parameter space, since it is related to optimization performance. These studies conclude that it is hard to optimize the loss function of adversarial training with respect to the parameters because the loss function is not smooth: i.e., its gradient is not Lipschitz continuous. However, this analysis ignores the dependence of adversarial attacks on the parameters. Since adversarial attacks are the worst noise for the models, they should depend on the parameters of the models. In this study, we analyze the smoothness of the adversarial loss for binary linear classification while taking this dependence into account. We reveal that the Lipschitz continuity depends on the type of constraint on the adversarial attacks in this case. Specifically, under L2 constraints, the adversarial loss is smooth except at zero.
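A minimal sketch of the setting the abstract describes, assuming a logistic loss for the binary linear classifier (the paper's exact loss and notation may differ): under an L2-bounded perturbation with budget ε, the worst-case attack on a linear model w·x has the closed form δ* = −ε·y·w/‖w‖₂, so the adversarial loss reduces to ℓ(y·w·x − ε‖w‖₂). The ε‖w‖₂ term is what makes the loss non-smooth only at w = 0, matching the abstract's claim.

```python
import numpy as np

def adversarial_logistic_loss(w, x, y, eps):
    """Worst-case (L2-constrained) logistic loss for a linear classifier.

    Assumes label y in {-1, +1} and perturbation budget ||delta||_2 <= eps.
    The inner maximization has the closed form delta* = -eps * y * w / ||w||_2,
    which reduces the adversarial loss to l(y * w.x - eps * ||w||_2).
    """
    margin = y * np.dot(w, x) - eps * np.linalg.norm(w)
    return np.log1p(np.exp(-margin))

def adversarial_grad(w, x, y, eps):
    """Gradient of the adversarial loss; it is undefined at w = 0
    because of the eps * ||w||_2 term, i.e., the loss is smooth except at zero."""
    margin = y * np.dot(w, x) - eps * np.linalg.norm(w)
    sigma = 1.0 / (1.0 + np.exp(margin))            # equals -dl/dmargin for logistic loss
    dmargin_dw = y * x - eps * w / np.linalg.norm(w)
    return -sigma * dmargin_dw

# Toy usage: loss and gradient are well defined away from w = 0.
x, y, eps = np.array([1.0, -2.0]), 1, 0.1
w = np.array([0.5, 0.3])
print(adversarial_logistic_loss(w, x, y, eps), adversarial_grad(w, x, y, eps))
```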