Authors: Simon Lacoste-Julien, Pascal Vincent, William L. Hamilton, Gauthier Gidel, Hugo Berard
DOI:
Keywords:
Abstract: The existence of adversarial examples capable of fooling trained neural network classifiers calls for a much better understanding of possible attacks to guide the development of …