Denoising Autoencoders for Overgeneralization in Neural Networks

Author: Giacomo Spigler

DOI: 10.1109/TPAMI.2019.2909876

Abstract: Despite recent developments that have allowed neural networks to achieve impressive performance on a variety of applications, these models are intrinsically affected by the problem of overgeneralization, due to their partitioning of the full input space into the fixed set of target classes used during training. Thus it is possible for novel inputs, belonging to categories unknown at training time or even completely unrecognizable to humans, to fool the system into classifying them as one of the known classes, with a high degree of confidence. This can lead to security problems in critical applications, and it is closely linked to open set recognition and 1-class recognition. This paper presents a way to compute a confidence score using the reconstruction error of denoising autoencoders, and shows how it can correctly identify regions of the input space close to the training distribution. The proposed solution is tested on benchmarks for ‘fooling’ constructed from the MNIST and Fashion-MNIST datasets.
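The core idea in the abstract, scoring an input by how well a model trained on in-distribution data can reconstruct it, can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's method: it substitutes a rank-1 PCA projection for a trained denoising autoencoder, and the exponential mapping from reconstruction error to a confidence in [0, 1] is an assumed form chosen for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

# In-distribution data: points near a 1-D subspace of R^5, plus small noise.
direction = rng.normal(size=5)
direction /= np.linalg.norm(direction)
train = np.outer(rng.normal(size=200), direction) + 0.05 * rng.normal(size=(200, 5))

# Stand-in "autoencoder": rank-1 PCA projection fitted on the training data.
# (The paper uses a trained denoising autoencoder instead.)
mean = train.mean(axis=0)
_, _, vt = np.linalg.svd(train - mean, full_matrices=False)
basis = vt[:1]  # top principal direction

def reconstruct(x):
    # Project x onto the learned subspace and map back to input space.
    return mean + (x - mean) @ basis.T @ basis

def confidence(x, scale=1.0):
    # Higher reconstruction error -> lower confidence; maps to (0, 1].
    err = np.mean((x - reconstruct(x)) ** 2)
    return float(np.exp(-err / scale))

in_dist = np.outer(rng.normal(size=1), direction)[0]   # lies on the subspace
out_dist = 3.0 * rng.normal(size=5)                    # far from the subspace

print(confidence(in_dist), confidence(out_dist))
```

An input drawn from the training manifold reconstructs almost perfectly and scores near 1, while an arbitrary point reconstructs poorly and scores near 0, which is the behavior the paper exploits to flag fooling and unknown-category inputs.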

References (8)
Diederik P. Kingma, Jimmy Ba. Adam: A Method for Stochastic Optimization. arXiv (2014).
Amir Ahmad, Lipika Dey. A k-mean clustering algorithm for mixed numeric and categorical data. Data & Knowledge Engineering, vol. 63, pp. 503–527 (2007). DOI: 10.1016/J.DATAK.2007.03.016
Pascal Vincent, Hugo Larochelle, Yoshua Bengio, Pierre-Antoine Manzagol. Extracting and composing robust features with denoising autoencoders. Proceedings of the 25th International Conference on Machine Learning (ICML '08), pp. 1096–1103 (2008). DOI: 10.1145/1390156.1390294
Markos Markou, Sameer Singh. Novelty detection: a review—part 1: statistical approaches. Signal Processing, vol. 83, pp. 2481–2497 (2003). DOI: 10.1016/J.SIGPRO.2003.07.018
Fabian Pedregosa, Gaël Varoquaux, Alexandre Gramfort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Mathieu Blondel, Andreas Müller, Joel Nothman, Gilles Louppe, Peter Prettenhofer, Ron Weiss, Vincent Dubourg, Jake Vanderplas, Alexandre Passos, David Cournapeau, Matthieu Brucher, Matthieu Perrot, Édouard Duchesnay. Scikit-learn: Machine Learning in Python. Journal of Machine Learning Research, vol. 12, pp. 2825–2830 (2011).
Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, Li Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision, vol. 115, pp. 211–252 (2015). DOI: 10.1007/S11263-015-0816-Y
Ilya Sutskever, Ian J. Goodfellow, Gregory S. Corrado, Michael Isard, Matthieu Devin, Vincent Vanhoucke, Martin Wicke, Manjunath Kudlur, Rajat Monga, Vijay Vasudevan, Geoffrey Irving, Yangqing Jia, Fernanda B. Viégas, Kunal Talwar, Martin Wattenberg, Ashish Agarwal, Martín Abadi, Yuan Yu, Rafal Józefowicz, Craig Citro, Sherry Moore, Paul Barham, Benoit Steiner, Pete Warden, Josh Levenberg, Derek Gordon Murray, Paul A. Tucker, Jonathon Shlens, Jeffrey Dean, Xiaoqiang Zheng, Chris Olah, Andy Davis, Dan Mané, Mike Schuster, Sanjay Ghemawat, Andrew Harp, Oriol Vinyals, Eugene Brevdo, Zhifeng Chen, Lukasz Kaiser. TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems. arXiv (2015).
Han Xiao, Kashif Rasul, Roland Vollgraf. Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms. arXiv (2017).