Training Modern Deep Neural Networks for Memory-Fault Robustness.

Authors: Vincent Gripon, François Leduc-Primeau, François Gagnon, Ghouthi Boukli Hacene, Amal Ben Soussia

DOI:

Keywords:

Abstract: Because deep neural networks (DNNs) rely on a large number of parameters and computations, their implementation in energy-constrained systems is challenging. In this paper, we investigate the solution of reducing the supply voltage of the memories used in the system, which results in bit-cell faults. We explore the robustness of state-of-the-art DNN architectures towards such defects and propose a regularizer meant to mitigate their effect on accuracy. Our experiments clearly demonstrate the interest of operating the system in a faulty regime to save energy without sacrificing accuracy.
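The abstract describes training DNNs so that they stay accurate when low-voltage memories introduce bit-cell faults in the stored weights. The sketch below is a minimal, hypothetical illustration of one such fault-aware training scheme in PyTorch, not the paper's exact regularizer: during each training forward pass, a random fraction of weights is read as faulty (modeled here as stuck-at-zero), so the learned parameters must remain useful under memory faults. The FaultyLinear layer name, the stuck-at-zero fault model, and the p_fault rate are all illustrative assumptions.

```python
# Minimal sketch of fault-aware training (assumed fault model: a fraction
# p_fault of weight cells reads as 0 on every forward pass during training).
# This is an illustration of the general idea, not the paper's method.
import torch
import torch.nn as nn
import torch.nn.functional as F


class FaultyLinear(nn.Linear):
    """Linear layer whose weights are read through a simulated faulty memory."""

    def __init__(self, in_features, out_features, p_fault=0.01):
        super().__init__(in_features, out_features)
        self.p_fault = p_fault  # probability that a weight cell is faulty

    def forward(self, x):
        if self.training and self.p_fault > 0:
            # Sample a fresh fault mask each forward pass; faulty cells read as 0.
            mask = (torch.rand_like(self.weight) > self.p_fault).float()
            w = self.weight * mask
        else:
            w = self.weight  # fault-free read at evaluation time
        return F.linear(x, w, self.bias)


# Tiny usage example on random data.
model = nn.Sequential(
    FaultyLinear(32, 64, p_fault=0.02),
    nn.ReLU(),
    FaultyLinear(64, 10, p_fault=0.02),
)
opt = torch.optim.SGD(model.parameters(), lr=0.1)

x = torch.randn(128, 32)
y = torch.randint(0, 10, (128,))
for _ in range(5):
    opt.zero_grad()
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    opt.step()
```

Re-sampling the fault mask at every step acts much like dropout applied to weights: the network cannot rely on any single bit-cell being read correctly, which is the kind of robustness the abstract targets.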

References (26)
Ronald G. Dreslinski, Michael Wieckowski, David Blaauw, Dennis Sylvester, Trevor Mudge, Near-Threshold Computing: Reclaiming Moore's Law Through Energy Efficient Integrated Circuits, Proceedings of the IEEE, vol. 98, pp. 253–266, 2010, DOI: 10.1109/JPROC.2009.2034764
Zhuo Wang, Kyong Ho Lee, Naveen Verma, Overcoming Computational Errors in Sensing Platforms Through Embedded Machine-Learning Kernels, IEEE Transactions on Very Large Scale Integration Systems, vol. 23, pp. 1459–1470, 2015, DOI: 10.1109/TVLSI.2014.2342153
Yunji Chen, Tao Luo, Shaoli Liu, Shijin Zhang, Liqiang He, Jia Wang, Ling Li, Tianshi Chen, Zhiwei Xu, Ninghui Sun, Olivier Temam, DaDianNao: A Machine-Learning Supercomputer, International Symposium on Microarchitecture, pp. 609–622, 2014, DOI: 10.1109/MICRO.2014.58
Ilya Sutskever, Geoffrey Hinton, Alex Krizhevsky, Ruslan Salakhutdinov, Nitish Srivastava, Dropout: A Simple Way to Prevent Neural Networks from Overfitting, Journal of Machine Learning Research, vol. 15, pp. 1929–1958, 2014
Olivier Temam, A Defect-Tolerant Accelerator for Emerging High-Performance Applications, ACM SIGARCH Computer Architecture News, vol. 40, pp. 356–367, 2012, DOI: 10.1145/2366231.2337200
Sungjoo Yoo, Taelim Choi, Lu Yang, Dongjun Shin, Eunhyeok Park, Yong-Deok Kim, Compression of Deep Convolutional Neural Networks for Fast and Low Power Mobile Applications, arXiv: Computer Vision and Pattern Recognition, 2015
Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun, Deep Residual Learning for Image Recognition, IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778, 2016, DOI: 10.1109/CVPR.2016.90
Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun, Identity Mappings in Deep Residual Networks, Computer Vision – ECCV 2016, pp. 630–645, 2016, DOI: 10.1007/978-3-319-46493-0_38
Yoshua Bengio, Matthieu Courbariaux, Ran El-Yaniv, Itay Hubara, Daniel Soudry, Binarized Neural Networks: Training Deep Neural Networks with Weights and Activations Constrained to +1 or -1, arXiv: Learning, 2016
Guillaume Soulié, Vincent Gripon, Maëlys Robert, Compression of Deep Neural Networks on the Fly, International Conference on Artificial Neural Networks, pp. 153–160, 2016, DOI: 10.1007/978-3-319-44781-0_19