AxNN: energy-efficient neuromorphic systems using approximate computing

Authors: Swagath Venkataramani, Ashish Ranjan, Kaushik Roy, Anand Raghunathan

DOI: 10.1145/2627369.2627613

Keywords: Artificial intelligence, Xeon, Efficient energy use, Artificial neural network, Key (cryptography), Computer engineering, Neuromorphic engineering, Computer science, Backpropagation, Machine learning, Energy (signal processing), Process (computing)

Abstract: Neuromorphic algorithms, which are comprised of highly complex, large-scale networks of artificial neurons, are increasingly used for a variety of recognition, classification, search and vision tasks. However, their computational and energy requirements can be quite high, hence their energy-efficient implementation is of great interest. We propose a new approach to design energy-efficient hardware implementations of neural networks (NNs) using approximate computing. Our work is motivated by the observations that (i) NNs are used in applications where less-than-perfect results are acceptable, and often inevitable, and (ii) they are highly resilient to inexactness in many (but not all) of their constituent computations. We make two key contributions. First, we propose a method to transform any given NN into an Approximate Neural Network (AxNN). This is performed by adapting the backpropagation technique, which is commonly used to train these networks, to quantify the impact of approximating each neuron on the overall network quality (e.g., classification accuracy), and selectively approximating those neurons that impact it the least. Further, we make the observation that training is a naturally error-healing process that can be used to mitigate the impact of approximations to neurons. Therefore, we incrementally retrain the network with the approximations in place, reclaiming a significant portion of the quality ceded by the approximations. As a second contribution, we propose a programmable and quality-configurable neuromorphic processing engine (qcNPE), which utilizes arrays of specialized processing elements that execute neuron computations with dynamically configurable accuracies, and can be used to execute AxNNs from diverse applications. We evaluated the proposed approach by constructing AxNNs for 6 recognition applications (ranging in complexity from 12 to 47,818 neurons and 160 to 3,155,968 connections) and executing them on two different platforms: a qcNPE implementation containing 272 processing elements in 45nm technology and a commodity Intel Xeon server. Our results demonstrate 1.14X–1.92X energy benefits for virtually no loss (< 0.5%) in output quality, and even higher improvements (up to 2.3X) when some loss (up to 7.5%) in output quality is acceptable.
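The abstract's core idea, reusing the backpropagated error to rank neurons by their impact on output quality, approximating the least-impactful ones, and then retraining with the approximations in place, can be sketched in a few lines. The following is a minimal illustrative sketch, not the paper's implementation: the toy dataset, the two-layer network, and the use of weight quantization as a stand-in for the paper's hardware-level approximations are all assumptions made here for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task (hypothetical stand-in for the paper's recognition benchmarks).
X = rng.normal(size=(200, 8))
y = (X[:, :4].sum(axis=1) > X[:, 4:].sum(axis=1)).astype(float).reshape(-1, 1)

W1 = rng.normal(scale=0.5, size=(8, 16))   # input -> hidden weights
W2 = rng.normal(scale=0.5, size=(16, 1))   # hidden -> output weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(X, W1, W2):
    h = sigmoid(X @ W1)
    return h, sigmoid(h @ W2)

def train_step(X, y, W1, W2, lr=0.5):
    # Standard backpropagation for a squared-error loss.
    h, out = forward(X, W1, W2)
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out / len(X)
    W1 -= lr * X.T @ d_h / len(X)
    return d_h

# Initial training.
for _ in range(500):
    d_h = train_step(X, y, W1, W2)

# Rank hidden neurons by the magnitude of their backpropagated error:
# neurons with small error contributions affect output quality least.
sensitivity = np.abs(d_h).mean(axis=0)
least_sensitive = np.argsort(sensitivity)[:8]  # approximate half the neurons

def quantize(w, bits=3):
    # Crude precision scaling; a software stand-in for the paper's
    # hardware approximation mechanisms.
    scale = np.abs(w).max() / (2 ** (bits - 1))
    return np.round(w / scale) * scale

# Approximate only the low-impact neurons' incoming weights.
W1[:, least_sensitive] = quantize(W1[:, least_sensitive])

# Incremental retraining with approximations in place ("error healing").
for _ in range(200):
    train_step(X, y, W1, W2)
    W1[:, least_sensitive] = quantize(W1[:, least_sensitive])

_, out = forward(X, W1, W2)
accuracy = ((out > 0.5) == (y > 0.5)).mean()
print(f"accuracy after approximation + retraining: {accuracy:.2f}")
```

The sketch captures the two-phase flow described in the abstract: sensitivity-guided selection of neurons to approximate, followed by retraining that lets the remaining exact computations absorb the introduced error.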

References (20)
Swagath Venkataramani, Amit Sabne, Vivek Kozhikkottu, Kaushik Roy, Anand Raghunathan, "SALSA," Proceedings of the 49th Annual Design Automation Conference (DAC '12), pp. 796-801 (2012). DOI: 10.1145/2228360.2228504
Srimat T. Chakradhar, Anand Raghunathan, "Best-effort computing: re-thinking parallel software and hardware," Design Automation Conference, pp. 865-870 (2010). DOI: 10.1145/1837274.1837492
Karthik Yogendra, Mrigank Sharad, Kaushik Roy, Deliang Fan, "Beyond charge-based computation: Boolean and non-Boolean computing with spin torque devices," International Symposium on Low Power Electronics and Design, pp. 139-142 (2013). DOI: 10.5555/2648668.2648703
Srimat Chakradhar, Murugan Sankaradas, Venkata Jakkula, Srihari Cadambi, "A dynamically configurable coprocessor for convolutional neural networks," Proceedings of the 37th Annual International Symposium on Computer Architecture (ISCA '10), vol. 38, pp. 247-257 (2010). DOI: 10.1145/1815961.1815993
Rajamohana Hegde, Naresh R. Shanbhag, "Energy-efficient signal processing via algorithmic noise-tolerance," International Symposium on Low Power Electronics and Design, pp. 30-35 (1999). DOI: 10.1145/313817.313834
Sung Hyun Jo, Ting Chang, Idongesit Ebong, Bhavitavya B. Bhadviya, Pinaki Mazumder, Wei Lu, "Nanoscale Memristor Device as Synapse in Neuromorphic Systems," Nano Letters, vol. 10, pp. 1297-1301 (2010). DOI: 10.1021/NL904092H
Swagath Venkataramani, Vinay K. Chippa, Srimat T. Chakradhar, Kaushik Roy, Anand Raghunathan, "Quality programmable vector processors for approximate computing," International Symposium on Microarchitecture, pp. 1-12 (2013). DOI: 10.1145/2540708.2540710
Bipin Rajendran, Yong Liu, Jae-sun Seo, Kailash Gopalakrishnan, Leland Chang, Daniel J. Friedman, Mark B. Ritter, "Specifications of Nanoscale Devices and Circuits for Neuromorphic Computational Systems," IEEE Transactions on Electron Devices, vol. 60, pp. 246-253 (2013). DOI: 10.1109/TED.2012.2227969
Andrew Y. Ng, Jiquan Ngiam, Adam Coates, Quoc V. Le, Abhik Lahiri, Bobby Prochnow, "On optimization methods for deep learning," International Conference on Machine Learning, pp. 265-272 (2011).
Y. Lecun, L. Bottou, Y. Bengio, P. Haffner, "Gradient-based learning applied to document recognition," Proceedings of the IEEE, vol. 86, pp. 2278-2324 (1998). DOI: 10.1109/5.726791