Accelerating artificial neural network computations by skipping input values

Authors: Delerse Sebastien, Chouta Taoufik, Larzul Ludovic, Vangel Benoit Chappet De

DOI:

Keywords:

Abstract: Systems and methods for accelerating artificial neural network computation are disclosed. An example method may comprise: selecting, by a controller communicatively coupled to a selector and an arithmetic unit, based on a criterion, an input value from a stream of input values of a neuron; configuring, by the controller, the selector to dynamically provide the selected input value to the arithmetic unit; providing information on the selected input value; acquiring, based on the information, a weight from a set of weights; and performing a mathematical operation on the selected input value and the weight to obtain a result, wherein the result is to be used to compute an output of the neuron. The criterion may include a comparison between the input value and a reference value, for example zero.
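The selection criterion described above, comparing each input value against a reference value such as zero and skipping values that match, can be sketched in software as follows. This is an illustrative model only; the function name `neuron_output` and its arguments are not from the patent, which describes a hardware controller, selector, and arithmetic unit rather than a software loop.

```python
def neuron_output(inputs, weights, reference=0.0):
    """Accumulate a neuron's weighted sum, skipping inputs equal to a reference value.

    Skipped inputs incur no weight fetch and no multiply, which models
    the acceleration the patent describes for sparse input streams.
    """
    acc = 0.0
    for i, x in enumerate(inputs):
        if x == reference:
            continue  # criterion met: skip this input entirely
        acc += x * weights[i]  # fetch the matching weight and multiply-accumulate
    return acc
```

With a sparse input stream such as `[0, 2, 0, 3]`, only two of the four multiply-accumulate steps are performed.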

References (10)
Ravi Narayanaswami, Dong Hyuk Woo, Exploiting input data sparsity in neural network compute units, (2016)
Ould-Ahmed-Vall Elmoustapha, Brown William M, Systems, apparatuses, and methods for cumulative product, (2018)
William J. Dally, Franciscus Wilhelmus Sijstermans, Jeffrey Michael Pool, Xiaojun Wang, Liang Chen, Zhou Yan, Yuanzhi Hua, Data compaction and memory bandwidth reduction for sparse neural networks, (2018)
Kalsi Gurpreet S, Mishra Amit, Pillai Kamlesh, Deep neural network architecture using piecewise linear approximation, (2020)