Authors: Peter U. Diehl, Daniel Neil, Jonathan Binas, Matthew Cook, Shih-Chii Liu
DOI: 10.1109/IJCNN.2015.7280696
Keywords: Machine learning, Spiking neural network, MNIST database, Neuromorphic engineering, Robustness (computer science), Deep belief network, Computer science, Deep learning, Rectifier (neural networks), Artificial neural network, Artificial intelligence
Abstract: Deep neural networks such as Convolutional Networks (ConvNets) and Deep Belief Networks (DBNs) represent the state-of-the-art for many machine learning and computer vision classification problems. To overcome the large computational cost of deep networks, spiking deep networks have recently been proposed, given the specialized hardware now available for spiking neural networks (SNNs). However, this has come at the cost of performance losses due to the conversion from analog neural networks (ANNs), which have no notion of time, to sparsely firing, event-driven SNNs. Here we analyze the effects of converting deep ANNs into SNNs with respect to the choice of parameters for spiking neurons, such as firing rates and thresholds. We present a set of optimization techniques to minimize performance loss in the conversion process for ConvNets and fully connected deep networks. These techniques yield networks that outperform all previous SNNs on the MNIST database to date, and that come close to their maximum accuracy after only 20 ms of simulated time. The techniques include using rectified linear units (ReLUs) with zero bias during training, and a new weight normalization method to help regulate firing rates. Our conversion of an ANN into an SNN enables low-latency classification with high accuracies already after the first output spike, and compared with previous SNN approaches it yields improved performance without increased training time. The presented analysis and optimization techniques boost the value of spiking deep networks as an attractive framework for neuromorphic computing platforms aimed at fast and efficient pattern recognition.
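The weight normalization idea mentioned in the abstract can be illustrated with a short sketch. Because a ReLU network trained with zero biases is positively homogeneous, rescaling each layer's weights so that its largest observed activation becomes 1 only changes the output by a positive constant factor, leaving the predicted class (argmax) intact while keeping activations, and hence the corresponding SNN firing rates, in a bounded range. The function name, layer shapes, and random data below are illustrative assumptions, not the paper's exact implementation:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def data_based_normalization(layer_weights, inputs):
    """Rescale each layer's weights so its maximum ReLU activation
    over `inputs` becomes 1. With zero biases the network output is
    only scaled by a positive constant, so argmax classification is
    preserved. (Illustrative sketch of data-based normalization.)"""
    normalized = []
    a = np.asarray(inputs, dtype=float)
    for W in layer_weights:
        z = relu(a @ W)
        factor = z.max()          # assumes at least one positive activation
        normalized.append(W / factor)
        a = z / factor            # activations of the normalized layer
    return normalized

# Hypothetical two-layer fully connected network on random data.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(10, 8))
W2 = rng.normal(size=(8, 5))
X = relu(rng.normal(size=(20, 10)))   # non-negative "pixel" inputs

W1n, W2n = data_based_normalization([W1, W2], X)
orig = relu(relu(X @ W1) @ W2)
scaled = relu(relu(X @ W1n) @ W2n)
print((orig.argmax(axis=1) == scaled.argmax(axis=1)).all())  # → True
```

After normalization every layer's peak activation on the calibration data is 1, which in the rate-based SNN interpretation prevents any neuron from being driven far above the firing threshold.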