Toward an Integration of Deep Learning and Neuroscience.

Authors: Adam H. Marblestone, Greg Wayne, Konrad P. Kording

DOI: 10.3389/FNCOM.2016.00094

Abstract: Neuroscience has focused on the detailed implementation of computation, studying neural codes, dynamics and circuits. In machine learning, however, artificial neural networks tend to eschew precisely designed codes, dynamics or circuits in favor of brute force optimization of a cost function, often using simple and relatively uniform initial architectures. Two recent developments have emerged within machine learning that create an opportunity to connect these seemingly divergent perspectives. First, structured architectures are used, including dedicated systems for attention, recursion and various forms of short- and long-term memory storage. Second, cost functions and training procedures have become more complex and are varied across layers and over time. Here we think about the brain in terms of these ideas. We hypothesize that (1) the brain optimizes cost functions, (2) the cost functions are diverse and differ across brain locations and over development, and (3) optimization operates within a pre-structured architecture matched to the computational problems posed by behavior. In support of these hypotheses, we argue that a range of implementations of credit assignment through multiple layers of neurons are compatible with our current knowledge of neural circuitry, and that the brain's specialized systems can be interpreted as enabling efficient optimization for specific problem classes. Such a heterogeneously optimized system, enabled by a series of interacting cost functions, serves to make learning data-efficient and precisely targeted to the needs of the organism. We suggest directions by which neuroscience could seek to refine and test these hypotheses.
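The abstract's contrast hinges on machine learning's "brute force optimization of a cost function" from a simple, uniform initialization. A minimal sketch of that idea (illustrative only, not from the paper; the toy data, weight, and learning rate are invented for the example): fit a single weight by gradient descent on a mean-squared-error cost.

```python
import numpy as np

# Toy instance of cost-function optimization: learn y = 3*x by descending
# the gradient of a mean-squared-error cost, starting from a plain,
# uniform initialization (w = 0), with no hand-designed circuit.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=100)
y = 3.0 * x  # target mapping; the "true" weight is 3.0

w = 0.0   # simple initial architecture: one untuned weight
lr = 0.5  # learning rate
for _ in range(200):
    grad = np.mean(2 * (w * x - y) * x)  # d/dw of mean((w*x - y)**2)
    w -= lr * grad

print(round(w, 3))  # converges toward 3.0
```

The design point the abstract makes is that all structure in the solution comes from the cost function and data, not from the initial wiring; the paper's hypotheses ask whether the brain works similarly, but with diverse costs and pre-structured architectures.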

References (476)
Geoffrey E. Hinton, Alex Krizhevsky, Sida D. Wang. Transforming auto-encoders. International Conference on Artificial Neural Networks, pp. 44-51 (2011). DOI: 10.1007/978-3-642-21735-7_6
Paul J. Werbos. Applications of advances in nonlinear sensitivity analysis. System Modeling and Optimization, pp. 762-770 (1982). DOI: 10.1007/BFB0006203
Peter Dayan, Richard S. Zemel. Combining probabilistic population codes. International Joint Conference on Artificial Intelligence, pp. 1114-1119 (1997).
Marvin Minsky. Plain talk about neurodevelopmental epistemology. International Joint Conference on Artificial Intelligence, pp. 1083-1092 (1977).
Zoubin Ghahramani, Iain Murray. A note on the evidence and Bayesian Occam's razor. Gatsby Computational Neuroscience Unit, University College London (UCL) (2005).
Ilya Sutskever, Geoffrey Hinton, James Martens, George Dahl. On the importance of initialization and momentum in deep learning. International Conference on Machine Learning, pp. 1139-1147 (2013).
Daniel Harari, Shimon Ullman, Joshua B. Tenenbaum, Tao Gao. When Computer Vision Gazes at Cognition. arXiv: Artificial Intelligence (2014).
Nancy Kanwisher, Elinor McKone, Kate Crookes. The Cognitive and Neural Development of Face Recognition in Humans. MIT Press, pp. 467-482 (2009).
Ilya Sutskever, Geoffrey E. Hinton, James Martens. Generating Text with Recurrent Neural Networks. International Conference on Machine Learning, pp. 1017-1024 (2011).