Authors: Adam H. Marblestone, Greg Wayne, Konrad P. Kording
Keywords:
Abstract: Neuroscience has focused on the detailed implementation of computation, studying neural codes, dynamics and circuits. In machine learning, however, artificial neural networks tend to eschew precisely designed codes, dynamics or circuits in favor of brute force optimization of a cost function, often using simple and relatively uniform initial architectures. Two recent developments have emerged within machine learning that create an opportunity to connect these seemingly divergent perspectives. First, structured architectures are used, including dedicated systems for attention, recursion and various forms of short- and long-term memory storage. Second, cost functions and training procedures have become more complex and are varied across layers and over time. Here we think about the brain in terms of these ideas. We hypothesize that (1) the brain optimizes cost functions, (2) the cost functions are diverse and differ across brain locations and over development, and (3) optimization operates within a pre-structured architecture matched to the computational problems posed by behavior. In support of these hypotheses, we argue that a range of implementations of credit assignment through multiple layers of neurons is compatible with our current knowledge of neural circuitry, and that the brain's specialized systems can be interpreted as enabling efficient optimization for specific problem classes. Such a heterogeneously optimized system, enabled by a series of interacting cost functions, serves to make learning data-efficient and precisely targeted to the needs of the organism. We suggest directions by which neuroscience could seek to refine and test these hypotheses.