Author: LEONARD UHR
DOI: 10.1080/09540099008915668
Keywords: Convergence (routing), Artificial neural network, Computer science, Matching (statistics), Divergence (computer science), Dilemma, Connectionism, Artificial intelligence, Variety (cybernetics), Simple (philosophy)
Abstract: A crucial dilemma is how to increase the power of connectionist networks (CN), since simply increasing the size of today's relatively small CNs often slows down and worsens learning performance. There are three possible ways: (1) use more powerful structures; (2) increase the amount of stored information and the variety of basic processes; (3) have the network modify itself (learn, evolve) in more powerful ways. Today's CNs use only a few of the many possible topological structures, handle numerical values using very simple processes, and learn by modifying weights associated with links. This paper examines the great range of potentially much more powerful possibilities, focusing on those that appear most promising: appropriate brain-like structures (e.g. local connectivity, global convergence and divergence); matching, symbol-handling and list-manipulating capabilities; and extraction-generation-discovery.
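The abstract notes that today's CNs "learn by modifying weights associated with links." A minimal sketch of that kind of learning (not code from the paper; the function names and the delta-rule choice are illustrative assumptions) is a single linear threshold unit whose link weights are nudged by the prediction error:

```python
def predict(w, b, x):
    # A single linear threshold unit: fire (1) if the weighted
    # sum of inputs over its links exceeds zero, else 0.
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

def train_delta(samples, lr=0.1, epochs=50):
    """Learn by modifying the weights on links (simple delta rule).

    samples: list of (input_vector, target) pairs with targets 0/1.
    Each weight is adjusted in proportion to the output error and
    the input arriving over that link.
    """
    n = len(samples[0][0])
    w = [0.0] * n  # one weight per input link
    b = 0.0        # bias (threshold) term
    for _ in range(epochs):
        for x, target in samples:
            err = target - predict(w, b, x)
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Usage: learn the logical AND function, which is linearly separable.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train_delta(data)
```

This illustrates the narrow learning mechanism the paper is contrasting with richer alternatives (structural change, symbol handling, discovery): the only thing that ever changes is a numerical weight on a link.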