Authors: Vasant Honavar, Leonard Uhr
DOI:
Keywords: Structure (mathematical logic), Connectionism, Perception, Receptive field, Convergence (routing), Computer science, Artificial intelligence, Perceptual learning
Abstract: This paper presents and compares results for three types of connectionist networks on perceptual learning tasks: [A] multi-layered converging networks of neuron-like units, with each unit connected to a small randomly chosen subset of units in the adjacent layers, that learn by reweighting their links; [B] networks structured into successively larger modules under brain-like topological constraints (such as layered, converging-diverging hierarchies with local receptive fields), likewise learning by reweighting their links; [C] networks that learn by structure generation-discovery, which involves the growth of links and the recruiting of units in addition to the reweighting of links. Preliminary empirical results from simulations of these networks on recognition tasks show significant improvements from the use of brain-like structure (e.g., local receptive fields, global convergence) over networks that lack such structure; further improvements result from the use of generation.
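The type-[A]/[B] idea described in the abstract, a layered converging network in which each hidden unit is connected only to a small local receptive field of the input and learning proceeds by reweighting links, can be sketched in miniature as below. This is a toy illustration only: the network sizes, the 1-D "retina", the stride, the delta-rule updates, and the brightness-comparison task are all assumptions for the sketch, not details taken from the paper.

```python
import math
import random

random.seed(0)

# Illustrative sizes (assumptions, not from the paper):
INPUT, FIELD, STRIDE = 16, 4, 4       # 1-D "retina", receptive-field width, field spacing
HIDDEN = INPUT // STRIDE              # converging: fewer hidden units than inputs

def sigmoid(x):
    x = max(-60.0, min(60.0, x))      # clamp to avoid math.exp overflow
    return 1.0 / (1.0 + math.exp(-x))

class LocalFieldNet:
    """Two-layer converging net; each hidden unit sees one local field."""
    def __init__(self):
        self.w_hid = [[random.gauss(0, 0.5) for _ in range(FIELD)]
                      for _ in range(HIDDEN)]
        self.w_out = [random.gauss(0, 0.5) for _ in range(HIDDEN)]

    def forward(self, x):
        # Each hidden unit is wired only to its own local patch of the input.
        fields = [x[i * STRIDE: i * STRIDE + FIELD] for i in range(HIDDEN)]
        h = [sigmoid(sum(w * v for w, v in zip(self.w_hid[i], fields[i])))
             for i in range(HIDDEN)]
        y = sigmoid(sum(w * v for w, v in zip(self.w_out, h)))
        return fields, h, y

    def train_step(self, x, target, lr=0.5):
        # Learning by reweighting links: gradient (delta-rule) updates.
        fields, h, y = self.forward(x)
        d_out = (y - target) * y * (1 - y)
        d_hid = [d_out * self.w_out[i] * h[i] * (1 - h[i]) for i in range(HIDDEN)]
        for i in range(HIDDEN):
            self.w_out[i] -= lr * d_out * h[i]
            for j in range(FIELD):
                self.w_hid[i][j] -= lr * d_hid[i] * fields[i][j]

# Toy recognition task (an assumption for the sketch): is the left half
# of the retina brighter on average than the right half?
net = LocalFieldNet()
data = []
for _ in range(200):
    x = [random.random() for _ in range(INPUT)]
    left = sum(x[:INPUT // 2]) / (INPUT // 2)
    right = sum(x[INPUT // 2:]) / (INPUT // 2)
    data.append((x, 1.0 if left > right else 0.0))

for _ in range(200):
    for x, t in data:
        net.train_step(x, t)

acc = sum((net.forward(x)[2] > 0.5) == (t > 0.5) for x, t in data) / len(data)
```

The sketch only covers link reweighting under a fixed local-field topology; the paper's type-[C] networks would additionally grow new links and recruit new units during learning, which is not modeled here.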