Authors: O. Kinouchi, N. Caticha
DOI: 10.1088/0305-4470/25/23/020
Keywords: Artificial neural network, Algorithm, Iterated function, Upper and lower bounds, Generalization, Function (mathematics), Mathematics, Stability (learning theory), Perceptron, Weight function
Abstract: A new learning algorithm for the one-layer perceptron is presented. It aims to maximize the generalization gain per example. Analytical results are obtained for the case of a single presentation of each example. The weight attached to a Hebbian term is a function of the expected stability of the example in the teacher perceptron. This leads to the obtention of upper bounds on the generalization ability. The scheme can be iterated, and numerical simulations show that it converges, within errors, to the theoretical optimal generalization ability of the Bayes algorithm. For an algorithm with maximized generalization gain, a strategy for the selection of examples is considered, and it is proved that, as expected, orthogonal examples are optimal. Exponential decay of the generalization error is obtained for selected examples.
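The abstract describes a teacher–student perceptron setting in which each Hebbian update is weighted by a function of the example's stability. The sketch below illustrates that general scheme only: the modulation function `modulation` is a hypothetical placeholder (the paper derives the optimal form analytically), and the dimension, example count, and random teacher are assumptions for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 100   # input dimension (illustrative choice)
P = 500   # number of single-presentation examples (illustrative choice)

# Hypothetical teacher perceptron: a fixed unit-norm weight vector.
teacher = rng.standard_normal(N)
teacher /= np.linalg.norm(teacher)

# Student perceptron starts from a random unit-norm vector.
w = rng.standard_normal(N)
w /= np.linalg.norm(w)

def modulation(stability):
    """Placeholder weight function for the Hebbian term.

    It learns strongly from low- or negative-stability (hard or
    misclassified) examples and weakly from well-classified ones.
    This captures the qualitative idea only, not the paper's
    analytically optimal function.
    """
    return np.exp(-stability)

errors = []
for _ in range(P):
    x = rng.standard_normal(N) / np.sqrt(N)     # random example, |x| ~ 1
    label = np.sign(teacher @ x)                # teacher's classification
    stability = label * (w @ x) / np.linalg.norm(w)
    # Modulated Hebbian update: the weight attached to the Hebbian
    # term label*x depends on the example's stability.
    w += modulation(stability) * label * x
    # Generalization error of a spherical perceptron is
    # arccos(R)/pi, where R is the teacher-student overlap.
    R = (w @ teacher) / np.linalg.norm(w)
    errors.append(np.arccos(np.clip(R, -1.0, 1.0)) / np.pi)

print(f"generalization error after {P} examples: {errors[-1]:.3f}")
```

With random examples the error decays with the number of presentations; the paper's result that selected (e.g. orthogonal) examples yield exponential decay would require replacing the random draw of `x` with an example-selection step.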