Training feedforward neural networks: an algorithm giving improved generalization

Author: Charles W. Lee

DOI: 10.1016/S0893-6080(96)00071-8

Keywords:

Abstract: An algorithm is derived for supervised training in multilayer feedforward neural networks. Relative to gradient descent backpropagation, it appears to give both faster convergence and improved generalization, whilst preserving the system of backpropagating errors through the network. Copyright © 1996 Elsevier Science Ltd.
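
The record does not reproduce the paper's algorithm itself, so no implementation of it is attempted here. For orientation only, below is a minimal sketch of the baseline the abstract compares against: plain gradient-descent backpropagation in a one-hidden-layer feedforward network. The layer sizes, sigmoid activations, squared-error loss, learning rate, and toy XOR data are all illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Baseline gradient-descent backpropagation for a one-hidden-layer
# feedforward network. All hyperparameters and the XOR toy task are
# assumptions for illustration; this is NOT the paper's algorithm.

rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
T = np.array([[0], [1], [1], [0]], dtype=float)              # targets (XOR)

n_in, n_hid, n_out, lr = 2, 4, 1, 0.5
W1 = rng.normal(scale=0.5, size=(n_in, n_hid))
b1 = np.zeros(n_hid)
W2 = rng.normal(scale=0.5, size=(n_hid, n_out))
b2 = np.zeros(n_out)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(10000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)          # hidden activations
    y = sigmoid(h @ W2 + b2)          # network outputs

    # Backward pass: propagate the output error back through the
    # network (the mechanism the abstract says is preserved).
    # For squared error E = 0.5*sum((y - T)**2) and sigmoid units,
    # dE/dz at the output is (y - T) * y * (1 - y).
    delta_out = (y - T) * y * (1 - y)             # output-layer error
    delta_hid = (delta_out @ W2.T) * h * (1 - h)  # backpropagated error

    # Gradient-descent weight updates.
    W2 -= lr * h.T @ delta_out
    b2 -= lr * delta_out.sum(axis=0)
    W1 -= lr * X.T @ delta_hid
    b1 -= lr * delta_hid.sum(axis=0)

print(np.round(y, 3))  # should approach [0, 1, 1, 0]
```

Against this baseline, the paper's claim is faster convergence and better generalization while keeping the same backward error-propagation structure seen in the backward-pass step above.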

References (16)
Merten Joost, Wolfram Schiffmann, Randolf Werner, Comparison of optimized backpropagation algorithms. European Symposium on Artificial Neural Networks (1993)
T. J. Sejnowski, Parallel networks that learn to pronounce English text. Complex Systems, vol. 1, pp. 145-168 (1987)
David E. Rumelhart, James L. McClelland, Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Vol. 1: Foundations. Computational Models of Cognition and Perception (1986), 10.7551/MITPRESS/5236.001.0001
W. H. Wolberg, O. L. Mangasarian, Multisurface method of pattern separation for medical diagnosis applied to breast cytology. Proceedings of the National Academy of Sciences of the United States of America, vol. 87, pp. 9193-9196 (1990), 10.1073/PNAS.87.23.9193
G. C. Lendaris, I. A. Harb, Improved generalization in ANNs via use of conceptual graphs: a character recognition task as an example case. International Joint Conference on Neural Networks, pp. 551-556 (1990), 10.1109/IJCNN.1990.137624
E. D. Karnin, A simple procedure for pruning back-propagation trained neural networks. IEEE Transactions on Neural Networks, vol. 1, pp. 239-242 (1990), 10.1109/72.80236