MEKA: a fast, local algorithm for training feedforward neural networks

Authors: S. Shah, F. Palmieri

DOI: 10.1109/IJCNN.1990.137822

Keywords:

Abstract: It is noted that the training of feedforward networks using the conventional backpropagation algorithm is plagued by poor convergence and misadjustment. The authors introduce the multiple extended Kalman algorithm (MEKA) to train feedforward networks. It is based on the idea of partitioning the global problem of finding the weights into a set of manageable nonlinear subproblems that are local at the neuron level. The superiority of MEKA over backpropagation, in terms of the quality of the solution obtained on two benchmark problems, is demonstrated. The superior performance can be attributed to the localized approach: given the nonconvex nature of the error surface, it reduces the chances of getting trapped in a local minimum.
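To make the neuron-level idea concrete, the following is a minimal sketch of an extended Kalman filter update applied to a single sigmoid neuron's weights, in the spirit of the local subproblems the abstract describes. All names, constants, and the training task (learning OR) are illustrative assumptions, not details from the paper:

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

class NeuronEKF:
    """Extended Kalman filter update for one neuron's weight vector.

    Illustrative sketch only: MEKA partitions the global weight-estimation
    problem into local per-neuron filters like this one; the hyperparameters
    (p0, r) here are assumptions, not values from the paper.
    """
    def __init__(self, n_inputs, p0=100.0, r=0.1):
        self.w = np.zeros(n_inputs + 1)        # weights plus bias term
        self.P = p0 * np.eye(n_inputs + 1)     # weight-error covariance
        self.r = r                             # assumed measurement-noise variance

    def step(self, x, d):
        x = np.append(x, 1.0)                  # augment input with bias
        y = sigmoid(self.w @ x)                # neuron output
        H = y * (1.0 - y) * x                  # Jacobian of output w.r.t. weights
        s = H @ self.P @ H + self.r            # innovation variance
        K = (self.P @ H) / s                   # Kalman gain
        self.w = self.w + K * (d - y)          # local weight update
        self.P = self.P - np.outer(K, H @ self.P)  # covariance update
        return y

# Train a single neuron on the OR function as a toy demonstration.
data = [([0.0, 0.0], 0.0), ([0.0, 1.0], 1.0),
        ([1.0, 0.0], 1.0), ([1.0, 1.0], 1.0)]
neuron = NeuronEKF(2)
for _ in range(50):
    for x, d in data:
        neuron.step(np.array(x), d)
```

Because the filter carries second-order information in `P`, convergence on such a toy problem is typically much faster than plain gradient descent, which is the kind of behavior the abstract attributes to MEKA on its benchmarks.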

References (7)
R.S. Scalero, N. Tepedelenlioglu, A fast new algorithm for training feedforward neural networks, IEEE Transactions on Signal Processing, vol. 40, pp. 202-210 (1992), 10.1109/78.157194
M.R. Azimi-Sadjadi, S. Citrin, S. Sheedvash, Supervised learning process of multi-layer perceptron neural networks using fast least squares, International Conference on Acoustics, Speech, and Signal Processing, pp. 1381-1384 (1990), 10.1109/ICASSP.1990.115644
F. Palmieri, S.A. Shah, A new algorithm for training multilayer perceptrons, IEEE International Conference on Systems, Man and Cybernetics, pp. 427-428 (1989), 10.1109/ICSMC.1989.71330
D.E. Rumelhart, G.E. Hinton, R.J. Williams, Learning internal representations by error propagation, Neurocomputing: Foundations of Research, pp. 673-695 (1988), 10.1016/B978-1-4832-1446-7.50035-2
Raymond L. Watrous, Learning algorithms for connectionist networks: applied gradient methods of nonlinear optimization, Proceedings of the IEEE First International Conference on Neural Networks, vol. 2, pp. 619-627 (1988)
John B. Moore, Brian D. O. Anderson, Mansour Eslami, Optimal Filtering, IEEE Transactions on Systems, Man, and Cybernetics (1982)