Authors: S. Shah, F. Palmieri
DOI: 10.1109/IJCNN.1990.137822
Keywords:
Abstract: It is noted that the training of feedforward networks using the conventional backpropagation algorithm is plagued by poor convergence and misadjustment. The authors introduce the multiple extended Kalman algorithm (MEKA) to train feedforward networks. It is based on the idea of partitioning the global problem of finding the weights into a set of manageable nonlinear subproblems that are local at the neuron level. The superiority of MEKA over the global approach in terms of the quality of the solution obtained on two benchmark problems is demonstrated. The superior performance can be attributed to the localized approach: it reduces the nonconvexity of the error surfaces and thus the chances of getting trapped in local minima.
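To make the neuron-level partitioning idea concrete, the following is a minimal sketch of one neuron maintaining its own extended Kalman filter over its weight vector. This is an illustration of the general per-neuron EKF scheme the abstract describes, not the paper's exact formulation; the class name, initialization constants, and sigmoid activation are assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class LocalEKFNeuron:
    """One neuron trained by its own small extended Kalman filter.

    Sketch of neuron-level partitioning: instead of one global EKF over
    all network weights, each neuron keeps a weight vector and a small
    covariance matrix and runs an independent EKF update.
    (Hypothetical implementation; details are not from the paper.)
    """

    def __init__(self, n_inputs, p0=100.0, r=0.1):
        self.w = np.zeros(n_inputs)       # weight estimate
        self.P = p0 * np.eye(n_inputs)    # weight-error covariance
        self.r = r                        # assumed measurement-noise variance

    def update(self, x, d):
        """One EKF step pulling the output toward target d for input x."""
        y = sigmoid(self.w @ x)
        h = y * (1.0 - y) * x             # Jacobian of output w.r.t. weights
        s = h @ self.P @ h + self.r       # innovation variance (scalar)
        k = self.P @ h / s                # Kalman gain
        self.w = self.w + k * (d - y)     # measurement update of the weights
        self.P = self.P - np.outer(k, h @ self.P)
        return y
```

With each subproblem only as large as one neuron's fan-in, the covariance matrices stay small, which is the source of the computational saving over a single global Kalman filter on the full weight vector.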