Acceleration of backpropagation learning using optimised learning rate and momentum

Author: X.-H. Yu

DOI: 10.1049/EL:19930860

Keywords: Estimation theory; Simulation; Computer science; Algorithm; Artificial neural network; Optimal estimation; Acceleration (differential geometry); Backpropagation; Momentum (technical analysis)

Abstract: The learning rate and the momentum factor are two arbitrary parameters that must be carefully chosen in the conventional backpropagation (BP) learning algorithm. Based on a linear expansion of the actual outputs of the BP network with respect to these parameters, the Letter presents an efficient approach for dynamically determining their optimal values. Simulation results indicate that the present method provides a remarkable improvement in convergence performance.
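To illustrate the idea described in the abstract, the sketch below linearises the network outputs around the current weights and, at every iteration, picks the learning rate and momentum factor that minimise the linearised squared error. This is a minimal illustration under assumptions, not the Letter's exact derivation: the toy XOR task, the 2-2-1 sigmoid network, and the finite-difference Jacobian-vector products are all choices made for this example.

```python
# Sketch: dynamically chosen learning rate (eta) and momentum (mu) for BP,
# obtained each step from a linear expansion of the network outputs.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: learn XOR with a 2-2-1 sigmoid network (assumed for this example).
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
T = np.array([[0.], [1.], [1.], [0.]])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def unpack(w):
    # Flat parameter vector -> (W1, b1, W2, b2) for the 2-2-1 network.
    W1 = w[0:4].reshape(2, 2); b1 = w[4:6]
    W2 = w[6:8].reshape(2, 1); b2 = w[8:9]
    return W1, b1, W2, b2

def forward(w):
    W1, b1, W2, b2 = unpack(w)
    h = sigmoid(X @ W1 + b1)
    y = sigmoid(h @ W2 + b2)
    return y, h

def outputs(w):
    return forward(w)[0].ravel()

def gradient(w):
    # Standard backpropagation gradient of the summed squared error.
    W1, b1, W2, b2 = unpack(w)
    y, h = forward(w)
    e = y - T
    dy = e * y * (1 - y)
    dh = (dy @ W2.T) * h * (1 - h)
    return np.concatenate([(X.T @ dh).ravel(), dh.sum(0),
                           (h.T @ dy).ravel(), dy.sum(0)])

def jvp(w, v, eps=1e-6):
    # Directional derivative of the outputs: J @ v by finite differences.
    return (outputs(w + eps * v) - outputs(w)) / eps

w = rng.normal(scale=0.5, size=9)
dw_prev = np.zeros_like(w)

for it in range(2000):
    y = outputs(w)
    err = T.ravel() - y                      # residual e = d - y
    g = gradient(w)

    # Candidate directions: negative gradient and the previous weight update.
    a1 = jvp(w, -g)                          # J @ (-g)
    a2 = jvp(w, dw_prev)                     # J @ dw_prev
    A = np.column_stack([a1, a2])

    # Choose (eta, mu) minimising ||e - eta*a1 - mu*a2||^2 (2x2 least squares).
    coef, *_ = np.linalg.lstsq(A, err, rcond=None)
    eta, mu = coef

    dw = -eta * g + mu * dw_prev             # conventional BP update form
    w, dw_prev = w + dw, dw

    sse = float(err @ err)
    if it % 200 == 0:
        print(f"iter {it:4d}  SSE = {sse:.6f}")
    if sse < 1e-4:
        break

print("final outputs:", np.round(outputs(w), 3))
```

The design choice to solve a two-parameter least-squares problem per step follows directly from the linear expansion: with update dw = -eta*g + mu*dw_prev, the outputs change approximately by J @ dw, so the best (eta, mu) under that approximation is the solution of a small linear system.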

References (2)
David E. Rumelhart, Geoffrey E. Hinton, Ronald J. Williams, "Learning representations by back-propagating errors," Nature, vol. 323, pp. 533-536, 1986. DOI: 10.1038/323533a0
D. R. Hush, B. G. Horne, "Progress in supervised neural networks," IEEE Signal Processing Magazine, vol. 10, pp. 8-39, 1993. DOI: 10.1109/79.180705