Authors: Karthik Balakrishnan, Vasant Honavar
DOI: 10.1016/B978-0-444-89488-5.50039-7
Keywords: Sigmoid function, Algorithm, Speedup, Computer science, Backpropagation, Benchmark (computing), Convergence (routing), Learning rule, Artificial neural network, Set (abstract data type)
Abstract: Back-propagation (BP) [9, 5] is one of the most widely used procedures for training multi-layer artificial neural networks with sigmoid units. Though successful in a number of applications, its convergence to a set of desired weights can be excruciatingly slow. Several modifications have been proposed for improving its learning speed [2, 4, 8, 1, 6]. The phenomenon of flat-spots is known to play a significant role in the slow convergence of BP [2]. The formulation of the BP learning rule prevents the network from learning effectively in the presence of flat-spots. In this paper we propose a new approach to minimizing the error such that flat-spots occurring in the output layer are appropriately handled, thereby permitting the network to learn even in their presence. The improvement provided by this technique is demonstrated on standard benchmark data-sets. More importantly, the speedup is obtained with little or no increase in the computational requirements of each iteration.
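
As a brief illustration of the flat-spot phenomenon the abstract refers to (a sketch, not the authors' method): in standard BP the output-layer delta is scaled by the sigmoid derivative sigma'(x) = y(1 - y), which vanishes as a unit saturates toward 0 or 1, so weight updates stall even when the unit's error is large. The Python snippet below demonstrates this numerically and also shows Fahlman's well-known constant derivative offset as one existing remedy; the 0.1 offset value, variable names, and test inputs are illustrative assumptions, not taken from the paper.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_prime(y):
    # Derivative of the sigmoid expressed via its output y = sigmoid(x).
    return y * (1.0 - y)

# Flat-spot: as the unit saturates, sigmoid_prime(y) -> 0, so the standard
# BP output delta (error * derivative) vanishes even for a large error.
target = 1.0
for x in [0.0, 4.0, -4.0, -10.0]:
    y = sigmoid(x)
    delta = (target - y) * sigmoid_prime(y)                  # standard BP delta
    delta_offset = (target - y) * (sigmoid_prime(y) + 0.1)   # Fahlman's offset remedy
    print(f"x={x:6.1f}  y={y:.5f}  error={target - y:+.5f}  "
          f"delta={delta:+.6f}  delta_with_offset={delta_offset:+.6f}")

Running this shows that at x = -10 the error is nearly maximal (about +1.0) yet the standard delta is on the order of 1e-5, while the constant offset keeps the update usable. The paper's own contribution, per the abstract, is instead a reformulation of the error minimization so that flat-spots at the output layer are handled directly.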