On Weight-Noise-Injection Training

Authors: Kevin Ho, Chi-sing Leung, John Sum

DOI: 10.1007/978-3-642-03040-6_112

Abstract: While injecting weight noise during training has been proposed for more than a decade as a means to improve the convergence, generalization and fault tolerance of a neural network, little theoretical work has been done on its convergence proof or on the objective function that it is minimizing. By applying the Gladyshev Theorem, it is shown that weight-noise-injection training of an RBF network converges almost surely. Moreover, the corresponding objective function is essentially the mean square error (MSE). This indicates that injecting weight noise during the training of a radial basis function (RBF) network is not able to improve its fault tolerance. Although this technique has been effectively applied to the multilayer perceptron (MLP), a further analysis of the expected update equation of an MLP trained with weight noise injection is presented. The performance difference between these two models under weight-noise-injection training is then discussed.
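The training scheme analyzed in the abstract can be illustrated with a minimal sketch: at each gradient step, Gaussian noise is added to the output weights of an RBF network before computing the prediction error, while the update is applied to the clean weights. All names and hyperparameters (`centers`, `sigma`, `noise_std`, `lr`) are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf_features(x, centers, sigma=0.5):
    """Gaussian RBF features: phi_j(x) = exp(-(x - c_j)^2 / (2 sigma^2))."""
    return np.exp(-(x[:, None] - centers[None, :]) ** 2 / (2 * sigma ** 2))

def train_with_weight_noise(x, y, centers, epochs=200, lr=0.05, noise_std=0.01):
    """Online LMS training of the RBF output weights, with additive Gaussian
    weight noise injected at every update step."""
    w = np.zeros(len(centers))
    phi = rbf_features(x, centers)
    for _ in range(epochs):
        for i in rng.permutation(len(x)):
            # Perturb the weights, measure the error with the noisy weights...
            w_noisy = w + noise_std * rng.standard_normal(w.shape)
            err = y[i] - phi[i] @ w_noisy
            # ...but apply the gradient step to the clean weights.
            w = w + lr * err * phi[i]
    return w

# Toy regression: fit y = sin(x) on [0, pi].
x = np.linspace(0.0, np.pi, 40)
y = np.sin(x)
centers = np.linspace(0.0, np.pi, 8)
w = train_with_weight_noise(x, y, centers)
mse = np.mean((rbf_features(x, centers) @ w - y) ** 2)
```

Consistent with the abstract's claim, the injected weight noise here averages out in the linear output layer, so the procedure behaves like plain MSE minimization on this toy problem.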

References (28)
Nait Charif Hammadi, Hideo Ito, "A learning algorithm for fault tolerant feedforward neural networks," IEICE Transactions on Information and Systems, vol. 80, pp. 21-27, 1997.
John Sum, Chi-sing Leung, Kevin Ho, "On Node-Fault-Injection Training of an RBF Network," International Conference on Neural Information Processing, pp. 324-331, 2009. doi:10.1007/978-3-642-03040-6_40
Jose L. Bernier, J. Ortega, I. Rojas, E. Ros, A. Prieto, "Obtaining Fault Tolerant Multilayer Perceptrons Using an Explicit Regularization," Neural Processing Letters, vol. 12, pp. 107-113, 2000. doi:10.1023/A:1009698206772
Ching-Tai Chiu, K. Mehrotra, C. K. Mohan, S. Ranka, "Modifying training algorithms for improved fault tolerance," World Congress on Computational Intelligence, vol. 1, pp. 333-338, 1994. doi:10.1109/ICNN.1994.374185
Naotake Kamiura, Teijiro Isokawa, Kazuharu Yamato, Yutaka Hata, Nobuyuki Matsui, "On a Weight Limit Approach for Enhancing Fault Tolerance of Feedforward Neural Networks," IEICE Transactions on Information and Systems, vol. 83, pp. 1931-1939, 2000.
P. Chandra, Y. Singh, "Fault tolerance of feedforward artificial neural networks - a framework of study," International Joint Conference on Neural Networks, vol. 1, pp. 489-494, 2003. doi:10.1109/IJCNN.2003.1223395
Kam-Chuen Jim, C. L. Giles, B. G. Horne, "An analysis of noise in recurrent neural networks: convergence and generalization," IEEE Transactions on Neural Networks, vol. 7, pp. 1424-1438, 1996. doi:10.1109/72.548170
Yves Grandvalet, Stéphane Canu, Stéphane Boucheron, "Noise injection: theoretical prospects," Neural Computation, vol. 9, pp. 1093-1108, 1997. doi:10.1162/NECO.1997.9.5.1093
John Sum, Chi-sing Leung, Lipin Hsu, "Fault tolerant learning using Kullback-Leibler divergence," IEEE Region 10 Conference (TENCON), pp. 1-4, 2007. doi:10.1109/TENCON.2007.4429073
Salvatore Cavalieri, Orazio Mirabella, "A novel learning algorithm which improves the partial fault tolerance of multilayer neural networks," Neural Networks, vol. 12, pp. 91-106, 1999. doi:10.1016/S0893-6080(98)00094-X