Authors: Kevin Ho, Chi-sing Leung, John Sum
DOI: 10.1007/978-3-642-03040-6_112
Keywords:
Abstract: While injecting weight noise during training has been proposed for more than a decade as a means to improve the convergence, generalization, and fault tolerance of a neural network, little theoretical work has been done on proving its convergence or identifying the objective function it minimizes. By applying the Gladyshev Theorem, it is shown that the convergence of weight-noise-injection training for a radial basis function (RBF) network is almost sure. Moreover, the corresponding objective function is essentially the mean square error (MSE). Since the objective reduces to the plain MSE, the injected noise adds no regularization effect, which indicates that injecting weight noise during the training of an RBF network is not able to improve its fault tolerance. Although this technique has been reported to be effective in training the multilayer perceptron (MLP), further analysis of the expected update equation for training an MLP with weight noise injection is presented. The performance difference between these two models under weight noise injection is discussed.
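Although the paper's contribution is analytical, the training procedure it studies can be illustrated with a short sketch. The following Python snippet is a minimal, hypothetical illustration of online weight-noise-injection training for an RBF network with fixed centers: at each step the gradient is evaluated at noise-perturbed output weights and applied to the clean weights. All names and hyperparameters here (sigma_noise, lr, n_centers, the toy data) are assumptions for illustration, not taken from the paper.

```python
import numpy as np

# Minimal sketch of weight-noise-injection training for an RBF network.
# At each online step the output weights are perturbed by zero-mean
# Gaussian noise before the error is computed; the resulting gradient
# step is then applied to the unperturbed weights.

rng = np.random.default_rng(0)

# Toy 1-D regression data (illustrative only).
x = rng.uniform(-1.0, 1.0, size=200)
y = np.sin(np.pi * x) + 0.05 * rng.standard_normal(200)

n_centers = 10
centers = np.linspace(-1.0, 1.0, n_centers)  # fixed RBF centers
width = 0.2                                  # fixed RBF width
w = np.zeros(n_centers)                      # trainable output weights

lr = 0.05           # learning rate (assumed)
sigma_noise = 0.01  # std of the injected weight noise (assumed)

def phi(xi):
    """RBF hidden-layer activations for a single scalar input."""
    return np.exp(-((xi - centers) ** 2) / (2.0 * width ** 2))

for epoch in range(50):
    for xi, yi in zip(x, y):
        h = phi(xi)
        # Inject additive zero-mean Gaussian noise into the weights.
        w_noisy = w + sigma_noise * rng.standard_normal(n_centers)
        # Error is evaluated at the noisy weights ...
        err = yi - w_noisy @ h
        # ... but the update is applied to the clean weights.
        w += lr * err * h

mse = np.mean([(yi - w @ phi(xi)) ** 2 for xi, yi in zip(x, y)])
print(f"training MSE: {mse:.4f}")
```

Consistent with the paper's result for RBF networks, the injected zero-mean noise averages out of the expected update, so a sketch like this converges toward the ordinary MSE minimizer rather than a fault-tolerance-regularized solution.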