Very Fast Online Learning of Highly Non Linear Problems

Author: Aggelos Chariatis

DOI:

Keywords:

Abstract: The experimental investigation on the efficient learning of highly non-linear problems by online training, using ordinary feed forward neural networks and stochastic gradient descent on the errors computed by back-propagation, gives evidence that the most crucial factors for efficient training are the hidden units' differentiation, the attenuation of the hidden units' interference and the selective attention on the parts of the problems where the approximation error remains high. In this report, we present global and local selective attention techniques and a new hybrid activation function that enables the hidden units to acquire individual receptive fields, which may be global or local depending on the problem's local complexities. The presented techniques enable very efficient training on complex classification problems with embedded subproblems.
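To make the setting concrete, the following self-contained Python/NumPy sketch trains a small feed forward network online with stochastic gradient descent on back-propagated errors. Two pieces are illustrative assumptions, not the author's exact formulations: the hybrid activation shown here blends a tanh (global, half-space) response with a Gaussian (local, ridge) response via a learned per-unit mixing coefficient, and the error gate that skips already well-learned samples is only a crude stand-in for the paper's selective attention techniques.

import numpy as np

rng = np.random.default_rng(0)

def hybrid(z, a):
    # Per-unit blend: a=0 gives a global tanh field, a=1 a local Gaussian field.
    # Assumed form for illustration; not the paper's exact hybrid function.
    return (1.0 - a) * np.tanh(z) + a * np.exp(-z ** 2)

def hybrid_grad(z, a):
    # Derivative of hybrid() with respect to the pre-activation z.
    return (1.0 - a) * (1.0 - np.tanh(z) ** 2) + a * (-2.0 * z) * np.exp(-z ** 2)

# Tiny 2-H-1 feed forward network trained online with SGD and backprop.
n_in, n_hid = 2, 16
W1 = rng.normal(0.0, 0.5, (n_hid, n_in))   # input-to-hidden weights
b1 = np.zeros(n_hid)
a  = np.full(n_hid, 0.5)                   # per-unit global/local mix
W2 = rng.normal(0.0, 0.5, n_hid)           # hidden-to-output weights
b2 = 0.0
lr, err_gate = 0.05, 0.02                  # gate: crude selective attention

# Toy target: an XOR-like sign pattern, a small highly non-linear problem.
for step in range(50_000):
    x = rng.uniform(-1.0, 1.0, n_in)
    t = 1.0 if x[0] * x[1] > 0 else -1.0
    z = W1 @ x + b1
    h = hybrid(z, a)
    y = W2 @ h + b2
    e = y - t
    if e * e < err_gate:   # skip well-learned samples so capacity stays
        continue           # focused where the approximation error is high
    # Backprop through the hybrid activation, then an online SGD step.
    dh = e * W2
    dz = dh * hybrid_grad(z, a)
    W2 -= lr * e * h
    b2 -= lr * e
    W1 -= lr * np.outer(dz, x)
    b1 -= lr * dz
    a  -= lr * dh * (np.exp(-z ** 2) - np.tanh(z))  # adapt each unit's mix
    a   = np.clip(a, 0.0, 1.0)

Under this assumed activation, each hidden unit can drift toward a purely global or purely local receptive field as its mixing coefficient is driven by the same back-propagated error signal as the weights.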
