A Cascade-Correlation Learning Network with Smoothing.

Authors: Nedjeljko Peric, Ivan Petrovic, Mato Baotic

DOI:

Keywords:

Abstract: A cascade-correlation learning network (CCLN) is a popular supervised learning architecture that gradually grows hidden neurons with fixed nonlinear activation functions, adding them to the network one by one during the course of training. Because of the cascaded connections from the existing neurons to each new candidate neuron, the candidate is required to approximate high-order nonlinearity. The major drawback of the CCLN is that its error surface is very zigzag and unsmooth, because the maximum-correlation training criterion consistently pushes the hidden neurons to their saturated extreme values instead of keeping them in the active region. To alleviate this drawback of the original CCLN, two cascade-correlation learning networks with smoothing (CCLNS1 and CCLNS2) are proposed, which enable smoothing of the error surface. Smoothing is performed by (re)training the gains of the hidden neurons' activation functions. In CCLNS1, smoothing is enabled by using the signs of the neurons' outputs in the cascaded connections, while in CCLNS2 each hidden neuron has two activation functions: one with a fixed gain for the cascaded connections and one with a trainable gain for the connection to the output layer. The performances of both structures were tested by training them on three examples. Both proposed structures exhibit much better performance than the CCLN, while CCLNS1 gives slightly better results than CCLNS2.
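The abstract refers to the maximum-correlation criterion of the original cascade-correlation architecture (Fahlman and Lebiere) and to smoothing obtained by (re)training activation-function gains. In the standard cascade-correlation formulation, a candidate unit is trained to maximize

S = \sum_{o} \left| \sum_{p} (V_p - \bar{V})\,(E_{p,o} - \bar{E}_o) \right|

where V_p is the candidate's output on pattern p and E_{p,o} is the residual error of output o. The sketch below is an illustration only, not the paper's exact formulation: it computes this score and shows a sigmoid-type hidden unit with an explicit gain parameter, the quantity the proposed CCLNS variants (re)train to smooth the error surface. All names and signatures are assumptions.

```python
import numpy as np

def candidate_correlation(v, residuals):
    """Fahlman-Lebiere correlation score S for one candidate unit.

    v         : (P,)   candidate outputs over P training patterns
    residuals : (P, O) residual errors E[p, o] of the current network
    """
    v_c = v - v.mean()                      # V_p - V_bar
    e_c = residuals - residuals.mean(axis=0)  # E_{p,o} - E_bar_o
    # S = sum over outputs o of | sum over patterns p of v_c[p] * e_c[p, o] |
    return np.sum(np.abs(v_c @ e_c))

def hidden_activation(x, w, gain=1.0):
    """Hypothetical gain-parameterized hidden unit, tanh(gain * w.x).

    A small gain keeps the unit in its active (near-linear) region;
    (re)training the gain is the smoothing idea described in the abstract.
    """
    return np.tanh(gain * np.dot(x, w))
```

In CCLNS2, per the abstract, each hidden neuron would use a fixed-gain activation of this kind for its cascaded connections and a trainable-gain one for its connection to the output layer; the candidate selection itself still relies on the correlation score above.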
