Authors: Nedjeljko Peric, Ivan Petrovic, Mato Baotic
DOI:
Keywords:
Abstract: A cascade correlation learning network (CCLN) is a popular supervised learning architecture that gradually grows hidden neurons with fixed nonlinear activation functions, adding them one by one during the course of training. Because of the cascaded connections from the existing neurons to each new candidate neuron, the hidden neurons are required to approximate high-order nonlinearity. The major drawback of a CCLN is that its error surface is very zigzag and unsmooth, because the maximum-correlation training criterion consistently pushes the hidden neurons to their saturated extreme values instead of keeping them in the active region. To alleviate this drawback of the original CCLN, two cascade-correlation learning networks with smoothing (CCLNS1 and CCLNS2) are proposed, which enable smoothing of the error surface. Smoothing is performed by (re)training the gains of the hidden neurons' activation functions. In CCLNS1, smoothing is enabled by using the sign functions of the neurons' outputs in the cascaded connections, while in CCLNS2 each hidden neuron has two activation functions: one with a fixed gain for the cascaded connections and one with a trainable gain for the connection to the output layer. The performances of both proposed structures were tested by training them on three examples. Both proposed structures exhibit much better performance than the original CCLN, while CCLNS1 gives slightly better results than CCLNS2.
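To make the gain-smoothing idea concrete, below is a minimal illustrative sketch of a CCLNS2-style hidden neuron as described in the abstract: one fixed-gain activation feeding the cascaded connections and one trainable-gain activation feeding the output layer. The choice of tanh as the sigmoidal activation and all names (tanh_act, HiddenNeuronCCLNS2) are assumptions for illustration, not taken from the paper.

import numpy as np

def tanh_act(x, gain=1.0):
    # Sigmoidal activation with an explicit gain; per the abstract, the
    # smoothing schemes work by (re)training such gains. (Hypothetical
    # helper, not from the paper.)
    return np.tanh(gain * x)

class HiddenNeuronCCLNS2:
    # Hypothetical sketch: in CCLNS2 each hidden neuron has two activation
    # functions, a fixed-gain one for the cascaded connections and a
    # trainable-gain one for the connection to the output layer.
    def __init__(self, n_inputs, rng):
        self.w = rng.standard_normal(n_inputs) * 0.1  # input weights
        self.gain = 1.0  # trainable gain toward the output layer

    def cascade_output(self, x):
        # Fixed gain for the connections to subsequently added neurons.
        return tanh_act(x @ self.w, gain=1.0)

    def output_layer_signal(self, x):
        # Trainable gain; retraining it is what smooths the error surface
        # in the proposed scheme.
        return tanh_act(x @ self.w, gain=self.gain)

rng = np.random.default_rng(0)
neuron = HiddenNeuronCCLNS2(n_inputs=3, rng=rng)
x = rng.standard_normal(3)
print(neuron.cascade_output(x), neuron.output_layer_signal(x))

Decoupling the two signals this way lets the gain seen by the output layer be retuned without disturbing the inputs that later cascaded neurons were trained on, which is the intuition behind using two activation functions per neuron.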