Author: Aggelos Chariatis
DOI:
Keywords:
Abstract: The experimental investigation on the efficient learning of highly non-linear problems by online training, using ordinary feed-forward neural networks and stochastic gradient descent with errors computed by back-propagation, gives evidence that the most crucial factors for efficient training are the hidden units' differentiation, the attenuation of interference, and selective attention on the parts of the problem where the approximation error remains high. In this report, we present global and local selective attention techniques and a new hybrid activation function that enables units to acquire individual receptive fields, which may be global or local depending on the problem's local complexities. The presented techniques enable very efficient training on complex classification problems with embedded subproblems.
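The abstract's central idea of a hybrid activation function, whose receptive field can be global or local per unit, can be illustrated with a minimal sketch. The blend below of a global sigmoidal response with a local Gaussian (RBF-like) response is an assumption for illustration only; the function name, parameters, and the `mix` blending scheme are hypothetical and not the paper's actual formulation.

```python
import numpy as np

def hybrid_activation(x, w, center, width, mix):
    """Illustrative hybrid unit (hypothetical formulation).

    Blends a global sigmoidal response with a local Gaussian response.
    `mix` in [0, 1] controls the unit's receptive field:
      mix = 0 -> purely global (sigmoid of a weighted sum)
      mix = 1 -> purely local (Gaussian bump around `center`)
    """
    global_part = 1.0 / (1.0 + np.exp(-np.dot(w, x)))          # global: sigmoid
    local_part = np.exp(-np.sum((x - center) ** 2) / width)    # local: Gaussian
    return (1.0 - mix) * global_part + mix * local_part
```

A unit with `mix` near 1 responds only near its center, while `mix` near 0 gives the usual global sigmoidal unit; letting each unit adapt its own blend is one way such individual receptive fields could arise.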