Authors: Stavros J. Perantonis, Nikolaos Ampazis, Vassilis Virvilis
Keywords: Competitive learning, Types of artificial neural networks, Time delay neural network, Machine learning, Polynomial, Artificial neural network, Computer science, Artificial intelligence, Supervised learning, Deep learning, Feed forward, Recurrent neural network, Constrained optimization
Abstract: Conventional supervised learning in neural networks is carried out by performing unconstrained minimization of a suitably defined cost function. This approach has certain drawbacks, which can be overcome by incorporating additional knowledge into the training formalism. In this paper, two types of such knowledge are examined: network specific knowledge (associated with the network irrespectively of the problem whose solution is sought) and problem specific knowledge (which helps to solve a specific learning task). A constrained optimization framework is introduced for incorporating these types of knowledge into the learning formalism. We present three examples of improvement in learning behaviour obtained by using additional knowledge in the context of our framework. The first two are designed to improve convergence and learning speed for a broad class of feedforward networks, while the third example is related to the efficient factorization of 2-D polynomials using suitably constructed sigma-pi networks.
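To illustrate the general idea of augmenting unconstrained cost minimization with additional knowledge, the following is a minimal, hypothetical sketch in which a feedforward network is trained by gradient descent on the usual data-fitting cost plus a quadratic penalty enforcing an extra condition on the weights. The specific constraint (a target norm for the output-layer weights) and the penalty treatment are assumptions for illustration only; they do not reproduce the authors' constrained optimization framework or the constraints used in the paper.

```python
# Sketch: training a small feedforward network while enforcing an
# additional condition Phi(w) = 0 via a quadratic penalty, instead of
# minimizing the cost E(w) alone. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression data.
X = rng.uniform(-1.0, 1.0, size=(200, 1))
y = np.sin(np.pi * X)

# Single hidden layer with tanh units.
W1 = rng.normal(scale=0.5, size=(1, 10))
b1 = np.zeros(10)
W2 = rng.normal(scale=0.5, size=(10, 1))
b2 = np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return h, h @ W2 + b2

def cost(pred):
    return 0.5 * np.mean((pred - y) ** 2)   # E(w): data-fitting term

def constraint(W2):
    # Hypothetical "additional knowledge": keep the squared norm of the
    # output-layer weights at a target value, i.e. Phi(w) = 0.
    return np.sum(W2 ** 2) - 1.0

lam, lr = 0.1, 0.05
for step in range(2000):
    h, pred = forward(X)
    # Backpropagated gradients of E(w).
    d_out = (pred - y) / len(X)
    gW2 = h.T @ d_out
    gb2 = d_out.sum(axis=0)
    d_h = d_out @ W2.T * (1 - h ** 2)
    gW1 = X.T @ d_h
    gb1 = d_h.sum(axis=0)
    # Gradient of the penalty term (lam/2) * Phi(w)^2 with respect to W2.
    gW2 += lam * constraint(W2) * 2 * W2
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

_, pred = forward(X)
print(f"final cost E = {cost(pred):.4f}, constraint Phi = {constraint(W2):.4f}")
```

The penalty approach shown here is only one of several ways to couple extra knowledge to the cost function; the paper itself formulates the problem as a constrained optimization, which treats the data-fitting objective and the constraints as separate quantities rather than merging them into a single penalized loss.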