A Learning Framework for Neural Networks Using Constrained Optimization Methods

Authors: Stavros J. Perantonis, Nikolaos Ampazis, Vassilis Virvilis

DOI: 10.1023/A:1019240304484

Keywords: Competitive learning; Types of artificial neural networks; Time delay neural network; Machine learning; Polynomial; Artificial neural network; Computer science; Artificial intelligence; Supervised learning; Deep learning; Feed forward; Recurrent neural network; Constrained optimization

Abstract: Conventional supervised learning in neural networks is carried out by performing unconstrained minimization of a suitably defined cost function. This approach has certain drawbacks, which can be overcome by incorporating additional knowledge into the training formalism. In this paper, two types of such knowledge are examined: network specific knowledge (associated with the neural network irrespective of the problem whose solution is sought) and problem specific knowledge (which helps to solve a specific learning task). A constrained optimization framework is introduced for incorporating these types of knowledge into the learning formalism. We present three examples of improvement in learning behaviour using additional knowledge in the context of our framework. The first two examples are designed to improve convergence and learning speed in the broad class of feedforward networks, while the third example is related to the efficient factorization of 2-D polynomials using suitably constructed sigma-pi networks.
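To make the abstract's central idea concrete, the sketch below shows one simple way additional knowledge can be folded into supervised training: an auxiliary condition Phi(w) = 0 on the weights is added to a standard feedforward backpropagation loop through a quadratic-penalty term. This is only a minimal illustration of constrained learning in general, not the authors' algorithm; the network size, the particular constraint `phi`, and the values of `lr` and `mu` are assumptions chosen for the example.

```python
# Minimal sketch (assumed setup, not the paper's method): train a small
# feedforward network on XOR while incorporating an extra constraint
# Phi(w) = 0 via a quadratic penalty 0.5 * mu * Phi(w)**2.
import numpy as np

rng = np.random.default_rng(0)

# XOR toy problem
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer with 4 sigmoid units
W1 = rng.normal(scale=0.5, size=(2, 4))
W2 = rng.normal(scale=0.5, size=(4, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(W1, W2, X):
    h = sigmoid(X @ W1)
    return h, sigmoid(h @ W2)

def phi(W1, W2):
    # Hypothetical "network specific" constraint: keep the squared weight
    # norm near a target value (a stand-in for extra knowledge).
    return np.sum(W1 ** 2) + np.sum(W2 ** 2) - 40.0

lr, mu = 0.5, 1e-3   # learning rate and penalty coefficient (assumed values)
for epoch in range(5000):
    h, out = forward(W1, W2, X)
    # Backpropagation for the sum-of-squares cost E = 0.5 * sum((out - y)**2)
    d_out = (out - y) * out * (1 - out)
    gW2 = h.T @ d_out
    d_h = (d_out @ W2.T) * h * (1 - h)
    gW1 = X.T @ d_h
    # Penalty gradient mu * Phi * dPhi/dW injects the constraint knowledge
    c = phi(W1, W2)
    gW1 += mu * c * 2 * W1
    gW2 += mu * c * 2 * W2
    W1 -= lr * gW1
    W2 -= lr * gW2

_, out = forward(W1, W2, X)
print("predictions:", out.ravel().round(3), " constraint Phi:", round(phi(W1, W2), 3))
```

The penalty formulation is just one way to couple the constraint to the cost; the paper itself develops a constrained optimization framework rather than a fixed penalty scheme.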
