Meta learning evolutionary artificial neural networks

Author: Ajith Abraham

DOI: 10.1016/S0925-2312(03)00369-2


Abstract: In this paper, we present the meta-learning evolutionary artificial neural network (MLEANN), an automatic computational framework for the adaptive optimization of artificial neural networks (ANNs), wherein the architecture, activation function, connection weights, learning algorithm and its parameters are adapted according to the problem. We explored the performance of MLEANN and of conventionally designed ANNs on function approximation problems. To evaluate comparative performance, we used three different well-known chaotic time series. We also present state-of-the-art popular learning algorithms and some experimentation results related to convergence speed and generalization performance: the backpropagation algorithm, the conjugate gradient algorithm, the quasi-Newton algorithm and the Levenberg–Marquardt algorithm. Performances were evaluated when the activation functions and the architecture were changed. We further present the theoretical background and design strategy, and demonstrate how effective the proposed framework is for designing a network that is smaller and faster, with better generalization performance.
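The core idea described in the abstract, an outer evolutionary loop adapting the network's architecture, activation function and learning parameters while an inner loop trains the weights by backpropagation, can be sketched as below. This is an illustrative toy under my own assumptions, not the paper's MLEANN implementation: the genome encoding, the mutation scheme, the `train_mse` helper and the logistic-map prediction task (a simple chaotic series) are all hypothetical stand-ins.

```python
# Sketch (NOT the paper's MLEANN): evolve MLP hyperparameters -- hidden size,
# activation, learning rate -- with backprop as the inner-loop trainer.
import numpy as np

rng = np.random.default_rng(0)

# Chaotic series: logistic map x_{t+1} = 4 x_t (1 - x_t), a standard toy example.
x = np.empty(400)
x[0] = 0.2
for t in range(399):
    x[t + 1] = 4.0 * x[t] * (1.0 - x[t])
X, y = x[:-1].reshape(-1, 1), x[1:]   # one-step-ahead prediction task

# Candidate activations: f(z) and its derivative expressed via a = f(z).
ACTS = {
    "tanh": (np.tanh, lambda a: 1.0 - a * a),
    "sigmoid": (lambda z: 1.0 / (1.0 + np.exp(-z)), lambda a: a * (1.0 - a)),
}

def train_mse(hidden, act_name, lr, epochs=150):
    """Train a one-hidden-layer MLP by full-batch backprop; return final MSE."""
    f, df = ACTS[act_name]
    W1 = rng.normal(0.0, 0.5, (1, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0.0, 0.5, (hidden, 1)); b2 = np.zeros(1)
    for _ in range(epochs):
        a1 = f(X @ W1 + b1)              # hidden activations
        out = (a1 @ W2 + b2).ravel()     # linear output unit
        err = out - y
        gout = (err / len(y)).reshape(-1, 1)   # gradient of MSE w.r.t. output
        gW2 = a1.T @ gout; gb2 = gout.sum(0)
        ga1 = (gout @ W2.T) * df(a1)           # backpropagate through hidden layer
        gW1 = X.T @ ga1;   gb1 = ga1.sum(0)
        W2 -= lr * gW2; b2 -= lr * gb2
        W1 -= lr * gW1; b1 -= lr * gb1
    return float(np.mean((out - y) ** 2))

def random_genome():
    # genome = (hidden units, activation name, learning rate)
    return (int(rng.integers(2, 12)), str(rng.choice(list(ACTS))),
            float(10 ** rng.uniform(-2.0, -0.7)))

def mutate(g):
    h, a, lr = g
    if rng.random() < 0.3:                     # occasionally swap activation
        a = str(rng.choice(list(ACTS)))
    return (max(2, h + int(rng.integers(-2, 3))), a,
            float(np.clip(lr * 10 ** rng.uniform(-0.3, 0.3), 1e-3, 0.2)))

# (mu + lambda)-style loop: keep the best half, refill by mutating survivors.
pop = [random_genome() for _ in range(8)]
for gen in range(4):
    pop.sort(key=lambda g: train_mse(*g))      # fitness = training MSE
    elite = pop[:4]
    pop = elite + [mutate(elite[int(rng.integers(0, 4))]) for _ in range(4)]

best = min(pop, key=lambda g: train_mse(*g))
best_mse = train_mse(*best)
```

The population here evolves only three hyperparameters for brevity; the paper's framework additionally adapts connection weights and the choice of learning algorithm itself.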
