Mutual information and k-nearest neighbors approximator for time series prediction

Authors: Antti Sorjamaa, Amaury Lendasse, Jin Hao

DOI: 10.1007/11550907_87

Abstract: This paper presents a method that combines Mutual Information and a k-Nearest Neighbors approximator for time series prediction. Mutual Information is used for input selection. The k-Nearest Neighbors approximator is used to improve the input selection and to provide a simple but accurate prediction method. Due to its simplicity, the method is repeated to build the large number of models required for long-term prediction of time series. The Santa Fe A time series is used as an example.
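The pipeline described in the abstract can be sketched roughly as follows. This is a minimal illustration only: it uses a crude histogram-based MI estimate (the paper relies on the Kraskov k-NN estimator) and a toy sine series in place of Santa Fe A; all variable names, lag choices, and parameters are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def embed(series, lags):
    # Build (X, y) pairs: X[t] holds past values at the chosen lags,
    # y[t] is the next value of the series (one-step-ahead target).
    max_lag = max(lags)
    X = np.array([[series[t - l] for l in lags]
                  for t in range(max_lag, len(series) - 1)])
    y = np.array([series[t + 1] for t in range(max_lag, len(series) - 1)])
    return X, y

def mutual_info(x, y, bins=8):
    # Histogram-based MI estimate in nats -- a simple stand-in for the
    # Kraskov estimator used in the paper.
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz])).sum())

def knn_predict(X_train, y_train, x, k=5):
    # Plain k-NN approximator: average the targets of the k nearest
    # training regressors (Euclidean distance).
    d = np.linalg.norm(X_train - x, axis=1)
    idx = np.argsort(d)[:k]
    return float(y_train[idx].mean())

# Toy series standing in for Santa Fe A.
rng = np.random.default_rng(0)
series = np.sin(0.3 * np.arange(300)) + 0.1 * rng.standard_normal(300)

# Step 1: MI-based input selection over candidate lags.
candidate_lags = list(range(1, 9))
X_all, y_all = embed(series, candidate_lags)
mi = [mutual_info(X_all[:, j], y_all) for j in range(X_all.shape[1])]
selected = sorted(np.argsort(mi)[-3:])   # keep the 3 most informative lags

# Step 2: k-NN prediction using only the selected inputs.
X, y = X_all[:, selected], y_all
pred = knn_predict(X[:-1], y[:-1], X[-1], k=5)
```

For long-term prediction, this one-step model would be rebuilt (or iterated) once per horizon step, which is feasible precisely because each model is so cheap to fit.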

References (8)
Christopher M. Bishop, Neural Networks for Pattern Recognition, (1995)
Eric P. Xing, Richard M. Karp, Michael I. Jordan, Feature selection for high-dimensional genomic microarray data, International Conference on Machine Learning, pp. 601-608, (2001)
Ron Kohavi, A study of cross-validation and bootstrap for accuracy estimation and model selection, International Joint Conference on Artificial Intelligence, vol. 2, pp. 1137-1143, (1995)
Howard Hua Yang, Shun-ichi Amari, Adaptive online learning algorithms for blind separation: maximum entropy and minimum mutual information, Neural Computation, vol. 9, pp. 1457-1482, (1997), 10.1162/NECO.1997.9.7.1457
Alexander Kraskov, Harald Stögbauer, Peter Grassberger, Estimating mutual information, Physical Review E, vol. 69, p. 066138, (2004), 10.1103/PHYSREVE.69.066138
N. Kwak, Chong-Ho Choi, Input feature selection for classification problems, IEEE Transactions on Neural Networks, vol. 13, pp. 143-159, (2002), 10.1109/72.977291
Antonia J. Jones, New tools in non-linear modelling and prediction, Computational Management Science, vol. 1, pp. 109-149, (2004), 10.1007/S10287-003-0006-1
D. Zongker, A. Jain, Algorithms for feature selection: An evaluation, International Conference on Pattern Recognition, vol. 2, pp. 18-22, (1996), 10.1109/ICPR.1996.546716