Theory and applications of competitive prediction

Author: Fedor Zhdanov

Abstract: Predicting the future is an important purpose of machine learning research. In online learning, predictions are given sequentially rather than all at once. People wish to make sensible decisions in many situations of everyday life, whether month-by-month, day-by-day, or minute-by-minute. In competitive prediction, predictions are made by a set of experts and by a learner. The quality of the predictions is measured by a loss function, and the goal of the learner is to be reliable under any circumstances: the learner compares his loss with the loss of the best expert from the set and ensures that his performance is not much worse. In this thesis a general methodology is described which provides algorithms with strong performance guarantees for various prediction problems. Specific attention is paid to the square loss function, which is widely used to assess the quality of predictions. Four types of expert sets are considered in the thesis: finite sets of free experts (which are not required to follow any strategy), experts following strategies from finite-dimensional spaces, and experts following strategies from infinite-dimensional Hilbert and Banach spaces. The power of the methodology is illustrated by the derivation of various prediction algorithms. Two core approaches are explored: the Aggregating Algorithm and Defensive Forecasting. These approaches are close to each other in many interesting cases. However, Defensive Forecasting is more general and covers some problems which cannot be solved using the Aggregating Algorithm, while the more specific Aggregating Algorithm is often computationally more efficient. The empirical properties of the new algorithms are validated on artificial and real-world data sets, and areas where the algorithms can be applied are emphasized.
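To make the competitive-prediction protocol concrete, below is a minimal illustrative sketch (Python with NumPy) of the Aggregating Algorithm for the square loss with a finite set of free experts, written in the substitution-function form used in Vovk's framework. The function name, interface, and toy data are assumptions made here for illustration and are not taken from the thesis. For outcomes and predictions in [-Y, Y] with learning rate eta = 1/(2*Y**2), the standard analysis bounds the learner's cumulative square loss by that of the best expert plus (ln N)/eta.

import numpy as np

def logsumexp(a):
    # Numerically stable log(sum(exp(a))).
    m = np.max(a)
    return m + np.log(np.sum(np.exp(a - m)))

def aggregating_algorithm_square_loss(expert_predictions, outcomes, Y=1.0):
    # Aggregating Algorithm for the square loss; predictions and outcomes lie in [-Y, Y].
    # expert_predictions: (T, N) array of experts' predictions.
    # outcomes: (T,) array; each outcome is used only after the learner has predicted
    # at that step (passed as an array here purely for convenience).
    expert_predictions = np.asarray(expert_predictions, dtype=float)
    outcomes = np.asarray(outcomes, dtype=float)
    T, N = expert_predictions.shape
    eta = 1.0 / (2.0 * Y ** 2)   # mixability constant of the square-loss game on [-Y, Y]
    log_w = np.zeros(N)          # uniform (unnormalised) prior log-weights
    learner = np.empty(T)
    for t in range(T):
        xi = expert_predictions[t]
        # Generalised prediction: g(omega) = -(1/eta) * ln sum_i w_i * exp(-eta * (omega - xi_i)^2)
        g = lambda omega: -logsumexp(log_w - eta * (omega - xi) ** 2) / eta
        # Substitution function for the square loss: gamma = (g(-Y) - g(Y)) / (4Y)
        learner[t] = (g(-Y) - g(Y)) / (4.0 * Y)
        # Weight update after the outcome is revealed: w_i <- w_i * exp(-eta * loss_i)
        log_w -= eta * (outcomes[t] - xi) ** 2
    return learner

# Toy usage: three experts tracking a noisy signal in [-1, 1].
rng = np.random.default_rng(0)
t_grid = np.linspace(0.0, 6.0, 200)
outcomes = np.clip(np.sin(t_grid) + 0.1 * rng.standard_normal(t_grid.size), -1.0, 1.0)
experts = np.column_stack([np.full(t_grid.size, -0.5),
                           np.zeros(t_grid.size),
                           np.sin(t_grid)])
preds = aggregating_algorithm_square_loss(experts, outcomes, Y=1.0)

Unnormalised log-weights are used because the substitution formula depends only on weight ratios, so the normalising constant cancels; with a single expert the formula reduces to copying that expert's prediction.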
