A Short Introduction to Boosting

Authors: Yoav Freund, Robert Schapire, Naoki Abe

Abstract: Boosting is a general method for improving the accuracy of any given learning algorithm. This short overview paper introduces the boosting algorithm AdaBoost and explains the underlying theory of boosting, including an explanation of why boosting often does not suffer from overfitting, as well as boosting's relationship to support-vector machines. Some examples of recent applications of boosting are also described.
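AdaBoost, the algorithm the abstract introduces, repeatedly trains a weak classifier on a reweighted version of the data, raising the weight of misclassified examples each round, and combines the weak classifiers by a weighted majority vote. A minimal pure-Python sketch of this scheme follows; the toy 1-D dataset, the pool of threshold "stumps", and the round count are illustrative assumptions, not taken from the paper:

```python
import math

def adaboost(X, y, stumps, T):
    """Minimal AdaBoost sketch (illustrative, not the paper's code).
    X: list of feature vectors; y: labels in {-1,+1};
    stumps: candidate weak classifiers (functions x -> +/-1); T: rounds."""
    n = len(X)
    w = [1.0 / n] * n                      # start with uniform example weights
    ensemble = []                          # list of (alpha, weak classifier)
    for _ in range(T):
        # weak learner: pick the stump with the smallest weighted error
        best, err = None, None
        for h in stumps:
            e = sum(wi for wi, xi, yi in zip(w, X, y) if h(xi) != yi)
            if err is None or e < err:
                best, err = h, e
        err = max(err, 1e-10)              # guard against division by zero
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, best))
        # reweight: misclassified examples gain weight, then renormalize
        w = [wi * math.exp(-alpha * yi * best(xi))
             for wi, xi, yi in zip(w, X, y)]
        z = sum(w)
        w = [wi / z for wi in w]
    def H(x):                              # final weighted-majority vote
        return 1 if sum(a * h(x) for a, h in ensemble) >= 0 else -1
    return H

# Toy 1-D example with an interval labeling (+,+,-,-,+,+) that no single
# threshold stump can fit, but a weighted vote of stumps can approximate.
X = [[0.0], [1.0], [2.0], [3.0], [4.0], [5.0]]
y = [1, 1, -1, -1, 1, 1]
stumps = [lambda x, t=t, s=s: s * (1 if x[0] < t else -1)
          for t in (0.5, 1.5, 2.5, 3.5, 4.5) for s in (1, -1)]
H = adaboost(X, y, stumps, T=10)
print([H(x) for x in X])
```

The combined vote can capture the interval pattern even though every individual stump misclassifies at least two of the six points, illustrating how boosting builds a stronger classifier from weak ones.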

References (47)
Steven L. Salzberg, Alberto Segre, Programs for Machine Learning (1994)
Masahiko Haruno, Satoshi Shirai, Yoshifumi Ooyama, Using Decision Trees to Construct a Practical Parser, Machine Learning, vol. 34, pp. 131-149 (1999), 10.1023/A:1007597902467
Umesh V. Vazirani, Michael J. Kearns, An Introduction to Computational Learning Theory (1994)
Robert E. Schapire, Steven Abney, Yoram Singer, Boosting Applied to Tagging and PP Attachment, Empirical Methods in Natural Language Processing (1999)
William W. Cohen, Yoram Singer, A Simple, Fast, and Effective Rule Learner, National Conference on Artificial Intelligence, pp. 335-342 (1999)
Robert E. Schapire, Using Output Codes to Boost Multiclass Learning Problems, International Conference on Machine Learning, pp. 313-321 (1997)
William W. Cohen, Fast Effective Rule Induction, Machine Learning Proceedings 1995, pp. 115-123 (1995), 10.1016/B978-1-55860-377-6.50023-2
T. G. Dietterich, G. Bakiri, Solving Multiclass Learning Problems via Error-Correcting Output Codes, Journal of Artificial Intelligence Research, vol. 2, pp. 263-286 (1994), 10.1613/JAIR.105
Richard Maclin, David Opitz, An Empirical Evaluation of Bagging and Boosting, National Conference on Artificial Intelligence, pp. 546-551 (1997)