Authors: Lin Li, Rustam Stolkin, Licheng Jiao, Fang Liu, Shuang Wang
DOI: 10.1016/J.PATCOG.2014.04.015
Keywords: Ensemble learning, Machine learning, Training set, Linear system, Artificial intelligence, Pattern recognition, Compressed sensing, Classifier (UML), Random subspace method, Cascading classifiers, Fitness proportionate selection, Computer science
Abstract: This paper presents a method for improved ensemble learning, by treating the optimization of an ensemble of classifiers as a compressed sensing problem. Ensemble learning methods improve the performance of a learned predictor by integrating a weighted combination of multiple predictive models. Ideally, the number of models needed in the ensemble should be minimized, while optimizing the weights associated with each included model. We solve this problem by treating it as an example of the compressed sensing problem, in which a sparse solution must be reconstructed from an under-determined linear system. Compressed sensing techniques are then employed to find an ensemble which is both small and effective. As an additional contribution of this paper, we present a new evaluation method (a pairwise diversity measurement) called the roulette-wheel kappa-error. This measure takes into account the different weightings of the classifiers, and also reduces the total number of classifier pairs shown on the kappa-error diagram, by selecting pairs through roulette-wheel selection according to the weightings of the classifiers. This approach can greatly improve the clarity and informativeness of the kappa-error diagram, especially when the ensemble is large. We use 25 public data sets to evaluate and compare the performance of ensembles built using four reconstruction algorithms, combined with two base classifier learning algorithms and two training set manipulation techniques. We also give comparison experiments of our methods against another five state-of-the-art ensemble pruning methods. These experiments show that our methods produce comparable or better accuracy, while being significantly faster than the compared methods.
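The core idea in the abstract, recovering a sparse weight vector over the base classifiers from an under-determined linear system, can be sketched with a greedy Orthogonal Matching Pursuit pass over the matrix of classifier predictions. This is an illustrative sketch only: the function name `omp_ensemble_weights`, the choice of OMP as the reconstruction algorithm, and the regression-style setup are assumptions for illustration, not the paper's exact formulation (the paper compares four reconstruction algorithms).

```python
import numpy as np

def omp_ensemble_weights(P, y, k):
    """Illustrative sketch (not the paper's exact solver): pick at most k
    classifiers and weight them by greedy Orthogonal Matching Pursuit.
    P is the n_samples x n_classifiers prediction matrix; y holds the
    targets the weighted ensemble should reproduce."""
    n_samples, n_clf = P.shape
    residual = y.astype(float).copy()
    support = []                         # indices of selected classifiers
    w = np.zeros(n_clf)
    for _ in range(k):
        # pick the classifier whose predictions correlate most with the residual
        corr = np.abs(P.T @ residual)
        if support:
            corr[support] = -np.inf      # never reselect a chosen classifier
        support.append(int(np.argmax(corr)))
        # re-fit the weights of all selected classifiers by least squares
        w_s, *_ = np.linalg.lstsq(P[:, support], y, rcond=None)
        residual = y - P[:, support] @ w_s
    w[support] = w_s
    return w
```

The returned vector is sparse: only the `k` selected classifiers get nonzero weights, which matches the abstract's goal of an ensemble that is "both small and effective".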
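The roulette-wheel kappa-error measure described above combines two ingredients: a pairwise kappa diversity statistic, and weight-proportional (roulette-wheel) sampling of which classifier pairs to plot. A minimal sketch of both ingredients follows; the function names and the exact sampling rule (drawing each pair endpoint with probability proportional to its ensemble weight) are hypothetical stand-ins, not taken from the paper.

```python
import random
import numpy as np

def kappa(pred_a, pred_b):
    """Pairwise kappa diversity: observed agreement between two
    classifiers' label vectors, corrected for chance agreement."""
    pred_a, pred_b = np.asarray(pred_a), np.asarray(pred_b)
    agree = np.mean(pred_a == pred_b)
    labels = np.union1d(pred_a, pred_b)
    # chance agreement from each classifier's own label frequencies
    chance = sum(np.mean(pred_a == c) * np.mean(pred_b == c) for c in labels)
    return (agree - chance) / (1 - chance) if chance < 1 else 1.0

def roulette_sample_pairs(weights, n_pairs, rng):
    """Hypothetical roulette-wheel rule: draw classifier pairs with each
    endpoint chosen with probability proportional to its ensemble weight,
    so heavily weighted classifiers dominate the kappa-error diagram."""
    idx = list(range(len(weights)))
    pairs = []
    while len(pairs) < n_pairs:
        i, j = rng.choices(idx, weights=weights, k=2)
        if i != j:                       # a pair needs two distinct members
            pairs.append((min(i, j), max(i, j)))
    return pairs
```

Plotting `kappa` against the pair's mean error for only the sampled pairs, instead of all pairs, is what keeps the diagram readable when the ensemble is large.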