Authors: Zijian Zheng, Geoffrey I. Webb
DOI: 10.1007/BFB0095063
Keywords:
Abstract: Classifier committee learning methods generate multiple classifiers to form a committee by repeated application of a single base learning algorithm. The committee members vote to decide the final classification. Two such methods, Bagging and Boosting, have shown great success with decision tree learning. They create different classifiers by modifying the distribution of the training set. This paper studies a different approach: Stochastic Attribute Selection Committee learning of decision trees. It generates classifier committees by stochastically modifying the set of attributes, but keeping the distribution of the training set unchanged. An empirical evaluation of a variant of this method, namely Sasc, in a representative collection of natural domains shows that the SASC method can significantly reduce the error rate of decision tree learning. On average, Sasc is more accurate than Bagging and less accurate than Boosting, although a one-tailed sign-test fails to show that these differences are significant at a level of 0.05. In addition, it is found that, like Bagging, Sasc is more stable than Boosting in terms of less frequently obtaining significantly higher error rates than C4.5 and, when the error rate is raised, producing lower error rate increases. Moreover, Sasc is amenable to parallel and distributed processing while Boosting is not.
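To make the committee-construction idea concrete, here is a minimal Python sketch of attribute-subset committee learning in the spirit of the abstract. It is an illustration under stated assumptions, not the paper's exact algorithm: scikit-learn's DecisionTreeClassifier stands in for the C4.5 base learner, and the inclusion probability p_keep and the helper names sasc_fit / sasc_predict are hypothetical. The key property it demonstrates is that every member sees the full, unmodified training set and only the visible attribute subset is randomized.

```python
from collections import Counter

import numpy as np
from sklearn.tree import DecisionTreeClassifier


def sasc_fit(X, y, n_members=10, p_keep=0.5, seed=None):
    """Train a committee of trees, each on a random attribute subset.

    The training distribution is never resampled (unlike Bagging);
    only the set of attributes each member may use is perturbed.
    """
    rng = np.random.default_rng(seed)
    n_features = X.shape[1]
    committee = []
    for _ in range(n_members):
        # Include each attribute independently with probability p_keep
        # (an assumed sampling scheme); redraw if the subset is empty.
        mask = rng.random(n_features) < p_keep
        while not mask.any():
            mask = rng.random(n_features) < p_keep
        tree = DecisionTreeClassifier().fit(X[:, mask], y)
        committee.append((mask, tree))
    return committee


def sasc_predict(committee, X):
    """Classify each example by unweighted majority vote."""
    member_preds = [tree.predict(X[:, mask]) for mask, tree in committee]
    return np.array(
        [Counter(col).most_common(1)[0][0] for col in zip(*member_preds)]
    )
```

Because members are trained independently on the same data, they can be built concurrently, which is the parallelism argument the abstract makes for Sasc (and Bagging) over the inherently sequential Boosting.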