Abstract: Recent empirical work has shown that combining predictors can lead to significant reductions in generalization error. The individual predictors (weak learners) can be very simple, such as two-terminal-node trees; it is the aggregating scheme that gives them the power to increase prediction accuracy. Unfortunately, many combining methods do not improve nearest neighbor (NN) classifiers at all. This is because NN classifiers are robust with respect to variations of a data set; in contrast, they are sensitive to the choice of input features. We exploit the instability of NN classifiers with respect to different choices of features to generate an effective and diverse set of classifiers with possibly uncorrelated errors. Interestingly, the approach takes advantage of the high dimensionality of the data. Experimental results show that our technique offers performance improvements over competitive methods.
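To make the idea concrete, here is a minimal sketch of a feature-subset NN ensemble. The abstract does not specify how the feature subsets are chosen or how the member predictions are combined, so this sketch assumes random feature subsets and a majority vote (as in the random subspace method); the ensemble size, subset size, and dataset are illustrative choices, and scikit-learn's KNeighborsClassifier stands in for the base NN learner. It is a sketch under those assumptions, not the authors' algorithm.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# 30-dimensional binary classification data (illustrative choice).
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

n_members = 25      # number of NN classifiers in the ensemble (hypothetical)
subset_size = 10    # features seen by each member (hypothetical)
predictions = []

for _ in range(n_members):
    # Each member sees a different random subset of the features, so its
    # errors should be weakly correlated with those of the other members.
    features = rng.choice(X.shape[1], size=subset_size, replace=False)
    member = KNeighborsClassifier(n_neighbors=1)
    member.fit(X_train[:, features], y_train)
    predictions.append(member.predict(X_test[:, features]))

# Majority vote across members (labels are 0/1 here).
votes = np.stack(predictions)
ensemble_pred = (votes.mean(axis=0) > 0.5).astype(int)

single = KNeighborsClassifier(n_neighbors=1).fit(X_train, y_train)
print("single 1-NN:", accuracy_score(y_test, single.predict(X_test)))
print("ensemble   :", accuracy_score(y_test, ensemble_pred))
```

Note the role of dimensionality: the more input features there are, the more distinct subsets can be drawn, which is one way to read the abstract's remark that the approach benefits from high-dimensional data.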