Abstract: We investigate improvements of AdaBoost that can exploit the fact that the weak hypotheses are one-sided, i.e. either all of a hypothesis's positive (or negative) predictions are correct. In particular, for any set of m labeled examples consistent with a disjunction of k literals (which are one-sided in this case), AdaBoost constructs a consistent hypothesis by using O(k^2 log m) iterations. On the other hand, a greedy covering algorithm finds a consistent hypothesis of size O(k log m). Our primary question is whether there is a simple boosting algorithm that performs as well as the covering algorithm. We first show that InfoBoost, a modification of AdaBoost proposed by Aslam for a different purpose, does perform as well as the covering algorithm. We then show that AdaBoost requires Ω(k^2 log m) iterations for learning k-literal disjunctions. We establish this with an adversary construction and with experiments based on artificial data. Further, we give a variant called SemiBoost that can handle the degenerate case when the given examples all have the same label. We conclude by showing that SemiBoost can be used to produce small conjunctions as well.
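To make the covering baseline concrete, the following is a minimal sketch (not the paper's code) of the greedy covering algorithm for learning a monotone disjunction: each literal that is never true on a negative example is a one-sided candidate, and we greedily add the candidate covering the most still-uncovered positive examples. The function name and input encoding are illustrative assumptions; the standard set-cover analysis gives the O(k log m) size bound mentioned above.

```python
# Hypothetical sketch: greedy covering to learn a monotone disjunction
# consistent with m labeled boolean examples. Assumes some disjunction
# over the given features is consistent with the data.
def greedy_cover_disjunction(examples):
    """examples: list of (x, y), x a tuple of 0/1 features, y in {0, 1}.
    Returns a list of feature indices whose disjunction is consistent."""
    n = len(examples[0][0])
    negatives = [x for x, y in examples if y == 0]
    # One-sided candidate literals: never true on any negative example.
    candidates = [j for j in range(n) if all(x[j] == 0 for x in negatives)]
    uncovered = {i for i, (x, y) in enumerate(examples) if y == 1}
    chosen = []
    while uncovered:
        # Greedy step: pick the literal covering the most uncovered positives.
        best = max(candidates,
                   key=lambda j: sum(examples[i][0][j] for i in uncovered))
        chosen.append(best)
        uncovered = {i for i in uncovered if examples[i][0][best] == 0}
    return chosen
```

For instance, on examples labeled by the disjunction x0 ∨ x2, the algorithm selects exactly those two literals, each greedy step covering the positives the other misses.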