Authors: Peter Auer, Claudio Gentile
DOI:
Keywords: Unsupervised learning, Computational learning theory, Algorithm, Stability (learning theory), Ensemble learning, Instance-based learning, Computer science, Mathematical optimization, Empirical risk minimization, Learning classifier system, Semi-supervised learning
Abstract: Most of the performance bounds for on-line learning algorithms are proven assuming a constant learning rate. To optimize these bounds, the learning rate must be tuned based on quantities that are generally unknown, as they depend on the whole sequence of examples. In this paper we show that essentially the same optimized bounds can be obtained when the algorithms adaptively tune their learning rates as the examples in the sequence are progressively revealed. Our adaptive tunings apply to a wide class of on-line algorithms, including p-norm algorithms for generalized linear regression and Weighted Majority with absolute loss. We emphasize that our tunings are radically different from previous techniques, such as the so-called doubling trick. Whereas the doubling trick restarts the algorithm several times, using a constant learning rate for each run, our methods save information by changing the value of the learning rate very smoothly. In fact, for Weighted Majority over a finite set of experts our analysis provides a better leading constant than the doubling trick.
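To make the adaptive-tuning idea concrete, below is a minimal sketch of an exponential-weights (Weighted Majority-style) forecaster whose learning rate is re-tuned at every round from quantities observed so far, rather than being fixed in advance or reset by a doubling trick. The specific tuning formula (eta_t shrinking with the best expert's cumulative loss) and all function names are illustrative assumptions, not the exact rule from the paper.

```python
import numpy as np

def adaptive_weighted_majority(expert_predictions, outcomes):
    """Exponential-weights forecaster with a smoothly adapted learning rate.

    expert_predictions: array of shape (n_rounds, n_experts), predictions in [0, 1].
    outcomes: array of shape (n_rounds,), observed outcomes in [0, 1].
    Returns the forecaster's cumulative absolute loss and the experts' losses.
    """
    n_rounds, n_experts = expert_predictions.shape
    cumulative_losses = np.zeros(n_experts)   # per-expert absolute losses so far
    total_loss = 0.0                          # forecaster's own cumulative loss

    for t in range(n_rounds):
        # Hypothetical smooth tuning: eta_t decreases as the best expert's
        # cumulative loss grows (roughly eta_t ~ sqrt(ln N / (1 + L*_t))),
        # so no restart is ever needed.
        best_loss = cumulative_losses.min()
        eta_t = np.sqrt(np.log(n_experts) / (1.0 + best_loss))

        # Exponential weights computed from losses accumulated so far.
        weights = np.exp(-eta_t * (cumulative_losses - best_loss))
        weights /= weights.sum()

        # Weighted-average prediction, scored with the absolute loss.
        prediction = weights @ expert_predictions[t]
        total_loss += abs(prediction - outcomes[t])

        # Update the experts' cumulative absolute losses.
        cumulative_losses += np.abs(expert_predictions[t] - outcomes[t])

    return total_loss, cumulative_losses


# Toy usage: 3 experts predicting a binary sequence.
rng = np.random.default_rng(0)
preds = rng.uniform(0.0, 1.0, size=(200, 3))
outs = rng.integers(0, 2, size=200).astype(float)
loss, expert_losses = adaptive_weighted_majority(preds, outs)
print(f"forecaster loss: {loss:.2f}, best expert loss: {expert_losses.min():.2f}")
```

The point of the sketch is the contrast drawn in the abstract: the learning rate changes a little at every round as new examples arrive, instead of being held constant within each of several restarted runs.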