A Note on Parameter Tuning for On-Line Shifting Algorithms

Author: O. Bousquet

DOI:

Keywords:

Abstract: In this short note, building on ideas of M. Herbster [2], we propose a method for automatically tuning the parameter of the FIXED SHARE algorithm proposed by Herbster and Warmuth [3] in the context of on-line learning with shifting experts. We show that this can be done with a memory requirement of O(nT) and that the additional loss incurred is the same as that of estimating a Bernoulli random variable.

1. SETTING

We use the same setting as in [3]. We consider n experts; at each time period t = 1, ..., T they make predictions and incur a loss L(t, i) (i being the index of the expert), which we model here as a negative log-likelihood, so that the probability that expert i makes a correct prediction is e^{-L(t, i)}. We will use a Bayesian framework to motivate the updates. At each time step, one expert is supposed to be better than the other ones (i.e. to have a smaller loss), and the best expert changes from time to time; this is the so-called shifting experts framework. We denote by y_t the observed outcome and by e_t the index of the best expert at time t, and E_t is the random variable that models this index. Boldface notation corresponds to sequences, e.g. y_t = (y_1, ..., y_t).

2. FIXED SHARE

The parameter α models the way the best expert changes (or shifts), via the following probabilistic model:

P(e_t | e_{t-1}) = 1 - α if e_t = e_{t-1}, and α/(n - 1) otherwise,

with P(e_1) = 1/n. This means that initially all experts are equally likely; at the next step, the best expert stays the same as the previous one with probability 1 - α, and becomes any other expert (equally likely) with probability α. We thus get the following prior over sequences of experts with k shifts:

P(e_1, ..., e_T) = (1/n) (1 - α)^{T-1-k} (α/(n - 1))^k.
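Since the note motivates FIXED SHARE through this Bayesian shift prior, a minimal sketch may help fix ideas. The Python snippet below is not taken from the paper: the function name fixed_share, the per-round normalization, and the synthetic losses in the usage example are my own choices. It implements the standard Fixed Share weight update of Herbster and Warmuth under the assumption made above that losses are negative log-likelihoods, so that exp(-L(t, i)) acts as a likelihood.

import numpy as np

def fixed_share(losses, alpha):
    """Minimal Fixed Share sketch.

    losses : array of shape (T, n); L(t, i) is interpreted as a negative
             log-likelihood, so exp(-L(t, i)) is a likelihood.
    alpha  : probability that the best expert shifts between rounds.

    Returns the sequence of posterior weights over the n experts.
    """
    T, n = losses.shape
    w = np.full(n, 1.0 / n)           # uniform prior P(e_1) = 1/n
    history = []
    for t in range(T):
        # Bayesian "loss" update: multiply by the likelihood exp(-L(t, i))
        v = w * np.exp(-losses[t])
        v /= v.sum()                   # normalize the posterior
        history.append(v)
        # "Share" update induced by the shift prior P(e_{t+1} | e_t):
        # stay with probability 1 - alpha, move to any other expert
        # with probability alpha / (n - 1)
        w = (1.0 - alpha) * v + (alpha / (n - 1)) * (v.sum() - v)
    return np.array(history)

# Usage example with synthetic losses for n = 3 experts over T = 5 rounds
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    demo_losses = rng.uniform(0.0, 2.0, size=(5, 3))
    print(fixed_share(demo_losses, alpha=0.1))

The note's contribution concerns tuning α automatically rather than fixing it in advance; the sketch above takes α as a given constant.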

References (2)
Mark Herbster, Manfred K. Warmuth, "Tracking the Best Expert", Machine Learning, vol. 32, pp. 151-178 (1998), DOI: 10.1023/A:1007424614876
Olivier Bousquet, Manfred K. Warmuth, "Tracking a Small Set of Experts by Mixing Past Posteriors", European Conference on Computational Learning Theory, vol. 3, pp. 363-396 (2003), DOI: 10.1007/3-540-44581-1_3