Authors: Edward Moroshko, Koby Crammer, Nina Vaits
DOI:
Keywords:
Abstract: The goal of a learner in standard online learning is to have the cumulative loss not much larger compared with the best-performing prediction function from some fixed class. Numerous algorithms were shown to have this gap arbitrarily close to zero, compared with the best function chosen off-line. Nevertheless, many real-world applications, such as adaptive filtering, are non-stationary in nature, and the best prediction function may drift over time. We introduce two novel algorithms for regression, designed to work well in such an environment. Our first algorithm performs adaptive resets to forget the history, while the second is last-step min-max optimal in the context of a drift. We analyze both algorithms in the worst-case regret framework and show that they maintain an average loss close to that of the best slowly changing sequence of linear functions, as long as the cumulative amount of drift is sublinear. In addition, in the stationary case, when no drift occurs, our algorithms suffer logarithmic regret, as for previous algorithms. Our bounds improve over existing ones, and simulations demonstrate the usefulness of these algorithms compared with other state-of-the-art approaches.
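The last-step min-max idea referenced in the abstract builds on the classical forecaster for online linear regression with squared loss, which optimizes the min-max game over the final round only. The sketch below is a minimal illustration of that stationary building block (in the style of the Vovk-Azoury-Warmuth forecaster), not the paper's drift-aware variant; the class name, the `reg` parameter, and the demo data are assumptions made for the sketch.

```python
import numpy as np

class LastStepMinMaxForecaster:
    """Minimal sketch of a stationary last-step min-max forecaster
    for online linear regression with squared loss.

    NOTE: this is the classical (Vovk-Azoury-Warmuth style) building
    block, not the paper's drift-aware algorithm; `reg` is an assumed
    ridge-style regularization parameter.
    """

    def __init__(self, dim: int, reg: float = 1.0):
        self.A = reg * np.eye(dim)   # A_t = reg*I + sum_{s<=t} x_s x_s^T
        self.b = np.zeros(dim)       # b_t = sum_{s<=t} y_s x_s

    def predict(self, x: np.ndarray) -> float:
        # Fold the current instance into A *before* predicting: this
        # "last-step" correction is what distinguishes the min-max
        # forecaster from plain online ridge regression.
        A_t = self.A + np.outer(x, x)
        w = np.linalg.solve(A_t, self.b)
        return float(w @ x)

    def update(self, x: np.ndarray, y: float) -> None:
        # Incorporate the revealed label into the sufficient statistics.
        self.A += np.outer(x, x)
        self.b += y * x


# Usage on synthetic data (assumed for illustration only).
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    f = LastStepMinMaxForecaster(dim=3)
    w_true = np.array([1.0, -0.5, 2.0])
    for t in range(100):
        x = rng.normal(size=3)
        y_hat = f.predict(x)            # predict before seeing the label
        y = w_true @ x + 0.1 * rng.normal()
        f.update(x, y)                  # then update with the true label
```

In the stationary setting this forecaster achieves logarithmic regret against the best fixed linear predictor; the paper's contribution, per the abstract, is to handle the non-stationary case, for example by adaptively resetting accumulated statistics (the first algorithm) or by solving a drift-aware last-step min-max problem (the second).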