Authors: Léon Bottou, Olivier Bousquet
DOI:
Keywords: Generalization error, Theoretical computer science, Computational learning theory, Algorithmic learning theory, Active learning (machine learning), Mathematical optimization, Instance-based learning, Proactive learning, Empirical risk minimization, Computer science, Stability (learning theory), Sample exclusion dimension, Online machine learning
Abstract: This contribution develops a theoretical framework that takes into account the effect of approximate optimization on learning algorithms. The analysis shows distinct tradeoffs for the cases of small-scale and large-scale learning problems. Small-scale learning problems are subject to the usual approximation-estimation tradeoff. Large-scale learning problems are subject to a qualitatively different tradeoff involving the computational complexity of the underlying optimization algorithms in non-trivial ways.
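The abstract's central idea, that stopping an optimizer early trades statistical accuracy for computation, can be illustrated with a minimal sketch. The toy problem, the `approximate_minimize` helper, and the tolerance parameter `rho` below are all hypothetical choices for illustration, not the paper's own experiments: we run gradient descent on a synthetic least-squares objective and stop once the gradient magnitude falls below `rho`, so a coarser tolerance costs fewer steps but leaves a larger optimization error.

```python
# Hypothetical illustration (not from the paper): approximate optimization
# stopped at tolerance rho adds an "optimization error" term on top of the
# usual approximation and estimation errors.
import random

random.seed(0)

# Synthetic 1-D least-squares problem: y = 2*x + Gaussian noise.
data = [(x, 2.0 * x + random.gauss(0, 0.1))
        for x in (random.uniform(-1, 1) for _ in range(200))]

def loss(w):
    """Empirical risk: mean squared error of the linear model w*x."""
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

def grad(w):
    """Gradient of the empirical risk with respect to w."""
    return sum(2.0 * (w * x - y) * x for x, y in data) / len(data)

def approximate_minimize(rho, lr=0.1):
    """Gradient descent stopped once |gradient| <= rho.

    rho is the optimization tolerance: larger rho means cheaper but
    less accurate optimization.
    """
    w, steps = 0.0, 0
    while abs(grad(w)) > rho:
        w -= lr * grad(w)
        steps += 1
    return w, steps

w_coarse, steps_coarse = approximate_minimize(rho=1e-1)
w_fine, steps_fine = approximate_minimize(rho=1e-6)

# The tradeoff: the coarse run takes fewer steps (less computation)
# but ends at a higher empirical risk (larger optimization error).
```

On large-scale problems, the paper's point is that this extra optimization-error term changes which algorithms are preferable, since a cheap, low-accuracy step can be repeated on far more data in the same compute budget.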