Authors: Volkan Cevher, Quoc Tran-Dinh, Olivier Fercoq, Ahmet Alacaoglu
DOI:
Keywords:
Abstract: We propose a new self-adaptive, double-loop smoothing algorithm to solve composite, nonsmooth, and constrained convex optimization problems. Our algorithm is based on Nesterov's smoothing technique via general Bregman distance functions. It self-adaptively selects the number of iterations in the inner loop to achieve a desired complexity bound without requiring the accuracy a priori, as in variants of Augmented Lagrangian methods (ALM). We prove an $\mathcal{O}\left(\frac{1}{k}\right)$ convergence rate on the last iterate of the outer sequence for both unconstrained and constrained settings, in contrast to the ergodic rates which are common in the ALM as well as the alternating direction method-of-multipliers literature. Compared to existing inexact ALM or quadratic penalty methods, our analysis does not rely on worst-case bounds of the subproblem solved by the inner loop. Therefore, our algorithm can be viewed as a restarting technique applied to the ASGARD method \cite{TranDinh2015b}, but with rigorous theoretical guarantees, explicit termination rules, and adaptive parameters. Our method only requires initializing the parameters once and automatically updates them during the iteration process without tuning. We illustrate its superiority on several examples compared to the state-of-the-art.
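To make the double-loop structure concrete, below is a minimal Python sketch of generic Nesterov smoothing with restarting: an inner accelerated-gradient loop minimizes a smoothed surrogate of the nonsmooth objective $\|Ax - b\|_1$, while an outer loop shrinks the smoothing parameter and restarts from the last inner iterate. This illustrates the general technique only, not the paper's ASGARD restart scheme or its self-adaptive parameter rules; all names and parameter choices here (`smoothed_grad`, `beta0`, the fixed `inner_iters`, the halving factor) are hypothetical, whereas the actual method selects the inner iteration count adaptively.

```python
import numpy as np

# Conceptual sketch: minimize f(x) = ||Ax - b||_1 via Nesterov smoothing.
# The smoothed surrogate f_beta(x) = max_{||u||_inf <= 1} <u, Ax - b> - (beta/2)||u||^2
# has gradient A^T clip((Ax - b)/beta, -1, 1), Lipschitz with constant ||A||^2 / beta.

def smoothed_grad(A, b, x, beta):
    """Gradient of the Nesterov-smoothed l1 residual."""
    u = np.clip((A @ x - b) / beta, -1.0, 1.0)  # closed-form optimal dual variable
    return A.T @ u

def double_loop_smoothing(A, b, x0, beta0=1.0, outer_iters=10, inner_iters=50):
    """Outer loop: shrink the smoothing parameter and restart from the last iterate.
    Inner loop: accelerated (FISTA-style) gradient steps on the smoothed objective."""
    x, beta = x0.copy(), beta0
    L0 = np.linalg.norm(A, 2) ** 2          # spectral norm squared; L = L0 / beta
    for _ in range(outer_iters):
        y, x_prev, t = x.copy(), x.copy(), 1.0
        step = beta / L0                    # step size 1/L for the current beta
        for _ in range(inner_iters):
            x_new = y - step * smoothed_grad(A, b, y, beta)
            t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
            y = x_new + ((t - 1.0) / t_new) * (x_new - x_prev)  # momentum step
            x_prev, t = x_new, t_new
        x = x_prev                          # restart from the last inner iterate
        beta *= 0.5                         # tighten the smoothing for the next pass
    return x

# Usage: recover a vector from an overdetermined consistent system.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
x_true = rng.standard_normal(20)
b = A @ x_true
x_hat = double_loop_smoothing(A, b, np.zeros(20))
print(np.linalg.norm(x_hat - x_true))
```

Restarting from the last inner iterate mirrors the paper's focus on last-iterate (rather than ergodic) guarantees; the fixed schedule above is purely illustrative of the double-loop mechanics.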