Authors: Zheng Qu, Fei Li
DOI:
Keywords:
Abstract: We propose an inexact proximal augmented Lagrangian framework with an explicit inner problem termination rule for composite convex optimization problems. We consider arbitrary linearly convergent inner solvers, including in particular stochastic algorithms, making the resulting framework more scalable in the face of ever-increasing problem dimension. Each subproblem is solved inexactly with an explicit and self-adaptive stopping criterion, without requiring a target accuracy to be set a priori. When the primal and dual domains are bounded, our method achieves $O(1/\sqrt{\epsilon})$ and $O(1/\epsilon)$ complexity bounds in terms of the number of inner solver iterations, in the strongly convex and non-strongly convex case respectively. Without the boundedness assumption, only logarithmic terms need to be added, and the above two bounds increase respectively to $\tilde O(1/\sqrt{\epsilon})$ and $\tilde O(1/\epsilon)$, which hold both for obtaining an $\epsilon$-optimal and an $\epsilon$-KKT solution. Within the general framework that we propose, we also obtain $\tilde O(1/\epsilon)$ and $\tilde O(1/\epsilon^2)$ complexity bounds under a relative smoothness assumption on the differentiable component of the objective function. We show through theoretical analysis as well as numerical experiments the computational speedup possibly achieved by the use of randomized inner solvers for large-scale problems.
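To make the overall structure concrete, the following minimal Python sketch shows the generic shape of an inexact proximal augmented Lagrangian loop with a geometrically tightening inner tolerance. It is an illustration under stated assumptions, not the authors' algorithm: the function names (inexact_proximal_alm, f_grad, prox_g), the choice of a proximal-gradient inner solver, the specific stopping test, and all parameter values are hypothetical, and the paper's self-adaptive termination rule and complexity guarantees are not reproduced here.

    import numpy as np

    def spectral_bound(A, beta, tau, Lf):
        # Crude Lipschitz bound for the subproblem gradient, assuming f is
        # Lf-smooth; used only to pick a safe proximal-gradient step size.
        return Lf + beta * np.linalg.norm(A, 2) ** 2 + 1.0 / tau

    def inexact_proximal_alm(f_grad, prox_g, A, b, x0, Lf=1.0, beta=1.0,
                             tau=1.0, outer_iters=50, shrink=0.5,
                             max_inner=1000):
        # Sketch of an inexact proximal ALM loop for
        #   min_x f(x) + g(x)  s.t.  A x = b.
        # Each subproblem is solved by proximal-gradient steps (any linearly
        # convergent solver, deterministic or stochastic, could be
        # substituted) and stopped once an adaptive tolerance is met.
        x, y = x0.astype(float), np.zeros(A.shape[0])
        step = 1.0 / spectral_bound(A, beta, tau, Lf)
        for k in range(outer_iters):
            x_prev = x.copy()
            inner_tol = shrink ** k          # tightening inner accuracy
            for _ in range(max_inner):
                # Gradient of the smooth part of the proximal AL subproblem:
                # f(x) + <y, Ax-b> + (beta/2)||Ax-b||^2 + ||x-x_prev||^2/(2 tau)
                grad = (f_grad(x) + A.T @ (y + beta * (A @ x - b))
                        + (x - x_prev) / tau)
                x_new = prox_g(x - step * grad, step)
                done = np.linalg.norm(x_new - x) / step <= inner_tol
                x = x_new
                if done:
                    break
            y = y + beta * (A @ x - b)       # multiplier (dual) update
        return x, y

    # Hypothetical usage: lasso-type problem with f(x) = 0.5 * ||x - c||^2.
    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        A = rng.standard_normal((5, 20))
        b = A @ rng.standard_normal(20)
        c = rng.standard_normal(20)
        f_grad = lambda x: x - c                               # Lf = 1
        prox_g = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - 0.1 * t, 0)
        x, y = inexact_proximal_alm(f_grad, prox_g, A, b, np.zeros(20))
        print("feasibility residual:", np.linalg.norm(A @ x - b))

The point of the sketch is the division of labor described in the abstract: an outer loop that updates the proximal center and the multipliers, and a pluggable inner solver run only until an adaptive accuracy test passes, so no target accuracy has to be fixed in advance.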