Author: Quoc Tran-Dinh
DOI: 10.1007/s10589-018-0033-z
Keywords:
Abstract: We develop two new proximal alternating penalty algorithms to solve a wide range class of constrained convex optimization problems. Our approach mainly relies on a novel combination of the classical quadratic penalty, alternating minimization, Nesterov's acceleration, and an adaptive strategy for parameters. The first algorithm is designed to solve generic and possibly nonsmooth constrained convex problems without requiring any Lipschitz gradient continuity or strong convexity, while achieving the best-known $$\mathcal {O}\left( \frac{1}{k}\right) $$-convergence rate in a non-ergodic sense, where k is the iteration counter. The second algorithm is also designed to solve non-strongly convex problems, but having a semi-strongly convex objective term. This algorithm can achieve the best-known $$\mathcal {O}\left( \frac{1}{k^2}\right) $$-convergence rate on the primal constrained problem. Such a rate is obtained in two cases: (1) by averaging only the iterate sequence of the strongly convex term, or (2) by using two proximal operators of this term without averaging. In both algorithms, we allow one to linearize the last subproblem to use the proximal operator of the corresponding objective term. Then, we customize our methods to solve different convex problems, which lead to new variants. As a byproduct, these customizations preserve the same convergence guarantees as in our main algorithms. We verify our theoretical development via different numerical examples and compare our methods with some existing state-of-the-art algorithms.
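To make the abstract's core ingredients concrete, the following is a minimal sketch of the quadratic-penalty idea combined with Nesterov's acceleration and an increasing penalty parameter, applied to a hypothetical toy problem (minimize ½‖x − c‖² subject to Ax = b). This is only an illustration of the general scheme the abstract refers to, not the paper's actual algorithms; the problem data, penalty schedule, and step sizes are all assumptions chosen for the demo.

```python
import numpy as np

# Hypothetical toy instance: min 0.5*||x - c||^2  s.t.  A x = b
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 6))
b = rng.standard_normal(3)
c = rng.standard_normal(6)

def penalty_grad(x, rho):
    # Gradient of the penalized objective f(x) + (rho/2)*||A x - b||^2
    return (x - c) + rho * A.T @ (A @ x - b)

x = np.zeros(6)
y = x.copy()          # extrapolated (momentum) point
t = 1.0               # Nesterov momentum coefficient
rho = 1.0             # quadratic-penalty parameter
for k in range(3000):
    # Lipschitz constant of the penalized gradient for the current rho
    L = 1.0 + rho * np.linalg.norm(A, 2) ** 2
    x_new = y - penalty_grad(y, rho) / L            # gradient step at the extrapolated point
    t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))  # standard momentum update
    y = x_new + ((t - 1.0) / t_new) * (x_new - x)   # Nesterov extrapolation
    x, t = x_new, t_new
    rho = 1.0 + 0.1 * (k + 1)   # simple increasing penalty schedule (an assumption)

# As rho grows, the feasibility residual ||A x - b|| is driven toward zero.
residual = np.linalg.norm(A @ x - b)
```

The point of the sketch is the interplay the abstract highlights: the penalty parameter is adapted across iterations while each inner step is an accelerated (momentum) gradient step on the penalized objective, so constraint violation and objective progress are reduced simultaneously rather than by solving each penalized subproblem to completion.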