Abstract: Coordinate descent algorithms solve optimization problems by successively performing approximate minimization along coordinate directions or coordinate hyperplanes. They have been used in applications for many years, and their popularity continues to grow because of their usefulness in data analysis, machine learning, and other areas of current interest. This paper describes the fundamentals of the coordinate descent approach, together with variants, extensions, and their convergence properties, mostly with reference to convex objectives. We pay particular attention to a certain problem structure that arises frequently in machine learning applications, showing that efficient implementations of accelerated coordinate descent methods are possible for problems of this type. We also present some parallel variants and discuss their convergence properties under several models of parallel execution.
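To make the basic idea concrete, the following is a minimal sketch of cyclic coordinate descent on a strongly convex quadratic, where the one-dimensional minimization along each coordinate can be done exactly. The problem instance, function names, and epoch-based stopping rule are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch (assumed setup): cyclic coordinate descent on the quadratic
# f(x) = 0.5 * x^T A x - b^T x, with A symmetric positive definite,
# whose minimizer solves A x = b.
import numpy as np

def coordinate_descent(A, b, x0, num_epochs=100):
    """Minimize 0.5*x^T A x - b^T x by exact minimization
    along one coordinate direction at a time."""
    x = x0.copy()
    n = len(b)
    for _ in range(num_epochs):
        for i in range(n):            # cycle through the coordinates
            g_i = A[i] @ x - b[i]     # i-th component of the gradient A x - b
            x[i] -= g_i / A[i, i]     # exact 1-D minimization along coordinate i
    return x

# Small example: a randomly generated positive definite system.
rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
A = M @ M.T + 5 * np.eye(5)           # symmetric positive definite
b = rng.standard_normal(5)
x = coordinate_descent(A, b, np.zeros(5))
print(np.linalg.norm(A @ x - b))      # residual should be near zero
```

Each inner step holds all but one variable fixed and minimizes over the remaining coordinate in closed form; the cyclic ordering used here is only one of the selection rules (randomized rules are another common choice) that the paper's variants cover.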