Authors: Filippo Zanella, Damiano Varagnolo, Angelo Cenedese, Gianluigi Pillonetto, Luca Schenato
Keywords: Convex optimization, Compact convergence, Quadratically constrained quadratic program, Newton's method, Rate of convergence, Quadratic programming, Differential dynamic programming, Mathematics, Second-order cone programming, Mathematical optimization
Abstract: We consider the convergence rates of two convex optimization strategies in the context of multi-agent systems, namely the Newton-Raphson consensus and a distributed Gradient-Descent scheme opportunely derived from the first. To allow analytical derivations, the analyses are performed under the simplifying assumption of quadratic local cost functions. In this framework we derive sufficient conditions that guarantee the convergence of the algorithms. From these conditions we then obtain closed-form expressions that can be used to tune the parameters so as to maximize the rate of convergence. Despite these formulae having been derived under the quadratic-cost assumptions, they can serve as rules of thumb for tuning the algorithms in more general situations.
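To illustrate the setting the abstract describes, the sketch below runs a generic distributed gradient descent over a ring network of agents with scalar quadratic local costs. It is a hypothetical minimal example, not the authors' exact scheme or tuning: the costs f_i(x) = a_i/2 (x - c_i)^2, the consensus matrix, and the step size are all assumptions chosen for illustration. With a small constant step size, every agent's estimate settles in a neighborhood of the global minimizer (the curvature-weighted average of the local minimizers), whose radius shrinks with the step size.

```python
import numpy as np

# Hypothetical sketch (not the paper's exact algorithm): distributed
# gradient descent on scalar quadratic local costs f_i(x) = a_i/2*(x - c_i)^2.
# The minimizer of sum_i f_i is the a_i-weighted average of the c_i.

rng = np.random.default_rng(0)
n = 5                               # number of agents
a = rng.uniform(1.0, 2.0, n)        # local curvatures, a_i > 0
c = rng.uniform(-1.0, 1.0, n)       # local minimizers
x_star = np.sum(a * c) / np.sum(a)  # global minimizer

# Doubly stochastic consensus matrix for a ring of n agents
P = np.zeros((n, n))
for i in range(n):
    P[i, i] = 0.5
    P[i, (i - 1) % n] = 0.25
    P[i, (i + 1) % n] = 0.25

x = np.zeros(n)                     # one estimate per agent
eps = 0.01                          # small constant step size (assumed)
for _ in range(20000):
    grad = a * (x - c)              # local gradients f_i'(x_i)
    x = P @ x - eps * grad          # mix with neighbours, then descend

# All agents end up close to x_star (within an O(eps) neighborhood)
print(np.max(np.abs(x - x_star)))
```

The trade-off the paper quantifies in closed form appears even here: a larger `eps` speeds up convergence but enlarges the residual error and can destabilize the iteration, so the step size must be tuned against the cost curvatures and the network's mixing rate.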