
Linear Convergence of Primal-Dual Gradient Methods and their Performance in Distributed Optimization

Authors:
Sulaiman A. Alghunaim
Ali H. Sayed
Publication Year:
2019

Abstract

In this work, we revisit a classical incremental implementation of the primal-descent dual-ascent gradient method used for the solution of equality constrained optimization problems. We provide a short proof that establishes the linear (exponential) convergence of the algorithm for smooth strongly-convex cost functions and study its relation to the non-incremental implementation. We also study the effect of the augmented Lagrangian penalty term on the performance of distributed optimization algorithms for the minimization of aggregate cost functions over multi-agent networks. (C) 2020 Elsevier Ltd. All rights reserved.
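The primal-descent dual-ascent iteration the abstract refers to can be illustrated on an equality-constrained quadratic program. The following is a minimal sketch, not the authors' code: the problem instance, function name, and step size are our own, and the dual step uses the freshly updated primal iterate (the incremental variant the abstract mentions).

```python
import numpy as np

def primal_dual_gd(Q, b, A, c, step=0.1, iters=20000):
    """Illustrative primal-descent dual-ascent gradient method for
    min_x 0.5 x'Qx - b'x  s.t.  Ax = c, with Q positive definite
    (smooth, strongly convex cost). Hypothetical sketch, not the
    paper's implementation."""
    n, m = Q.shape[0], A.shape[0]
    x, lam = np.zeros(n), np.zeros(m)
    for _ in range(iters):
        grad = Q @ x - b + A.T @ lam    # gradient of the Lagrangian in x
        x = x - step * grad             # primal descent step
        lam = lam + step * (A @ x - c)  # dual ascent step (incremental: uses new x)
    return x, lam

# Small example with a known KKT solution x* = (0.375, 0.625).
Q = np.array([[3.0, 0.5], [0.5, 2.0]])
b = np.array([1.0, 1.0])
A = np.array([[1.0, 1.0]])
c = np.array([1.0])

x, lam = primal_dual_gd(Q, b, A, c)
print(x)  # converges to the constrained minimizer
```

For smooth strongly convex costs, both primal and dual errors contract geometrically, which is the linear (exponential) convergence the abstract establishes; the non-incremental variant would instead use the previous primal iterate in the dual step.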

Details

Language :
English
Database :
OpenAIRE
Accession number :
edsair.doi.dedup.....b96e7355220b17d2460dc1b341d582d1