Linear Convergence of Primal-Dual Gradient Methods and their Performance in Distributed Optimization
- Publication Year :
- 2019
Abstract
- In this work, we revisit a classical incremental implementation of the primal-descent dual-ascent gradient method for solving equality-constrained optimization problems. We provide a short proof that establishes the linear (exponential) convergence of the algorithm for smooth strongly-convex cost functions and study its relation to the non-incremental implementation. We also study the effect of the augmented Lagrangian penalty term on the performance of distributed optimization algorithms for the minimization of aggregate cost functions over multi-agent networks. (C) 2020 Elsevier Ltd. All rights reserved.
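The paper's own incremental algorithm and constants are not reproduced in this record; as a minimal sketch under illustrative assumptions, the classical (non-incremental) primal-descent dual-ascent iteration on the augmented Lagrangian L_rho(x, y) = f(x) + y^T(Ax - b) + (rho/2)||Ax - b||^2 can be written as follows for a strongly convex quadratic cost. The problem data, step size `mu`, and penalty `rho` below are hypothetical choices, not values from the paper.

```python
import numpy as np

def primal_dual(P, q, A, b, mu=0.05, rho=1.0, iters=2000):
    """Primal-descent dual-ascent on the augmented Lagrangian for
    min 0.5 x^T P x - q^T x  subject to  A x = b.
    mu and rho are illustrative tuning choices, not from the paper."""
    n, m = P.shape[0], A.shape[0]
    x, y = np.zeros(n), np.zeros(m)
    for _ in range(iters):
        # Gradient of L_rho in x: grad f(x) + A^T y + rho A^T (A x - b)
        grad_x = P @ x - q + A.T @ y + rho * A.T @ (A @ x - b)
        x = x - mu * grad_x           # primal descent step
        y = y + mu * (A @ x - b)      # dual ascent step on the constraint residual
    return x, y

# Toy instance: minimize 0.5||x||^2 - q^T x subject to sum(x) = 1.
P = np.eye(3)
q = np.array([1.0, 0.0, -1.0])
A = np.ones((1, 3))
b = np.array([1.0])
x, y = primal_dual(P, q, A, b)
```

With a smooth strongly-convex cost (here `P` positive definite) and a sufficiently small step size, the iterates contract linearly toward the unique saddle point, which is the convergence behavior the abstract refers to; the rho-term adds curvature along the constraint directions, which is the augmented-Lagrangian effect studied in the paper.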
- Subjects :
- Mathematical optimization
- Convex optimization
- Distributed optimization
- Primal-dual methods
- Saddle-point problems
- Arrow-Hurwicz method
- Augmented Lagrangian method
- Gradient method
- Linear convergence
- Rate of convergence
- Stability
- Coordination
- Computer science
- Electrical and Electronic Engineering
- Control and Systems Engineering
- Mathematics - Optimization and Control (math.OC)
- FOS: Mathematics
Details
- Language :
- English
- Database :
- OpenAIRE
- Accession number :
- edsair.doi.dedup.....b96e7355220b17d2460dc1b341d582d1