Decentralized Proximal Gradient Algorithms with Linear Convergence Rates
- Publication Year :
- 2019
Abstract
- This work studies a class of non-smooth decentralized multi-agent optimization problems in which the agents aim to minimize a sum of local strongly convex, smooth components plus a common non-smooth term. We propose a general primal-dual algorithmic framework that unifies many existing state-of-the-art algorithms. We establish linear convergence of the proposed method to the exact solution in the presence of the non-smooth term. Moreover, for the more general class of problems with agent-specific non-smooth terms, we show that linear convergence cannot be achieved (in the worst case) for the class of algorithms that use the gradients and the proximal mappings of the smooth and non-smooth parts, respectively. We further provide a numerical counterexample showing how some state-of-the-art algorithms fail to converge linearly for strongly convex objectives with differing local non-smooth terms.
  To appear in IEEE Transactions on Automatic Control
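To make the setting concrete, the following is a minimal toy sketch of a diffusion-style decentralized proximal gradient iteration for a problem of the form described in the abstract: each agent holds a strongly convex, smooth local cost f_i, and all agents share a common non-smooth term g. This is an illustrative example only, not the paper's unified primal-dual framework; the quadratic costs, the ℓ1 term, and the adapt-then-combine update are assumptions made for the sketch.

```python
# Toy decentralized proximal gradient sketch (NOT the paper's algorithm):
# minimize (1/n) * sum_i f_i(x) + g(x), with f_i(x) = 0.5*(x - b_i)^2
# (strongly convex, smooth) and a COMMON non-smooth term g(x) = lam*|x|.
import numpy as np

def soft_threshold(v, t):
    """Proximal mapping of t*|x| (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def decentralized_prox_gradient(b, W, lam=0.5, mu=0.1, iters=300):
    """Adapt-then-combine proximal gradient over a network.

    b   : local data defining f_i(x) = 0.5*(x - b_i)^2
    W   : doubly stochastic mixing matrix (row i holds agent i's weights)
    lam : weight of the common non-smooth term lam*|x|
    mu  : step size
    """
    x = np.zeros_like(b)
    for _ in range(iters):
        psi = x - mu * (x - b)                  # local gradient step on f_i
        x = soft_threshold(W @ psi, mu * lam)   # combine, then prox of mu*g
    return x

# Three agents on a fully connected network with uniform weights.
b = np.array([1.0, 2.0, 3.0])
W = np.full((3, 3), 1.0 / 3.0)
x = decentralized_prox_gradient(b, W)
# With uniform weights, every agent converges to the centralized
# minimizer soft_threshold(mean(b), lam) = 1.5.
```

With a common g, every agent applies the same proximal mapping after combining, and the iterates above contract linearly toward the shared minimizer; the abstract's negative result concerns the harder case where each agent has its *own* non-smooth term, for which such prox-based schemes cannot guarantee linear convergence in the worst case.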
- Subjects :
- convex functions
- 0209 industrial biotechnology
- Optimization problem
- linear convergence
- Computer science
- 02 engineering and technology
- Electronic mail
- 020901 industrial engineering & automation
- gradient tracking
- Convergence (routing)
- FOS: Mathematics
- Symmetric matrix
- proximal gradient algorithms
- Electrical and Electronic Engineering
- approximation algorithms
- Mathematics - Optimization and Control
- convergence
- diffusion
- Computer Science Applications
- cost function
- Rate of convergence
- Control and Systems Engineering
- Optimization and Control (math.OC)
- decentralized optimization
- distributed optimization
- Algorithm
- unified decentralized algorithm
- Counterexample
Details
- Language :
- English
- Database :
- OpenAIRE
- Accession number :
- edsair.doi.dedup.....77b2e1784aea291a65743c470b892480