Convergence rate analysis of the gradient descent–ascent method for convex–concave saddle-point problems.
- Source :
- Optimization Methods & Software, Jun 2024, p. 1-23. 23 pp., 1 illustration.
- Publication Year :
- 2024
Abstract
- In this paper, we study the gradient descent–ascent method for convex–concave saddle-point problems. We derive a new non-asymptotic global convergence rate in terms of distance to the solution set by using the semidefinite programming performance estimation method. The given convergence rate incorporates most parameters of the problem, and it is exact for one iteration on a large class of strongly convex–strongly concave saddle-point problems. We also investigate the algorithm without strong convexity and provide necessary and sufficient conditions under which gradient descent–ascent enjoys linear convergence. [ABSTRACT FROM AUTHOR]
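The method analysed in the abstract can be illustrated with a minimal sketch of the simultaneous gradient descent–ascent iteration on a toy strongly convex–strongly concave objective f(x, y) = (mu/2)·x² + b·x·y − (mu/2)·y². The objective, step size, and constants below are illustrative assumptions, not taken from the paper.

```python
def gda(x, y, mu=1.0, b=1.0, step=0.1, iters=200):
    """Simultaneous gradient descent in x, gradient ascent in y
    on f(x, y) = (mu/2)*x**2 + b*x*y - (mu/2)*y**2 (a toy example)."""
    for _ in range(iters):
        gx = mu * x + b * y      # df/dx
        gy = b * x - mu * y      # df/dy
        # descend on the convex variable, ascend on the concave one
        x, y = x - step * gx, y + step * gy
    return x, y

x_star, y_star = gda(2.0, -1.5)
# for this toy problem the unique saddle point is (0, 0), and the
# iterates contract linearly for a sufficiently small step size
```

For this quadratic case the update is linear, so the linear convergence the abstract discusses can be read off directly from the spectral radius of the iteration matrix.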
- Subjects :
- *SEMIDEFINITE programming
*CONJUGATE gradient methods
*ALGORITHMS
Details
- Language :
- English
- ISSN :
- 1055-6788
- Database :
- Academic Search Index
- Journal :
- Optimization Methods & Software
- Publication Type :
- Academic Journal
- Accession number :
- 177987867
- Full Text :
- https://doi.org/10.1080/10556788.2024.2360040