Stochastic Approximation Proximal Subgradient Method for Stochastic Convex-Concave Minimax Optimization
- Publication Year :
- 2024
-
Abstract
- This paper presents a stochastic approximation proximal subgradient (SAPS) method for stochastic convex-concave minimax optimization. By accessing unbiased and variance-bounded approximate subgradients, we show that the algorithm achieves an ${\rm O}(N^{-1/2})$ expected convergence rate for the minimax optimality measure when its parameters are properly chosen, where $N$ denotes the number of iterations. Moreover, we show that the algorithm attains an ${\rm O}(\log(N)N^{-1/2})$ bound on the minimax optimality measure with high probability. We further study a specific class of stochastic convex-concave minimax optimization problems arising from stochastic convex conic optimization, for which the bounded subgradient condition fails. To overcome the lack of bounded subgradients in these convex-concave minimax problems, we propose a linearized stochastic approximation augmented Lagrangian (LSAAL) method and prove that it likewise achieves an ${\rm O}(N^{-1/2})$ expected convergence rate for the minimax optimality measure, as well as an ${\rm O}(\log^2(N)N^{-1/2})$ bound with high probability. Preliminary numerical results demonstrate the effectiveness of the SAPS and LSAAL methods.
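- The abstract does not spell out the SAPS update, so the following is only a minimal sketch of the general idea it builds on: stochastic gradient descent-ascent with iterate averaging, which is the classical scheme behind ${\rm O}(N^{-1/2})$ rates for convex-concave saddle problems. The function name `sgda`, the toy objective $f(x,y) = \tfrac{1}{2}x^2 + xy - \tfrac{1}{2}y^2$ (saddle point at the origin), and the step size $1/\sqrt{N}$ are illustrative assumptions, not the paper's method.

```python
import random

def sgda(grad_x, grad_y, x0, y0, n_iters, step):
    """Stochastic gradient descent-ascent with iterate averaging.

    grad_x / grad_y are noisy (unbiased, bounded-variance) subgradient
    oracles; the averaged iterates are returned, mirroring the averaging
    used to obtain O(N^{-1/2}) rates in convex-concave minimax problems.
    """
    x, y = x0, y0
    avg_x = avg_y = 0.0
    for _ in range(n_iters):
        gx = grad_x(x, y)   # descent step in the convex variable x
        gy = grad_y(x, y)   # ascent step in the concave variable y
        x -= step * gx
        y += step * gy
        avg_x += x / n_iters
        avg_y += y / n_iters
    return avg_x, avg_y

# Toy saddle problem f(x, y) = 0.5*x^2 + x*y - 0.5*y^2 with saddle
# point (0, 0); additive Gaussian noise models the stochastic oracle.
random.seed(0)
noise = lambda: random.gauss(0.0, 0.1)
N = 20000
x_bar, y_bar = sgda(lambda x, y: x + y + noise(),
                    lambda x, y: x - y + noise(),
                    1.0, 1.0, N, 1.0 / N ** 0.5)
```

With the $1/\sqrt{N}$ step size, the averaged pair `(x_bar, y_bar)` lands close to the saddle point $(0,0)$, consistent with the $N^{-1/2}$-type behavior the abstract describes for the true SAPS method.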
- Subjects :
- Mathematics - Optimization and Control
90C30
Details
- Database :
- arXiv
- Publication Type :
- Report
- Accession number :
- edsarx.2403.20205
- Document Type :
- Working Paper