One-Point Gradient-Free Methods for Smooth and Non-smooth Saddle-Point Problems
- Source :
- Mathematical Optimization Theory and Operations Research (MOTOR), ISBN 9783030778750
- Publication Year :
- 2021
- Publisher :
- Springer International Publishing, 2021.
Abstract
- In this paper, we analyze gradient-free methods with one-point feedback for stochastic saddle-point problems \(\min _{x}\max _{y} \varphi (x, y)\). For the non-smooth and smooth cases, we present an analysis in a general geometric setup with an arbitrary Bregman divergence. For problems with higher-order smoothness, the analysis is carried out only in the Euclidean case. The estimates we obtain match the best currently known estimates for gradient-free methods with one-point feedback for minimizing a convex or strongly convex function. The paper uses three main approaches to recovering the gradient through finite differences: the standard approach with a random direction, as well as its modifications with kernels and with residual feedback. We also provide experiments comparing these approaches on a matrix game.
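As a rough illustration of the first of the three approaches mentioned above (the standard one-point estimator with a random direction, not the paper's exact algorithm), the sketch below runs gradient descent-ascent on a saddle-point problem using only single function-value queries. The test function `phi`, the step size `lr`, and the smoothing parameter `tau` are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def one_point_grad(f, z, tau):
    # One-point feedback: a single query of f at the randomly shifted
    # point z + tau*e, scaled by (d/tau)*e, where e is a uniformly random
    # direction on the unit sphere. E[(d/tau) f(z + tau*e) e] approximates
    # the gradient of a smoothed version of f.
    d = z.shape[0]
    e = rng.normal(size=d)
    e /= np.linalg.norm(e)
    return (d / tau) * f(z + tau * e) * e

def phi(x, y):
    # Illustrative strongly-convex-strongly-concave saddle function
    # (an assumption, not from the paper); saddle point at (0, 0).
    return 0.5 * x @ x + x @ y - 0.5 * y @ y

# Gradient descent-ascent driven only by one-point value queries:
# descend in x, ascend in y.
d, lr, tau = 2, 0.002, 0.1
x, y = np.ones(d), np.ones(d)
for _ in range(50_000):
    gx = one_point_grad(lambda u: phi(u, y), x, tau)
    gy = one_point_grad(lambda v: phi(x, v), y, tau)
    x = x - lr * gx
    y = y + lr * gy
```

Note the variance of the estimator scales with \((d/\tau)^2 \varphi(z)^2\), which is why one-point methods need smaller steps (or averaging) than their two-point counterparts; the kernel and residual-feedback modifications referenced in the abstract are aimed at reducing exactly this variance.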
Details
- ISBN :
- 978-3-030-77875-0
- ISBNs :
- 9783030778750
- Database :
- OpenAIRE
- Journal :
- Mathematical Optimization Theory and Operations Research (MOTOR), ISBN 9783030778750
- Accession number :
- edsair.doi...........7b84cee702616ba1f0107ea8a8c81235