Single Point-Based Distributed Zeroth-Order Optimization with a Non-Convex Stochastic Objective Function
- Source:
- Proceedings of the 40th International Conference on Machine Learning, PMLR 202:24701-24719, 2023
- Publication Year:
- 2024
Abstract
- Zero-order (ZO) optimization is a powerful tool for dealing with realistic constraints. On the other hand, the gradient-tracking (GT) technique has proved to be an efficient method for distributed optimization aiming to achieve consensus. However, it is a first-order (FO) method that requires knowledge of the gradient, which is not always available in practice. In this work, we introduce a zero-order distributed optimization method based on a one-point gradient estimate combined with the gradient-tracking technique. We prove that this new technique converges with a single noisy function query at a time in the non-convex setting. We then establish a convergence rate of $O(\frac{1}{\sqrt[3]{K}})$ after a number of iterations $K$, which is competitive with the $O(\frac{1}{\sqrt[4]{K}})$ rate of its centralized counterparts. Finally, a numerical example validates our theoretical results.
Comment: In this version, we slightly modify the proof of Theorem 3.7 in the original publication. We remove the expectation in the proof that was added by error. The original publication can be found at: https://proceedings.mlr.press/v202/mhanna23a.html
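The abstract describes the method at a high level: each agent replaces its true local gradient with a one-point estimate built from a single noisy function query, and the agents combine these estimates through a gradient-tracking update over a mixing matrix. The NumPy sketch below is only an illustration of that structure, not the authors' algorithm: the ring topology, the quadratic local objectives, and the step-size and smoothing-radius schedules are assumptions made for the example.

```python
# Minimal sketch (assumptions, not the paper's algorithm or parameters):
# distributed zeroth-order optimization with a one-point gradient estimate
# plugged into a gradient-tracking update over a mixing matrix W.
import numpy as np

rng = np.random.default_rng(0)
n_agents, dim = 4, 2

# Doubly stochastic mixing matrix for a ring graph (assumed topology).
W = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    W[i, i] = 0.5
    W[i, (i - 1) % n_agents] = 0.25
    W[i, (i + 1) % n_agents] = 0.25

# Assumed noisy local objectives: f_i(x) = 0.5 * ||x - a_i||^2 + noise.
targets = rng.normal(size=(n_agents, dim))

def noisy_local_obj(i, x):
    return 0.5 * np.sum((x - targets[i]) ** 2) + 0.01 * rng.normal()

def one_point_estimate(i, x, gamma):
    """One-point estimate from a single noisy query:
    g = (d / gamma) * f(x + gamma * u) * u, with u uniform on the unit sphere."""
    u = rng.normal(size=dim)
    u /= np.linalg.norm(u)
    return (dim / gamma) * noisy_local_obj(i, x + gamma * u) * u

# Decaying step size and smoothing radius (assumed, illustrative schedules).
alpha = lambda k: 0.02 / (k + 1) ** 0.75
gamma = lambda k: 0.4 / (k + 1) ** 0.25

x = rng.normal(size=(n_agents, dim))                     # local iterates
g = np.array([one_point_estimate(i, x[i], gamma(0)) for i in range(n_agents)])
y = g.copy()                                             # gradient trackers

for k in range(5000):
    # Consensus step on the iterates, then descent along the tracker.
    x = W @ x - alpha(k) * y
    g_new = np.array([one_point_estimate(i, x[i], gamma(k + 1))
                      for i in range(n_agents)])
    # Gradient-tracking update: mix the trackers, add the estimate innovation.
    y = W @ y + g_new - g
    g = g_new

x_bar = x.mean(axis=0)
print("consensus error:", np.linalg.norm(x - x_bar))
print("average objective at x_bar:",
      np.mean([0.5 * np.sum((x_bar - targets[i]) ** 2) for i in range(n_agents)]))
```

The decay exponents and constants above are placeholders chosen for the toy problem; they are not the schedules analyzed in the paper, which are what yield the stated $O(\frac{1}{\sqrt[3]{K}})$ rate.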
- Subjects:
- Computer Science - Machine Learning
- Mathematics - Optimization and Control
Details
- Database:
- arXiv
- Journal:
- Proceedings of the 40th International Conference on Machine Learning, PMLR 202:24701-24719, 2023
- Publication Type:
- Report
- Accession number:
- edsarx.2410.05942
- Document Type:
- Working Paper