
The Dynamics of Riemannian Robbins-Monro Algorithms

Authors :
Karimi, Mohammad Reza
Hsieh, Ya-Ping
Mertikopoulos, Panayotis
Krause, Andreas
Loh, Po-Ling
Raginsky, Maxim
Department of Computer Science (D-INFK), ETH Zürich — Eidgenössische Technische Hochschule, Swiss Federal Institute of Technology, Zürich
Performance analysis and optimization of LARge Infrastructures and Systems (POLARIS), Inria Grenoble - Rhône-Alpes
Laboratoire d'Informatique de Grenoble (LIG): Institut National de Recherche en Informatique et en Automatique (Inria), Centre National de la Recherche Scientifique (CNRS), Université Grenoble Alpes (UGA), Institut polytechnique de Grenoble - Grenoble Institute of Technology (Grenoble INP)
ANR-19-CE48-0018,ALIAS,Apprentissage adaptatif multi-agent(2019)
ANR-19-P3IA-0003,MIAI,MIAI @ Grenoble Alpes(2019)
ANR-11-LABX-0025,PERSYVAL-lab,Systemes et Algorithmes Pervasifs au confluent des mondes physique et numérique(2011)
Source :
arXiv; Proceedings of the Thirty-Fifth Conference on Learning Theory (COLT 2022), 35th Annual Conference on Learning Theory, Jul 2022, London, United Kingdom, pp. 1-31
Publication Year :
2022
Publisher :
Cornell University, 2022.

Abstract

Many important learning algorithms, such as stochastic gradient methods, are often deployed to solve nonlinear problems on Riemannian manifolds. Motivated by these applications, we propose a family of Riemannian algorithms generalizing and extending the seminal stochastic approximation framework of Robbins and Monro. Compared to their Euclidean counterparts, Riemannian iterative algorithms are much less understood due to the lack of a global linear structure on the manifold. We overcome this difficulty by introducing an extended Fermi coordinate frame, which allows us to map the asymptotic behavior of the proposed Riemannian Robbins-Monro (RRM) class of algorithms to that of an associated deterministic dynamical system under very mild assumptions on the underlying manifold. In so doing, we provide a general template of almost sure convergence results that mirrors and extends the existing theory for Euclidean Robbins-Monro schemes, albeit with a significantly more involved analysis that requires a number of new geometric ingredients. We showcase the flexibility of the proposed RRM framework by using it to establish the convergence of a retraction-based analogue of the popular optimistic / extra-gradient methods for solving minimization problems and games, and we provide a unified treatment of their convergence.
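To make the abstract's setting concrete, the following is a minimal sketch (not taken from the paper) of a retraction-based Riemannian Robbins-Monro iteration: a Riemannian stochastic gradient step x_{k+1} = R_{x_k}(-γ_k v_k), where v_k is a noisy tangent-vector sample of the gradient field and γ_k are Robbins-Monro step sizes (Σγ_k = ∞, Σγ_k² < ∞). The toy problem, manifold (the unit sphere), retraction, and all parameter values are illustrative assumptions; the example minimizes a Rayleigh quotient, whose minimizer is the eigenvector for the smallest eigenvalue.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy problem: minimize f(x) = x^T A x on the unit sphere
# S^{n-1}. The minimizer is (up to sign) the eigenvector of A associated
# with the smallest eigenvalue; here A is diagonal, so that is ±e_1.
n = 5
A = np.diag(np.arange(1.0, n + 1.0))  # eigenvalues 1, 2, ..., 5

def riemannian_grad(x):
    """Euclidean gradient 2*A@x projected onto the tangent space at x."""
    g = 2.0 * A @ x
    return g - (g @ x) * x  # remove the component normal to the sphere

def retract(x, v):
    """Projection retraction on the sphere: move in the tangent
    direction v, then renormalize back onto the manifold."""
    y = x + v
    return y / np.linalg.norm(y)

# Random initialization on the sphere.
x = rng.standard_normal(n)
x /= np.linalg.norm(x)

for k in range(20000):
    step = 0.5 / (k + 1)            # Robbins-Monro schedule: sum = inf, sum of squares < inf
    noise = 0.1 * rng.standard_normal(n)
    noise -= (noise @ x) * x        # keep the noise tangent to the sphere
    v = riemannian_grad(x) + noise  # noisy tangent-vector oracle
    x = retract(x, -step * v)

# The iterate should align with ±e_1, the minimizer of the Rayleigh quotient.
print(abs(x[0]))
```

The projection retraction used here is only one choice; the RRM framework of the paper covers general retraction maps, and the same template with an extrapolation step gives the retraction-based optimistic / extra-gradient variants mentioned in the abstract.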

Details

Language :
English
Database :
OpenAIRE
Journal :
arXiv; Proceedings of the Thirty-Fifth Conference on Learning Theory (COLT 2022), 35th Annual Conference on Learning Theory, Jul 2022, London, United Kingdom, pp. 1-31
Accession number :
edsair.doi.dedup.....418800a7978415dcd9ae47b2b9f11281