
Implementable tensor methods in unconstrained convex optimization

Authors :
Yurii Nesterov
UCL - SSH/LIDAM/CORE - Center for operations research and econometrics
UCL - SSH/IMMAQ/CORE - Center for operations research and econometrics
Source :
Mathematical Programming, Vol. 186, no. 1-2, p. 157-183 (2021)
Publication Year :
2021
Publisher :
Springer, 2021.

Abstract

In this paper we develop new tensor methods for unconstrained convex optimization, which solve at each iteration an auxiliary problem of minimizing a convex multivariate polynomial. We analyze the simplest scheme, based on minimization of a regularized local model of the objective function, and its accelerated version obtained in the framework of estimating sequences. Their rates of convergence are compared with the worst-case lower complexity bounds for the corresponding problem classes. Finally, for the third-order methods, we suggest an efficient technique for solving the auxiliary problem, which is based on the recently developed relative smoothness condition (Bauschke et al. in Math Oper Res 42:330–348, 2017; Lu et al. in SIAM J Optim 28(1):333–354, 2018). With this elaboration, the third-order methods become implementable and very fast. The rate of convergence in terms of the function value for the accelerated third-order scheme reaches the level $$O\left(\frac{1}{k^4}\right)$$, where $k$ is the number of iterations. This is very close to the lower bound of the order $$O\left(\frac{1}{k^5}\right)$$, which is also justified in this paper. At the same time, in many important cases the computational cost of one iteration of this method remains on the level typical for second-order methods.
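As a minimal sketch of the auxiliary problem mentioned in the abstract (the notation below is assumed for illustration and is not quoted from this record; the exact choice of the regularization parameter and constants is specified in the paper): at iteration $k$, the basic $p$-th order scheme minimizes a regularized Taylor model of the objective $f$,

$$x_{k+1} \in \arg\min_{y} \left\{ \sum_{i=0}^{p} \frac{1}{i!}\, D^i f(x_k)[y-x_k]^i \;+\; \frac{M}{(p+1)!}\,\|y-x_k\|^{p+1} \right\},$$

where $D^i f(x_k)[\cdot]^i$ denotes the $i$-th directional derivative of $f$ at $x_k$ and $M$ is a regularization parameter tied to the Lipschitz constant of $D^p f$. For a sufficiently large $M$ the model is convex, so each step amounts to minimizing a convex multivariate polynomial plus a power of the norm; for $p=3$ this subproblem is the one the paper solves efficiently via a Bregman-type gradient method under the relative smoothness condition.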

Details

Language :
English
Database :
OpenAIRE
Journal :
Mathematical Programming, Vol. 186, no. 1-2, p. 157-183 (2021)
Accession number :
edsair.doi.dedup.....f117d6b9c24911b76c24692cec2bfd02