11 results for "Proximal point"
Search Results
2. Tight Sublinear Convergence Rate of the Proximal Point Algorithm for Maximal Monotone Inclusion Problems
- Author
-
Junfeng Yang and Guoyong Gu
- Subjects
Theory of Computation (miscellaneous), Semidefinite programming, Operations research, Sublinear function, Other engineering and technologies, Numerical and computational mathematics, Engineering and technology, Fixed point, Residual, Natural sciences, Theoretical Computer Science, Proximal point, Monotone polygon, Rate of convergence, Mathematics, Algorithm, Software, Mathematics - Abstract
The tight sublinear convergence rate of the proximal point algorithm for maximal monotone inclusion problems is established based on the squared fixed point residual. By using the performance estim...
- Published
- 2020
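As an illustrative aside (a minimal sketch under assumptions, not the authors' code): the proximal point iteration computes $x_{k+1} = (I + \lambda T)^{-1}(x_k)$, and the fixed-point residual $\|x_k - x_{k+1}\|$ is the quantity whose squared decay rate the paper makes tight. Below, $T(x) = x^3$ on the real line is the assumed monotone operator, and the resolvent is evaluated by bisection.

```python
# Sketch of the proximal point algorithm for a monotone operator on R.
# T(x) = x**3 is monotone; its resolvent (I + lam*T)^{-1} is found by bisection.

def resolvent(x, lam=1.0, tol=1e-12):
    """Solve y + lam * y**3 = x for y (the left side is strictly increasing)."""
    lo, hi = -abs(x) - 1.0, abs(x) + 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mid + lam * mid**3 < x:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def proximal_point(x0, lam=1.0, iters=50):
    x, residuals = x0, []
    for _ in range(iters):
        x_next = resolvent(x, lam)
        residuals.append(abs(x - x_next))   # fixed-point residual
        x = x_next
    return x, residuals

x_final, res = proximal_point(2.0)
# x_final approaches the unique zero of T (namely 0), and the residuals
# are nonincreasing, as in the sublinear-rate setting the abstract describes
```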
3. Stochastic (Approximate) Proximal Point Methods: Convergence, Optimality, and Adaptivity
- Author
-
Hilal Asi and John C. Duchi
- Subjects
FOS: Computer and information sciences, Mathematical optimization, Operations research, Other engineering and technologies, Machine Learning (stat.ML), Numerical and computational mathematics, Engineering and technology, Natural sciences, Theoretical Computer Science, Proximal point, Statistics - Machine Learning, Optimization and Control (math.OC), Robustness (computer science), Convex optimization, FOS: Mathematics, Stochastic optimization, Mathematics - Optimization and Control, Subgradient method, Software, Mathematics - Abstract
We develop model-based methods for solving stochastic convex optimization problems, introducing the approximate-proximal point, or aProx, family, which includes stochastic subgradient, proximal point, and bundle methods. When the modeling approaches we propose are appropriately accurate, the methods enjoy stronger convergence and robustness guarantees than classical approaches, even though the model-based methods typically add little to no computational overhead over stochastic subgradient methods. For example, we show that improved models converge with probability 1 and enjoy optimal asymptotic normality results under weak assumptions; these methods are also adaptive to a natural class of what we term easy optimization problems, achieving linear convergence under appropriate strong growth conditions on the objective. Our substantial experimental investigation shows the advantages of more accurate modeling over standard subgradient methods across many smooth and non-smooth optimization problems. (Comment: To appear in SIAM Journal on Optimization.)
- Published
- 2019
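A hedged illustration of the family described above (an assumed toy instance, not the authors' implementation): for the loss $f_i(x) = \tfrac{1}{2}(a_i x - b_i)^2$, the exact stochastic proximal point update $x_{k+1} = \arg\min_x f_i(x) + \tfrac{1}{2\alpha}(x - x_k)^2$ has a closed form and, unlike the stochastic subgradient step, remains stable even for very large stepsizes.

```python
import random

# Toy stochastic proximal point method for scalar least squares.
# Each step minimizes f_i(x) + (x - x_k)**2 / (2*alpha) in closed form.

def sprox_step(x, a, b, alpha):
    # argmin_y 0.5*(a*y - b)**2 + (y - x)**2 / (2*alpha)
    return (x + alpha * a * b) / (1.0 + alpha * a * a)

random.seed(0)
data = [(a, 3.0 * a) for a in (random.uniform(0.5, 2.0) for _ in range(100))]
# consistent system: every pair satisfies b = 3*a, so the minimizer is x* = 3

x = 0.0
for _ in range(2000):
    a, b = random.choice(data)
    x = sprox_step(x, a, b, alpha=10.0)  # a stepsize this large would make SGD diverge
```

On this consistent ("interpolation") problem the iterates contract linearly toward $x^* = 3$, loosely matching the adaptivity to easy problems that the abstract describes.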
4. AN ACCELERATED HYBRID PROXIMAL EXTRAGRADIENT METHOD FOR CONVEX OPTIMIZATION AND ITS IMPLICATIONS TO SECOND-ORDER METHODS.
- Author
-
MONTEIRO, RENATO D. C. and SVAITER, B. F.
- Subjects
CONVEX functions, MATHEMATICAL optimization, EQUILIBRIUM, MATHEMATICAL inequalities, ALGORITHMS - Abstract
This paper presents an accelerated variant of the hybrid proximal extragradient (HPE) method for convex optimization, referred to as the accelerated HPE (A-HPE) framework. Iteration-complexity results are established for the A-HPE framework, as well as a special version of it where a large stepsize condition is imposed. Two specific implementations of the A-HPE framework are described in the context of a structured convex optimization problem whose objective function consists of the sum of a smooth convex function and an extended real-valued nonsmooth convex function. In the first implementation, a generalization of a variant of Nesterov's method is obtained for the case where the smooth component of the objective function has Lipschitz continuous gradient. In the second implementation, an accelerated Newton proximal extragradient (A-NPE) method is obtained for the case where the smooth component of the objective function has Lipschitz continuous Hessian. It is shown that the A-NPE method has an O(1/k^{7/2}) convergence rate, which improves upon the O(1/k^3) convergence rate bound for another accelerated Newton-type method presented by Nesterov. Finally, while Nesterov's method is based on exact solutions of subproblems with cubic regularization terms, the A-NPE method is based on inexact solutions of subproblems with quadratic regularization terms and hence is potentially more tractable from a computational point of view. [ABSTRACT FROM AUTHOR]
- Published
- 2013
- Full Text
- View/download PDF
5. DISTRIBUTED COMPUTATION OF EQUILIBRIA IN MONOTONE NASH GAMES VIA ITERATIVE REGULARIZATION TECHNIQUES.
- Author
-
KANNAN, ASWIN and SHANBHAG, UDAY V.
- Subjects
VARIATIONAL inequalities (Mathematics), DISTRIBUTED algorithms, TIKHONOV regularization, CONVEX sets, MATHEMATICAL functions, EQUILIBRIUM - Abstract
We consider the development of single-timescale schemes for the distributed computation of equilibria associated with Nash games in which each player solves a convex program. Equilibria associated with such games are wholly captured by the solution set of a variational inequality. Our focus is on a class of games, termed monotone Nash games, that lead to monotone variational inequalities. Distributed extensions of standard approaches for solving such variational problems are characterized by two challenges: (1) Unless suitable assumptions (such as strong monotonicity) are imposed on the mapping arising in the specification of the variational inequality, iterative methods often require the solution of a sequence of regularized problems, a naturally two-timescale process that is harder to implement in practice. (2) Additionally, algorithm parameters for all players (such as steplengths and regularization parameters) have to be chosen centrally and communicated to all players; importantly, these parameters cannot be independently chosen by a player. Motivated by these shortcomings, we present two practically implementable distributed regularization schemes that work on a single timescale; specifically, each scheme requires precisely one gradient or projection step at every iteration. Of these, the first is an iterative Tikhonov regularization (ITR) scheme, while the second is an analogously constructed iterative proximal-point (IPP) method. Both schemes are characterized by the property that the regularization/centering parameter are updated after every iteration, rather than when one has approximately solved the regularized problem. To aid in distributed settings requiring limited coordination across players, the schemes allow players to select their parameters independently and do not insist on central prescription of such parameters. We conclude with an application of these schemes on a networked Cournot game with nonlinear prices. [ABSTRACT FROM AUTHOR]
- Published
- 2012
- Full Text
- View/download PDF
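To illustrate the single-timescale idea on a minimal example (a sketch under assumptions, not the authors' scheme): for the monotone but not strongly monotone map F(x, y) = (y, -x), the gradient map of the zero-sum bilinear game min_x max_y x*y, a plain gradient step spirals away from the equilibrium, while adding a slowly vanishing Tikhonov term eps_k restores convergence, with both the step and the regularization updated at every iteration.

```python
# Iterative Tikhonov regularization (ITR) sketch on F(x, y) = (y, -x).

def F(v):
    x, y = v
    return (y, -x)

def step(v, g, eps, alpha):
    # single-timescale update: one regularized gradient step per iteration
    return (v[0] - alpha * (g[0] + eps * v[0]),
            v[1] - alpha * (g[1] + eps * v[1]))

alpha = 0.05
v_plain, v_itr = (1.0, 0.0), (1.0, 0.0)
for k in range(5000):
    eps_k = 1.0 / (k + 1) ** 0.25              # vanishing regularization
    v_plain = step(v_plain, F(v_plain), 0.0, alpha)   # no regularization
    v_itr = step(v_itr, F(v_itr), eps_k, alpha)

norm = lambda v: (v[0] ** 2 + v[1] ** 2) ** 0.5
# norm(v_plain) blows up, while norm(v_itr) shrinks to the equilibrium (0, 0)
```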
6. ITERATION-COMPLEXITY OF A NEWTON PROXIMAL EXTRAGRADIENT METHOD FOR MONOTONE VARIATIONAL INEQUALITIES AND INCLUSION PROBLEMS.
- Author
-
MONTEIRO, RENATO D. C. and SVAITER, BENAR F.
- Subjects
ITERATIVE methods (Mathematics), EQUATIONS, NONLINEAR equations, STOCHASTIC convergence, NEWTON-Raphson method, SMOOTHING (Numerical analysis), VARIATIONAL inequalities (Mathematics) - Abstract
In a recent paper by Monteiro and Svaiter, a hybrid proximal extragradient (HPE) framework was used to study the iteration-complexity of a first-order (or, in the context of optimization, second-order) method for solving monotone nonlinear equations. The purpose of this paper is to extend this analysis to study a prox-type first-order method for monotone smooth variational inequalities and inclusion problems consisting of the sum of a smooth monotone map and a maximal monotone point-to-set operator. Each iteration of the method computes an approximate solution of a proximal subproblem obtained by linearizing the smooth part of the operator in the corresponding proximal equation for the original problem, which is then used to perform an extragradient step as prescribed by the HPE framework. Both pointwise and ergodic iteration-complexity results are derived for the aforementioned first-order method using corresponding results obtained here for a subfamily of the HPE framework. [ABSTRACT FROM AUTHOR]
- Published
- 2012
- Full Text
- View/download PDF
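A minimal sketch of the extragradient step that underlies the HPE framework (illustrative assumptions, not the paper's method): on the monotone map F(x, y) = (y, -x), whose unique zero is the origin, one extrapolates with F at the current point and then takes the actual step using F evaluated at the extrapolation point.

```python
# Extragradient (predict-then-correct) sketch for the monotone map F(x, y) = (y, -x).

def F(v):
    x, y = v
    return (y, -x)

def step(v, g, alpha):
    return (v[0] - alpha * g[0], v[1] - alpha * g[1])

alpha, v = 0.3, (1.0, 0.0)
for _ in range(200):
    v_mid = step(v, F(v), alpha)     # extrapolation (prediction) point
    v = step(v, F(v_mid), alpha)     # extragradient (correction) step

norm = (v[0] ** 2 + v[1] ** 2) ** 0.5
# the iterates converge linearly to (0, 0); a plain forward step with the
# same stepsize would instead spiral outward on this map
```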
7. IDENTIFYING STRUCTURE OF NONSMOOTH CONVEX FUNCTIONS BY THE BUNDLE TECHNIQUE.
- Author
-
DANIILIDIS, ARIS, SAGASTIZÁBAL, CLAUDIA, and SOLODOV, MIKHAIL
- Subjects
CONVEX functions, NONSMOOTH optimization, MATHEMATICAL decomposition, SET theory, ITERATIVE methods (Mathematics), ALGORITHMS - Abstract
We consider the problem of minimizing nonsmooth convex functions, defined piecewise by a finite number of functions each of which is either convex quadratic or twice continuously differentiable with positive definite Hessian on the set of interest. This is a particular case of functions with primal-dual gradient structure, a notion closely related to the so-called VU space decomposition: At a given point, nonsmoothness is locally restricted to the directions of the subspace V, while along the subspace U the behavior of the function is twice differentiable. Constructive identification of the two subspaces is important, because it opens the way to devising fast algorithms for nonsmooth optimization (by following iteratively the manifold of smoothness on which superlinear U-Newton steps can be computed). In this work we show that, for the class of functions in consideration, the information needed for this identification can be obtained from the output of a standard bundle method for computing proximal points, provided a minimizer satisfies the nondegeneracy and strong transversality conditions. [ABSTRACT FROM AUTHOR]
- Published
- 2009
- Full Text
- View/download PDF
8. Proximal Methods in Vector Optimization.
- Author
-
Bonnel, Henri, Iusem, Alfredo Noel, and Svaiter, Benar Fux
- Subjects
VECTOR algebra, MATHEMATICAL optimization, ALGORITHMS, SCALAR field theory, SEQUENCE spaces - Abstract
We consider the vector optimization problem of finding weakly efficient points for maps from a Hilbert space X to a Banach space Y with respect to the partial order induced by a closed, convex, and pointed cone $C\subset Y$ with a nonempty interior. We develop for this problem an extension of the proximal point method for scalar-valued convex optimization. In this extension, the subproblems consist of finding weakly efficient points for suitable regularizations of the original map. We present both an exact and an inexact version, in which the subproblems are solved only approximately, within a constant relative tolerance. In both cases, we prove weak convergence of the generated sequence to a weakly efficient point, assuming convexity of the map with respect to C and C-completeness of the initial section. In cases where this last assumption fails, we still establish that the generating sequence is, in a suitable sense, a minimizing one. We also exhibit a particular instance of the algorithm for which, under a mild hypothesis on C, the weak limit of the generated sequence is an efficient, rather than a weakly efficient, point. [ABSTRACT FROM AUTHOR]
- Published
- 2005
- Full Text
- View/download PDF
9. WEAK CONVERGENCE OF A RELAXED AND INERTIAL HYBRID PROJECTION-PROXIMAL POINT ALGORITHM FOR MAXIMAL MONOTONE OPERATORS IN HILBERT SPACE.
- Author
-
Alvarez, Felipe
- Subjects
HILBERT space, STOCHASTIC convergence, EXTRAPOLATION, GRAPHICAL projection, MONOTONE operators - Abstract
This paper introduces a general implicit iterative method for finding zeros of a maximal monotone operator in a Hilbert space which unifies three previously studied strategies: relaxation, inertial type extrapolation and projection step. The first two strategies are intended to speed up the convergence of the standard proximal point algorithm, while the third permits one to perform inexact proximal iterations with fixed relative error tolerance. The paper establishes the global convergence of the method for the weak topology under appropriate assumptions on the algorithm parameters. [ABSTRACT FROM AUTHOR]
- Published
- 2003
- Full Text
- View/download PDF
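An illustrative sketch of the relaxation and inertial ingredients (assumed toy operator T(x) = x, so the resolvent J_lam(x) = x/(1+lam) is exact; the parameter names gamma and rho are chosen here for illustration, and the paper's projection step for inexact resolvents is omitted since the resolvent is computed exactly):

```python
# Relaxed, inertial proximal point sketch for T(x) = x on the real line.
# gamma: inertial (extrapolation) weight; rho: relaxation weight; lam: stepsize.

def inertial_prox(x0, lam=1.0, gamma=0.3, rho=1.0, iters=100):
    x_prev = x = x0
    for _ in range(iters):
        y = x + gamma * (x - x_prev)                 # inertial extrapolation
        jy = y / (1.0 + lam)                         # exact resolvent J_lam(y)
        x_prev, x = x, (1.0 - rho) * x + rho * jy    # relaxed update
    return x

x_final = inertial_prox(5.0)
# converges to 0, the unique zero of T
```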
10. Generalized Hessian Properties of Regularized Nonsmooth Functions
- Author
-
R. A. Poliquin and R. T. Rockafellar
- Subjects
Hessian matrix, Proximal point, Mathematical analysis, Applied mathematics, Variational analysis, Convex function, Regularization (mathematics), Software, Theoretical Computer Science, Mathematics, Nonlinear programming - Abstract
We take up the question of second-order expansions for a class of functions of importance in optimization, namely, Moreau envelope regularizations of nonsmooth functions $f$. It is shown that when $f$ is prox-regular, which includes convex functions and the extended real-valued functions representing problems of nonlinear programming, the many second-order properties that can be formulated around the existence and stability of expansions of the envelopes of $f$ or of their gradient mappings are linked by surprisingly extensive lists of equivalences with each other and with generalized differentiation properties of $f$ itself. This clarifies the circumstances conducive to developing computational methods based on envelope functions, such as second-order approximations in nonsmooth optimization and variants of the proximal point algorithm. The results establish that generalized second-order expansions of Moreau envelopes, at least, can be counted on in most situations of interest in finite-dimensional optimization.
- Published
- 1996
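For a concrete instance of the regularization discussed above (assuming the simplest prox-regular function, $f(x) = |x|$): the Moreau envelope $e_\lambda f$ is the Huber function, its proximal mapping is soft-thresholding, and $\nabla e_\lambda f(x) = (x - \mathrm{prox}_{\lambda f}(x))/\lambda$ is Lipschitz even though $f$ is nonsmooth at 0.

```python
# Moreau envelope of f(x) = |x|: e_lam(x) = min_y |y| + (y - x)**2 / (2*lam).

def prox_abs(x, lam):
    # soft-thresholding: the minimizing y in the envelope definition
    if x > lam:
        return x - lam
    if x < -lam:
        return x + lam
    return 0.0

def moreau_env(x, lam):
    p = prox_abs(x, lam)
    return abs(p) + (p - x) ** 2 / (2.0 * lam)

def grad_env(x, lam):
    return (x - prox_abs(x, lam)) / lam   # C^1 gradient of the envelope

# e_lam is the Huber function: quadratic for |x| <= lam, linear beyond
```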
11. New Proximal Point Algorithms for Convex Minimization
- Author
-
Osman Güler
- Subjects
Combinatorics, Discrete mathematics, Proximal point, Convex optimization, Convex function, Algorithm, Software, Theoretical Computer Science, Mathematics - Abstract
This paper introduces two new proximal point algorithms for minimizing a proper, lower-semicontinuous convex function $f: \mathbf{R}^n \to \mathbf{R} \cup \{ +\infty \}$. Under this minimal assumption on $f$, ...
- Published
- 1992
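A hedged sketch in the spirit of such accelerated schemes (a Nesterov-style momentum wrapper around an exact proximal step for the assumed toy function $f(x) = \tfrac{1}{2}x^2$, not Güler's exact algorithm; the momentum sequence $t_k$ is the standard one):

```python
# Accelerated proximal point sketch for f(x) = 0.5 * x**2, whose proximal
# map is prox_lam(x) = x / (1 + lam). Momentum extrapolates between iterates.

def accel_prox(x0, lam=1.0, iters=30):
    x, y, t = x0, x0, 1.0
    for _ in range(iters):
        x_next = y / (1.0 + lam)                          # exact proximal step
        t_next = 0.5 * (1.0 + (1.0 + 4.0 * t * t) ** 0.5)
        y = x_next + ((t - 1.0) / t_next) * (x_next - x)  # momentum extrapolation
        x, t = x_next, t_next
    return x

x_final = accel_prox(4.0)
# decays toward the minimizer 0 at the O(1/k^2) rate such schemes guarantee
```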