17 results for "Proximal point"
Search Results
2. A proximal alternating linearization method for nonconvex optimization problems.
- Author
Li, Dan, Pang, Li-Ping, and Chen, Shuang
- Subjects
NONSMOOTH optimization; NONCONVEX programming; APPROXIMATION theory; ITERATIVE methods (Mathematics); STOCHASTIC convergence; NUMERICAL analysis
- Abstract
In this paper, we focus on the problems of minimizing the sum of two nonsmooth functions which are possibly nonconvex. These problems arise in many applications of practical interests. We present a proximal alternating linearization algorithm which alternately generates two approximate proximal points of the original objective function. It is proved that the accumulation points of iterations converge to a stationary point of the problem. Numerical experiments validate the theoretical convergence analysis and verify the implementation of the proposed algorithm. [ABSTRACT FROM PUBLISHER]
- Published
- 2014
- Full Text
- View/download PDF
3. Backward–backward splitting in Hadamard spaces.
- Author
Banert, Sebastian
- Subjects
SPLITTING extrapolation method; NUMERICAL analysis; HADAMARD matrices; HILBERT space; FUNCTION spaces; ALGORITHMS; MAXIMA & minima; CONVEX functions
- Abstract
The backward–backward algorithm is a tool for finding minima of a regularization of the sum of two convex functions in Hilbert spaces. We generalize this setting to Hadamard spaces and prove the convergence of an error-tolerant version of the backward–backward method. [Copyright Elsevier]
- Published
- 2014
- Full Text
- View/download PDF
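Banert's setting is Hadamard spaces, but the mechanics of backward–backward splitting are easiest to see in one dimension. The sketch below is my own toy example (two convex quadratics on the real line, not from the paper): each step composes the two proximal (backward) operators, and the iterates converge to the minimizer of a Moreau-type regularization of the sum.

```python
# Toy backward-backward iteration for f(x) = 0.5*(x - a)^2, g(x) = 0.5*(x - b)^2.
def prox_quadratic(v, center, lam):
    """Proximal operator of 0.5*(x - center)**2 with stepsize lam."""
    return (v + lam * center) / (1.0 + lam)

def backward_backward(a, b, lam=1.0, x0=0.0, iters=200):
    x = x0
    for _ in range(iters):
        # backward step on f, then backward step on g
        x = prox_quadratic(prox_quadratic(x, a, lam), b, lam)
    return x

# Fixed point for these quadratics works out to (a + b + lam*b) / (2 + lam).
x_star = backward_backward(a=0.0, b=4.0, lam=1.0)   # about 8/3
```

The composed map is a contraction with factor 1/(1 + lam)^2 here, so convergence is geometric; in the general Hadamard-space setting of the paper no such closed form exists.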
4. AN ACCELERATED HYBRID PROXIMAL EXTRAGRADIENT METHOD FOR CONVEX OPTIMIZATION AND ITS IMPLICATIONS TO SECOND-ORDER METHODS.
- Author
MONTEIRO, RENATO D. C. and SVAITER, B. F.
- Subjects
CONVEX functions; MATHEMATICAL optimization; EQUILIBRIUM; MATHEMATICAL inequalities; ALGORITHMS
- Abstract
This paper presents an accelerated variant of the hybrid proximal extragradient (HPE) method for convex optimization, referred to as the accelerated HPE (A-HPE) framework. Iteration-complexity results are established for the A-HPE framework, as well as a special version of it, where a large stepsize condition is imposed. Two specific implementations of the A-HPE framework are described in the context of a structured convex optimization problem whose objective function consists of the sum of a smooth convex function and an extended real-valued nonsmooth convex function. In the first implementation, a generalization of a variant of Nesterov's method is obtained for the case where the smooth component of the objective function has Lipschitz continuous gradient. In the second implementation, an accelerated Newton proximal extragradient (A-NPE) method is obtained for the case where the smooth component of the objective function has Lipschitz continuous Hessian. It is shown that the A-NPE method has an O(1/k^{7/2}) convergence rate, which improves upon the O(1/k^3) convergence rate bound for another accelerated Newton-type method presented by Nesterov. Finally, while Nesterov's method is based on exact solutions of subproblems with cubic regularization terms, the A-NPE method is based on inexact solutions of subproblems with quadratic regularization terms and hence is potentially more tractable from a computational point of view. [ABSTRACT FROM AUTHOR]
- Published
- 2013
- Full Text
- View/download PDF
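The structured problem described in this abstract (smooth convex plus nonsmooth convex) is the classic setting for accelerated proximal-gradient schemes. The following sketch is a generic FISTA-style method in that spirit, not the A-HPE framework itself; the one-dimensional objective, the step constant L, and the weight mu are my illustrative assumptions.

```python
# Accelerated proximal-gradient sketch for min_x 0.5*(x - 3)^2 + mu*|x|.
def soft_threshold(v, t):
    """Proximal operator of t*|.| (soft thresholding)."""
    if v > t:
        return v - t
    if v < -t:
        return v + t
    return 0.0

def fista(grad_f, L, mu, x0, iters=100):
    x, y, t = x0, x0, 1.0
    for _ in range(iters):
        # forward (gradient) step on the smooth part, backward (prox) on mu*|.|
        x_new = soft_threshold(y - grad_f(y) / L, mu / L)
        t_new = (1.0 + (1.0 + 4.0 * t * t) ** 0.5) / 2.0
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)   # Nesterov extrapolation
        x, t = x_new, t_new
    return x

# The minimizer of 0.5*(x - 3)^2 + 0.5*|x| is x = 2.5.
sol = fista(grad_f=lambda v: v - 3.0, L=2.0, mu=0.5, x0=0.0)
```

Passing L = 2 deliberately overestimates the true Lipschitz constant (which is 1); the iteration still converges to the same minimizer, only more slowly.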
5. Hybrid extragradient proximal algorithm coupled with parametric approximation and penalty/barrier methods.
- Author
Carrasco, Miguel
- Subjects
ALGORITHMS; APPROXIMATION theory; CONVEX functions; STOCHASTIC convergence; SET theory; TIKHONOV regularization; MATHEMATICAL proofs
- Abstract
In this article we study the hybrid extragradient method coupled with approximation and penalty schemes for convex minimization problems. Under certain hypotheses, which include, for example, the case of Tikhonov regularization, we prove asymptotic convergence of the method to the solution set of our minimization problem. When we use schemes of penalization or barrier, we can show asymptotic convergence using the well-known fast/slow parameterization techniques and exploiting the existence and finite length of an optimal path. [ABSTRACT FROM AUTHOR]
- Published
- 2013
- Full Text
- View/download PDF
6. DISTRIBUTED COMPUTATION OF EQUILIBRIA IN MONOTONE NASH GAMES VIA ITERATIVE REGULARIZATION TECHNIQUES.
- Author
KANNAN, ASWIN and SHANBHAG, UDAY V.
- Subjects
VARIATIONAL inequalities (Mathematics); DISTRIBUTED algorithms; TIKHONOV regularization; CONVEX sets; MATHEMATICAL functions; EQUILIBRIUM
- Abstract
We consider the development of single-timescale schemes for the distributed computation of equilibria associated with Nash games in which each player solves a convex program. Equilibria associated with such games are wholly captured by the solution set of a variational inequality. Our focus is on a class of games, termed monotone Nash games, that lead to monotone variational inequalities. Distributed extensions of standard approaches for solving such variational problems are characterized by two challenges: (1) Unless suitable assumptions (such as strong monotonicity) are imposed on the mapping arising in the specification of the variational inequality, iterative methods often require the solution of a sequence of regularized problems, a naturally two-timescale process that is harder to implement in practice. (2) Additionally, algorithm parameters for all players (such as steplengths and regularization parameters) have to be chosen centrally and communicated to all players; importantly, these parameters cannot be independently chosen by a player. Motivated by these shortcomings, we present two practically implementable distributed regularization schemes that work on a single timescale; specifically, each scheme requires precisely one gradient or projection step at every iteration. Of these, the first is an iterative Tikhonov regularization (ITR) scheme, while the second is an analogously constructed iterative proximal-point (IPP) method. Both schemes are characterized by the property that the regularization/centering parameters are updated after every iteration, rather than when one has approximately solved the regularized problem. To aid in distributed settings requiring limited coordination across players, the schemes allow players to select their parameters independently and do not insist on central prescription of such parameters. We conclude with an application of these schemes on a networked Cournot game with nonlinear prices. [ABSTRACT FROM AUTHOR]
- Published
- 2012
- Full Text
- View/download PDF
7. ITERATION-COMPLEXITY OF A NEWTON PROXIMAL EXTRAGRADIENT METHOD FOR MONOTONE VARIATIONAL INEQUALITIES AND INCLUSION PROBLEMS.
- Author
MONTEIRO, RENATO D. C. and SVAITER, BENAR F.
- Subjects
ITERATIVE methods (Mathematics); EQUATIONS; NONLINEAR equations; STOCHASTIC convergence; NEWTON-Raphson method; SMOOTHING (Numerical analysis); VARIATIONAL inequalities (Mathematics)
- Abstract
In a recent paper by Monteiro and Svaiter, a hybrid proximal extragradient (HPE) framework was used to study the iteration-complexity of a first-order (or, in the context of optimization, second-order) method for solving monotone nonlinear equations. The purpose of this paper is to extend this analysis to study a prox-type first-order method for monotone smooth variational inequalities and inclusion problems consisting of the sum of a smooth monotone map and a maximal monotone point-to-set operator. Each iteration of the method computes an approximate solution of a proximal subproblem obtained by linearizing the smooth part of the operator in the corresponding proximal equation for the original problem, which is then used to perform an extragradient step as prescribed by the HPE framework. Both pointwise and ergodic iteration-complexity results are derived for the aforementioned first-order method using corresponding results obtained here for a subfamily of the HPE framework. [ABSTRACT FROM AUTHOR]
- Published
- 2012
- Full Text
- View/download PDF
8. A note on the convergence of an inertial version of a diagonal hybrid projection-point algorithm.
- Author
Carrasco, Miguel and Pichard, Karine
- Subjects
STOCHASTIC convergence; ALGORITHMS; GRAPHICAL projection; INERTIAL confinement fusion; INTEGRAL theorems
- Abstract
In this note, an inertial and relaxed version of a diagonal hybrid projection-proximal point algorithm is considered, in order to find the minimum of a function f approximated by a sequence of functions (in general, smoother than f or taking into account some constraints of the problem). Two convergence theorems are proved under different kinds of assumptions, which allows the method to be applied in various cases. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
9. Metric subregularity and the proximal point method
- Author
Leventhal, D.
- Subjects
MONOTONE operators; CONVERGENT strabismus; OPERATOR theory; DECISION theory; RESOLVENTS (Mathematics); LINEAR statistical models
- Abstract
We examine the linear convergence rates of variants of the proximal point method for finding zeros of maximal monotone operators. We begin by showing how metric subregularity is sufficient for local linear convergence to a zero of a maximal monotone operator. This result is then generalized to obtain convergence rates for the problem of finding a common zero of multiple monotone operators by considering randomized and averaged proximal methods. [Copyright Elsevier]
- Published
- 2009
- Full Text
- View/download PDF
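For intuition, the linear convergence studied in this paper can be seen on the simplest maximal monotone operator. The toy sketch below (my own example, not Leventhal's) iterates the resolvent of A(x) = c*x, which contracts the distance to the unique zero by exactly 1/(1 + lam*c) per step.

```python
# Proximal point iteration x_{k+1} = (I + lam*A)^(-1) x_k for the maximal
# monotone operator A(x) = c*x; the resolvent has the closed form below.
def resolvent(x, lam, c):
    return x / (1.0 + lam * c)

lam, c = 1.0, 1.0
x = 10.0
ratios = []
for _ in range(30):
    x_next = resolvent(x, lam, c)
    ratios.append(abs(x_next) / abs(x))   # per-step contraction factor
    x = x_next
# every ratio equals 1/(1 + lam*c) = 0.5: Q-linear convergence to the zero
```

Metric subregularity relaxes this picture: it asks only for a linear error bound near the zero rather than global strong monotonicity, yet the local linear rate survives.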
10. IDENTIFYING STRUCTURE OF NONSMOOTH CONVEX FUNCTIONS BY THE BUNDLE TECHNIQUE.
- Author
DANIILIDIS, ARIS, SAGASTIZÁBAL, CLAUDIA, and SOLODOV, MIKHAIL
- Subjects
CONVEX functions; NONSMOOTH optimization; MATHEMATICAL decomposition; SET theory; ITERATIVE methods (Mathematics); ALGORITHMS
- Abstract
We consider the problem of minimizing nonsmooth convex functions, defined piecewise by a finite number of functions each of which is either convex quadratic or twice continuously differentiable with positive definite Hessian on the set of interest. This is a particular case of functions with primal-dual gradient structure, a notion closely related to the so-called VU space decomposition: At a given point, nonsmoothness is locally restricted to the directions of the subspace V, while along the subspace U the behavior of the function is twice differentiable. Constructive identification of the two subspaces is important, because it opens the way to devising fast algorithms for nonsmooth optimization (by following iteratively the manifold of smoothness on which superlinear U-Newton steps can be computed). In this work we show that, for the class of functions in consideration, the information needed for this identification can be obtained from the output of a standard bundle method for computing proximal points, provided a minimizer satisfies the nondegeneracy and strong transversality conditions. [ABSTRACT FROM AUTHOR]
- Published
- 2009
- Full Text
- View/download PDF
11. Computing proximal points of nonconvex functions.
- Author
Hare, Warren and Sagastizábal, Claudia
- Subjects
NONCONVEX programming; MATHEMATICAL programming; MATHEMATICAL optimization; CONVEX functions; STOCHASTIC convergence; MATHEMATICAL functions
- Abstract
The proximal point mapping is the basis of many optimization techniques for convex functions. By means of variational analysis, the concept of proximal mapping was recently extended to nonconvex functions that are prox-regular and prox-bounded. In such a setting, the proximal point mapping is locally Lipschitz continuous and its set of fixed points coincides with the critical points of the original function. This suggests that the many uses of proximal points, and their corresponding proximal envelopes (Moreau envelopes), will have a natural extension from convex optimization to nonconvex optimization. For example, the inexact proximal point methods for convex optimization might be redesigned to work for nonconvex functions. In order to begin the practical implementation of proximal points in a nonconvex setting, a first crucial step would be to design efficient methods of approximating nonconvex proximal points. This would provide a solid foundation on which future design and analysis for nonconvex proximal point methods could flourish. In this paper we present a methodology based on the computation of proximal points of piecewise affine models of the nonconvex function. These models can be built with only the knowledge obtained from a black box providing, for each point, the function value and one subgradient. Convergence of the method is proved for the class of nonconvex functions that are prox-bounded and lower-C², and encouraging preliminary numerical testing is reported. [ABSTRACT FROM AUTHOR]
- Published
- 2009
- Full Text
- View/download PDF
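To make "computing a proximal point of a nonconvex function" concrete, the sketch below evaluates the prox by brute force on a one-dimensional prox-bounded, lower-C² function of my choosing. It is a stand-in for illustration only; the paper's actual method builds piecewise-affine bundle models from black-box function values and subgradients.

```python
# Brute-force proximal point of the nonconvex f(y) = y^4 - y^2:
# prox_{lam f}(x) = argmin_y f(y) + (y - x)^2 / (2*lam), here by grid search.
def f(y):
    return y**4 - y**2          # prox-bounded, lower-C^2, two global minima

def prox_grid(x, lam, lo=-2.0, hi=2.0, n=40001):
    step = (hi - lo) / (n - 1)
    best_y, best_val = lo, float("inf")
    for i in range(n):
        y = lo + i * step
        val = f(y) + (y - x) ** 2 / (2.0 * lam)
        if val < best_val:
            best_y, best_val = y, val
    return best_y

# With lam = 0.25 the prox subproblem is strictly convex (1/lam dominates the
# negative curvature of f); its stationary point solves 4y^3 + 2y - 4 = 0.
p = prox_grid(x=1.0, lam=0.25)
```

The key nonconvex subtlety appears when lam is too large: the subproblem loses convexity and the prox becomes set-valued, which is exactly why prox-boundedness thresholds matter in this literature.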
12. A new family of penalties for augmented Lagrangian methods.
- Author
Matioli, L. C. and Gonzaga, C. C.
- Subjects
LAGRANGE equations; ALGORITHMS; ELLIPSOIDS; LOGARITHMS; GEOMETRIC surfaces
- Abstract
We study a family of penalty functions for augmented Lagrangian methods, and concentrate on a penalty based on the modified logarithmic barrier function. The convex conjugate of this penalty induces a Bregman distance, and the dual iterates associated with the augmented Lagrangian algorithm correspond to the iterates produced by a proximal point algorithm based on this distance. The global convergence of the dual iterates is then proved. Moreover, the level curves of the quadratic approximation of the dual kernels associated with these penalty functions are the Dikin ellipsoids. Copyright © 2008 John Wiley & Sons, Ltd. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
13. ASYMPTOTIC CONVERGENCE ANALYSIS OF A NEW CLASS OF PROXIMAL POINT METHODS.
- Author
Hager, William W. and Hongchao Zhang
- Subjects
STOCHASTIC convergence; NONLINEAR functional analysis; HILBERT space; BANACH spaces; INNER product spaces; INVARIANT subspaces; VECTOR spaces; INDEFINITE inner product spaces; MATHEMATICAL analysis
- Abstract
Finite dimensional local convergence results for self-adaptive proximal point methods and nonlinear functions with multiple minimizers are generalized and extended to a Hilbert space setting. The principal assumption is a local error bound condition which relates the growth in the function to the distance to the set of minimizers. A local convergence result is established for almost exact iterates. Less restrictive acceptance criteria for the proximal iterates are also analyzed. These criteria are expressed in terms of a subdifferential of the proximal function and either a subdifferential of the original function or an iteration difference. If the proximal regularization parameter μ(x) is sufficiently small and bounded away from zero and f is sufficiently smooth, then there is local linear convergence to the set of minimizers. For a locally convex function, a convergence result similar to that for almost exact iterates is established. For a locally convex solution set and smooth functions, it is shown that if the proximal regularization parameter has the form μ(x) = β∥f′(x)∥^η, where η ∈ (0, 2), then the convergence is at least superlinear if η ∈ (0, 1) and at least quadratic if η ∈ [1, 2). [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
14. Proximal Methods in Vector Optimization.
- Author
Bonnel, Henri, Iusem, Alfredo Noel, and Svaiter, Benar Fux
- Subjects
VECTOR algebra; MATHEMATICAL optimization; ALGORITHMS; SCALAR field theory; SEQUENCE spaces
- Abstract
We consider the vector optimization problem of finding weakly efficient points for maps from a Hilbert space X to a Banach space Y with respect to the partial order induced by a closed, convex, and pointed cone $C\subset Y$ with a nonempty interior. We develop for this problem an extension of the proximal point method for scalar-valued convex optimization. In this extension, the subproblems consist of finding weakly efficient points for suitable regularizations of the original map. We present both an exact and an inexact version, in which the subproblems are solved only approximately, within a constant relative tolerance. In both cases, we prove weak convergence of the generated sequence to a weakly efficient point, assuming convexity of the map with respect to C and C-completeness of the initial section. In cases where this last assumption fails, we still establish that the generating sequence is, in a suitable sense, a minimizing one. We also exhibit a particular instance of the algorithm for which, under a mild hypothesis on C, the weak limit of the generated sequence is an efficient, rather than a weakly efficient, point. [ABSTRACT FROM AUTHOR]
- Published
- 2005
- Full Text
- View/download PDF
15. On the convergence of a modified version of SVM light algorithm.
- Author
Palagi, L. and Sciandrone, M.
- Subjects
ALGORITHMS; QUADRATIC programming; MATHEMATICAL programming; OPERATIONS research; SYSTEM analysis; MATHEMATICAL optimization
- Abstract
In this work, we consider the convex quadratic programming problem arising in support vector machine (SVM), which is a technique designed to solve a variety of learning and pattern recognition problems. Since the Hessian matrix is dense and real applications lead to large-scale problems, several decomposition methods have been proposed, which split the original problem into a sequence of smaller subproblems. SVM light algorithm is a commonly used decomposition method for SVM, and its convergence has been proved only recently under a suitable block-wise convexity assumption on the objective function. In SVM light algorithm, the size q of the working set, i.e. the dimension of the subproblem, can be any even number. In the present paper, we propose a decomposition method based on a proximal point modification of the subproblem and on a working set selection rule that includes, as a particular case, the one used by the SVM light algorithm. We establish the asymptotic convergence of the method, for any size q ≥ 2 of the working set, and without requiring any further block-wise convexity assumption on the objective function. Furthermore, we show that the algorithm satisfies in a finite number of iterations a stopping criterion based on the violation of the optimality conditions. [ABSTRACT FROM AUTHOR]
- Published
- 2005
- Full Text
- View/download PDF
16. WEAK CONVERGENCE OF A RELAXED AND INERTIAL HYBRID PROJECTION-PROXIMAL POINT ALGORITHM FOR MAXIMAL MONOTONE OPERATORS IN HILBERT SPACE.
- Author
Alvarez, Felipe
- Subjects
HILBERT space; STOCHASTIC convergence; EXTRAPOLATION; GRAPHICAL projection; MONOTONE operators
- Abstract
This paper introduces a general implicit iterative method for finding zeros of a maximal monotone operator in a Hilbert space which unifies three previously studied strategies: relaxation, inertial type extrapolation and projection step. The first two strategies are intended to speed up the convergence of the standard proximal point algorithm, while the third permits one to perform inexact proximal iterations with fixed relative error tolerance. The paper establishes the global convergence of the method for the weak topology under appropriate assumptions on the algorithm parameters. [ABSTRACT FROM AUTHOR]
- Published
- 2003
- Full Text
- View/download PDF
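Of the three strategies this paper unifies, inertial extrapolation is the easiest to isolate. A minimal sketch of an inertial proximal point step, on a one-dimensional quadratic with parameters alpha and lam that are my hypothetical choices rather than anything from the paper's general operator setting:

```python
# Inertial proximal point: x_{k+1} = prox_{lam f}(x_k + alpha*(x_k - x_{k-1}))
# for f(x) = 0.5*(x - a)^2, whose prox is (v + lam*a) / (1 + lam).
def inertial_prox(a, lam=1.0, alpha=0.3, x0=10.0, iters=200):
    x_prev, x = x0, x0
    for _ in range(iters):
        y = x + alpha * (x - x_prev)                  # inertial extrapolation
        x_prev, x = x, (y + lam * a) / (1.0 + lam)    # proximal (implicit) step
    return x

z = inertial_prox(a=2.0)   # converges to the unique minimizer a = 2
```

The extrapolated point y overshoots in the direction of recent progress before the proximal step pulls it back, which is the "heavy ball" effect the paper combines with relaxation and a projection step.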
17. Reconstruction of functions from prescribed proximal points.
- Author
Combettes, Patrick L. and Woodstock, Zev C.
- Subjects
HARMONIC analysis (Mathematics); FUNCTION spaces; ALGORITHMS; HILBERT functions; HILBERT space; NONEXPANSIVE mappings
- Abstract
Under investigation is the problem of finding the best approximation of a function in a Hilbert space subject to convex constraints and prescribed nonlinear transformations. We show that in many instances these prescriptions can be represented using firmly nonexpansive operators, even when the original observation process is discontinuous. The proposed framework thus captures a large body of classical and contemporary best approximation problems arising in areas such as harmonic analysis, statistics, interpolation theory, and signal processing. The resulting problem is recast in terms of a common fixed point problem and solved with a new block-iterative algorithm that features approximate projections onto the individual sets as well as an extrapolated relaxation scheme that exploits the possible presence of affine constraints. A numerical application to signal recovery is demonstrated. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
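Since projections onto convex sets are the canonical firmly nonexpansive operators, a tiny alternating-projection loop conveys the common-fixed-point flavor of this framework. The example below is a drastically simplified stand-in of my own, not the paper's block-iterative algorithm:

```python
# Projections onto convex sets are firmly nonexpansive; alternating two of
# them drives the iterate into the intersection of the constraint sets.
def proj_interval(x, lo, hi):
    """Projection onto the closed interval [lo, hi]."""
    return min(max(x, lo), hi)

def proj_halfline(x, a):
    """Projection onto the half-line {y : y >= a}."""
    return max(x, a)

x = -5.0
for _ in range(50):
    x = proj_halfline(proj_interval(x, -1.0, 2.0), 0.5)
# x now lies in [0.5, 2.0], the intersection of both sets
```

The paper's contribution is to represent far more general prescriptions (including discontinuous observation processes) by firmly nonexpansive operators, so the same fixed-point machinery applies well beyond plain projections.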