16 results for "Local convergence"
Search Results
2. APPROXIMATE MATRIX AND TENSOR DIAGONALIZATION BY UNITARY TRANSFORMATIONS: CONVERGENCE OF JACOBI-TYPE ALGORITHMS.
- Author
-
USEVICH, KONSTANTIN, LI, JIANZE, and COMON, PIERRE
- Subjects
- *
UNITARY transformations , *COMPLEX matrices , *ALGORITHMS , *MATRICES (Mathematics) - Abstract
We propose a gradient-based Jacobi algorithm for a class of maximization problems on the unitary group, with a focus on approximate diagonalization of complex matrices and tensors by unitary transformations. We provide weak convergence results, and prove local linear convergence of this algorithm. The convergence results also apply to the case of real-valued tensors. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
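As context for the abstract above, a minimal sketch of a classical Jacobi-type iteration, restricted to the real symmetric matrix case (a simplification: the paper itself treats complex matrices and tensors under unitary transformations, and the function name below is illustrative):

```python
import numpy as np

def jacobi_diagonalize(A, sweeps=10):
    """Cyclic Jacobi sweeps for a real symmetric matrix A.

    Each Givens rotation annihilates one off-diagonal pair, which
    monotonically increases the sum of squared diagonal entries; the
    rotations accumulate into an orthogonal Q with Q.T @ A @ Q ~ diagonal.
    """
    A = np.array(A, dtype=float)
    n = A.shape[0]
    Q = np.eye(n)
    for _ in range(sweeps):
        for p in range(n - 1):
            for q in range(p + 1, n):
                if abs(A[p, q]) < 1e-15:
                    continue
                # rotation angle that zeroes the (p, q) entry
                theta = 0.5 * np.arctan2(2 * A[p, q], A[q, q] - A[p, p])
                c, s = np.cos(theta), np.sin(theta)
                G = np.eye(n)
                G[p, p] = G[q, q] = c
                G[p, q], G[q, p] = s, -s
                A = G.T @ A @ G
                Q = Q @ G
    return np.diag(A), Q

# demo: diagonalize a random symmetric matrix
rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
A_sym = M + M.T
d, Q = jacobi_diagonalize(A_sym)
```

The gradient-based rotation-selection rule that underpins the paper's local linear convergence result is not reproduced here; a plain cyclic sweep is used instead.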
3. ANDERSON-ACCELERATED CONVERGENCE OF PICARD ITERATIONS FOR INCOMPRESSIBLE NAVIER-STOKES EQUATIONS.
- Author
-
POLLOCK, SARA, REBHOLZ, LEO G., and XIAO, MENGYING
- Subjects
- *
NAVIER-Stokes equations , *REYNOLDS number - Abstract
We propose, analyze, and test Anderson-accelerated Picard iterations for solving the incompressible Navier-Stokes equations (NSE). Anderson acceleration has recently gained interest as a strategy to accelerate linear and nonlinear iterations, based on including an optimization step in each iteration. We extend the Anderson acceleration theory to the steady NSE setting and prove that the acceleration improves the convergence rate of the Picard iteration based on the success of the underlying optimization problem. The convergence is demonstrated in several numerical tests, with particularly marked improvement in the higher Reynolds number regime. Our tests show it can be an enabling technology in the sense that it can provide convergence when both usual Picard and Newton iterations fail. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
4. ALTERNATING LEAST SQUARES AS MOVING SUBSPACE CORRECTION.
- Author
-
OSELEDETS, IVAN V., RAKHUBA, MAXIM V., and USCHMAJEW, ANDRÉ
- Subjects
- *
MATHEMATICAL analysis , *ITERATIVE methods (Mathematics) , *MATHEMATICS , *APPROXIMATION theory , *STOCHASTIC convergence - Abstract
In this note we take a new look at the local convergence of alternating optimization methods for low-rank matrices and tensors. Our abstract interpretation as sequential optimization on moving subspaces yields insightful reformulations of some known convergence conditions that focus on the interplay between the contractivity of classical multiplicative Schwarz methods with overlapping subspaces and the curvature of low-rank matrix and tensor manifolds. While the verification of the abstract conditions in concrete scenarios remains open in most cases, we are able to provide an alternative and conceptually simple derivation of the asymptotic convergence rate of the two-sided block power method of numerical algebra for computing the dominant singular subspaces of a rectangular matrix. This method is equivalent to an alternating least squares method applied to a distance function. The theoretical results are illustrated and validated by numerical experiments. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
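The two-sided block power method named in the abstract can be sketched as follows, assuming a gap after the k-th singular value so the dominant subspaces are well defined (the function name is illustrative):

```python
import numpy as np

def block_power_svd(A, k, iters=200):
    """Two-sided block power method: alternately orthonormalize A @ V and
    A.T @ U to approximate the dominant k-dimensional singular subspaces."""
    rng = np.random.default_rng(0)
    V, _ = np.linalg.qr(rng.standard_normal((A.shape[1], k)))
    for _ in range(iters):
        U, _ = np.linalg.qr(A @ V)      # left subspace update
        V, _ = np.linalg.qr(A.T @ U)    # right subspace update
    # the projected matrix is nearly diagonal; its singular values
    # approximate the k dominant singular values of A
    s = np.linalg.svd(U.T @ A @ V, compute_uv=False)
    return U, V, s

A = np.random.default_rng(1).standard_normal((20, 10))
U, V, s = block_power_svd(A, 3)
```

Each half-step is a least squares fit followed by orthonormalization, which is the alternating least squares connection drawn in the abstract.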
5. ON ERROR BOUNDS AND MULTIPLIER METHODS FOR VARIATIONAL PROBLEMS IN BANACH SPACES.
- Author
-
KANZOW, CHRISTIAN and STECK, DANIEL
- Subjects
- *
BANACH spaces , *MATHEMATICAL inequalities , *LAGRANGE equations , *NASH equilibrium , *NUMERICAL analysis - Abstract
This paper deals with a general form of variational problems in Banach spaces which encompasses variational inequalities as well as minimization problems. We prove a characterization of local error bounds for the distance to the (primal-dual) solution set and give a sufficient condition for such an error bound to hold. In the second part of the paper, we consider an algorithm of augmented Lagrangian type for the solution of such variational problems. We give some global convergence properties of the method and then use the error bound theory to provide estimates for the rate of convergence and to deduce boundedness of the sequence of penalty parameters. Finally, numerical results for optimal control, Nash equilibrium problems, and elliptic parameter estimation problems are presented. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
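For orientation, a minimal finite-dimensional sketch of an augmented-Lagrangian iteration of the kind analyzed above, on a toy equality-constrained problem (the problem and function name are illustrative; the paper works with general variational problems in Banach spaces):

```python
import numpy as np

def augmented_lagrangian_demo(rho=10.0, iters=20):
    """Augmented Lagrangian method for min ||x||^2 subject to x1 + x2 = 1.

    Each outer step minimizes L(x; lam) = ||x||^2 + lam*c(x) + (rho/2)*c(x)^2
    exactly (here the subproblem is a linear solve), then updates the
    multiplier: lam <- lam + rho * c(x).  KKT solution: x = (1/2, 1/2), lam = -1.
    """
    e = np.ones(2)
    lam = 0.0
    for _ in range(iters):
        # stationarity of the subproblem: 2x + (lam + rho*(e@x - 1)) * e = 0
        x = np.linalg.solve(2.0 * np.eye(2) + rho * np.outer(e, e), (rho - lam) * e)
        lam += rho * (e @ x - 1.0)   # multiplier update
    return x, lam

x_opt, lam_opt = augmented_lagrangian_demo()
```

With an exact inner solve, the multiplier error on this problem contracts by the factor 1/(1 + rho) per outer iteration, a toy instance of the rate estimates the paper derives from error bounds.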
6. A BLOCK PRECONDITIONED HARMONIC PROJECTION METHOD FOR LARGE-SCALE NONLINEAR EIGENVALUE PROBLEMS.
- Author
-
XUE, FEI
- Subjects
- *
EIGENVALUE equations , *NONLINEAR theories - Abstract
We propose a block preconditioned harmonic projection (BPHP) method for solving large-scale nonlinear eigenproblems of the form T(λ)v = 0. Similar to classical preconditioned eigensolvers such as the locally optimal block preconditioned conjugate gradient method and preconditioned Lanczos, BPHP aims at computing a few eigenvalues of the nonlinear problem close to a specified shift, using preconditioners that enhance the local spectrum, without the need for exact solution of large shifted linear systems. We explore the development of search subspaces, stabilized preconditioning, nonlinear harmonic Rayleigh-Ritz projections, thick restart, and soft deflation capable of resolving linearly dependent eigenvectors. Numerical experiments show that BPHP with a good preconditioner is storage efficient, and it exhibits robust convergence. A moving-window-style partial deflation enables BPHP to reliably compute a large number of eigenvalues. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
7. A SUPERQUADRATIC VARIANT OF NEWTON'S METHOD.
- Author
-
POTRA, FLORIAN A.
- Subjects
- *
NEWTON-Raphson method , *OPERATOR equations , *BANACH spaces , *ITERATIVE methods (Mathematics) , *STOCHASTIC convergence - Abstract
This paper presents a Q-superquadratically convergent version of Newton's method for solving operator equations in Banach spaces that requires only one operator value and one inverse of the Fréchet derivative per iteration. The R-order of convergence is at least 1+√2. Both a semilocal and a local analysis of the new method are given. The semilocal analysis is done along the lines of the Newton-Kantorovich theorem and provides sufficient conditions for the existence of a solution and the convergence of the iterates. The Q-superquadratic convergence is obtained by assuming that the second Fréchet derivative is Lipschitz continuous. The local analysis assumes that a solution exists and shows that the method converges from any starting point belonging to an explicitly defined neighborhood of the solution called the ball of attraction. To our knowledge this is the first superquadratically convergent method that requires about the same work per iteration as Newton's method. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
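For comparison with the superquadratic variant described above (which is not reproduced here), a minimal classical Newton iteration in one dimension; like the paper's method it costs one function value and one derivative inverse per step, but converges Q-quadratically rather than superquadratically:

```python
def newton(f, fprime, x0, tol=1e-14, maxit=50):
    """Classical Newton iteration x_{k+1} = x_k - f(x_k) / f'(x_k)."""
    x = x0
    for _ in range(maxit):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:   # stop once the update is negligible
            break
    return x

# root of x^2 - 2, starting inside the ball of attraction around sqrt(2)
r = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.5)
```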
8. CONVERGENCE ANALYSIS FOR THE MULTIPLICATIVE SCHWARZ PRECONDITIONED INEXACT NEWTON ALGORITHM.
- Author
-
LIU, LULU and KEYES, DAVID E.
- Subjects
- *
STOCHASTIC convergence , *MULTIPLICATION , *MATHEMATICAL decomposition , *NONLINEAR theories , *QUADRATIC equations - Abstract
The multiplicative Schwarz preconditioned inexact Newton (MSPIN) algorithm, based on decomposition by field type rather than by subdomain, was recently introduced to improve the convergence of systems with unbalanced nonlinearities. This paper provides a convergence analysis of the MSPIN algorithm. Under reasonable assumptions, it is shown that MSPIN is locally convergent, and desired superlinear or even quadratic convergence can be obtained when the forcing terms are picked suitably. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
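The role of the forcing terms mentioned in the abstract can be illustrated with a generic inexact Newton iteration (a sketch only, not the MSPIN algorithm itself: the field-split preconditioning is omitted and the names below are illustrative). The inner solve is only required to meet a relative residual tolerance eta_k, and choosing eta_k = O(||F(x_k)||) yields locally quadratic convergence:

```python
import numpy as np

def inexact_newton(F, J, x0, tol=1e-12, maxit=50):
    """Inexact Newton: accept any step s with ||J(x)s + F(x)|| <= eta_k ||F(x)||.

    The inexactness of an iterative inner solver is emulated here by
    perturbing the exact step within the allowed residual budget.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(maxit):
        Fx = F(x)
        nF = np.linalg.norm(Fx)
        if nF < tol:
            break
        eta = min(0.5, nF)                      # forcing term eta_k = O(||F||)
        s = np.linalg.solve(J(x), -Fx)          # exact Newton step
        r = eta * nF * np.ones_like(s) / np.sqrt(s.size)  # residual of norm eta*||F||
        x = x + s + np.linalg.solve(J(x), r)    # inexact step
    return x

F = lambda x: np.array([x[0] ** 2 + x[1] ** 2 - 1.0, x[0] - x[1]])
J = lambda x: np.array([[2.0 * x[0], 2.0 * x[1]], [1.0, -1.0]])
x_star = inexact_newton(F, J, [0.8, 0.6])
```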
9. INEXACT NEWTON METHODS AND DENNIS-MORÉ THEOREMS FOR NONSMOOTH GENERALIZED EQUATIONS.
- Author
-
CIBULKA, R., DONTCHEV, A. L., and GEOFFROY, M. H.
- Subjects
- *
PARTIAL differential equations , *LABS on a chip , *SURFACE tension , *STRUCTURAL optimization , *BOUNDARY value problems , *SOLID-liquid interfaces - Abstract
In this paper we study local convergence of inexact Newton methods of the form (f(x_k) + A_k(x_{k+1} - x_k) + F(x_{k+1})) ∩ R_k(x_k) ≠ ∅ with A_k ∈ H(x_k) for solving the generalized equation f(x) + F(x) ∋ 0 in Banach spaces, where the function f is continuous but not necessarily smooth and F is a set-valued mapping with closed graph. The mapping H plays the role of a generalized set-valued derivative of f which in finite dimensions may be represented by Clarke's generalized Jacobian, while in Banach spaces it may be identified with Ioffe's strict prederivative. The set-valued mappings R_k represent inexactness. We utilize conditions divided into three groups: the first concerns the kind of nonsmoothness of the function f, the second involves metric regularity properties of an approximation of the mapping f + F, and the third is about the sequence of mappings R_k. Under various combinations of these conditions we show linear, superlinear, or quadratic convergence of the method. In the second part of the paper we give two generalizations of the Dennis-Moré theorem. As corollaries, we obtain results regarding convergence of inexact semismooth quasi-Newton-type methods and Dennis-Moré theorems for semismooth equations. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
10. A MAX-PLUS DUAL SPACE FUNDAMENTAL SOLUTION FOR A CLASS OF OPERATOR DIFFERENTIAL RICCATI EQUATIONS.
- Author
-
DOWER, PETER M. and MCENEANEY, WILLIAM M.
- Subjects
- *
QUASI-Newton methods , *JACOBIAN matrices , *BANACH spaces , *SET-valued maps , *ITERATIVE methods (Mathematics) - Abstract
A new fundamental solution semigroup for operator differential Riccati equations is developed. This fundamental solution semigroup is constructed via an auxiliary finite horizon optimal control problem whose value functional growth with respect to time horizon is determined by a particular solution of the operator differential Riccati equation of interest. By exploiting semiconvexity of this value functional, and the attendant max-plus linearity and semigroup properties of the associated dynamic programming evolution operator, a semigroup of max-plus integral operators is constructed in a dual space defined via the Legendre-Fenchel transform. It is demonstrated that this semigroup of max-plus integral operators can be used to propagate all solutions of the operator differential Riccati equation that are initialized from a specified class of initial conditions. As this semigroup of max-plus integral operators can be identified with a semigroup of quadratic kernels, an explicit recipe for the aforementioned solution propagation is also rendered possible. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
11. CONVERGENCE ANALYSIS FOR ANDERSON ACCELERATION.
- Author
-
TOTH, ALEX and KELLEY, C. T.
- Subjects
- *
STOCHASTIC convergence , *FIXED point theory , *ITERATIVE methods (Mathematics) , *COEFFICIENTS (Statistics) , *MATHEMATICAL optimization - Abstract
Anderson(m) is a method for acceleration of fixed point iteration which stores m+1 prior evaluations of the fixed point map and computes the new iteration as a linear combination of those evaluations. Anderson(0) is fixed point iteration. In this paper we show that Anderson(m) is locally r-linearly convergent if the fixed point map is a contraction and the coefficients in the linear combination remain bounded. Without assumptions on the coefficients, we prove q-linear convergence of Anderson(1) and, in the case of linear problems, Anderson(m). We observe that the optimization problem for the coefficients can be formulated and solved in nonstandard ways and report on numerical experiments which illustrate the ideas. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
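The definition in the abstract — Anderson(m) stores m+1 prior evaluations of the fixed-point map and forms the new iterate as a linear combination of them — can be sketched directly (a minimal implementation in the common least-squares form; the function name is illustrative):

```python
import numpy as np

def anderson(g, x0, m=2, tol=1e-10, maxit=100):
    """Anderson(m) acceleration of the fixed-point iteration x = g(x).

    Keeps the last m+1 iterates and map evaluations; the coefficients of
    the linear combination are chosen to minimize the 2-norm of the
    combined residual. Anderson(0) reduces to plain fixed-point iteration.
    """
    xs = [np.atleast_1d(np.asarray(x0, dtype=float))]
    gs = [np.atleast_1d(g(xs[0]))]
    for _ in range(maxit):
        fs = [gi - xi for gi, xi in zip(gs, xs)]     # residuals f_i = g(x_i) - x_i
        if np.linalg.norm(fs[-1]) < tol:
            break
        mk = len(xs) - 1                             # current window size (<= m)
        if mk == 0:
            x_new = gs[-1]                           # plain fixed-point step
        else:
            dF = np.column_stack([fs[i + 1] - fs[i] for i in range(mk)])
            gamma, *_ = np.linalg.lstsq(dF, fs[-1], rcond=None)
            dG = np.column_stack([gs[i + 1] - gs[i] for i in range(mk)])
            x_new = gs[-1] - dG @ gamma              # combined evaluation
        xs.append(x_new)
        gs.append(np.atleast_1d(g(x_new)))
        xs, gs = xs[-(m + 1):], gs[-(m + 1):]
    return xs[-1]

# the map x -> cos(x) is a contraction near its unique fixed point
fp = anderson(np.cos, 1.0, m=2)
```

Boundedness of these least-squares coefficients is exactly the hypothesis under which the abstract's local r-linear convergence result applies.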
12. ABSTRACT NEWTONIAN FRAMEWORKS AND THEIR APPLICATIONS.
- Author
-
IZMAILOV, A. F. and KURENNOY, A. S.
- Subjects
- *
ITERATIVE methods (Mathematics) , *STOCHASTIC convergence , *STABILITY theory , *LAGRANGE multiplier , *DERIVATIVES (Mathematics) - Abstract
We unify and extend some Newtonian iterative frameworks developed earlier in the literature, which results in a collection of convenient tools for local convergence analysis of various algorithms under various sets of assumptions including strong metric regularity, semistability, or upper-Lipschitz stability, the latter allowing for nonisolated solutions. These abstract schemes are further applied for deriving sharp local convergence results for some constrained optimization algorithms under reduced smoothness hypotheses. Specifically, we consider applications to the augmented Lagrangian method and to the linearly constrained Lagrangian method for problems with Lipschitzian derivatives but possibly without second derivatives, and our local convergence analysis for these methods improves all the existing theories of this kind. [ABSTRACT FROM AUTHOR]
- Published
- 2013
- Full Text
- View/download PDF
13. ON STRUCTURE AND COMPUTATION OF GENERALIZED NASH EQUILIBRIA.
- Author
-
DORSCH, D., JONGEN, H. TH., and SHIKHMAN, D. V.
- Subjects
- *
NASH equilibrium , *NONSMOOTH optimization , *LIPSCHITZ spaces , *STOCHASTIC convergence , *MANIFOLDS (Mathematics) - Abstract
We consider generalized Nash equilibrium problems (GNEP) from a structural and computational point of view. In GNEP the players' feasible sets may depend on the other players' strategies. Moreover, the players may share common constraints. In particular, the latter leads to the stable appearance of Nash equilibria which are Fritz-John (FJ) points, but not Karush-Kuhn-Tucker (KKT) points. Basic to our approach is the representation of FJ points as zeros of an appropriate underdetermined system of nonsmooth equations. Here, additional nonsmooth variables are taken into account. We prove that the set of FJ points (together with corresponding active Lagrange multipliers)--generically--constitutes a Lipschitz manifold. Its dimension is (N - 1)|J0|, where N is the number of players and |J0| is the number of active common constraints. In a structural analysis of Nash equilibria the number (N - 1)|J0| plays a crucial role. In fact, this number encodes both the possible degeneracies for the players' parametric subproblems and the dimension of the set of Nash equilibria. In particular, in the nondegenerate case, the dimension of the set of Nash equilibria locally equals (N - 1)|J0|. For the computation of FJ points we propose a nonsmooth projection method (NPM) which aims at finding solutions of an underdetermined system of nonsmooth equations. NPM is shown to be well defined for GNEP. Local convergence of NPM is conjectured for GNEP under generic assumptions and its proof is challenging. However, we indicate special cases (known from the literature) in which convergence holds. [ABSTRACT FROM AUTHOR]
- Published
- 2013
- Full Text
- View/download PDF
14. On Convergence of the Additive Schwarz Preconditioned Inexact Newton Method.
- Author
-
An, Heng-Bin
- Subjects
- *
NEWTON-Raphson method , *ITERATIVE methods (Mathematics) , *MATHEMATICS , *NONLINEAR evolution equations , *EQUATIONS , *ALGEBRA , *STOCHASTIC convergence , *MATHEMATICAL functions - Abstract
The additive Schwarz preconditioned inexact Newton (ASPIN) method was recently introduced [X.-C. Cai and D. E. Keyes, SIAM J. Sci. Comput., 24 (2002), pp. 183--200] to solve systems of nonlinear equations with nonbalanced nonlinearities. Although the ASPIN method has successfully been used to solve some difficult nonlinear equations, its convergence property has not been studied since it was proposed. In this paper, the convergence property of the ASPIN method is studied, and the obtained result shows that this method is locally convergent. Furthermore, the convergence rate for the ASPIN method is discussed and the obtained result is similar to that of the inexact Newton method. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
15. Line Search Filter Methods for Nonlinear Programming: Local Convergence.
- Author
-
Wächter, Andreas and Biegler, Lorenz T.
- Subjects
- *
NONLINEAR programming , *MANAGEMENT science , *MATHEMATICAL programming , *ALGORITHMS , *MATHEMATICAL optimization , *OPERATIONS research , *STOCHASTIC convergence - Abstract
A line search method is proposed for nonlinear programming using Fletcher and Leyffer's filter method, which replaces the traditional merit function. A simple modification of the method proposed in a companion paper [SIAM J. Optim., 16 (2005), pp. 1--31] introducing second order correction steps is presented. It is shown that the proposed method does not suffer from the Maratos effect, so that fast local convergence to second order sufficient local solutions is achieved. [ABSTRACT FROM AUTHOR]
- Published
- 2005
- Full Text
- View/download PDF
16. STABILITY PROPERTIES FOR A COMPACTLY SUPPORTED PRESCALE FUNCTION.
- Author
-
Dobric, V., Gundy, R. F., and Hitczenko, P.
- Subjects
- *
INTEGRAL functions , *STOCHASTIC convergence , *NUMERICAL analysis , *STRUCTURAL stability , *MATHEMATICS - Abstract
We show that if φ is a continuous, minimally supported prescale function, then its translates are linearly independent on any set of positive measure in the unit interval. This generalizes results of Y. Meyer and P. G. Lemarié. This result implies that a stability condition, introduced by Gundy and Kazarian for the study of local convergence of spline wavelet expansions, is satisfied for all expansions arising from multiresolution analyses generated by such prescale functions φ. [ABSTRACT FROM AUTHOR]
- Published
- 2000
- Full Text
- View/download PDF
Discovery Service for Jio Institute Digital Library