105 results for "Local convergence"
Search Results
2. Newton's Iteration Method for Solving the Nonlinear Matrix Equation X + ∑_{i=1}^{m} A_i^* X^{-1} A_i = Q.
- Author
-
Li, Chang-Zhou, Yuan, Chao, and Cui, An-Gang
- Subjects
-
NONLINEAR equations, APPROXIMATION error, NEWTON-Raphson method
- Abstract
In this paper, we study the nonlinear matrix equation (NME) X + ∑_{i=1}^{m} A_i^* X^{-1} A_i = Q. We transform this equation into an equivalent zero-point equation and then use Newton's iteration method to solve it. Under some mild conditions, we obtain the domain of approximate solutions and prove that the sequence of approximate solutions generated by Newton's iteration method converges to the unique solution of this equation. In addition, an error estimate for the approximate solution is given. Finally, a comparison of two well-known approaches with Newton's iteration method on some numerical examples demonstrates the superiority of Newton's iteration method in convergence speed. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
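The fixed-point structure behind this NME can be made concrete in a few lines. The sketch below is not the paper's Newton scheme; it is the classical fixed-point iteration X_{k+1} = Q − A* X_k^{-1} A for the m = 1, scalar (1×1) case, shown only for illustration. The function name and test values are mine.

```python
def fixed_point_nme(a, q, x0, tol=1e-12, max_iter=200):
    """Iterate x_{k+1} = q - a^2 / x_k toward the maximal solution
    of the scalar equation x + a^2 / x = q (1x1 case of the NME)."""
    x = x0
    for _ in range(max_iter):
        x_new = q - (a * a) / x
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Scalar example: x + 0.25/x = 2 has maximal root (2 + sqrt(3))/2.
sol = fixed_point_nme(a=0.5, q=2.0, x0=2.0)
```

Near the maximal solution the iteration is a contraction with factor a²/x², so convergence here is linear; Newton's iteration, as the paper shows, is faster.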
3. Unified Convergence Criteria of Derivative-Free Iterative Methods for Solving Nonlinear Equations.
- Author
-
Regmi, Samundra, Argyros, Ioannis K., Shakhno, Stepan, and Yarmola, Halyna
- Subjects
NONLINEAR equations, NEWTON-Raphson method, BANACH spaces, OPERATOR equations, DIFFERENCE equations, ITERATIVE methods (Mathematics)
- Abstract
A local and semi-local convergence analysis is developed for a class of derivative-free iterative methods for solving nonlinear Banach space valued operator equations under the classical Lipschitz conditions for first-order divided differences. Special cases of this method are well-known iterative algorithms, in particular the Secant, Kurchatov, and Steffensen methods, as well as the Newton method. For the semi-local convergence analysis, we use a technique of recurrent functions and majorizing scalar sequences. First, the convergence of the scalar sequence is proved and its limit is determined. It is then shown that the sequence obtained by the proposed method is bounded by this scalar sequence. In the local convergence analysis, a computable radius of convergence is determined. Finally, results of numerical experiments are given that confirm the obtained theoretical estimates. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
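Of the special cases listed (Secant, Kurchatov, Steffensen, Newton), the secant method is the simplest derivative-free instance: it replaces f′ by a first-order divided difference. A minimal scalar sketch, assuming a simple root; the function name and tolerances are illustrative:

```python
def secant(f, x0, x1, tol=1e-12, max_iter=100):
    """Secant iteration: x2 = x1 - f(x1) * (x1 - x0) / (f(x1) - f(x0))."""
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        if f1 == f0:
            break  # divided difference degenerates
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x2 - x1) < tol:
            return x2
        x0, f0 = x1, f1
        x1, f1 = x2, f(x2)
    return x1

root = secant(lambda x: x ** 3 - 2.0, 1.0, 2.0)  # cube root of 2
```

Only one new f-evaluation per step is needed, which is the cost advantage over Newton that these derivative-free classes preserve.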
4. About a fixed‐point‐type transformation to solve quadratic matrix equations using the Krasnoselskij method.
- Author
-
Hernández‐Verón, Miguel Ángel and Romero‐Álvarez, Natalia
- Subjects
-
NEWTON-Raphson method, QUADRATIC equations
- Abstract
In this paper, we study the simplest quadratic matrix equation: Q(X) = X^2 + BX + C = 0. We transform this equation into an equivalent fixed-point equation and, based on it, we construct the Krasnoselskij method. From this transformation, we can obtain iterative schemes more accurate than the successive approximation method. Moreover, under suitable conditions, we establish different results for the existence and localization of a solution of this equation with the Krasnoselskij method. Finally, we see numerically that the predictor-corrector iterative scheme, with the Krasnoselskij method as predictor and the Newton method as corrector, can improve the numerical application of the Newton method when approximating a solution of the quadratic matrix equation. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
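The Krasnoselskij iteration averages the current iterate with the fixed-point map: x_{k+1} = (1 − λ)x_k + λG(x_k). The sketch below is a scalar (1×1) model of Q(X) = X² + BX + C = 0, not the authors' matrix construction; the particular rewriting into a fixed-point map G and the parameter values are mine.

```python
def krasnoselskij(g, x0, lam=0.5, tol=1e-12, max_iter=500):
    """Krasnoselskij iteration x_{k+1} = (1 - lam)*x_k + lam*g(x_k)."""
    x = x0
    for _ in range(max_iter):
        x_new = (1.0 - lam) * x + lam * g(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Scalar model of X^2 + BX + C = 0 with B = -3, C = 2 (roots 1 and 2):
# rewrite as x = g(x) = (x^2 + 2) / 3, a contraction near the root x = 1.
root = krasnoselskij(lambda x: (x * x + 2.0) / 3.0, 0.0)
```

Averaging with λ < 1 can enlarge the set of starting points for which the iteration converges, which is the kind of accessibility gain the paper exploits before switching to Newton as corrector.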
5. Distributed Newton Optimization With Maximized Convergence Rate.
- Author
-
Marelli, Damian Edgardo, Xu, Yong, Fu, Minyue, and Huang, Zenghong
- Subjects
-
TELECOMMUNICATION systems, NEWTON-Raphson method, DISTRIBUTED algorithms, LINEAR programming
- Abstract
The distributed optimization problem is set up in a collection of nodes interconnected via a communication network. The goal is to find the minimizer of a global objective function formed by the sum of local functions known at individual nodes. A number of methods, having different advantages, are available for addressing this problem. The goal of this article is to achieve the maximum possible convergence rate. As the first step toward this end, we propose a new method, which we show converges faster than other available options. As the second step toward our goal, we complement the proposed method with a fully distributed method for estimating the optimal step size that maximizes the convergence rate. We provide theoretical guarantees for the convergence of the resulting method in a neighborhood of the solution. We present numerical experiments showing that, when using the same step size, our method converges significantly faster than its rivals. Experiments also show that the distributed step-size estimation method achieves an asymptotic convergence rate very close to the theoretical maximum. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
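The idea of assembling a global Newton step from local information can be sketched in a toy setting. This is not the authors' algorithm or their step-size estimator; it is a one-dimensional model with local quadratics f_i(x) = 0.5·a_i·(x − b_i)², where an exact average stands in for a consensus round over the network.

```python
def distributed_newton_step(x, local_data):
    """One Newton step for sum_i f_i using averaged local gradients
    and Hessians, as a consensus round would produce."""
    grads = [a * (x - b) for a, b in local_data]       # local gradients a_i*(x - b_i)
    hessians = [a for a, _ in local_data]              # local Hessians a_i
    g_avg = sum(grads) / len(grads)                    # consensus average of gradients
    h_avg = sum(hessians) / len(hessians)              # consensus average of Hessians
    return x - g_avg / h_avg

nodes = [(1.0, 0.0), (2.0, 3.0), (1.0, 6.0)]  # hypothetical (a_i, b_i) per node
# Global minimizer of sum_i f_i is sum(a_i*b_i)/sum(a_i) = 12/4 = 3.
x_star = distributed_newton_step(10.0, nodes)
```

Because the objective is quadratic, one exact Newton step lands on the minimizer; with inexact consensus averaging, only a neighborhood of the solution is reached, which is why the paper's guarantees are local.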
6. A Class of Higher-Order Newton-Like Methods for Systems of Nonlinear Equations.
- Author
-
Sharma, Janak Raj, Kumar, Sunil, and Argyros, Ioannis K.
- Subjects
NONLINEAR equations, NEWTON-Raphson method, BANACH spaces
- Abstract
In this paper, a class of efficient iterative methods with increasing order of convergence for solving systems of nonlinear equations is developed and analyzed. The methodology uses the well-known third-order Potra–Pták iteration in the first step and Newton-like iterations in the subsequent steps. The novelty of the methods is that the order of convergence increases by three per step at the cost of only one additional function evaluation. In addition, the algorithm uses a single inverse operator in each iteration, which makes it computationally more efficient and attractive. Local convergence is studied in the more general setting of a Banach space under suitable assumptions. Theoretical results on convergence and computational efficiency are verified through numerical experimentation. Comparison of numerical results indicates that the developed algorithms outperform similar algorithms available in the literature, particularly when applied to large systems of equations. The basins of attraction of some of the existing methods, along with the proposed method, are given to exhibit their performance. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
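The first step of these methods, the third-order Potra–Pták iteration, is short enough to state directly. A hedged scalar sketch (the paper works in Banach spaces and appends further Newton-like steps with the same frozen inverse):

```python
import math

def potra_ptak(f, df, x0, tol=1e-12, max_iter=50):
    """Potra-Ptak iteration (third order): y = x - f(x)/f'(x),
    then x_new = x - (f(x) + f(y)) / f'(x). One extra f-evaluation,
    but the same derivative (inverse) is reused."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        d = df(x)
        y = x - fx / d
        x_new = x - (fx + f(y)) / d
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

root = potra_ptak(lambda x: math.exp(x) - 2.0, math.exp, 1.0)  # -> ln 2
```

Reusing f′(x) in both substeps is exactly the "single inverse operator per iteration" property the abstract highlights.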
7. Ball Convergence for a Multi-Step Harmonic Mean Newton-Like Method in Banach Space.
- Author
-
Behl, Ramandeep, Alshormani, Ali Saleh, and Argyros, Ioannis K.
- Subjects
NEWTON-Raphson method, BANACH spaces, NONLINEAR equations
- Abstract
In this paper, we present a local convergence analysis of some iterative methods to approximate a locally unique solution of nonlinear equations in a Banach space setting. The earlier study [Babajee et al. (2015) "On some improved harmonic mean Newton-like methods for solving systems of nonlinear equations," Algorithms 8(4), 895–909] demonstrated convergence of these methods under hypotheses on the fourth-order derivative or even higher, although only the first-order derivative of the function appears in the proposed scheme. In this study, we show that the local convergence of these methods depends only on hypotheses on the first-order derivative and the Lipschitz condition. In this way, we not only expand the applicability of these methods but also propose the theoretical radius of convergence of these methods. Finally, a variety of concrete numerical examples demonstrate that our results apply even to nonlinear equations where the earlier studies cannot. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
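One classical harmonic-mean Newton-like step (Özban-type) replaces f′ in the Newton correction by the harmonic mean of f′ at the current point and at a Newton predictor. This scalar sketch may differ in detail from the multi-step scheme analyzed in the paper:

```python
def harmonic_mean_newton(f, df, x0, tol=1e-12, max_iter=50):
    """Harmonic-mean Newton step (third order):
    y = x - f(x)/f'(x); x_new = x - f(x)*(f'(x) + f'(y)) / (2*f'(x)*f'(y)),
    i.e. Newton with f' replaced by the harmonic mean of f'(x) and f'(y)."""
    x = x0
    for _ in range(max_iter):
        fx, a = f(x), df(x)
        y = x - fx / a        # Newton predictor
        b = df(y)
        x_new = x - fx * (a + b) / (2.0 * a * b)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

root = harmonic_mean_newton(lambda x: x * x - 3.0, lambda x: 2.0 * x, 2.0)
```

Note that only first derivatives are evaluated, which is why hypotheses on f′ and a Lipschitz condition suffice for the local analysis, as the abstract argues.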
8. Numerical analysis for the quadratic matrix equations from a modification of fixed‐point type.
- Author
-
Hernández‐Verón, Miguel A. and Romero, Natalia
- Subjects
-
QUADRATIC equations, NUMERICAL analysis, NEWTON-Raphson method, SYLVESTER matrix equations
- Abstract
In this paper, we study quadratic matrix equations. To improve the application of iterative schemes, we transform the quadratic matrix equation into an equivalent fixed-point equation. Then, we consider an iterative process of Chebyshev type to solve this equation. We prove that this iterative scheme is more efficient than Newton's method. Moreover, we obtain a local convergence result for this iterative scheme. We finish by showing, through an application to noisy Wiener-Hopf problems, that the iterative process considered is computationally more efficient than Newton's method. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
9. Local convergence analysis of inverse iteration algorithm for computing the H-spectral radius of a nonnegative weakly irreducible tensor.
- Author
-
Sheng, Zhou, Ni, Qin, and Yuan, Gonglin
- Subjects
-
NEWTON-Raphson method, RADIUS (Geometry), ALGORITHMS, NONLINEAR equations, INVERSE functions
- Abstract
In this paper, we present an inverse iteration algorithm to find the H-spectral radius and the associated positive eigenvector of a nonnegative weakly irreducible tensor, which always preserves the positivity of approximate eigenvectors. The local quadratic convergence of the proposed algorithm is established based on a basic result of Newton's method for solving nonlinear equations. Some numerical examples illustrate the efficiency of the proposed algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
10. A quasi-Newton modified LP-Newton method.
- Author
-
Martínez, María de los Ángeles and Fernández, Damián
- Subjects
-
NEWTON-Raphson method, NONLINEAR equations, QUASI-Newton methods
- Abstract
We consider a method for solving constrained systems of nonlinear equations based on a modification of the Linear-Programming-Newton method, replacing the first-order information with a quasi-Newton secant update and thus providing a computationally simple method. The proposed strategy combines good properties of two methods: the least change secant update for unconstrained systems of nonlinear equations with isolated solutions, and the Linear-Programming-Newton method for constrained nonlinear systems of equations with possibly nonisolated solutions. We analyse the local convergence of the proposed method under a standard error bound condition, proving its linear convergence for nonisolated solutions. Numerical experiments illustrate the claimed convergence rate. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
11. Different methods for solving STEM problems.
- Author
-
Argyros, Ioannis K., Magreñán, Á. A., Orcos, L., Sarría, Íñígo, and Sicilia, Juan Antonio
- Subjects
-
BANACH spaces, NEWTON-Raphson method, PROBLEM solving, NONLINEAR equations
- Abstract
We first present a local convergence analysis for some families of fourth- and sixth-order methods in order to approximate a locally unique solution of a nonlinear equation in a Banach space setting. Earlier studies have used hypotheses on the fourth Fréchet derivative of the operator involved. We use hypotheses only on the first Fréchet derivative in our local convergence analysis. This way, the applicability of these methods is extended. Moreover, the radius of convergence and computable error bounds on the distances involved are also given in this study based on Lipschitz constants. Numerical examples illustrating the theoretical results are also presented. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
12. A globally convergent Levenberg-Marquardt method for equality-constrained optimization.
- Author
-
Izmailov, A. F., Solodov, M. V., and Uskov, E. I.
- Subjects
NONLINEAR equations, NEWTON-Raphson method, LAGRANGE multiplier, GLOBALIZATION, MATHEMATICAL analysis, MATHEMATICAL optimization
- Abstract
It is well-known that the Levenberg-Marquardt method is a good choice for solving nonlinear equations, especially in the cases of singular/nonisolated solutions. We first exhibit some numerical experiments with local convergence, showing that this method for "generic" equations actually also works very well when applied to the specific case of the Lagrange optimality system, i.e., to the equation given by the first-order optimality conditions for equality-constrained optimization. In particular, it appears to outperform not only the basic Newton method applied to such systems, but also its modifications supplied with dual stabilization mechanisms, intended specially for tackling problems with nonunique Lagrange multipliers. The usual globalizations of the Levenberg-Marquardt method are based on linesearch for the squared Euclidean residual of the equation being solved. In the case of the Lagrange optimality system, this residual does not involve the objective function of the underlying optimization problem (only its derivative), and in particular, the resulting globalization scheme has no preference for converging to minima versus maxima, or to any other stationary point. We thus develop a special globalization of the Levenberg-Marquardt method when it is applied to the Lagrange optimality system, based on linesearch for a smooth exact penalty function of the optimization problem, which in particular involves the objective function of the problem. The algorithm is shown to have appropriate global convergence properties, preserving also fast local convergence rate under weak assumptions. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
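The local Levenberg-Marquardt step regularizes the Newton (or Gauss-Newton) step with a parameter μ that vanishes at a solution; this is what makes the method robust for singular or nonisolated solutions. A scalar sketch with the common local choice μ = f², not the authors' globalized penalty-function method:

```python
def levenberg_marquardt(f, df, x0, tol=1e-12, max_iter=100):
    """Scalar Levenberg-Marquardt: x_new = x - J*f / (J^2 + mu),
    with regularization mu = f(x)^2 that vanishes at a solution."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        J = df(x)
        mu = fx * fx                     # standard local choice of mu
        x_new = x - (J * fx) / (J * J + mu)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

root = levenberg_marquardt(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.5)
```

Away from the solution, μ damps the step; near it, μ → 0 and the iteration behaves like Newton, which is the fast local rate the abstract refers to.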
13. On the local convergence study for an efficient k-step iterative method.
- Author
-
Amat, S., Argyros, I.K., Busquier, S., Hernández-Verón, M.A., and Martínez, E.
- Subjects
-
NEWTON-Raphson method, STOCHASTIC convergence, ITERATIVE methods (Mathematics), DERIVATIVES (Mathematics), MATHEMATICAL decomposition
- Abstract
This paper is devoted to a family of Newton-like methods with frozen derivatives used to approximate a locally unique solution of an equation. The methods have high order of convergence but only using first order derivatives. Moreover only one LU decomposition is required in each iteration. In particular, the methods are real alternatives to the classical Newton method. We present a local convergence analysis based on hypotheses only on the first derivative. These types of local results were usually proved based on hypotheses on the derivative of order higher than two although only the first derivative appears in these types of methods (Bermúdez et al., 2012; Petkovic et al., 2013; Traub, 1964). We apply these methods to an equation related to the nonlinear complementarity problem. Finally, we find the most efficient method in the family for this problem and we perform a theoretical and a numerical study for it. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
14. An inexact Newton-like conditional gradient method for constrained nonlinear systems.
- Author
-
Gonçalves, M.L.N. and Oliveira, F.R.
- Subjects
-
NONLINEAR systems, NONLINEAR equations, ANALYTIC functions, STOCHASTIC convergence, NEWTON-Raphson method
- Abstract
In this paper, we propose an inexact Newton-like conditional gradient method for solving constrained systems of nonlinear equations. The local convergence of the new method as well as results on its rate are established by using a general majorant condition. Two applications of such condition are provided: one is for functions whose derivatives satisfy a Hölder-like condition and the other is for functions that satisfy a Smale condition, which includes a substantial class of analytic functions. Some preliminary numerical experiments illustrating the applicability of the proposed method are also presented. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
15. A study on the local convergence and dynamics of the two-step and derivative-free Kung-Traub’s method.
- Author
-
Veiseh, Hana, Lotfi, Taher, and Allahviranloo, Tofigh
- Subjects
STOCHASTIC convergence, DERIVATIVES (Mathematics), NONLINEAR operator equations, POLYNOMIALS, NEWTON-Raphson method
- Abstract
We present a local convergence analysis of a two-step and derivative-free Kung-Traub’s method, which is based on a parameter and has fourth order of convergence. Using basins of attraction of the method, dynamical behavior of the scheme is studied and the best choice of the parameter is found in the sense of reliability and stability. Some illustrative examples show that as the parameter gets close to zero, radius of convergence of the method becomes larger. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
16. The local and semilocal convergence analysis of new Newton-like iteration methods.
- Author
-
KARAKAYA, Vatan, DOĞAN, Kadri, ATALAN, Yunus, and BOUZARA, Nour El Houda
- Subjects
-
NEWTON-Raphson method, SEMILOCAL rings, STOCHASTIC convergence, ALGORITHMS, PICARD schemes
- Abstract
The aim of this paper is to find new iterative Newton-like schemes inspired by the modified Newton iterative algorithm and prove that these iterations are faster than the existing ones in the literature. We further investigate their behavior and finally illustrate the results by numerical examples. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
17. Approximations and generalized Newton methods.
- Author
-
Klatte, Diethard and Kummer, Bernd
- Subjects
-
NEWTON-Raphson method, STOCHASTIC convergence, APPROXIMATION algorithms, NONLINEAR analysis, METRIC spaces
- Abstract
We present approaches to (generalized) Newton methods in the framework of generalized equations 0 ∈ f(x) + M(x), where f is a function and M is a multifunction. The Newton steps are defined by approximations f̂ of f and the solutions of 0 ∈ f̂(x) + M(x). We give a unified view of the local convergence analysis of such methods by connecting a certain type of approximation with the desired kind of convergence and different regularity conditions for f + M. Our paper is, on the one hand, thought of as a survey of crucial parts of the topic, where we mainly use concepts and results of the monograph (Klatte and Kummer in Nonsmooth equations in optimization: regularity, calculus, methods and applications, Kluwer Academic Publishers, Dordrecht, 2002). On the other hand, we present original results and new features. They concern the extension of convergence results via Newton maps (Klatte and Kummer in Nonsmooth equations in optimization: regularity, calculus, methods and applications, Kluwer Academic Publishers, Dordrecht, 2002; Kummer, in: Oettli, Pallaschke (eds) Advances in optimization, Springer, Berlin, 1992) from equations to generalized equations, both for linear and nonlinear approximations f̂, and relations between semi-smoothness, Newton maps and directional differentiability of f. We give a Kantorovich-type statement, valid for all sequences of Newton iterates under metric regularity, and recall and extend results on multivalued approximations for general inclusions 0 ∈ F(x). Equations with continuous, non-Lipschitzian f are considered, too. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
18. Improving the accessibility of Steffensen’s method by decomposition of operators.
- Author
-
Hernández-Verón, M.A. and Martínez, Eulalia
- Subjects
-
ITERATIVE methods (Mathematics), STOCHASTIC convergence, NONDIFFERENTIABLE functions, NEWTON-Raphson method, ALGORITHMS
- Abstract
Solving equations of the form H(x) = 0 is usually done by applying iterative methods. The main interest of this paper is to improve the domain of starting points for Steffensen's method. In general, the accessibility of iterative methods that use divided differences in their algorithms is reduced, since there are difficulties in the choice of starting points that guarantee the convergence of the methods. In particular, by using a decomposition of the operator H and applying a special type of iterative method, which combines two iterative schemes in the algorithm, we can improve the accessibility of Steffensen's method. Moreover, we analyze the local convergence of the new iterative method proposed in two cases: when H is differentiable and when H is non-differentiable. The dynamical properties show that the method also improves the region of accessibility of Steffensen's method for non-differentiable operators. Thus, we present an alternative for the non-applicability of Newton's method to non-differentiable operators that improves the accessibility of Steffensen's method. The theoretical results are illustrated with numerical experiments. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
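Steffensen's method itself is compact: it is Newton's method with the derivative replaced by the divided difference [x, x + f(x); f], so it applies even when H is non-differentiable. A scalar sketch, not the authors' decomposition-based scheme:

```python
def steffensen(f, x0, tol=1e-12, max_iter=100):
    """Steffensen iteration: x_new = x - f(x)^2 / (f(x + f(x)) - f(x)),
    i.e. Newton with f'(x) replaced by a divided difference."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        denom = f(x + fx) - fx
        if denom == 0.0:
            break  # divided difference degenerates
        x_new = x - fx * fx / denom
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

root = steffensen(lambda x: x * x - 2.0, 1.5)
```

The step x + f(x) is what makes the choice of starting point delicate (f(x₀) must not throw the auxiliary point far away), which is exactly the accessibility issue the paper addresses.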
19. ON THE LOCAL CONVERGENCE OF WEIGHTED-NEWTON METHODS UNDER WEAK CONDITIONS IN BANACH SPACES.
- Author
-
Argyros, Ioannis K., Sharma, Janak Raj, and Kumar, Deepak
- Subjects
BANACH spaces, NEWTON-Raphson method, TAYLOR'S series, STOCHASTIC convergence, UNIQUENESS (Mathematics)
- Abstract
In this paper, we consider the weighted-Newton methods developed in [18] and study their local convergence in Banach space. The earlier study uses Taylor expansions involving higher-order derivatives, which may not exist or may be very expensive or impossible to compute. The hypotheses of the present analysis, however, are based on the first Fréchet derivative only, thereby expanding the applicability of the methods. The new analysis also provides the radius of convergence, error bounds, and estimates on the uniqueness of the solution. Such estimates are not provided in approaches that use Taylor expansions of higher-order derivatives. The order of convergence of the methods is calculated by using the computational order of convergence or the approximate computational order of convergence, without using higher-order derivatives. Numerical tests are performed on some problems of different nature that confirm the theoretical results. [ABSTRACT FROM AUTHOR]
- Published
- 2018
20. Extended Traub–Woźniakowski convergence and complexity of Newton iteration in Banach space.
- Author
-
Argyros, I.K. and Silva, G.N.
- Subjects
-
STOCHASTIC convergence, NEWTON-Raphson method, LIPSCHITZ spaces, COMMUNICATION, NUMERICAL analysis
- Abstract
An optimal convergence condition for Newton iteration is presented which is at least as weak as the one obtained by Traub and Woźniakowski, also leading to an at least as precise complexity estimate. The novelty of the paper is the introduction of a restricted convergence domain. That is, we find a more precise location where the Newton iterates lie than in earlier studies. Consequently, the Lipschitz constants are at least as small as the ones used before. In this way, and under the same computational cost, we extend the local convergence as well as the complexity of Newton iteration. Numerical examples further justify the theoretical results. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
21. A SUPERQUADRATIC VARIANT OF NEWTON'S METHOD.
- Author
-
POTRA, FLORIAN A.
- Subjects
-
NEWTON-Raphson method, OPERATOR equations, BANACH spaces, ITERATIVE methods (Mathematics), STOCHASTIC convergence
- Abstract
This paper presents a Q-superquadratically convergent version of Newton's method for solving operator equations in Banach spaces that requires only one operator value and one inverse of the Fréchet derivative per iteration. The R-order of convergence is at least 1+√2. Both a semilocal and a local analysis of the new method are given. The semilocal analysis is done along the lines of the Newton-Kantorovich theorem and provides sufficient conditions for the existence of a solution and the convergence of the iterates. The Q-superquadratic convergence is obtained by assuming that the second Fréchet derivative is Lipschitz continuous. The local analysis assumes that a solution exists and shows that the method converges from any starting point belonging to an explicitly defined neighborhood of the solution called the ball of attraction. To our knowledge this is the first superquadratically convergent method that requires about the same work per iteration as Newton's method. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
22. Extended local convergence analysis of inexact Gauss-Newton method for singular systems of equations under weak conditions.
- Author
-
Argyros, Ioannis K. and George, Santhosh
- Subjects
GAUSS-Bonnet theorem, FUNCTIONAL equations, NEWTON-Raphson method
- Abstract
A new local convergence analysis of the Gauss-Newton method for solving some optimization problems is presented using restricted convergence domains. The results extend the applicability of the Gauss-Newton method under the same computational cost given in earlier studies. In particular, the advantages are: the error estimates on the distances involved are tighter and the convergence ball is at least as large. Moreover, the majorant function in contrast to earlier studies is not necessarily differentiable. Numerical examples are also provided in this study. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
23. A significant improvement of a family of secant-type methods.
- Author
-
Ezquerro, J.A., Hernández-Verón, M.A., Magreñán, Á.A., and Moysi, A.
- Subjects
-
NEWTON-Raphson method, NONLINEAR equations, OPERATING costs, HAMMERSTEIN equations
- Abstract
Secant-type iterative methods are usually used to solve nonlinear systems of equations where the operator involved is nondifferentiable or its derivative is costly. Starting from a uniparametric family of secant-type methods, which reduces to the secant method and Newton's method for particular values of the parameter involved, and which has operational cost similar to that of the secant and Newton methods together with superlinear convergence, we construct a biparametric family of iterative methods with quadratic convergence. We study their local convergence and their efficiency and, based on the dynamics of the methods, their accessibility. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
24. On the Superlinear Convergence of Newton's Method on Riemannian Manifolds.
- Author
-
Fernandes, Teles, Ferreira, Orizon, and Yuan, Jinyun
- Subjects
-
STOCHASTIC convergence, NEWTON-Raphson method, RIEMANNIAN manifolds, VECTOR fields, DERIVATIVES (Mathematics)
- Abstract
In this paper, we study Newton's method for finding a singularity of a differentiable vector field defined on a Riemannian manifold. Under the assumption of invertibility of the covariant derivative of the vector field at its singularity, we show that Newton's method is well defined in a suitable neighborhood of this singularity. Moreover, we show that the sequence generated by Newton's method converges to the solution with superlinear rate. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
25. Unifying semilocal and local convergence of Newton's method on Banach space with a convergence structure.
- Author
-
Argyros, Ioannis K., Behl, Ramandeep, and Motsa, S.S.
- Subjects
-
BANACH spaces, STOCHASTIC convergence, NEWTON-Raphson method, DERIVATIVES (Mathematics), MATHEMATICAL sequences
- Abstract
We present a semilocal and local convergence analysis of Newton's method on a Banach space with a convergence structure to locate zeros of operators. P. Meyer introduced the concept of a Banach space with a convergence structure. Using this setting, he presented a finer semilocal convergence analysis for Newton's method than in related studies using the real norm theory. In all these studies the operator involved as well as its Fréchet derivative is bounded above by the same bound-operator. In the present study, we introduce a second bound operator which is a special case of the bound-operator leading to tighter majorizing sequences for Newton's method. Using this more flexible combination of bound-operators, we improve the results in the earlier studies. In the semilocal case, we obtain under the same or weaker sufficient convergence conditions more precise error bounds on the distances involved and in the local case not considered in the earlier studies, we obtain a larger radius of convergence. This way we expand the applicability of Newton's method. Some numerical examples are also provided to show the superiority of the new results over the old results. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
26. Estimating the Local Radius of Convergence for Picard Iteration.
- Author
-
Măruşter, Ştefan
- Subjects
-
STOCHASTIC convergence, FIXED point theory, HILBERT space, NEWTON-Raphson method, ALGORITHMS
- Abstract
In this paper, we propose an algorithm to estimate the radius of convergence for the Picard iteration in the setting of a real Hilbert space. Numerical experiments show that the proposed algorithm provides convergence balls close to or even identical to the best ones. As the algorithm does not require to evaluate the norm of derivatives, the computing effort is relatively low. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
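Picard iteration in a real Hilbert space reduces, in the scalar case, to iterating a contraction mapping. A minimal sketch of the iteration itself (the paper's actual contribution, an algorithm estimating the local radius of convergence, is not implemented here):

```python
import math

def picard(T, x0, tol=1e-12, max_iter=500):
    """Picard iteration x_{k+1} = T(x_k) for a contraction mapping T."""
    x = x0
    for _ in range(max_iter):
        x_new = T(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# cos is a contraction near its fixed point x* ~ 0.739 (|sin x*| < 1),
# so the Picard iterates converge; here the convergence ball is all of R.
p = picard(math.cos, 1.0)
```

In general the convergence ball is only local, and estimating its radius without evaluating derivative norms is exactly what the proposed algorithm does.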
27. A Newton conditional gradient method for constrained nonlinear systems.
- Author
-
Gonçalves, Max L.N. and Melo, Jefferson G.
- Subjects
-
NEWTON-Raphson method, NONLINEAR systems, NONLINEAR equations, NONLINEAR functions, ANALYTIC functions
- Abstract
In this paper, we consider the problem of solving constrained systems of nonlinear equations. We propose an algorithm based on a combination of Newton and conditional gradient methods, and establish its local convergence analysis. Our analysis is set up by using a majorant condition technique, allowing us to prove, in a unified way, convergence results for two large families of nonlinear functions. The first one includes functions whose derivative satisfies a Hölder-like condition, and the second one consists of a substantial subclass of analytic functions. Some preliminary numerical experiments are reported. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
28. Newton’s method on generalized Banach spaces.
- Author
-
Argyros, Ioannis K., Behl, Ramandeep, and Motsa, S.S.
- Subjects
-
NEWTON-Raphson method, BANACH spaces, STOCHASTIC convergence, KANTOROVICH method, MATHEMATICAL bounds
- Abstract
We present a weaker convergence analysis of Newton's method than in Kantorovich and Akilov (1964), Meyer (1987), Potra and Ptak (1984), Rheinboldt (1978), and Traub (1964) on a generalized Banach space setting to approximate a locally unique zero of an operator. This way we extend the applicability of Newton's method. Moreover, in the semilocal case we obtain, under the same conditions, weaker sufficient convergence criteria, tighter error bounds on the distances involved, and at least as precise information on the location of the solution. In the local case we obtain a larger radius of convergence and higher error estimates on the distances involved. Numerical examples illustrate the theoretical results. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
29. Incomplete Newton-Ulm method for large scale nonlinear equations.
- Author
-
Wang, H., Chen, F., Liu, H., and Wang, Q.
- Subjects
-
NONLINEAR equations, NEWTON-Raphson method, JACOBIAN matrices, ITERATIVE methods (Mathematics), STOCHASTIC convergence
- Abstract
This paper presents an incomplete Newton-Ulm (INU) method for nonlinear equations. The method uses part of the elements of the Jacobian matrix to obtain the next iteration point and does not contain inverse operators in its expression. We discuss and analyze the convergence conditions and semilocal convergence of the new method. Some special INU algorithms are designed and numerical experiments are given. Numerical results show that the INU method is effective for large-scale nonlinear equations. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
30. A new highly efficient and optimal family of eighth-order methods for solving nonlinear equations.
- Author
-
Behl, Ramandeep, Argyros, Ioannis K., and Motsa, S.S.
- Subjects
- *
OPTIMAL control theory , *NONLINEAR equations , *PROBLEM solving , *APPROXIMATION theory , *NEWTON-Raphson method - Abstract
The principal aim of this manuscript is to present a new highly efficient and optimal eighth-order family of iterative methods for solving nonlinear equations in the case of simple roots. The derivation of this scheme is based on weight-function and rational-approximation approaches. The proposed family requires only four functional evaluations (viz. f(x_n), f′(x_n), f(y_n) and f(z_n)) per iteration; therefore, the proposed family is optimal in the sense of the Kung–Traub hypotheses. In addition, we give a theorem describing the order of convergence of the proposed family. Moreover, we present a local convergence analysis using hypotheses only on the first-order derivative, since the preceding theorem uses hypotheses on higher-order derivatives that do not appear in these methods. In this way, we expand the applicability of these methods even further. Furthermore, a variety of nonlinear equations is considered for the numerical experiments. It is observed from the numerical experiments that our proposed methods perform better than existing optimal methods of the same order when the accuracy is checked in multiprecision digits. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
31. Local Convergence and the Dynamics of a Two-Step Newton-Like Method.
- Author
-
Argyros, Ioannis K. and Magreñán, Á. Alberto
- Subjects
- *
STOCHASTIC convergence , *DYNAMICS , *NEWTON-Raphson method , *MULTIPLICITY (Mathematics) , *NONLINEAR equations - Abstract
We present the local convergence analysis and the study of the dynamics of a two-step Newton-like method in order to approximate a locally unique solution of multiplicity one of a nonlinear equation. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
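The record above studies a two-step Newton-like method. The paper’s exact scheme is not reproduced here; as a sketch of one common two-step variant (derivative frozen over both substeps, names illustrative):

```python
def two_step_newton(f, df, x0, tol=1e-12, max_iter=30):
    """Two-step Newton-like iteration with the derivative frozen over both substeps:
       y_k = x_k - f(x_k)/f'(x_k);  x_{k+1} = y_k - f(y_k)/f'(x_k)."""
    x = x0
    for _ in range(max_iter):
        dfx = df(x)
        y = x - f(x) / dfx          # predictor: ordinary Newton step
        x_new = y - f(y) / dfx      # corrector: reuses the same derivative
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Approximate the real root of x^3 - x - 2 = 0 starting from x0 = 1.5.
root = two_step_newton(lambda x: x**3 - x - 2.0, lambda x: 3 * x**2 - 1.0, 1.5)
```

Reusing f'(x_k) saves one derivative evaluation per cycle while raising the local convergence order above two, which is why such two-step schemes attract local convergence analyses of this kind.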
32. On a result by Dennis and Schnabel for Newton’s method: Further improvements.
- Author
-
Argyros, Ioannis K. and George, Santhosh
- Subjects
- *
STOCHASTIC convergence , *NEWTON-Raphson method , *ITERATIVE methods (Mathematics) , *NUMERICAL analysis , *MATHEMATICAL bounds - Abstract
We improve local convergence results for Newton’s method by defining a more precise domain where the Newton iterates lie than in earlier studies using Dennis and Schnabel-type techniques. A numerical example is presented to show that the new convergence radii are larger and new error bounds are more precise than the earlier ones. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
33. Improved convergence analysis for Newton-like methods.
- Author
-
Magreñán, Ángel and Argyros, Ioannis
- Subjects
- *
STOCHASTIC convergence , *BANACH spaces , *NEWTON-Raphson method , *MATHEMATICAL models , *ITERATIVE methods (Mathematics) - Abstract
We present a new semilocal convergence analysis for Newton-like methods in order to approximate a locally unique solution of an equation in a Banach space setting. This way, we expand the applicability of these methods in cases not covered in earlier studies. The advantages of our approach include a more precise convergence analysis under the same computational cost on the Lipschitz constants involved. Applications are also given in this study to show that our estimates on the distances involved are tighter than the older ones. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
34. Ball convergence for a novel sixth order iterative method under hypothesis only on the first derivative.
- Author
-
Argyros, Ioannis K. and George, Santhosh
- Subjects
- *
ITERATIVE methods (Mathematics) , *STOCHASTIC convergence , *NEWTON-Raphson method , *NONLINEAR equations , *MATHEMATICAL bounds - Abstract
We present a local convergence analysis of a sixth-order iterative method that uses only the first derivative to approximate a solution of a nonlinear equation. Earlier studies such as [28] use hypotheses on the fifth derivative. Thus, by using only the first derivative, we extend the applicability of these methods. Moreover, the radius of convergence and computable error bounds on the distances involved are also given in this study. Numerical examples are also presented in this study. [ABSTRACT FROM AUTHOR]
- Published
- 2016
35. BALL CONVERGENCE OF AN ITERATIVE METHOD FOR NONLINEAR EQUATIONS BASED ON THE DECOMPOSITION TECHNIQUE UNDER WEAK CONDITIONS.
- Author
-
Argyros, Ioannis K. and George, Santhosh
- Subjects
ITERATIVE methods (Mathematics) ,NONLINEAR equations ,MATHEMATICAL decomposition ,NEWTON-Raphson method ,STOCHASTIC convergence - Abstract
In the present paper, we give a convergence analysis, under weaker assumptions, of a numerical method proposed in Shah and Noor (2015) that solves equations using a decomposition technique. Using the idea of restricted convergence domains, we extend the applicability of this method. Numerical examples where earlier results do not apply but ours do are also given in this study. [ABSTRACT FROM AUTHOR]
- Published
- 2016
36. On the Local Convergence of a Third Order Family of Iterative Processes.
- Author
-
Hernández-Verón, M. A. and Romero, N.
- Subjects
- *
STOCHASTIC convergence , *ITERATIVE methods (Mathematics) , *NEWTON-Raphson method , *MATHEMATICAL optimization , *APPROXIMATION algorithms - Abstract
Efficiency is generally the most important aspect to take into account when choosing an iterative method to approximate a solution of an equation, but it is not the only one. Another important aspect is the accessibility of the iterative process, i.e., the domain of starting points from which the process converges to a solution of the equation. We therefore consider a family of iterative processes with a higher efficiency index than Newton's method. However, this family of processes presents problems of accessibility to the solution x*. From a local study of the convergence of this family, we perform an optimization study of the accessibility and obtain iterative processes with better accessibility than Newton's method. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
37. Local Convergence of an Optimal Eighth Order Method under Weak Conditions.
- Author
-
Argyros, Ioannis K., Behl, Ramandeep, and Motsa, S. S.
- Subjects
- *
STOCHASTIC convergence , *MATHEMATICAL optimization , *NEWTON-Raphson method , *DERIVATIVES (Mathematics) , *NONLINEAR equations , *NUMERICAL analysis - Abstract
We study the local convergence of an eighth-order Newton-like method to approximate a locally unique solution of a nonlinear equation. Earlier studies, such as Chen et al. (2015), show convergence under hypotheses on the seventh derivative or even higher, although only the first derivative and the divided difference appear in these methods. The convergence in this study is shown under hypotheses only on the first derivative. Hence, the applicability of the method is expanded. Finally, numerical examples are also provided to show that our results apply to solve equations in cases where earlier studies cannot apply. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
38. Local convergence for an improved Jarratt-type method in Banach space.
- Author
-
Argyros, Ioannis K. and González, Daniel
- Subjects
STOCHASTIC convergence ,BANACH spaces ,NEWTON-Raphson method - Abstract
We present a local convergence analysis for improved Jarratt-type methods of order at least five to approximate a solution of a nonlinear equation in a Banach space setting. The convergence ball and error estimates are given using hypotheses only up to the first Fréchet derivative, in contrast to earlier studies using hypotheses up to the third Fréchet derivative. Numerical examples are also provided in this study, where the older hypotheses are not satisfied but the new hypotheses are. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
39. On the convergence of inexact two-point Newton-like methods on Banach spaces.
- Author
-
Argyros, Ioannis Konstantinos and Magreñán, Ángel Alberto
- Subjects
- *
NEWTON-Raphson method , *VECTOR spaces , *BANACH spaces , *EQUATIONS , *STOCHASTIC convergence - Abstract
We present a unified convergence analysis of Inexact Newton-like methods in order to approximate a locally unique solution of a nonlinear operator equation containing a nondifferentiable term in a Banach space setting. The convergence conditions are more general and the error analysis more precise than in earlier studies such as (Argyros, 2007; Cătinaş, 2005; Cătinaş, 1994; Chen and Yamamoto, 1989; Dennis, 1968; Hernández and Romero, 2005; Potra and Pták, 1984; Rheinboldt, 1977). Special cases of our results can be used to find zeros of derivatives. Numerical examples are also provided in this study. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
40. Local convergence of Newton’s method in the classical calculus of variations.
- Author
-
Gockenbach, Mark and Liu, Chang
- Subjects
- *
STOCHASTIC convergence , *NEWTON-Raphson method , *CALCULUS of variations , *QUADRATIC fields , *APPROXIMATION theory - Abstract
Sufficient conditions for a weak local minimizer in the classical calculus of variations can be expressed without reference to conjugate points. The local quadratic convergence of Newton’s method follows from these sufficient conditions. Newton’s method is applied in the minimization form; that is, the step is generated by minimizing the local quadratic approximation. This allows the extension to a globally convergent line search based algorithm (which will be presented in a future paper). [ABSTRACT FROM PUBLISHER]
- Published
- 2015
- Full Text
- View/download PDF
41. Ball convergence theorems for unified three step Newton-like methods of high convergence order.
- Author
-
Argyros, Ioannis K. and George, Santhosh
- Subjects
- *
STOCHASTIC convergence , *NEWTON-Raphson method , *NONLINEAR equations , *DERIVATIVES (Mathematics) , *MATHEMATICAL bounds , *APPROXIMATION theory - Abstract
We present a local convergence analysis for eighth-order variants of Newton's method in order to approximate a solution of a nonlinear equation. We use hypotheses only up to the first derivative, in contrast to earlier studies such as [7]–[11], [20], which use hypotheses up to the seventh derivative. This way the applicability of these methods is extended under weaker hypotheses. Moreover, the radius of convergence and computable error bounds on the distances involved are also given in this study. Numerical examples are also presented in this study. [ABSTRACT FROM AUTHOR]
- Published
- 2015
42. Solving Wiener–Hopf problems via an efficient iterative scheme.
- Author
-
Hernández-Verón, M.A. and Romero, N.
- Subjects
- *
NEWTON-Raphson method , *QUADRATIC equations , *PROBLEM solving , *ALGEBRAIC equations , *NONNEGATIVE matrices , *RICCATI equation - Abstract
In the analysis of fluid queues, it is necessary to obtain the nonnegative solution of a nonsymmetric algebraic Riccati matrix equation. Under suitable conditions, this solution can be obtained by transforming the algebraic Riccati equation into a unilateral quadratic matrix equation. In this paper, we use an efficient iterative scheme to approximate a solution of this quadratic matrix equation, improving the efficiency and accuracy of Newton's method, which is widely used in the literature. Moreover, a local convergence result is proved. Finally, we apply this efficient method to approximate the solution of a particular noisy Wiener–Hopf problem and compare it with Newton's method. In addition, a predictor–corrector iterative scheme is constructed that improves the accessibility of the aforesaid method. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
43. Two-step Newton methods.
- Author
-
Magreñán Ruiz, Ángel Alberto and Argyros, Ioannis K.
- Subjects
- *
NEWTON-Raphson method , *STOCHASTIC convergence , *NONLINEAR equations , *BANACH spaces , *HAMMERSTEIN equations , *APPLIED mathematics , *MATHEMATICAL analysis - Abstract
Abstract: We present sufficient convergence conditions for two-step Newton methods in order to approximate a locally unique solution of a nonlinear equation in a Banach space setting. The advantages of our approach over other studies such as Argyros et al. (2010) [5], Chen et al. (2010) [11], Ezquerro et al. (2000) [16], Ezquerro et al. (2009) [15], Hernández and Romero (2005) [18], Kantorovich and Akilov (1982) [19], Parida and Gupta (2007) [21], Potra (1982) [23], Proinov (2010) [25], Traub (1964) [26] for the semilocal convergence case are: weaker sufficient convergence conditions, more precise error bounds on the distances involved and at least as precise information on the location of the solution. In the local convergence case more precise error estimates are presented. These advantages are obtained under the same computational cost as in the earlier stated studies. Numerical examples involving Hammerstein nonlinear integral equations where the older convergence conditions are not satisfied but the new conditions are satisfied are also presented in this study for the semilocal convergence case. In the local case, numerical examples and a larger convergence ball are obtained. [Copyright Elsevier]
- Published
- 2014
- Full Text
- View/download PDF
44. Relaxed secant-type methods.
- Author
-
Argyros, Ioannis K. and Magreçán, Á. Alberto
- Subjects
- *
APPROXIMATION theory , *NONLINEAR equations , *STOCHASTIC convergence , *NEWTON-Raphson method , *NUMERICAL analysis , *BOUNDARY value problems - Abstract
We present a unified local and semilocal convergence analysis for secant-type methods in order to approximate a locally unique solution of a nonlinear equation in a Banach space setting. Our analysis includes the computation of the bounds on the limit points of the majorizing sequences involved. Under the same computational cost, using both Lipschitz and center-Lipschitz conditions, our convergence criteria can be weaker, the error bounds more precise and the convergence balls larger than in earlier studies. Special cases such as Newton's method or the secant method are also presented. Numerical examples, including a Chandrasekhar equation and a boundary value problem, are also presented to illustrate the theoretical results obtained in this study. [ABSTRACT FROM AUTHOR]
- Published
- 2014
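The secant-type methods surveyed in the record above replace the derivative in Newton’s iteration by a divided difference. As a scalar sketch of the basic secant method (a special case the abstract mentions; names illustrative):

```python
def secant(f, x0, x1, tol=1e-12, max_iter=60):
    """Secant iteration: replaces f'(x_k) in Newton's method by the divided
    difference (f(x_k) - f(x_{k-1})) / (x_k - x_{k-1}), so no derivative is needed."""
    for _ in range(max_iter):
        f0, f1 = f(x0), f(x1)
        if abs(f1) < tol:
            return x1
        x0, x1 = x1, x1 - f1 * (x1 - x0) / (f1 - f0)
    return x1

# Approximate sqrt(3) as the positive root of x^2 - 3 = 0.
root = secant(lambda x: x * x - 3.0, 1.0, 2.0)
```

The method needs two starting points but only one new function evaluation per step, trading Newton’s quadratic order for superlinear order (the golden ratio, about 1.618) at lower cost.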
45. An Algorithm Derivative-Free to Improve the Steffensen-Type Methods.
- Author
-
Hernández-Verón, Miguel A., Yadav, Sonia, Magreñán, Ángel Alberto, Martínez, Eulalia, and Singh, Sukhjit
- Subjects
- *
NEWTON-Raphson method , *PROBLEM solving - Abstract
Solving equations of the form H(x) = 0 is one of the most frequently faced problems in mathematics and in other scientific fields such as chemistry or physics. Such equations generally cannot be solved without iterative methods. Steffensen-type methods, which are derivative-free because they are defined using divided differences, are usually considered for these problems when H is a non-differentiable operator, due to their accuracy and efficiency. However, in general, the accessibility of these iterative methods is small. The main interest of this paper is to improve the accessibility of Steffensen-type methods, that is, the set of starting points from which these methods converge to the roots. By means of a predictor–corrector iterative process we can improve this accessibility: we use a predictor iterative process with good accessibility, based on symmetric divided differences, and then, as corrector method, the Center-Steffensen method with quadratic convergence. In addition, the dynamical studies presented show, in an experimental way, that this iterative process also improves the region of accessibility of Steffensen-type methods. Moreover, we analyze the semilocal convergence of the proposed predictor–corrector iterative process in two cases: when H is differentiable and when H is non-differentiable. Summing up, we present an effective alternative to Newton's method for non-differentiable operators, where that method cannot be applied. The theoretical results are illustrated with numerical experiments. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
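The record above improves the accessibility of Steffensen-type methods. The paper’s predictor–corrector scheme is not reproduced here; as a sketch of the classical Steffensen iteration on which such methods are built (names illustrative):

```python
def steffensen(f, x0, tol=1e-12, max_iter=50):
    """Classical Steffensen iteration: derivative-free, approximating f'(x_k)
    by the divided difference (f(x_k + f(x_k)) - f(x_k)) / f(x_k)."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        g = (f(x + fx) - fx) / fx   # slope estimate from a divided difference
        x -= fx / g
    return x

# Approximate sqrt(2) as the positive root of x^2 - 2 = 0, starting near the root.
root = steffensen(lambda x: x * x - 2.0, 1.5)
```

Like Newton’s method it converges quadratically near a simple root, but the step x + f(x) can be large far from the root, which is exactly the small-accessibility issue the paper addresses.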
46. On the local convergence of Newton’s method under generalized conditions of Kantorovich
- Author
-
Ezquerro, J.A., González, D., and Hernández, M.A.
- Subjects
- *
STOCHASTIC convergence , *NEWTON-Raphson method , *KANTOROVICH method , *MATHEMATICAL proofs , *ITERATIVE methods (Mathematics) , *FUNCTIONAL analysis - Abstract
Abstract: Following an idea similar to that given by Dennis and Schnabel (1996) in [2], we prove a local convergence result for Newton’s method under generalized conditions of Kantorovich type. [Copyright Elsevier]
- Published
- 2013
- Full Text
- View/download PDF
47. Local convergence of Newton's method under a majorant condition in Riemannian manifolds.
- Author
-
FERREIRA, ORIZON P. and SILVA, ROBERTO C. M.
- Subjects
STOCHASTIC convergence ,NEWTON-Raphson method ,RIEMANNIAN manifolds ,VECTOR fields ,LIPSCHITZ spaces ,DERIVATIVES (Mathematics) - Abstract
A local convergence analysis of Newton's method for finding a singularity of a differentiable vector field defined on a complete Riemannian manifold, based on the majorant principle, is presented in this paper. This analysis provides a clear relationship between the majorant function, which relaxes the Lipschitz continuity of the derivative, and the vector field under consideration. It also allows us to obtain the optimal convergence radius and the biggest range for the uniqueness of the solution and to unify some previously unrelated results. [ABSTRACT FROM AUTHOR]
- Published
- 2012
- Full Text
- View/download PDF
48. The convergence of the perturbed Newton method and its application for ill-conditioned problems
- Author
-
Peris, R., Marquina, A., and Candela, V.
- Subjects
- *
STOCHASTIC convergence , *PERTURBATION theory , *NEWTON-Raphson method , *NP-complete problems , *DEGREES of freedom , *MATHEMATICAL analysis - Abstract
Abstract: Iterative methods, such as Newton’s, behave poorly when solving ill-conditioned problems: they become slow (first order) and lose accuracy. In this paper we analyze in depth the convergence of a modified Newton method, which we call perturbed Newton, designed to overcome the usual disadvantages of Newton’s method. The key point of this method is its dependence on a parameter, affording a degree of freedom that introduces regularization. Choices for that parameter are proposed. The theoretical analysis is illustrated through examples. [Copyright Elsevier]
- Published
- 2011
- Full Text
- View/download PDF
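The record above regularizes Newton’s method through a parameter; the paper’s parameter-selection strategy is not reproduced here. Purely as an illustration of the regularization idea, a sketch with a hypothetical fixed perturbation mu added to the derivative, applied to an ill-conditioned (double-root) problem:

```python
def perturbed_newton(f, df, x0, mu=1e-8, tol=1e-10, max_iter=200):
    """Newton step with the derivative perturbed by mu, a simple regularization
    that keeps the step bounded when f'(x_k) is nearly singular. The fixed mu
    here is an illustrative choice, not the paper's parameter rule."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        x -= fx / (df(x) + mu)
    return x

# Ill-conditioned example: double root at x = 1 of f(x) = (x - 1)^2, where
# plain Newton degrades to linear (first-order) convergence.
root = perturbed_newton(lambda x: (x - 1.0) ** 2, lambda x: 2.0 * (x - 1.0), 2.0)
```

At the double root f' vanishes along with f, so the perturbation mainly guards the division; the convergence remains linear, matching the slowdown the abstract describes for ill-conditioned problems.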
49. Adaptive cubic regularisation methods for unconstrained optimization. Part I: motivation, convergence and numerical results.
- Author
-
Cartis, Coralia, Gould, Nicholas I. M., and Toint, Philippe L.
- Subjects
- *
MATHEMATICAL optimization , *ALGORITHMS , *NEWTON-Raphson method , *LIPSCHITZ spaces , *MATHEMATICAL analysis , *MATHEMATICS - Abstract
An Adaptive Regularisation algorithm using Cubics (ARC) is proposed for unconstrained optimization, generalizing at the same time an unpublished method due to Griewank (Technical Report NA/12, 1981, DAMTP, University of Cambridge), an algorithm by Nesterov and Polyak (Math Program 108(1):177-205, 2006) and a proposal by Weiser et al. (Optim Methods Softw 22(3):413-431, 2007). At each iteration of our approach, an approximate global minimizer of a local cubic regularisation of the objective function is determined, and this ensures a significant improvement in the objective so long as the Hessian of the objective is locally Lipschitz continuous. The new method uses an adaptive estimation of the local Lipschitz constant and approximations to the global model-minimizer which remain computationally viable even for large-scale problems. We show that the excellent global and local convergence properties obtained by Nesterov and Polyak are retained, and sometimes extended to a wider class of problems, by our ARC approach. Preliminary numerical experiments with small-scale test problems from the CUTEr set show encouraging performance of the ARC algorithm when compared to a basic trust-region implementation. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
50. Local convergence analysis of the Gauss–Newton method under a majorant condition
- Author
-
Ferreira, O.P., Gonçalves, M.L.N., and Oliveira, P.R.
- Subjects
- *
STOCHASTIC convergence , *NONLINEAR statistical models , *LEAST squares , *NEWTON-Raphson method , *COMPUTATIONAL complexity - Abstract
Abstract: The Gauss–Newton method for solving nonlinear least squares problems is studied in this paper. Under the hypothesis that the derivative of the function associated with the least square problem satisfies a majorant condition, a local convergence analysis is presented. This analysis allows us to obtain the optimal convergence radius and the biggest range for the uniqueness of stationary point, and to unify two previous and unrelated results. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
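The record above analyzes the Gauss–Newton method for nonlinear least squares. As a minimal dense sketch of the standard iteration (solving the normal equations at each step; names and the toy fitting problem are illustrative, not from the paper):

```python
import numpy as np

def gauss_newton(r, J, x0, tol=1e-10, max_iter=50):
    """Gauss-Newton for min ||r(x)||^2: at each iterate solve the normal
    equations J^T J dx = -J^T r and update x <- x + dx."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        res, jac = r(x), J(x)
        dx = np.linalg.solve(jac.T @ jac, -jac.T @ res)
        x = x + dx
        if np.linalg.norm(dx) < tol:
            return x
    return x

# Toy zero-residual problem: fit y = a * exp(b * t) to data generated with a = 2, b = 0.5.
t = np.linspace(0.0, 1.0, 5)
y = 2.0 * np.exp(0.5 * t)
r = lambda x: x[0] * np.exp(x[1] * t) - y                       # residual vector
J = lambda x: np.column_stack([np.exp(x[1] * t),                # d r / d a
                               x[0] * t * np.exp(x[1] * t)])    # d r / d b
params = gauss_newton(r, J, [1.0, 0.0])
```

For zero-residual problems like this one, Gauss–Newton converges quadratically near the solution, which is the local behaviour the majorant-condition analysis characterizes.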