46 results on '"Constrained optimization"'
Search Results
2. Local convergence of a trust-region algorithm with line search filter technique for nonlinear constrained optimization.
- Author
- Pei, Yonggang and Zhu, Detong
- Subjects
- *STOCHASTIC convergence, *ALGORITHMS, *CONSTRAINED optimization, *NONLINEAR programming, *ITERATIVE methods (Mathematics), *NUMERICAL analysis
- Abstract
A trust-region algorithm combined with a line search filter technique for solving nonlinear equality constrained programming is proposed. At each iteration, a trial step providing sufficient descent is generated by solving a trust-region subproblem; the step size is then decided by a backtracking line search together with the filter technique to obtain the next iterate. The advantage of this method is that it avoids resolving the trust-region subproblem many times to determine a new iterate, as traditional trust-region methods do, and thus lessens the expensive computation. Moreover, the filter technique avoids the difficult choice of penalty parameters in merit functions. Second-order correction steps are introduced to overcome the Maratos effect. Convergence analysis shows that fast local convergence is achieved under mild assumptions, and preliminary numerical results are reported.
- Published
- 2016
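The backtracking step-size rule used by this entry (and several others below) can be sketched as follows. This is the generic Armijo backtracking condition with a simple quadratic as a stand-in objective, not the authors' trust-region filter variant; all names and test values are illustrative.

```python
import numpy as np

def backtracking_armijo(f, grad_f, x, d, alpha0=1.0, rho=0.5, c=1e-4):
    """Shrink the step size until f decreases sufficiently along d.

    Generic Armijo backtracking; the paper combines a step rule like this
    with a filter acceptance test instead of a merit function.
    """
    alpha = alpha0
    fx = f(x)
    slope = c * grad_f(x).dot(d)   # required decrease per unit step (negative)
    while f(x + alpha * d) > fx + alpha * slope:
        alpha *= rho               # backtrack
    return alpha

# Toy objective f(x) = ||x||^2; search along the steepest-descent direction.
f = lambda x: x.dot(x)
g = lambda x: 2 * x
x = np.array([1.0, 1.0])
alpha = backtracking_armijo(f, g, x, -g(x))
```

For a descent direction the slope term is negative, so the loop always terminates for continuously differentiable f.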
3. Generalizations of the limited-memory BFGS method based on the quasi-product form of update
- Author
- Vlček, J. and Lukšan, L.
- Subjects
- *NEWTON-Raphson method, *CONSTRAINED optimization, *VECTOR analysis, *ITERATIVE methods (Mathematics), *NUMERICAL analysis, *STOCHASTIC convergence, *SMOOTHNESS of functions
- Abstract
Two families of limited-memory variable metric (quasi-Newton) methods for unconstrained minimization, based on the quasi-product form of update, are derived. For the first family, four variants of utilizing the Strang recurrences for the Broyden class of variable metric updates are investigated; three of them use the same number of stored vectors as the limited-memory BFGS method, and one requires no additional matrix-by-vector multiplication. The second family uses vectors from the preceding iteration to construct a new class of variable metric updates; the resulting methods require neither an additional matrix-by-vector multiplication nor an additional stored vector. Global convergence of four of the presented methods is established for convex, sufficiently smooth functions. Numerical results indicate that two of the new methods can substantially save computational time on certain problems.
- Published
- 2013
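The Strang (two-loop) recurrence that these limited-memory variants build on can be sketched as below. This is the textbook L-BFGS form with the usual scaled-identity initial matrix, not the quasi-product generalization of the paper; the test data are hypothetical.

```python
import numpy as np

def lbfgs_direction(g, s_list, y_list):
    """Compute d = -H g via the Strang two-loop recurrence, where H is the
    limited-memory BFGS inverse-Hessian approximation built from stored
    pairs s_i = x_{i+1} - x_i and y_i = g_{i+1} - g_i."""
    q = g.copy()
    rhos = [1.0 / y.dot(s) for s, y in zip(s_list, y_list)]
    alphas = []
    for s, y, rho in reversed(list(zip(s_list, y_list, rhos))):  # newest first
        a = rho * s.dot(q)
        alphas.append(a)
        q = q - a * y
    # Scale by gamma = s'y / y'y, a common initial-matrix choice.
    s_last, y_last = s_list[-1], y_list[-1]
    r = (s_last.dot(y_last) / y_last.dot(y_last)) * q
    for (s, y, rho), a in zip(zip(s_list, y_list, rhos), reversed(alphas)):
        r = r + (a - rho * y.dot(r)) * s                         # oldest first
    return -r

# Secant check (hypothetical data): with one stored pair, H y = s,
# so the direction returned for g = y is -s.
s = np.array([1.0, 0.0])
y = np.array([2.0, 1.0])
d = lbfgs_direction(y, [s], [y])
```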
4. Global and local convergence of a penalty-free method for nonlinear programming
- Author
- Chen, Zhongwen and Qiu, Songqiang
- Subjects
- *STOCHASTIC convergence, *NONLINEAR programming, *ALGORITHMS, *CONSTRAINED optimization, *FEASIBILITY studies, *NUMERICAL analysis
- Abstract
We present a class of trust-region algorithms for nonlinear equality constrained optimization that use neither a penalty function nor a filter. In each iteration, infeasibility is controlled by a progressively decreasing upper limit, and trial steps are computed by a Byrd–Omojokun-type trust-region strategy. Measures of optimality and infeasibility are computed, and their relationship serves as the criterion by which the algorithm decides which of the two to focus on improving. As a result, the algorithm keeps a balance between improving optimality and feasibility even though it uses no restoration phase of the kind required by filter methods. The framework ensures global convergence without assuming regularity or boundedness of the iterative sequence. By using a second-order correction strategy, the Maratos effect is avoided and fast local convergence is shown. Preliminary numerical results are reported.
- Published
- 2013
5. A restoration-free filter SQP algorithm for equality constrained optimization
- Author
- Zhu, Xiaojing and Pu, Dingguo
- Subjects
- *QUADRATIC programming, *ALGORITHMS, *CONSTRAINED optimization, *PROBLEM solving, *STOCHASTIC convergence, *NUMERICAL analysis
- Abstract
In this paper, a trust-region sequential quadratic programming algorithm with a modified filter acceptance mechanism is proposed for nonlinear equality constrained optimization. Its most important advantage is the avoidance of any feasibility restoration phase, a necessity in traditional filter methods. Quadratic programming subproblems are solved by the well-known Byrd–Omojokun trust-region method, and inexact solutions to these subproblems are allowed. Under standard assumptions, global convergence of the proposed algorithm is established. Numerical results show the approach is potentially useful.
- Published
- 2013
6. A derivative-free method for solving box-constrained underdetermined nonlinear systems of equations
- Author
- Echebest, N., Schuverdt, M.L., and Vignau, R.P.
- Subjects
- *CONSTRAINED optimization, *NONLINEAR systems, *NUMERICAL solutions to nonlinear differential equations, *NEWTON-Raphson method, *STOCHASTIC convergence, *NUMERICAL analysis, *MATHEMATICAL formulas
- Abstract
A derivative-free iterative method for solving bound-constrained underdetermined nonlinear systems is presented. The procedure consists of a quasi-Newton method that uses the Broyden update formula and a globalized line search that combines the strategy of Grippo, Lampariello and Lucidi with that of Li and Fukushima. Global convergence results are proved and numerical experiments are presented.
- Published
- 2012
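The Broyden update at the core of such derivative-free methods can be sketched as a bare quasi-Newton iteration, without the paper's globalized line search or bound handling. The toy system, starting point, and initial slope guess are hypothetical; in one dimension this iteration reduces to the secant method.

```python
import numpy as np

def broyden_solve(F, x0, B0, tol=1e-10, max_iter=50):
    """Solve F(x) = 0 without derivatives: B approximates the Jacobian and
    is corrected by Broyden's rank-one update after each step."""
    x, B = x0.astype(float), B0.astype(float)
    Fx = F(x)
    for _ in range(max_iter):
        if np.linalg.norm(Fx) < tol:
            break
        s = np.linalg.solve(B, -Fx)        # quasi-Newton step: B s = -F(x)
        x = x + s
        Fx_new = F(x)
        y = Fx_new - Fx
        # Rank-one correction enforcing the secant condition B s = y.
        B = B + np.outer(y - B @ s, s) / s.dot(s)
        Fx = Fx_new
    return x

# Toy system (hypothetical): F(x) = x^2 - 4 from x0 = 3, with B0 = 6
# as a rough slope guess near the starting point.
root = broyden_solve(lambda x: np.array([x[0] ** 2 - 4.0]),
                     np.array([3.0]), np.array([[6.0]]))
```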
7. A practical PR+ conjugate gradient method only using gradient
- Author
- Dong, Yunda
- Subjects
- *CONJUGATE gradient methods, *CONSTRAINED optimization, *STOCHASTIC convergence, *GLOBAL analysis (Mathematics), *NUMERICAL analysis, *MATHEMATICAL analysis
- Abstract
In this paper, we consider the Polak–Ribière (or Polak–Ribière plus) conjugate gradient method for solving the optimality condition of an unconstrained minimization problem. We give two new steplength rules that use only the gradient and, under a gradient-Lipschitz assumption, prove the method's global convergence in each case. We then develop a practical Polak–Ribière plus method whose steplength is located by a single gradient-based inequality, and report promising numerical results on high-accuracy solutions of some standard test problems compared to state-of-the-art methods in this direction. Importantly, our work provides a new idea for devising a practical version of the celebrated Polak–Ribière (or Polak–Ribière plus) method.
- Published
- 2012
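The PR+ coefficient itself is standard: beta is the Polak–Ribière formula truncated at zero. A sketch of one full CG loop with this coefficient, using exact line search on a quadratic for clarity rather than the paper's new gradient-only steplength rules (test matrix and vectors are illustrative):

```python
import numpy as np

def prp_plus_quadratic(A, b, x, iters=10):
    """Minimize 0.5 x'Ax - b'x with PR+ conjugate gradients.
    On a quadratic, the exact step along d is alpha = -g'd / d'Ad."""
    g = A @ x - b
    d = -g
    for _ in range(iters):
        alpha = -g.dot(d) / d.dot(A @ d)
        x = x + alpha * d
        g_new = A @ x - b
        # PR+ coefficient: the max(0, .) truncation distinguishes it from PR.
        beta = max(0.0, g_new.dot(g_new - g) / g.dot(g))
        d = -g_new + beta * d
        g = g_new
        if np.linalg.norm(g) < 1e-12:
            break
    return x

# Hypothetical test problem: A x = b has solution (1, 0.1).
A = np.diag([1.0, 10.0])
b = np.array([1.0, 1.0])
x_min = prp_plus_quadratic(A, b, np.zeros(2))
```

With exact line search on a strictly convex quadratic, the iteration coincides with linear CG and terminates in at most n steps.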
8. Conjugate gradient methods based on secant conditions that generate descent search directions for unconstrained optimization
- Author
- Narushima, Yasushi and Yabe, Hiroshi
- Subjects
- *CONJUGATE gradient methods, *SEARCH algorithms, *CONSTRAINED optimization, *INFORMATION theory, *STOCHASTIC convergence, *NUMERICAL analysis
- Abstract
Conjugate gradient methods have attracted attention because they can be applied directly to large-scale unconstrained optimization problems. To incorporate second-order information of the objective function into conjugate gradient methods, Dai and Liao (2001) proposed a conjugate gradient method based on the secant condition; however, their method does not necessarily generate a descent search direction. On the other hand, Hager and Zhang (2005) proposed a conjugate gradient method that always generates a descent search direction. In this paper, combining the Dai–Liao and Hager–Zhang ideas, we propose conjugate gradient methods based on secant conditions that generate descent search directions. In addition, we prove global convergence properties of the proposed methods, and preliminary numerical results are given.
- Published
- 2012
9. Convergence analysis of an iterative algorithm for a class of constrained dynamic problems
- Author
- Wang, Jin, Modnak, Chairat, and Hou, Gene
- Subjects
- *STOCHASTIC convergence, *ITERATIVE methods (Mathematics), *ALGORITHMS, *CONSTRAINED optimization, *ERROR analysis in mathematics, *MATHEMATICAL analysis, *NUMERICAL analysis
- Abstract
In this paper, we consider numerical simulation of a class of constrained dynamic problems whose overall dynamics are determined by the interactions between two sub-systems. We present an iterative algorithm that naturally decouples the computation of the two sub-systems and ensures an accurate and efficient solution procedure. We conduct rigorous error analysis for the convergence of the iterative algorithm and verify the analytical results through careful numerical tests.
- Published
- 2012
10. A multiplier active-set trust-region algorithm for solving constrained optimization problem
- Author
- El-Sobky, Bothina
- Subjects
- *MULTIPLIERS (Mathematical analysis), *ALGORITHMS, *CONSTRAINED optimization, *STOCHASTIC convergence, *ITERATIVE methods (Mathematics), *MATHEMATICAL sequences, *NUMERICAL analysis
- Abstract
A new trust-region algorithm for solving a constrained optimization problem is introduced. In this algorithm, an active-set strategy is used together with a multiplier method to reduce the computation of the trial step to an easy trust-region subproblem similar to that of the unconstrained case. A convergence theory for the algorithm is presented: under reasonable assumptions, the algorithm is shown to be globally convergent. In particular, in the limit, a subsequence of the iteration sequence satisfies one of four types of stationarity conditions: the infeasible Mayer–Bliss conditions, the Fritz John conditions, the infeasible Fritz John conditions, or the KKT conditions. Preliminary numerical experiments and the performance of the algorithm are reported; the results show that the approach is of value and merits further investigation.
- Published
- 2012
11. A conjugate directions approach to improve the limited-memory BFGS method
- Author
- Vlček, J. and Lukšan, L.
- Subjects
- *CONJUGATE gradient methods, *CONSTRAINED optimization, *VECTOR analysis, *ITERATIVE methods (Mathematics), *STOCHASTIC convergence, *ALGORITHMS, *NUMERICAL analysis
- Abstract
Simple modifications of the limited-memory BFGS method (L-BFGS) for large-scale unconstrained optimization are considered, consisting of corrections (derived from the idea of conjugate directions) to the difference vectors used, utilizing information from the preceding iteration. For quadratic objective functions, the improvement of convergence is, in a certain sense, the best possible, and all stored difference vectors are conjugate for unit stepsizes. Global convergence of the algorithm is established for convex, sufficiently smooth functions. Numerical experiments indicate that the new method often improves the L-BFGS method significantly.
- Published
- 2012
12. Sequential quadratic programming with a flexible step acceptance strategy
- Author
- Zhu, Xiaojing and Pu, Dingguo
- Subjects
- *QUADRATIC programming, *SEQUENTIAL analysis, *NONLINEAR theories, *CONSTRAINED optimization, *STOCHASTIC convergence, *NUMERICAL analysis
- Abstract
A new sequential quadratic programming (SQP) method for nonlinear inequality constrained optimization is proposed. The aim of this paper is to promote global convergence of SQP methods using a flexible step acceptance strategy that combines merit functions and filter techniques. Global convergence is proved under reasonable assumptions, and preliminary numerical results are reported.
- Published
- 2012
13. A new class of nonlinear conjugate gradient coefficients with global convergence properties
- Author
- Rivaie, Mohd, Mamat, Mustafa, June, Leong Wah, and Mohd, Ismail
- Subjects
- *NONLINEAR differential equations, *CONJUGATE gradient methods, *MATHEMATICAL constants, *GLOBAL analysis (Mathematics), *STOCHASTIC convergence, *CONSTRAINED optimization, *NUMERICAL analysis
- Abstract
Nonlinear conjugate gradient (CG) methods have played an important role in solving large-scale unconstrained optimization. Their wide application in many fields is due to their low memory requirements and global convergence properties, and numerous studies and modifications have recently been conducted to improve them. In this paper, a new class of conjugate gradient coefficients (β_k) possessing global convergence properties is presented. The global convergence result is established using exact line searches. Numerical results show that the proposed formula is more efficient than other CG coefficients.
- Published
- 2012
14. A new penalty-free-type algorithm based on trust region techniques
- Author
- Qiu, Songqiang and Chen, Zhongwen
- Subjects
- *NONLINEAR theories, *ALGORITHMS, *CONSTRAINED optimization, *FEASIBILITY studies, *STOCHASTIC convergence, *NUMERICAL analysis, *MATHEMATICAL analysis
- Abstract
In this paper, we propose a new penalty-free-type method for nonlinear equality constrained problems. The new algorithm uses a trust-region framework and a feasibility safeguarding technique. It involves no choice of a penalty parameter or of a penalty function as a merit function, and it does not resort to the filter technique to avoid the penalty function either. We analyze the global convergence of the main algorithm under standard assumptions, and preliminary numerical tests are reported.
- Published
- 2012
15. A modified conjugate gradient algorithm with cyclic Barzilai–Borwein steplength for unconstrained optimization
- Author
- Xiao, Yunhai, Song, Huina, and Wang, Zhiguo
- Subjects
- *CONJUGATE gradient methods, *STOCHASTIC convergence, *CONSTRAINED optimization, *SEARCH algorithms, *NUMERICAL analysis, *COMPARATIVE studies
- Abstract
For solving large-scale unconstrained minimization problems, the nonlinear conjugate gradient method is popular due to its simplicity, low storage, efficiency, and nice convergence properties. Among the methods in this framework, the conjugate gradient descent algorithm CG_DESCENT is very popular: its generated directions descend automatically, a property independent of the line search used. In this paper, we generalize CG_DESCENT with two Barzilai–Borwein steplengths reused cyclically. We show that the resulting algorithm has an attractive sufficient descent property and converges globally under mild conditions. We test the proposed algorithm on a large set of high-dimensional unconstrained problems from the CUTEr library. Numerical comparisons with the state-of-the-art algorithm CG_DESCENT illustrate that the proposed method is effective, competitive, and promising.
- Published
- 2012
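The two Barzilai–Borwein steplengths that this entry reuses cyclically can be sketched as below: generic BB formulas inside a plain gradient iteration, not the authors' CG_DESCENT hybrid. The quadratic test problem and starting point are hypothetical.

```python
import numpy as np

def bb_steps(s, y):
    """Barzilai-Borwein steplengths from s = x_k - x_{k-1} and
    y = g_k - g_{k-1}: BB1 = s's / s'y and BB2 = s'y / y'y."""
    return s.dot(s) / s.dot(y), s.dot(y) / y.dot(y)

def bb_gradient_descent(grad, x, alpha=0.1, iters=100):
    """Gradient descent that alternates the two BB steplengths cyclically."""
    g = grad(x)
    for k in range(iters):
        x_new = x - alpha * g
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        if y.dot(y) == 0:                    # gradient no longer changes: done
            return x_new
        bb1, bb2 = bb_steps(s, y)
        alpha = bb1 if k % 2 == 0 else bb2   # cyclic reuse of the two rules
        x, g = x_new, g_new
    return x

# Hypothetical strictly convex quadratic; the minimizer is the origin.
A = np.diag([1.0, 5.0])
x_min = bb_gradient_descent(lambda x: A @ x, np.array([1.0, 1.0]))
```

Note that BB iterations are typically nonmonotone in f, which is why methods like this one pair them with nonmonotone acceptance tests.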
16. A line search filter algorithm with inexact step computations for equality constrained optimization
- Author
- Zhu, Xiaojing and Pu, Dingguo
- Subjects
- *ALGORITHMS, *CONSTRAINED optimization, *NEWTON-Raphson method, *QUADRATIC programming, *LINEAR systems, *STOCHASTIC convergence, *NUMERICAL analysis
- Abstract
In this paper, a new line search filter algorithm for equality constrained optimization is presented. The approach belongs to the class of inexact Newton-like methods and can also be regarded as an inexact version of generic sequential quadratic programming (SQP) methods. The trial step is obtained by truncatedly solving the primal–dual system with any robust and efficient linear system solver. Practical termination tests for the linear system solver are established to ensure global convergence. Preliminary numerical results demonstrate the approach is potentially useful.
- Published
- 2012
17. Some projection methods with the BB step sizes for variational inequalities
- Author
- He, Hongjin, Han, Deren, and Li, Zhibao
- Subjects
- *VARIATIONAL inequalities (Mathematics), *GRAPHICAL projection, *CONSTRAINED optimization, *NONLINEAR theories, *STOCHASTIC convergence, *MONOTONIC functions, *MATHEMATICAL constants, *NUMERICAL analysis
- Abstract
Since the appearance of the Barzilai–Borwein (BB) step-size strategy for unconstrained optimization problems, it has received growing attention from researchers. It has been applied in various fields of nonlinear optimization and recently extended to optimization problems with bound constraints. In this paper, we further extend BB step sizes to more general variational inequality (VI) problems, i.e., we adopt them in projection methods. Under the condition that the underlying mapping of the VI problem is strongly monotone and Lipschitz continuous, and that the modulus of strong monotonicity and the Lipschitz constant satisfy some further conditions, we establish the global convergence of projection methods with BB step sizes. A series of numerical examples demonstrates that the proposed methods converge under mild conditions and are more efficient than some classical projection-like methods.
- Published
- 2012
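A basic projection method for a VI over a convex set, here a box, with a safeguarded BB1 step, can be sketched as follows. This is a minimal illustration under the strong-monotonicity assumption quoted above, not the authors' exact scheme; the mapping F and the box are hypothetical.

```python
import numpy as np

def project_box(x, lo, hi):
    """Euclidean projection onto the box [lo, hi]."""
    return np.clip(x, lo, hi)

def vi_projection_method(F, x, lo, hi, alpha=0.5, iters=200):
    """Basic projection method x <- P_C(x - alpha F(x)) for the VI
    'find x in C with F(x)'(y - x) >= 0 for all y in C', with a
    BB1-style update of alpha from successive iterates."""
    Fx = F(x)
    for _ in range(iters):
        x_new = project_box(x - alpha * Fx, lo, hi)
        F_new = F(x_new)
        s, y = x_new - x, F_new - Fx
        if s.dot(s) == 0:                 # fixed point: x solves the VI
            return x_new
        if s.dot(y) > 0:                  # keep the BB1 step only when safe
            alpha = s.dot(s) / s.dot(y)
        x, Fx = x_new, F_new
    return x

# Toy VI (hypothetical): F(x) = x - c with c outside the box, so the
# solution is the projection of c onto the box, here (1, 0).
lo, hi = np.zeros(2), np.ones(2)
c = np.array([2.0, -1.0])
sol = vi_projection_method(lambda x: x - c, np.array([0.5, 0.5]), lo, hi)
```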
18. A nonmonotone trust-region line search method for large-scale unconstrained optimization
- Author
- Ahookhosh, Masoud, Amini, Keyvan, and Peyghami, Mohammad Reza
- Subjects
- *CONSTRAINED optimization, *PROBLEM solving, *STOCHASTIC convergence, *ALGORITHMS, *MATHEMATICAL analysis, *NUMERICAL analysis, *CRITICAL point (Thermodynamics)
- Abstract
We consider an efficient trust-region framework that employs a new nonmonotone line search technique for unconstrained optimization problems. Unlike the traditional nonmonotone trust-region method, our proposed algorithm avoids resolving the subproblem whenever a trial step is rejected; instead, it performs a nonmonotone Armijo-type line search in the direction of the rejected trial step to construct a new point. Theoretical analysis indicates that the new approach preserves global convergence to first-order critical points under classical assumptions; moreover, superlinear and quadratic convergence are established under suitable conditions. Numerical experiments show the efficiency and effectiveness of the proposed approach for solving unconstrained optimization problems.
- Published
- 2012
19. A new neural network for solving nonlinear convex programs with linear constraints
- Author
- Yang, Yongqing and Gao, Yun
- Subjects
- *ARTIFICIAL neural networks, *ARTIFICIAL intelligence, *NONLINEAR theories, *CONVEX programming, *CONSTRAINED optimization, *LYAPUNOV stability, *STOCHASTIC convergence, *NUMERICAL analysis
- Abstract
In this paper, a new neural network is presented for solving nonlinear convex programs with linear constraints. Under the condition that the objective function is convex, the proposed neural network is shown to be stable in the sense of Lyapunov and to converge globally to the optimal solution of the original problem. Several numerical examples show the effectiveness of the proposed neural network.
- Published
- 2011
20. An SQP-filter method for inequality constrained optimization and its global convergence
- Author
- Wang, Xiangling, Zhu, Zhibin, Zuo, Shuangyong, and Huang, Qingqun
- Subjects
- *FILTERS & filtration, *MATHEMATICAL inequalities, *CONSTRAINED optimization, *STOCHASTIC convergence, *QUADRATIC programming, *MATHEMATICAL analysis, *NUMERICAL analysis, *SEQUENTIAL analysis
- Abstract
In this paper, we combine the filter technique with a modified sequential quadratic programming (SQP) method. The optimization solution is obtained by reducing the step length, which is determined by an exact line search. Furthermore, the method can start from an infeasible initial point, and it uses a filter to promote global convergence.
- Published
- 2011
21. New reformulation and feasible semismooth Newton method for a class of stochastic linear complementarity problems
- Author
- Liu, Hongwei, Huang, Yakui, and Li, Xiangli
- Subjects
- *MATHEMATICAL reformulation, *NONSMOOTH optimization, *NEWTON-Raphson method, *STOCHASTIC processes, *LINEAR complementarity problem, *CONSTRAINED optimization, *STOCHASTIC convergence, *NUMERICAL analysis
- Abstract
A class of stochastic linear complementarity problems (SLCPs) with finitely many realizations is considered. We first formulate the problem as a new constrained minimization problem. Then we propose a feasible semismooth Newton method that yields a stationary point of the constrained minimization problem. We study the condition for the level set of the objective function to be bounded, and as a result obtain a condition on the solution set of the constrained minimization problem. The global and quadratic convergence of the proposed method is proved under certain assumptions. Preliminary numerical results show that this method yields a reasonable solution with high safety within a small number of iterations.
- Published
- 2011
22. Nonmonotone filter DQMM method for the system of nonlinear equations
- Author
- Gu, Chao
- Subjects
- *CONSTRAINED optimization, *MATHEMATICAL optimization, *NUMERICAL solutions to equations, *NONLINEAR theories, *MONOTONIC functions, *NONLINEAR programming, *STOCHASTIC convergence, *NUMERICAL analysis
- Abstract
In this paper, we propose a nonmonotone filter Diagonalized Quasi-Newton Multiplier (DQMM) method for solving systems of nonlinear equations. The system of nonlinear equations is transformed into a constrained nonlinear programming problem, which is then solved by the nonmonotone filter DQMM method. A nonmonotone criterion is used to speed up convergence in some ill-conditioned cases. Under reasonable conditions, global convergence properties are given, and numerical experiments are reported to show the effectiveness of the proposed algorithm.
- Published
- 2011
23. An active set limited memory BFGS algorithm for bound constrained optimization
- Author
- Yuan, Gonglin and Lu, Xiwen
- Subjects
- *CONSTRAINED optimization, *SET theory, *ALGORITHMS, *STOCHASTIC convergence, *NUMERICAL analysis, *MATHEMATICAL analysis
- Abstract
In this paper, an active-set limited-memory BFGS algorithm is proposed for bound constrained optimization. Global convergence is established under suitable conditions, and numerical results show that the method is effective.
- Published
- 2011
24. Modified active set projected spectral gradient method for bound constrained optimization
- Author
- Xiao, Yun-Hai, Hu, Qing-Jie, and Wei, Zengxin
- Subjects
- *CONSTRAINED optimization, *STOCHASTIC convergence, *NUMERICAL analysis, *SET theory, *MATHEMATICAL optimization, *EXPERIMENTS
- Abstract
In this paper, by means of an active-set strategy, we present a projected spectral gradient algorithm for solving large-scale bound constrained optimization problems. A nice property of the active-set estimation technique is that it can identify the active set at the optimal point without requiring the strict complementarity condition, which is potentially useful for solving degenerate optimization problems. Under appropriate conditions, we show that the proposed method is globally convergent. We also report numerical experiments on bound constrained problems from the CUTEr library; comparisons with SPG, TRON, and L-BFGS-B show that the proposed method is effective and promising.
- Published
- 2011
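A simple form of active-set estimation for bound constraints, combined with one projected gradient step, can be sketched as below. This naive rule (flag a variable when it sits on a bound and the gradient pushes outward) is only an illustration, not the paper's identification technique; all data are hypothetical.

```python
import numpy as np

def estimate_active_set(x, g, lo, hi, eps=1e-8):
    """Estimate the bound constraints active at x: a bound is flagged when
    x sits (numerically) on it and the gradient pushes outward, so a
    projected step would keep the variable there."""
    at_lower = (x - lo <= eps) & (g > 0)   # pushing below the lower bound
    at_upper = (hi - x <= eps) & (g < 0)   # pushing above the upper bound
    return at_lower | at_upper

def projected_gradient_step(x, g, lo, hi, alpha):
    """One projected gradient step: move along -g with step alpha on the
    free variables, freeze the estimated-active ones, then clip into the box."""
    active = estimate_active_set(x, g, lo, hi)
    d = np.where(active, 0.0, -g)
    return np.clip(x + alpha * d, lo, hi)

# Hypothetical point: variable 0 is on its lower bound with an outward
# gradient (stays fixed); variable 1 is free and moves downhill.
lo, hi = np.zeros(2), np.ones(2)
x = np.array([0.0, 0.5])
g = np.array([1.0, -1.0])
active = estimate_active_set(x, g, lo, hi)
x_new = projected_gradient_step(x, g, lo, hi, alpha=0.3)
```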
25. Projected gradient iteration for nonlinear operator equation
- Author
- Huang, Wei and Chen, Di-Rong
- Subjects
- *ITERATIVE methods (Mathematics), *NONLINEAR operator equations, *ALGORITHMS, *MATRIX norms, *STOCHASTIC convergence, *MATHEMATICAL sequences, *NUMERICAL analysis, *CONSTRAINED optimization
- Abstract
In this paper, we use a projected gradient algorithm to solve a nonlinear operator equation with a -norm constraint. Gradient iterations with -norm constraints have been studied recently both in the context of inverse problems and of compressed sensing. In this paper, the constrained gradient iteration is implemented via a projection operator. We establish the -norm convergence of the sequence constructed by the constrained gradient iteration when . The performance of the method is tested on a numerical example.
- Published
- 2011
26. n-step quadratic convergence of the MPRP method with a restart strategy
- Author
- Li, Dong-Hui and Tian, Bo-Shi
- Subjects
- *QUADRATIC equations, *STOCHASTIC convergence, *CONJUGATE gradient methods, *LINEAR systems, *NUMERICAL analysis, *CONSTRAINED optimization, *MATHEMATICAL analysis
- Abstract
It is well known that the PRP conjugate gradient method with exact line search is globally and linearly convergent, and that with a restart strategy its convergence rate can become n-step superlinear/quadratic. Recently, Zhang et al. [L. Zhang, W. Zhou, D.H. Li, A descent modified Polak–Ribière–Polyak conjugate gradient method and its global convergence, IMA J. Numer. Anal. 26 (2006) 629–640] developed a modified PRP (MPRP) method that is globally convergent with an inexact line search. In this paper, we investigate the convergence rate of the MPRP method with inexact line search. We first show that the MPRP method with Armijo or Wolfe line search is linearly convergent. We then show that the MPRP method with a restart strategy still retains n-step superlinear/quadratic convergence if the initial steplength is appropriately chosen. Numerical experiments show that the restarted MPRP method does converge quadratically and is more efficient than the non-restarted method.
- Published
- 2011
27. A reduced Hessian algorithm with line search filter method for nonlinear programming
- Author
- Wang, Zhujun and Zhu, Detong
- Subjects
- *ALGORITHMS, *NONLINEAR programming, *FILTERS (Mathematics), *CONSTRAINED optimization, *STOCHASTIC convergence, *NUMERICAL analysis, *EXPERIMENTS
- Abstract
This paper proposes a line search filter reduced Hessian method for nonlinear equality constrained optimization. In the presented algorithm, the reduced Hessian method produces a search direction, a backtracking line search generates the step size, filter rules determine step acceptance, and a second-order correction technique reduces infeasibility and overcomes the Maratos effect. It is shown that the algorithm does not suffer from the Maratos effect thanks to the second-order correction step, and that under mild assumptions fast convergence to second-order sufficient local solutions is achieved. A numerical experiment is reported to show the effectiveness of the proposed algorithm.
- Published
- 2011
28. An improved strongly sub-feasible SSLE method for optimization problems and numerical experiments
- Author
- Mo, Xing-de, Jian, Jin-bao, and Yang, Su-min
- Subjects
- *NUMERICAL analysis, *STOCHASTIC convergence, *CONSTRAINED optimization, *DIFFERENTIAL equations, *MODULES (Algebra), *MATHEMATICS, *LINEAR algebra
- Abstract
In this paper, the superlinearly and quadratically convergent strongly sub-feasible method [J.L. Li, J.B. Jian, A superlinearly and quadratically convergent strongly subfeasible method for nonlinear inequality constrained optimization, OR Transactions, 7 (2) (2003) 21–34] for nonlinear inequality constrained optimization is improved so that the iterates enter the feasible region after a finite number of iterations; as a result, a strict restrictive condition can be removed. Two further contributions of this paper are a new bidirectional Armijo line search and an extensive set of numerical comparison results.
- Published
- 2011
29. Combining nonmonotone conic trust region and line search techniques for unconstrained optimization
- Author
- Cui, Zhaocheng, Wu, Boying, and Qu, Shaojian
- Subjects
- *CONSTRAINED optimization, *ITERATIVE methods (Mathematics), *STOCHASTIC convergence, *ALGORITHMS, *NUMERICAL analysis, *MONOTONE operators
- Abstract
In this paper, we propose a trust-region method for unconstrained optimization that combines conic model, nonmonotone, and line search techniques. Unlike traditional trust-region methods, the subproblem of our algorithm is a conic minimization subproblem; moreover, when a trial step is not accepted, our algorithm performs a nonmonotone line search to find the next iterate instead of resolving the subproblem. Global and superlinear convergence results are established under reasonable assumptions. Numerical results show that the new method is efficient for unconstrained optimization problems.
- Published
- 2011
30. New spectral PRP conjugate gradient method for unconstrained optimization
- Author
- Wan, Zhong, Yang, ZhanLu, and Wang, YaLin
- Subjects
- *SPECTRAL theory, *CONSTRAINED optimization, *CONJUGATE gradient methods, *SEARCH algorithms, *ITERATIVE methods (Mathematics), *GLOBAL analysis (Mathematics), *STOCHASTIC convergence, *NUMERICAL analysis
- Abstract
In this paper, a new spectral PRP conjugate gradient algorithm is developed for solving unconstrained optimization problems, in which the search direction is a combination of the gradient and the previously obtained direction, and the steplength is determined by a Wolfe-type inexact line search. It is proved that the search direction at each iteration is a descent direction for the objective function. Under mild conditions, we establish a global convergence theorem for the proposed method. Numerical results show that the algorithm is promising, particularly in comparison with several existing main methods.
- Published
- 2011
31. A nonmonotone adaptive trust region method for unconstrained optimization based on conic model
- Author
-
Zhang, Jian, Zhang, Kecun, and Qu, Shaojian
- Subjects
- *
MONOTONE operators , *CONSTRAINED optimization , *ADAPTIVE computing systems , *NUMERICAL analysis , *STOCHASTIC convergence , *MATHEMATICAL analysis - Abstract
Abstract: In this paper, we present a nonmonotone adaptive trust region method for unconstrained optimization based on a conic model. The new method combines a nonmonotone technique with a new way of determining the trust region radius at each iteration. The local and global convergence properties are proved under reasonable assumptions. Numerical experiments show that our algorithm is effective. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
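The nonmonotone technique cited in the record above is commonly implemented as a Grippo–Lampariello–Lucidi style acceptance rule, which compares the trial value against the maximum of recent function values instead of the current one. A minimal sketch of that generic rule follows (the conic-model subproblem itself is not reproduced); the memory list `history` and all parameter values are illustrative assumptions.

```python
import numpy as np

def nonmonotone_armijo(f, x, d, g, history, sigma=1e-4, rho=0.5,
                       max_backtracks=50):
    """Grippo-Lampariello-Lucidi style nonmonotone Armijo search.

    Accepts a step when f(x + alpha*d) <= max(recent f values)
    + sigma*alpha*g'd, which permits temporary increases in f.
    Generic sketch of the nonmonotone technique, not the paper's
    conic-model algorithm.
    """
    f_max = max(history)   # reference value: worst of the stored iterates
    slope = g @ d          # directional derivative, must be negative
    alpha = 1.0
    for _ in range(max_backtracks):
        if f(x + alpha * d) <= f_max + sigma * alpha * slope:
            return alpha
        alpha *= rho
    return alpha

# toy use: one steepest-descent step on f(x) = ||x||^2
f = lambda x: float(x @ x)
x = np.array([2.0, -1.0])
g = 2 * x
history = [f(x)]           # memory of the last M function values (M = 1 here)
alpha = nonmonotone_armijo(f, x, -g, g, history)
x_new = x + alpha * (-g)
```

With a single stored value the rule reduces to the ordinary Armijo test; a longer `history` is what makes the search nonmonotone.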
32. A nonmonotone globalization algorithm with preconditioned gradient path for unconstrained optimization
- Author
-
Zhou, Qunyan, Sun, Wenyu, and Qi, Liqun
- Subjects
- *
MONOTONE operators , *GLOBAL analysis (Mathematics) , *CONSTRAINED optimization , *ALGORITHMS , *STOCHASTIC convergence , *NUMERICAL analysis , *MATHEMATICAL analysis - Abstract
Abstract: The aim of this paper is to incorporate the preconditioned gradient path in a nonmonotone stabilization algorithm for unconstrained optimization. Global convergence and local superlinear convergence are established for this class of algorithms. Finally, we report detailed numerical results which show the effectiveness of the proposed algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
33. A numerical solution of the constrained weighted energy problem
- Author
-
Chesnokov, Andrey, Deckers, Karl, and Van Barel, Marc
- Subjects
- *
NUMERICAL analysis , *CONSTRAINED optimization , *POTENTIAL theory (Mathematics) , *ALGORITHMS , *EIGENVALUES , *ITERATIVE methods (Mathematics) , *STOCHASTIC convergence - Abstract
Abstract: A numerical algorithm is presented to solve the constrained weighted energy problem from potential theory. As one of the possible applications of this algorithm, we study the convergence properties of the rational Lanczos iteration method for the symmetric eigenvalue problem. The constrained weighted energy problem characterizes the region containing those eigenvalues that are well approximated by the Ritz values. The region depends on the distribution of the eigenvalues, on the distribution of the poles, and on the ratio between the size of the matrix and the number of iterations. Our algorithm gives the possibility of finding the boundary of this region in an effective way. We give numerical examples for different distributions of poles and eigenvalues and compare the results of our algorithm with the convergence behavior of the explicitly performed rational Lanczos algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
34. A new feasible descent primal–dual interior point algorithm for nonlinear inequality constrained optimization
- Author
-
Jian, Jin-bao and Pan, Hua-qin
- Subjects
- *
CONSTRAINED optimization , *DUALITY theory (Mathematics) , *ALGORITHMS , *NONLINEAR theories , *MATHEMATICAL inequalities , *ITERATIVE methods (Mathematics) , *LINEAR differential equations , *STOCHASTIC convergence , *NUMERICAL analysis - Abstract
Abstract: In this paper, a new feasible primal–dual interior point algorithm for solving inequality constrained optimization problems is presented. At each iteration, the algorithm solves only two or three reduced systems of linear equations with the same coefficient matrix. The search direction is feasible and the objective function is monotonically decreasing. The proposed algorithm is proved to possess global and superlinear convergence under mild conditions. Finally, some numerical experiments with the algorithm are reported. [Copyright &y& Elsevier]
- Published
- 2010
- Full Text
- View/download PDF
35. Numerical comparisons of two effective methods for mixed complementarity problems
- Author
-
Chen, Jein-Shan, Pan, Shaohua, and Yang, Ching-Yu
- Subjects
- *
NUMERICAL analysis , *COMPARATIVE studies , *STOCHASTIC convergence , *LINEAR complementarity problem , *NONSMOOTH optimization , *CONSTRAINED optimization , *ITERATIVE methods (Mathematics) - Abstract
Abstract: Recently, two different effective methods were proposed by Kanzow et al. in (Kanzow, 2001 ) and (Kanzow and Petra, 2004 ), respectively, which both use the Fischer–Burmeister (FB) function to recast the mixed complementarity problem (MCP), as a constrained minimization problem and as a nonlinear system of equations, respectively. Both papers remark that their algorithms may be improved if the FB function is replaced by other NCP functions. Accordingly, in this paper, we employ the generalized Fischer–Burmeister (GFB) function, in which the 2-norm in the FB function is relaxed to a general p-norm (p > 1), within the two methods, and investigate how much improvement is obtained by changing the parameter p, as well as which method is influenced more when we do so, using performance profiles of iterations and function evaluations for the two methods with different p on the MCPLIB collection. [Copyright &y& Elsevier]
- Published
- 2010
- Full Text
- View/download PDF
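The generalized Fischer–Burmeister function central to the record above has a compact closed form, sketched below. The property that its zeros characterize complementary pairs is what lets both compared methods recast the MCP; p = 2 recovers the classical FB function.

```python
def gfb(a, b, p=2.0):
    """Generalized Fischer-Burmeister (GFB) NCP function:

        phi_p(a, b) = ||(a, b)||_p - (a + b),  with p > 1.

    phi_p(a, b) = 0  <=>  a >= 0, b >= 0, a*b = 0, so its zeros
    characterize complementarity; p = 2 gives the classical FB function.
    """
    return (abs(a) ** p + abs(b) ** p) ** (1.0 / p) - (a + b)

# a complementary pair (a = 0, b = 3): phi_p vanishes for every p > 1
residual_fb = gfb(0.0, 3.0, p=2.0)
residual_p = gfb(0.0, 3.0, p=1.5)
# a non-complementary pair: phi_p is nonzero (sqrt(2) - 2 for p = 2)
violation = gfb(1.0, 1.0, p=2.0)
```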
36. A new two-step gradient-type method for large-scale unconstrained optimization
- Author
-
Farid, Mahboubeh, Leong, Wah June, and Hassan, Malik Abu
- Subjects
- *
CONSTRAINED optimization , *APPROXIMATION theory , *MATRICES (Mathematics) , *MONOTONE operators , *STOCHASTIC convergence , *NUMERICAL analysis , *COMPARATIVE studies - Abstract
Abstract: In this paper, we propose some improvements on a new gradient-type method for solving large-scale unconstrained optimization problems, in which data from the two previous steps is used to revise the current approximate Hessian. The new method resembles the Barzilai and Borwein (BB) method. The innovative feature of this approach is the use of a diagonal-matrix approximation of the Hessian, based on the modified weak secant equation, rather than the multiple of the identity matrix used in the BB method. Using this approach, we can obtain higher-order accuracy of the Hessian approximation compared with other existing BB-type methods. By incorporating a simple monotone strategy, the global convergence of the new method is achieved. Practical insights into the effectiveness of the proposed method are given by numerical comparison with the BB method and its variant. [Copyright &y& Elsevier]
- Published
- 2010
- Full Text
- View/download PDF
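For context, the Barzilai–Borwein baseline that the record above improves on can be sketched in a few lines. The BB1 step α = sᵀs / sᵀy solves a scalar secant condition, i.e. it approximates the Hessian by a multiple of the identity; the paper's diagonal-matrix variant built from a modified weak secant equation is not reproduced here.

```python
import numpy as np

def bb_gradient(grad, x0, alpha0=1e-3, tol=1e-8, max_iter=1000):
    """Barzilai-Borwein gradient method (the classical baseline).

    Uses the BB1 step alpha = s's / s'y, which fits a scalar secant
    condition, i.e. a Hessian approximation that is a multiple of the
    identity. The paper's method replaces this with a diagonal matrix;
    that variant is not reproduced here.
    """
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    alpha = alpha0
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        x_new = x - alpha * g
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        sy = s @ y
        alpha = (s @ s) / sy if sy > 1e-12 else alpha0  # safeguarded BB1 step
        x, g = x_new, g_new
    return x

# quadratic test: f(x) = 0.5 x'Ax - b'x, minimizer solves Ax = b
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
grad = lambda x: A @ x - b
x_star = bb_gradient(grad, np.zeros(2))
```

Note the iteration is nonmonotone in f yet converges on strictly convex quadratics, which is exactly why BB-type methods pair naturally with the monotone or nonmonotone safeguards the paper discusses.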
37. A simple feasible SQP method for inequality constrained optimization with global and superlinear convergence
- Author
-
Jin, Zhong and Wang, Yuqing
- Subjects
- *
QUADRATIC programming , *SEQUENTIAL analysis , *MATHEMATICAL inequalities , *CONSTRAINED optimization , *STOCHASTIC convergence , *NUMERICAL solutions to equations , *NUMERICAL analysis - Abstract
Abstract: In this paper, a simple feasible SQP method for nonlinear inequality constrained optimization is presented. At each iteration, we need to solve one QP subproblem only. After solving a system of linear equations, a new feasible descent direction is designed. The Maratos effect is avoided by using a high-order corrected direction. Under some suitable conditions the global and superlinear convergence can be induced. In the end, numerical experiments show that the method in this paper is effective. [Copyright &y& Elsevier]
- Published
- 2010
- Full Text
- View/download PDF
38. A simulated annealing driven multi-start algorithm for bound constrained global optimization
- Author
-
Ali, M.M. and Gabere, M.N.
- Subjects
- *
SIMULATED annealing , *ALGORITHMS , *CONSTRAINED optimization , *GLOBAL analysis (Mathematics) , *ELIMINATION (Mathematics) , *STOCHASTIC convergence , *NUMERICAL analysis - Abstract
Abstract: A derivative-free simulated annealing driven multi-start algorithm for continuous global optimization is presented. We first propose a trial point generation scheme in continuous simulated annealing which eliminates the need for gradient-based trial point generation. We then suitably embed the multi-start procedure within the simulated annealing algorithm. We modify the derivative-free pattern search method and use it as the local search in the multi-start procedure. We study the convergence properties of the algorithm and test its performance on a set of 50 problems. Numerical results are presented which show the robustness of the algorithm. Numerical comparisons with a gradient-based simulated annealing algorithm and three population-based global optimization algorithms show that the new algorithm could offer a reasonable alternative to many currently available global optimization algorithms, especially for problems requiring a ‘direct search’ type algorithm. [Copyright &y& Elsevier]
- Published
- 2010
- Full Text
- View/download PDF
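A minimal derivative-free simulated annealing loop over a box, for orientation: trial points are drawn by uniform perturbation (no gradient), matching the motivation of the record above, though the paper's actual trial-point scheme, multi-start embedding, and pattern-search local phase are not reproduced. All parameter values here are illustrative assumptions.

```python
import math, random

def simulated_annealing(f, x0, bounds, T0=1.0, cooling=0.95,
                        n_steps=2000, seed=0):
    """Minimal derivative-free simulated annealing on a box.

    Generic sketch only: uniform perturbations generate trial points,
    and the Metropolis rule occasionally accepts uphill moves while the
    temperature T is cooled geometrically.
    """
    rng = random.Random(seed)
    x, fx = list(x0), f(x0)
    best, fbest = list(x), fx
    T = T0
    for _ in range(n_steps):
        # perturb each coordinate, clipping to the feasible box
        y = [min(max(xi + rng.uniform(-0.5, 0.5), lo), hi)
             for xi, (lo, hi) in zip(x, bounds)]
        fy = f(y)
        # accept improvements always; accept uphill moves with
        # probability exp(-(fy - fx) / T)
        if fy < fx or rng.random() < math.exp(-(fy - fx) / max(T, 1e-12)):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = list(x), fx
        T *= cooling
    return best, fbest

# minimize f(x, y) = x^2 + y^2 over the box [-5, 5]^2
f = lambda v: v[0] ** 2 + v[1] ** 2
best, fbest = simulated_annealing(f, [4.0, -4.0], [(-5, 5), (-5, 5)])
```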
39. Convergence and stability of numerical solutions to a class of index 1 stochastic differential algebraic equations with time delay
- Author
-
Qu, Xiaomei, Huang, Chengming, and Liu, Chao
- Subjects
- *
STOCHASTIC convergence , *STABILITY (Mechanics) , *NUMERICAL solutions to stochastic differential equations , *TIME delay systems , *THETA functions , *CONSTRAINED optimization , *NUMERICAL analysis - Abstract
Abstract: In this paper, we study the convergence and stability of the stochastic theta method (STM) for a class of index 1 stochastic delay differential algebraic equations. First, in the case of constrained mesh, i.e., the stepsize is a submultiple of the delay, it is proved that the method is strongly consistent and convergent with order 1/2 in the mean-square sense. Then, the result is further extended to the case of non-constrained mesh where we employ linear interpolation to approximate the delay argument. Later, under a sufficient condition for mean-square stability of the analytical solution, it is proved that, when the stepsizes are sufficiently small, the STM approximations reproduce the stability of the analytical solution. Finally, some numerical experiments are presented to illustrate the theoretical findings. [Copyright &y& Elsevier]
- Published
- 2010
- Full Text
- View/download PDF
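The stochastic theta method analyzed in the record above is easiest to see on the plain (no-delay) linear test SDE dX = λX dt + μX dW; the paper's extension to index 1 stochastic delay differential algebraic equations is not reproduced here. For linear drift the implicit part can be solved in closed form, giving the recursion below.

```python
import math, random

def stochastic_theta(x0, lam, mu, theta, h, n_steps, seed=0):
    """Stochastic theta method (STM) for dX = lam*X dt + mu*X dW.

    The theta scheme
        X_{n+1} = X_n + theta*h*lam*X_{n+1}
                  + (1-theta)*h*lam*X_n + mu*X_n*dW
    is implicit in the drift; for linear drift we solve for X_{n+1}
    explicitly. Sketch of the no-delay case only.
    """
    rng = random.Random(seed)
    x = x0
    for _ in range(n_steps):
        dW = rng.gauss(0.0, math.sqrt(h))  # Brownian increment ~ N(0, h)
        x = (x * (1 + (1 - theta) * h * lam) + mu * x * dW) \
            / (1 - theta * h * lam)
    return x

# deterministic check (mu = 0): the theta scheme should track x' = -x,
# whose exact solution at t = 1 is exp(-1)
x_end = stochastic_theta(1.0, lam=-1.0, mu=0.0, theta=0.5, h=0.01, n_steps=100)
```

Setting theta = 0 gives the explicit Euler–Maruyama scheme and theta = 1 the drift-implicit scheme; intermediate values trade stability against per-step cost, which is what the paper's mean-square stability result quantifies.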
40. Adaptive reduced-rank LCMV beamforming algorithms based on joint iterative optimization of filters: Design and analysis
- Author
-
de Lamare, R.C., Wang, L., and Fa, R.
- Subjects
- *
ADAPTIVE filters , *BEAMFORMING , *CONSTRAINED optimization , *COMPUTER algorithms , *MATHEMATICAL optimization , *ITERATIVE methods (Mathematics) , *STOCHASTIC convergence , *ERROR analysis in mathematics , *NUMERICAL analysis - Abstract
Abstract: This paper presents reduced-rank linearly constrained minimum variance (LCMV) beamforming algorithms based on joint iterative optimization of filters. The proposed reduced-rank scheme is based on a constrained joint iterative optimization of filters according to the minimum variance criterion. The proposed optimization procedure adjusts the parameters of a projection matrix and an adaptive reduced-rank filter that operates at the output of the bank of filters. We describe LCMV expressions for the design of the projection matrix and the reduced-rank filter. We then describe stochastic gradient and develop recursive least-squares adaptive algorithms for their efficient implementation, along with automatic rank selection techniques. An analysis of the stability and the convergence properties of the proposed algorithms is presented, and semi-analytical expressions are derived for predicting their mean squared error (MSE) performance. Simulations for a beamforming application show that the proposed scheme and algorithms outperform the existing full-rank and reduced-rank algorithms in convergence and tracking while requiring comparable complexity. [Copyright &y& Elsevier]
- Published
- 2010
- Full Text
- View/download PDF
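The full-rank LCMV baseline against which the record above compares has a closed form in the single-constraint (MVDR) case: minimize wᴴRw subject to wᴴa = 1, giving w = R⁻¹a / (aᴴR⁻¹a). A sketch follows; the array geometry and covariance below are illustrative assumptions, and the paper's reduced-rank joint iteration is not reproduced.

```python
import numpy as np

def mvdr_weights(R, a):
    """Full-rank LCMV weights for one look-direction constraint (MVDR):
    minimize w^H R w subject to w^H a = 1, solved in closed form as
    w = R^{-1} a / (a^H R^{-1} a). Baseline sketch only; the paper's
    reduced-rank scheme adapts a projection matrix jointly with a short
    filter instead.
    """
    Ri_a = np.linalg.solve(R, a)       # R^{-1} a without forming R^{-1}
    return Ri_a / (a.conj() @ Ri_a)

# toy example: 4-sensor uniform linear array, white-noise covariance
n = 4
a = np.exp(1j * np.pi * np.arange(n) * np.sin(0.3))  # assumed steering vector
R = np.eye(n)                                        # white noise covariance
w = mvdr_weights(R, a)
```

With R = I the weights reduce to a / ‖a‖², and the distortionless constraint wᴴa = 1 holds exactly.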
41. A descent algorithm without line search for unconstrained optimization
- Author
-
Zhou, Guangming
- Subjects
- *
ALGORITHMS , *THEORY of descent (Mathematics) , *CONSTRAINED optimization , *SEARCH theory , *STOCHASTIC convergence , *NUMERICAL analysis - Abstract
Abstract: In this paper, a new descent algorithm for solving unconstrained optimization problems is presented. Its search direction is a descent direction, and the line search procedure can be avoided except at the first iteration. The algorithm is globally convergent under mild conditions. The search direction of the new algorithm is then generalized, and convergence of the corresponding algorithm is also proved. Numerical results show that the algorithm is efficient for the given test problems. [Copyright &y& Elsevier]
- Published
- 2009
- Full Text
- View/download PDF
42. A conjugate gradient method with descent direction for unconstrained optimization
- Author
-
Yuan, Gonglin, Lu, Xiwen, and Wei, Zengxin
- Subjects
- *
CONJUGATE gradient methods , *CONSTRAINED optimization , *METHOD of steepest descent (Numerical analysis) , *STOCHASTIC convergence , *NUMERICAL analysis , *GLOBAL analysis (Mathematics) - Abstract
Abstract: A modified conjugate gradient method is presented for solving unconstrained optimization problems, which possesses the following properties: (i) The sufficient descent property is satisfied without any line search; (ii) The search direction will be in a trust region automatically; (iii) The Zoutendijk condition holds for the Wolfe–Powell line search technique; (iv) This method inherits an important property of the well-known Polak–Ribière–Polyak (PRP) method: the tendency to turn towards the steepest descent direction if a small step is generated away from the solution, preventing a sequence of tiny steps from happening. The global convergence and the linear convergence rate of the given method are established. Numerical results show that this method is interesting. [Copyright &y& Elsevier]
- Published
- 2009
- Full Text
- View/download PDF
43. A self-adaptive trust region method with line search based on a simple subproblem model
- Author
-
Sang, Zhaoyang and Sun, Qingying
- Subjects
- *
CONSTRAINED optimization , *PROBLEM solving , *FUNCTIONAL analysis , *COMPUTATIONAL complexity , *MATHEMATICAL models , *RADIUS (Geometry) , *STOCHASTIC convergence , *NUMERICAL analysis - Abstract
Abstract: In this paper, based on a simple model of the trust region subproblem, we propose a new self-adaptive trust region method with a line search technique for solving unconstrained optimization problems. Owing to the simple subproblem model, the new method requires less storage and lower computational cost. The trust region radius is adjusted with a new self-adaptive strategy which makes full use of the information at the current point. When the trial step results in an increase in the objective function, the method does not re-solve the subproblem but performs a line search from the failed point. Convergence properties of the method are proved under certain conditions. Numerical experiments show that the new method is effective and attractive for large-scale optimization problems. [Copyright &y& Elsevier]
- Published
- 2009
- Full Text
- View/download PDF
44. An ODE-based trust region method for unconstrained optimization problems
- Author
-
Ou, Yigui, Zhou, Qian, and Lin, Haichan
- Subjects
- *
NUMERICAL solutions to linear differential equations , *PROBLEM solving , *CONSTRAINED optimization , *COMPUTER algorithms , *ITERATIVE methods (Mathematics) , *STOCHASTIC convergence , *NUMERICAL analysis , *MATHEMATICAL proofs - Abstract
Abstract: In this paper, a new trust region algorithm is proposed for solving unconstrained optimization problems. This method can be regarded as a combination of trust region technique, fixed step-length and ODE-based methods. One feature of this proposed method is that at each iteration, only a system of linear equations is solved to obtain a trial step. Another is that when a trial step is not accepted, the method generates an iterate whose step-length is defined by a formula. Under some standard assumptions, it is proven that the algorithm is globally convergent and locally superlinearly convergent. Preliminary numerical results are reported. [Copyright &y& Elsevier]
- Published
- 2009
- Full Text
- View/download PDF
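The kind of step the record above describes, a trial step from a single linear system with an ODE-style regularization, can be sketched generically: solve (B + (1/h)I)d = −g, where 1/h acts as an inverse step length. This is an illustrative sketch only, not the paper's algorithm; its acceptance test and step-length formula are omitted, and the fixed h below is an assumption.

```python
import numpy as np

def regularized_newton_step(B, g, h):
    """Solve (B + (1/h) I) d = -g for the trial step.

    Generic sketch of an ODE/trust-region style step: the regularization
    1/h plays the role of an inverse step length, so small h gives a
    damped (gradient-like) step and large h approaches a Newton step.
    """
    n = len(g)
    return np.linalg.solve(B + np.eye(n) / h, -g)

# minimize f(x) = 0.5 x'Bx - b'x with repeated regularized steps
B = np.array([[2.0, 0.0], [0.0, 10.0]])
b = np.array([2.0, 10.0])   # minimizer of f is at [1, 1]
x = np.zeros(2)
for _ in range(100):
    g = B @ x - b           # gradient of the quadratic
    x = x + regularized_newton_step(B, g, h=10.0)
```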
45. Constrained optimization based on modified differential evolution algorithm
- Author
-
Mohamed, Ali Wagdy and Sabry, Hegazy Zaher
- Subjects
- *
CONSTRAINED optimization , *DIFFERENTIAL evolution , *ALGORITHMS , *VECTOR analysis , *ROBUST control , *RANDOM variables , *NONLINEAR theories , *NUMERICAL analysis , *STOCHASTIC convergence - Abstract
Abstract: This paper presents a novel Constrained Optimization based on Modified Differential Evolution algorithm (COMDE). In the new algorithm, a new directed mutation rule, based on the weighted difference vector between the best and the worst individuals at a particular generation, is introduced. The new directed mutation rule is combined with the modified basic mutation strategy DE/rand/1/bin, where only one of the two mutation rules is applied with probability 0.5. The proposed mutation rule is shown to enhance the local search ability of basic Differential Evolution (DE) and to achieve a better trade-off between convergence rate and robustness. Two new scaling factors are introduced as uniform random variables to improve the diversity of the population and to bias the search direction. Additionally, a dynamic nonlinear increased crossover probability is utilized to balance global exploration and local exploitation. COMDE also includes a modified constraint handling technique based on feasibility and the sum of constraint violations. A new dynamic tolerance technique to handle equality constraints is also adopted. The effectiveness and benefits of the new directed mutation strategy and the modified basic strategy used in COMDE have been experimentally investigated. The effects of the parameters of the crossover probability function and of the dynamic tolerance equation on the performance of COMDE have been analyzed and evaluated in different experiments. Numerical experiments on 13 well-known benchmark test functions and five engineering design problems have shown that the new approach is efficient, effective and robust. The comparison results between COMDE and 28 other state-of-the-art evolutionary algorithms indicate that the proposed COMDE algorithm is competitive with, and in some cases superior to, other existing algorithms in terms of the quality, efficiency, convergence rate, and robustness of the final solution.
[Copyright &y& Elsevier]
- Published
- 2012
- Full Text
- View/download PDF
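The base strategy that COMDE modifies, DE/rand/1/bin, is compact enough to sketch: for each target vector, mutate v = x_r1 + F·(x_r2 − x_r3), cross over binomially with rate CR, and keep the trial only if it is no worse. COMDE's directed mutation, random scaling factors, and constraint handling are not reproduced here; the population size and parameters below are illustrative assumptions.

```python
import random

def de_rand_1_bin(pop, fitness, F=0.5, CR=0.9, seed=0):
    """One generation of basic DE/rand/1/bin (the strategy COMDE builds on).

    For each target: mutate v = x_r1 + F*(x_r2 - x_r3) with three distinct
    random donors, binomially cross over with rate CR (one component of the
    mutant is always taken), and apply greedy selection.
    """
    rng = random.Random(seed)
    n, dim = len(pop), len(pop[0])
    new_pop = []
    for i, target in enumerate(pop):
        r1, r2, r3 = rng.sample([j for j in range(n) if j != i], 3)
        mutant = [pop[r1][d] + F * (pop[r2][d] - pop[r3][d])
                  for d in range(dim)]
        j_rand = rng.randrange(dim)  # guarantees one mutant component
        trial = [mutant[d] if (rng.random() < CR or d == j_rand) else target[d]
                 for d in range(dim)]
        # greedy selection: an individual's fitness never worsens
        new_pop.append(trial if fitness(trial) <= fitness(target) else target)
    return new_pop

# minimize the sphere function with a tiny population
sphere = lambda x: sum(v * v for v in x)
rng = random.Random(1)
pop = [[rng.uniform(-5, 5) for _ in range(3)] for _ in range(10)]
for _ in range(50):
    pop = de_rand_1_bin(pop, sphere, seed=rng.randrange(10 ** 6))
best = min(pop, key=sphere)
```

Because selection is greedy, the best fitness in the population is non-increasing across generations; COMDE's additions aim to speed up that descent while handling constraints.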
46. Rosenbrock artificial bee colony algorithm for accurate global optimization of numerical functions
- Author
-
Kang, Fei, Li, Junjie, and Ma, Zhenyue
- Subjects
- *
ALGORITHMS , *MATHEMATICAL optimization , *NUMERICAL analysis , *STOCHASTIC convergence , *EVOLUTIONARY computation , *CONSTRAINED optimization , *MATHEMATICAL complexes , *INFORMATION science - Abstract
Abstract: A Rosenbrock artificial bee colony algorithm (RABC) that combines Rosenbrock’s rotational direction method with an artificial bee colony algorithm (ABC) is proposed for accurate numerical optimization. There are two alternative phases of RABC: the exploration phase realized by ABC and the exploitation phase completed by the rotational direction method. The proposed algorithm was tested on a comprehensive set of complex benchmark problems, encompassing a wide range of dimensionality, and it was also compared with several algorithms. Numerical results show that the new algorithm is promising in terms of convergence speed, success rate, and accuracy. The proposed RABC is also capable of keeping up with the direction changes in the problems. [Copyright &y& Elsevier]
- Published
- 2011
- Full Text
- View/download PDF