148 results on '"Convergence (routing)"'
Search Results
2. Gradient methods with memory
- Author
-
Nesterov, Yurii, Florea, Mihai I., UCL - SST/ICTM/INMA - Pôle en ingénierie mathématique, and UCL - SSH/LIDAM/CORE - Center for operations research and econometrics
- Subjects
Mathematical optimization ,021103 operations research ,Control and Optimization ,Computer science ,Applied Mathematics ,Computation ,0211 other engineering and technologies ,Linear model ,010103 numerical & computational mathematics ,02 engineering and technology ,01 natural sciences ,Oracle ,Optimization and Control (math.OC) ,Convex optimization ,Convergence (routing) ,FOS: Mathematics ,Optimization methods ,68Q25, 65Y20, 90C25 ,Differentiable function ,0101 mathematics ,Convex function ,Mathematics - Optimization and Control ,Software - Abstract
In this paper, we consider gradient methods for minimizing smooth convex functions, which employ the information obtained at the previous iterations in order to accelerate the convergence towards the optimal solution. This information is used in the form of a piece-wise linear model of the objective function, which provides us with much better prediction abilities as compared with the standard linear model. To the best of our knowledge, this approach was never really applied in Convex Minimization to differentiable functions in view of the high complexity of the corresponding auxiliary problems. However, we show that all necessary computations can be done very efficiently. Consequently, we get new optimization methods, which are better than the usual Gradient Methods both in the number of oracle calls and in the computational time. Our theoretical conclusions are confirmed by preliminary computational experiments., Comment: This is an Accepted Manuscript of an article published by Taylor & Francis in Optimization Methods and Software on 13 Jan 2021, available at https://www.tandfonline.com/doi/10.1080/10556788.2020.1858831
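As a rough illustration of the idea in this abstract, the sketch below builds a piecewise-linear (bundle-style) model from remembered gradients and takes a prox-regularized step against it. It is not the authors' method: the test function, its gradient, the memory length, and the Lipschitz estimate L are illustrative assumptions, and the auxiliary problem is solved generically with SciPy via an epigraph reformulation.

```python
import numpy as np
from scipy.optimize import minimize

def f(x):                      # hypothetical smooth convex test function
    return 0.5 * x @ x + np.log(1.0 + np.exp(x[0] - x[1]))

def grad_f(x):
    s = 1.0 / (1.0 + np.exp(-(x[0] - x[1])))
    return x + np.array([s, -s])

def memory_gradient_step(xk, bundle, L):
    """One step: minimize the piecewise-linear model built from the bundle
    (x_i, f_i, g_i) plus the proximal term (L/2)||x - xk||^2, via the epigraph
    reformulation  min_{x,t}  t + (L/2)||x - xk||^2
    s.t.  f_i + g_i.(x - x_i) <= t  for every stored pair."""
    n = xk.size
    z0 = np.concatenate([xk, [f(xk)]])          # variables z = (x, t)
    cons = [{"type": "ineq",
             "fun": lambda z, xi=xi, fi=fi, gi=gi: z[n] - (fi + gi @ (z[:n] - xi))}
            for (xi, fi, gi) in bundle]
    obj = lambda z: z[n] + 0.5 * L * np.sum((z[:n] - xk) ** 2)
    res = minimize(obj, z0, constraints=cons, method="SLSQP")
    return res.x[:n]

L = 2.0                                          # assumed gradient Lipschitz bound
x = np.array([3.0, -2.0])
bundle = []
for k in range(30):
    bundle.append((x.copy(), f(x), grad_f(x)))   # remember past linearizations
    bundle = bundle[-5:]                         # keep a short memory
    x = memory_gradient_step(x, bundle, L)
print(f(x))
```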
- Published
- 2021
3. Global convergence of a new sufficient descent spectral three-term conjugate gradient class for large-scale optimization
- Author
-
Mohammad Reza Eslahchi and S. Bojari
- Subjects
Class (set theory) ,021103 operations research ,Control and Optimization ,Scale (ratio) ,Applied Mathematics ,MathematicsofComputing_NUMERICALANALYSIS ,0211 other engineering and technologies ,010103 numerical & computational mathematics ,02 engineering and technology ,Unconstrained optimization ,01 natural sciences ,Term (time) ,Conjugate gradient method ,Convergence (routing) ,Applied mathematics ,0101 mathematics ,Software ,Descent (mathematics) ,Mathematics - Abstract
To solve a large-scale unconstrained optimization problem, in this paper we propose a class of spectral three-term conjugate gradient methods. We indicate that the proposed class, in fact, generate...
- Published
- 2020
4. New versions of Newton method: step-size choice, convergence domain and under-determined equations
- Author
-
Boris T. Polyak and Andrey Tremba
- Subjects
021103 operations research ,Control and Optimization ,Applied Mathematics ,0211 other engineering and technologies ,010103 numerical & computational mathematics ,02 engineering and technology ,SMA ,01 natural sciences ,Domain (mathematical analysis) ,symbols.namesake ,Nonlinear system ,Convergence (routing) ,symbols ,Applied mathematics ,0101 mathematics ,Newton's method ,Software ,Mathematics - Abstract
Newton method is one of the most powerful methods for finding solutions of nonlinear equations and for proving their existence. In its ‘pure’ form it has fast convergence near the solution, but sma...
- Published
- 2019
5. Accelerated multiple step-size methods for solving unconstrained optimization problems
- Author
-
Predrag S. Stanimirović, Branislav Ivanov, Snežana Djordjević, Gradimir V. Milovanović, and Ivona Brajevic
- Subjects
Mathematical optimization ,Control and Optimization ,Line search ,Iterative method ,Applied Mathematics ,MathematicsofComputing_NUMERICALANALYSIS ,010103 numerical & computational mathematics ,Unconstrained optimization ,01 natural sciences ,010101 applied mathematics ,Transformation (function) ,Convergence (routing) ,0101 mathematics ,Software ,Mathematics - Abstract
Two transformations of gradient-descent iterative methods for solving unconstrained optimization are proposed. The first transformation is called modification and it is defined using a small enlargement of the step size in various gradient-descent methods. The second transformation is termed hybridization and it is defined as a composition of gradient-descent methods with the Picard–Mann hybrid iterative process. As a result, several accelerated gradient-descent methods for solving unconstrained optimization problems are presented, investigated theoretically and compared numerically. The proposed methods are globally convergent for uniformly convex functions satisfying a certain condition under the assumption that the step size is determined by the backtracking line search. In addition, the convergence on strictly convex quadratic functions is discussed. Numerical comparisons show better behaviour of the proposed methods with respect to some existing methods in view of the Dolan and Moré performance profile with respect to all analysed characteristics: number of iterations, CPU time, and number of function evaluations.
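For a concrete picture of the hybridization idea (composing a gradient-descent operator with the Picard–Mann hybrid process), here is a minimal sketch under assumed ingredients: a hypothetical quadratic test function, an Armijo backtracking step inside the operator T, and a fixed mixing parameter. It does not reproduce the authors' specific modified or hybridized variants.

```python
import numpy as np

def f(x):                                   # hypothetical strictly convex quadratic
    A = np.array([[3.0, 1.0], [1.0, 2.0]])
    return 0.5 * x @ A @ x

def grad(x):
    A = np.array([[3.0, 1.0], [1.0, 2.0]])
    return A @ x

def T(x, beta=0.5, sigma=1e-4):
    """Gradient-descent operator with backtracking (Armijo) line search."""
    g = grad(x)
    t = 1.0
    while f(x - t * g) > f(x) - sigma * t * (g @ g):
        t *= beta
    return x - t * g

x = np.array([5.0, -7.0])
for k in range(50):
    y = 0.5 * x + 0.5 * T(x)     # Mann step with mixing parameter alpha_k = 0.5
    x = T(y)                     # Picard step applied to the averaged point
print(x, f(x))
```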
- Published
- 2019
6. Modified gradient dynamic approach to the tensor complementarity problem
- Author
-
Liqun Qi, Xuezhong Wang, Yimin Wei, and Maolin Che
- Subjects
021103 operations research ,Control and Optimization ,Applied Mathematics ,Activation function ,MathematicsofComputing_NUMERICALANALYSIS ,0211 other engineering and technologies ,010103 numerical & computational mathematics ,02 engineering and technology ,Dynamical system ,01 natural sciences ,Nonlinear system ,Complementarity theory ,Tensor (intrinsic definition) ,Convergence (routing) ,Applied mathematics ,0101 mathematics ,Software ,Mathematics - Abstract
A nonlinear gradient dynamic approach for solving the tensor complementarity problem (TCP) is presented. Theoretical analysis shows that each of the defined dynamical system models ensures the conve...
- Published
- 2019
7. Complexity of gradient descent for multiobjective optimization
- Author
-
Luís Nunes Vicente, Jörg Fliege, A. I. F. Vaz, and Universidade do Minho
- Subjects
Global rates ,Mathematical optimization ,Gradient descent ,Science & Technology ,021103 operations research ,Control and Optimization ,Applied Mathematics ,MathematicsofComputing_NUMERICALANALYSIS ,0211 other engineering and technologies ,Multiobjective optimization ,gradient descent ,steepest descent ,global rates ,worst-case complexity ,010103 numerical & computational mathematics ,02 engineering and technology ,01 natural sciences ,Multi-objective optimization ,Worst-case complexity ,Criticality ,Convergence (routing) ,Steepest descent ,0101 mathematics ,Software ,Mathematics - Abstract
Published online: 29 Aug 2018, A number of first-order methods have been proposed for smooth multiobjective optimization for which some form of convergence to first-order criticality has been proved. Such convergence is global in the sense of being independent of the starting point. In this paper, we analyse the rate of convergence of gradient descent for smooth unconstrained multiobjective optimization, and we do it for non-convex, convex, and strongly convex vector functions. These global rates are shown to be the same as for gradient descent in single-objective optimization and correspond to appropriate worst-case complexity bounds. In the convex cases, the rates are given for implicit scalarizations of the problem vector function., Support for A.I.F. Vaz was partially provided by FCT [grant number COMPETE:POCI-01-0145-FEDER-007043], [grant number UID/CEC/00319/2013], and support for L.N. Vicente was partially provided by FCT [grant number UID/MAT/00324/2013], [grant number P2020 SAICTPAC/0011/2015].
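The analysis here builds on a common descent direction for all objectives. Below is a minimal sketch of one standard way to compute such a direction (the steepest-descent subproblem in the Fliege–Svaiter style, solved through its simplex dual with SciPy); the two quadratic objectives and all names are illustrative assumptions, not the paper's experiments.

```python
import numpy as np
from scipy.optimize import minimize

def multiobjective_steepest_descent_direction(grads):
    """Common descent direction for several objectives:
    d = -sum_i lam_i * grad_i, where lam minimizes ||sum_i lam_i grad_i||^2
    over the unit simplex; this is the dual of
    min_d  max_i <grad_i, d> + 0.5*||d||^2."""
    G = np.asarray(grads)                       # shape (m, n)
    m = G.shape[0]
    obj = lambda lam: 0.5 * np.sum((lam @ G) ** 2)
    cons = [{"type": "eq", "fun": lambda lam: np.sum(lam) - 1.0}]
    res = minimize(obj, np.full(m, 1.0 / m), bounds=[(0.0, 1.0)] * m,
                   constraints=cons, method="SLSQP")
    return -(res.x @ G)

# Example with two quadratic objectives; d is a descent direction for both
x = np.array([1.0, 2.0])
g1 = 2.0 * (x - np.array([0.0, 0.0]))           # grad of ||x||^2
g2 = 2.0 * (x - np.array([3.0, 1.0]))           # grad of ||x - (3,1)||^2
d = multiobjective_steepest_descent_direction([g1, g2])
print(d, g1 @ d, g2 @ d)                        # both inner products should be <= 0
```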
- Published
- 2018
8. A sharp augmented Lagrangian-based method in constrained non-convex optimization
- Author
-
Adil M. Bagirov, Refail Kasimbeyli, Gurkan Ozturk, Anadolu Üniversitesi, Mühendislik Fakültesi, Endüstri Mühendisliği Bölümü, and Kasımbeyli, Refail
- Subjects
Non-Smooth Optimization ,021103 operations research ,Control and Optimization ,Optimization problem ,Augmented Lagrangian method ,Discrete Gradient Method ,Applied Mathematics ,Constrained Optimization ,0211 other engineering and technologies ,Constrained optimization ,Sharp Augmented Lagrangian ,010103 numerical & computational mathematics ,02 engineering and technology ,Function (mathematics) ,01 natural sciences ,Nonlinear system ,Modified Subgradient Algorithm ,Non-Convex Optimization ,Convergence (routing) ,Applied mathematics ,0101 mathematics ,Global optimization ,Software ,Inner loop ,Mathematics - Abstract
4th International Conference on Computational and Experimental Science and Engineering (ICCESEN) -- OCT 04-08, 2017 -- Kemer, TURKEY, WOS: 000463781100002, In this paper, a novel sharp Augmented Lagrangian-based global optimization method is developed for solving constrained non-convex optimization problems. The algorithm consists of outer and inner loops. At each inner iteration, the discrete gradient method is applied to minimize the sharp augmented Lagrangian function. Depending on the solution found, the algorithm either stops, updates the dual variables in the inner loop, or updates the upper or lower bounds by going to the outer loop. The convergence results for the proposed method are presented. The performance of the method is demonstrated using a wide range of nonlinear smooth and non-smooth constrained optimization test problems from the literature., Australian Research Council's Discovery Projects funding scheme [DP140103213], The research by A.M. Bagirov was supported under Australian Research Council's Discovery Projects funding scheme (Project No. DP140103213).
- Published
- 2018
9. A strong convergence theorem for monotone inclusion and minimization problems in complete CAT(0) spaces
- Author
-
C. C. Okeke and Chinedu Izuchukwu
- Subjects
Control and Optimization ,Applied Mathematics ,010102 general mathematics ,Minimization problem ,01 natural sciences ,010101 applied mathematics ,Proximal point ,Monotone polygon ,Fixed point problem ,Convergence (routing) ,Applied mathematics ,Minification ,0101 mathematics ,Inclusion (education) ,Software ,Mathematics - Abstract
In this paper, a Halpern-type proximal point algorithm for approximating a common solution of monotone inclusion problem, minimization problem (MP) and fixed point problem is introduced. Using our ...
- Published
- 2018
10. Nonlinear iterative methods for solving the split common null point problem in Banach spaces
- Author
-
Prasit Cholamjiak, Yekini Shehu, and Suthep Suantai
- Subjects
021103 operations research ,Control and Optimization ,Iterative method ,Applied Mathematics ,010102 general mathematics ,0211 other engineering and technologies ,Banach space ,02 engineering and technology ,01 natural sciences ,Nonlinear system ,Scheme (mathematics) ,Resolvent operator ,Convergence (routing) ,Null point ,Applied mathematics ,0101 mathematics ,Software ,Mathematics - Abstract
In this work, we study the split common null point problem in the framework of Banach spaces. We propose an iterative scheme for solving the problem and then prove a strong convergence theorem for the sequences generated by our iterative scheme under suitable conditions. We finally provide some numerical examples to support the main theorem.
- Published
- 2018
11. A modified Hestenes–Stiefel conjugate gradient method with an optimal property
- Author
-
Parvaneh Faramarzi, Nasrin Pirfalah, and Keyvan Amini
- Subjects
021103 operations research ,Control and Optimization ,Line search ,Property (programming) ,Wolfe line search ,Applied Mathematics ,0211 other engineering and technologies ,010103 numerical & computational mathematics ,02 engineering and technology ,Unconstrained optimization ,HS algorithm ,01 natural sciences ,Conjugate gradient method ,Convergence (routing) ,Applied mathematics ,0101 mathematics ,Software ,Descent (mathematics) ,Mathematics - Abstract
In this paper, based on the numerical efficiency of the Hestenes–Stiefel (HS) method, a new modified HS algorithm is proposed for unconstrained optimization. The new direction satisfies the sufficient descent condition independently of the line search. Motivated by theoretical and numerical features of the three-term conjugate gradient (CG) methods proposed by Narushima et al., and similar to the Dai and Kou approach, the new direction is computed by minimizing the distance between the CG direction and the direction of the three-term CG methods proposed by Narushima et al. Under some mild conditions, we establish global convergence of the new method for general functions when the standard Wolfe line search is used. Numerical experiments on some test problems from the CUTEst collection are given to show the efficiency of the proposed method.
- Published
- 2018
12. Piecewise linear secant approximation via algorithmic piecewise differentiation
- Author
-
Tom Streubel, Andreas Griewank, Manuel Radons, Richard Hasenfelder, and Lutz Lehmann
- Subjects
Control and Optimization ,Euclidean space ,Applied Mathematics ,Bilinear interpolation ,010103 numerical & computational mathematics ,0102 computer and information sciences ,Function (mathematics) ,Lipschitz continuity ,01 natural sciences ,Piecewise linear function ,010201 computation theory & mathematics ,Convergence (routing) ,Piecewise ,Applied mathematics ,Differentiable function ,0101 mathematics ,Software ,Mathematics - Abstract
It is shown how piecewise differentiable functions \(F: \mathbb{R}^n \to \mathbb{R}^m\) that are defined by evaluation programs can be approximated locally by a piecewise linear model based on a pair of sample points \(\check{x}\) and \(\hat{x}\). We show that the discrepancy between function and model at any point \(x\) is of the bilinear order \(O(\|x - \check{x}\|\,\|x - \hat{x}\|)\). This is a little surprising since \(x \in \mathbb{R}^n\) may vary over the whole Euclidean space, and we utilize only two function samples \(\check{F} = F(\check{x})\) and \(\hat{F} = F(\hat{x})\), as well as the intermediates computed during their evaluation. As an application of the piecewise linearization procedure we devise a generalized Newton's method based on successive piecewise linearization and prove for it sufficient conditions for convergence and convergence rates equaling those of semismooth Newton. We conclude with the derivation of formulas for the numerically stable implementation of the aforedeveloped piecewise linearization methods.
- Published
- 2017
13. Split common fixed point and null point problems for demicontractive operators in Hilbert spaces
- Author
-
Suthep Suantai and Pachara Jailoka
- Subjects
Discrete mathematics ,Control and Optimization ,Applied Mathematics ,010102 general mathematics ,Null (mathematics) ,Hilbert space ,01 natural sciences ,010101 applied mathematics ,symbols.namesake ,Monotone polygon ,Fixed point problem ,Convergence (routing) ,symbols ,Common fixed point ,Null point ,Applied mathematics ,Equilibrium problem ,0101 mathematics ,Software ,Mathematics - Abstract
In this article, we consider a split common fixed point and null point problem which includes the split common fixed point problem, the split common null point problem and other problems related to the fixed point problem and the null point problem. We introduce an algorithm for studying the split common fixed point and null point problem for demicontractive operators and maximal monotone operators in real Hilbert spaces. We establish a strong convergence result under some suitable conditions and reduce our main result to the above-mentioned problems. Moreover, we also apply our main results to the split equilibrium problem. Finally, we give numerical results to demonstrate the convergence of our algorithms.
- Published
- 2017
14. A globally convergent BFGS method for pseudo-monotone variational inequality problems
- Author
-
Fatemeh Shakeri and Fatemeh Abdi
- Subjects
Mathematical optimization ,Control and Optimization ,Basis (linear algebra) ,020209 energy ,Applied Mathematics ,Solution set ,02 engineering and technology ,021001 nanoscience & nanotechnology ,Monotone polygon ,Iterated function ,Broyden–Fletcher–Goldfarb–Shanno algorithm ,Variational inequality ,Convergence (routing) ,0202 electrical engineering, electronic engineering, information engineering ,Quasi-Newton method ,0210 nano-technology ,Software ,Mathematics - Abstract
In this paper, we propose a globally convergent BFGS method to solve Variational Inequality Problems (VIPs). In fact, a globalization technique on the basis of the hyperplane projection method is applied to the BFGS method. The technique, which is independent of any merit function, is applicable for pseudo-monotone problems. The proposed method applies the BFGS direction and tries to reduce the distance of iterates to the solution set. This property, called Fejér monotonicity of iterates with respect to the solution set, is the basis of the convergence analysis. The method applied to pseudo-monotone VIP is globally convergent in the sense that subproblems always have unique solutions, and the sequence of iterates converges to a solution to the problem without any regularity assumption. Finally, some numerical simulations are included to evaluate the efficiency of the proposed algorithm.
- Published
- 2017
15. On solving the minimization problem and the fixed-point problem for a finite family of non-expansive mappings in CAT(0) spaces
- Author
-
Asawathep Cuntavepanit and Withun Phuengrattana
- Subjects
Mathematical optimization ,Control and Optimization ,Applied Mathematics ,010102 general mathematics ,Minimization problem ,01 natural sciences ,010101 applied mathematics ,Proximal point ,Fixed point problem ,Convergence (routing) ,Applied mathematics ,0101 mathematics ,Coincidence point ,Expansive ,Software ,Mathematics - Abstract
In this paper, we propose a new modified proximal point algorithm for a finite family of non-expansive mappings in the framework of CAT(0) spaces. We establish Δ-convergence and strong convergence theorems under some mild conditions. Our results extend some known results which appeared in the literature.
- Published
- 2017
16. Symbolic elimination in dynamic optimization based on block-triangular ordering
- Author
-
Fredrik Magnusson and Johan Åkesson
- Subjects
Mathematical optimization ,Modelica ,Control and Optimization ,Optimization problem ,Discretization ,010103 numerical & computational mathematics ,02 engineering and technology ,sparsity preservation ,differential-algebraic equations ,01 natural sciences ,Nonlinear programming ,020401 chemical engineering ,ComputingMethodologies_SYMBOLICANDALGEBRAICMANIPULATION ,Convergence (routing) ,dynamic optimization ,nonlinear programming ,Initial value problem ,Computer Science::Symbolic Computation ,0204 chemical engineering ,0101 mathematics ,tearing ,Mathematics ,block-triangular ordering ,Applied Mathematics ,Computational mathematics ,Optimal control ,Computational Mathematics ,Algorithm ,Differential algebraic equation ,Software - Abstract
We consider dynamic optimization problems for systems described by differential-algebraic equations (DAEs). Such problems are usually solved by discretizing the full DAE. We propose techniques to symbolically eliminate many of the algebraic variables in a preprocessing step before discretization. These techniques are inspired by the causalization and tearing techniques often used when solving DAE initial value problems. Since sparsity is crucial for some dynamic optimization methods, we also propose a novel approach to preserving sparsity during this procedure. The proposed methods have been implemented in the open-source JModelica.org platform. We evaluate the performance of the methods on a suite of optimal control problems solved using direct collocation. We consider both computational time and probability of solving the problem in a timely manner. We demonstrate that the proposed methods often are an order of magnitude faster than the standard way of discretizing the full DAE, and also significantly increase the probability of successful convergence.
- Published
- 2017
17. An efficient modified Polak–Ribière–Polyak conjugate gradient method with global convergence properties
- Author
-
Zabidin Salleh, Mohd Rivaie, Mustafa Mamat, and Ahmad Alhawarat
- Subjects
Mathematical optimization ,Control and Optimization ,Line search ,Applied Mathematics ,010103 numerical & computational mathematics ,01 natural sciences ,010101 applied mathematics ,Set (abstract data type) ,Nonlinear conjugate gradient method ,Conjugate gradient method ,Bounded function ,Convergence (routing) ,Applied mathematics ,0101 mathematics ,Gradient descent ,Gradient method ,Software ,Mathematics - Abstract
The conjugate gradient (CG) method is one of the most popular methods for solving large-scale unconstrained optimization problems. In this paper, a new modified version of the CG formula that was introduced by Polak, Ribière, and Polyak is proposed for problems that are bounded below and have a Lipschitz-continuous gradient. The new parameter provides global convergence properties when the strong Wolfe-Powell (SWP) line search or the weak Wolfe-Powell (WWP) line search is employed. A proof of a sufficient descent condition is provided for the SWP line search. Numerical comparisons between the proposed parameter and other recent CG modifications are made on a set of standard unconstrained optimization problems. The numerical results demonstrate the efficiency of the proposed CG parameter compared with the other CG parameters.
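As context for this abstract, here is a minimal sketch of a Polak–Ribière–Polyak conjugate gradient loop with a Wolfe line search (SciPy's line_search) and the common PRP+ safeguard beta = max(beta_PRP, 0). It is a generic baseline, not the modified parameter proposed in the paper, and the Rosenbrock test problem is an illustrative choice.

```python
import numpy as np
from scipy.optimize import line_search, rosen, rosen_der

def prp_plus_cg(f, grad, x0, iters=200, tol=1e-6):
    """Polak-Ribiere-Polyak conjugate gradient with the PRP+ safeguard
    beta = max(beta_PRP, 0) and a Wolfe line search (scipy's line_search)."""
    x, g = x0.copy(), grad(x0)
    d = -g
    for _ in range(iters):
        if np.linalg.norm(g) < tol:
            break
        alpha = line_search(f, grad, x, d, gfk=g)[0]
        if alpha is None:          # line search failed: restart with steepest descent
            d, alpha = -g, 1e-3
        x_new = x + alpha * d
        g_new = grad(x_new)
        beta = max(g_new @ (g_new - g) / (g @ g), 0.0)   # PRP+ parameter
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x

x_star = prp_plus_cg(rosen, rosen_der, np.array([-1.2, 1.0]))
print(x_star)   # should approach (1, 1)
```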
- Published
- 2016
18. A feasible method for sensor network localization
- Author
-
Sanyang Liu and Xiaokai Chang
- Subjects
Mathematical optimization ,021103 operations research ,Control and Optimization ,Orthogonality (programming) ,Applied Mathematics ,0211 other engineering and technologies ,Cayley transform ,010103 numerical & computational mathematics ,02 engineering and technology ,01 natural sciences ,Nonlinear programming ,Nonlinear conjugate gradient method ,Search algorithm ,Convergence (routing) ,0101 mathematics ,Gradient descent ,Wireless sensor network ,Software ,Mathematics - Abstract
A nonlinear programming (NLP) model with partial orthogonality constraints, relaxed from the polynomial optimization problem, is proposed and analysed for solving sensor network localization. The NLP model is difficult to solve as the orthogonality constraints are not only non-convex but numerically expensive to preserve during iterations. To deal with this difficulty, we apply the Cayley transform (a Crank–Nicolson-like update scheme) to preserve it. Combining with the gradient descent method, we develop a curvilinear search algorithm, and analyse its convergence. In practice, we accelerate our method by taking nonlinear conjugate gradient method and Barzilai–Borwein steps. Numerical experiments are given to demonstrate the efficiency of the proposed method, for the problems with the number of sensors up to 15,000 and the distance constraints up to 135,817.
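The key implementation device mentioned here, the Cayley transform (Crank–Nicolson-like) update that keeps iterates feasible for orthogonality constraints, can be sketched in a few lines. The matrices below are random stand-ins, and the sketch shows only the feasibility-preserving step, not the full curvilinear search, the Barzilai–Borwein acceleration, or the localization model itself.

```python
import numpy as np

def cayley_step(X, G, tau):
    """One feasible point on the set {X : X^T X = I}:
    Y(tau) = (I + tau/2 * W)^{-1} (I - tau/2 * W) X with the skew matrix
    W = G X^T - X G^T, where G is the gradient at X."""
    n = X.shape[0]
    W = G @ X.T - X @ G.T                      # skew-symmetric by construction
    I = np.eye(n)
    return np.linalg.solve(I + 0.5 * tau * W, (I - 0.5 * tau * W) @ X)

rng = np.random.default_rng(0)
X, _ = np.linalg.qr(rng.standard_normal((6, 3)))   # a feasible (orthonormal) point
G = rng.standard_normal((6, 3))                    # a stand-in gradient
Y = cayley_step(X, G, tau=0.3)
print(np.linalg.norm(Y.T @ Y - np.eye(3)))         # ~0: orthogonality is preserved
```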
- Published
- 2016
19. On solving the minimization problem and the fixed-point problem for nonexpansive mappings in CAT(0) spaces
- Author
-
Raweerote Suparatulatorn, Suthep Suantai, and Prasit Cholamjiak
- Subjects
Mathematical optimization ,Control and Optimization ,Applied Mathematics ,010102 general mathematics ,Minimization problem ,01 natural sciences ,010101 applied mathematics ,Proximal point ,Fixed point problem ,Convergence (routing) ,0101 mathematics ,Software ,Iteration process ,Mathematics - Abstract
We propose a new modified proximal point algorithm combined with Halpern's iteration process for nonexpansive mappings in the framework of CAT(0) spaces. We establish a strong convergence theorem under some mild conditions. Our results extend some known results which appeared in the literature.
- Published
- 2016
20. Periodicity in optimal hierarchical checkpointing schemes for adjoint computations
- Author
-
Guillaume Aupy, Julien Herrmann, Topology-Aware System-Scale Data Management for High-Performance Computing (TADAAM), Laboratoire Bordelais de Recherche en Informatique (LaBRI), Université de Bordeaux (UB)-École Nationale Supérieure d'Électronique, Informatique et Radiocommunications de Bordeaux (ENSEIRB)-Centre National de la Recherche Scientifique (CNRS)-Université de Bordeaux (UB)-École Nationale Supérieure d'Électronique, Informatique et Radiocommunications de Bordeaux (ENSEIRB)-Centre National de la Recherche Scientifique (CNRS)-Inria Bordeaux - Sud-Ouest, Institut National de Recherche en Informatique et en Automatique (Inria)-Institut National de Recherche en Informatique et en Automatique (Inria), Vanderbilt University [Nashville], Optimisation des ressources : modèles, algorithmes et ordonnancement (ROMA), Inria Grenoble - Rhône-Alpes, Institut National de Recherche en Informatique et en Automatique (Inria)-Institut National de Recherche en Informatique et en Automatique (Inria)-Laboratoire de l'Informatique du Parallélisme (LIP), École normale supérieure de Lyon (ENS de Lyon)-Université Claude Bernard Lyon 1 (UCBL), Université de Lyon-Université de Lyon-Institut National de Recherche en Informatique et en Automatique (Inria)-Centre National de la Recherche Scientifique (CNRS)-École normale supérieure de Lyon (ENS de Lyon)-Université Claude Bernard Lyon 1 (UCBL), Université de Lyon-Université de Lyon-Centre National de la Recherche Scientifique (CNRS), Georgia Institute of Technology [Atlanta], Argonne National Laboratory, JLESC - Joint Laboratory for Extreme Scale Computing, Université de Bordeaux (UB)-Centre National de la Recherche Scientifique (CNRS)-École Nationale Supérieure d'Électronique, Informatique et Radiocommunications de Bordeaux (ENSEIRB)-Université de Bordeaux (UB)-Centre National de la Recherche Scientifique (CNRS)-École Nationale Supérieure d'Électronique, Informatique et Radiocommunications de Bordeaux (ENSEIRB)-Inria Bordeaux - Sud-Ouest, Centre National de la Recherche Scientifique (CNRS)-Université de Lyon-Institut National de Recherche en Informatique et en Automatique (Inria)-Université Claude Bernard Lyon 1 (UCBL), Université de Lyon-École normale supérieure - Lyon (ENS Lyon)-Centre National de la Recherche Scientifique (CNRS)-Université de Lyon-Université Claude Bernard Lyon 1 (UCBL), Université de Lyon-École normale supérieure - Lyon (ENS Lyon), École normale supérieure - Lyon (ENS Lyon)-Université Claude Bernard Lyon 1 (UCBL), and Université de Lyon-Université de Lyon-Institut National de Recherche en Informatique et en Automatique (Inria)-Centre National de la Recherche Scientifique (CNRS)-École normale supérieure - Lyon (ENS Lyon)-Université Claude Bernard Lyon 1 (UCBL)
- Subjects
Optimal algorithms ,Mathematical optimization ,Control and Optimization ,Automatic differentiation ,Computation ,Program reversal ,ACM: F.: Theory of Computation/F.2: ANALYSIS OF ALGORITHMS AND PROBLEM COMPLEXITY ,0211 other engineering and technologies ,010103 numerical & computational mathematics ,02 engineering and technology ,01 natural sciences ,Quadratic equation ,ACM: F.: Theory of Computation/F.2: ANALYSIS OF ALGORITHMS AND PROBLEM COMPLEXITY/F.2.1: Numerical Algorithms and Problems ,Convergence (routing) ,Applied mathematics ,Online ,0101 mathematics ,Adjoint computation ,[INFO.INFO-MS]Computer Science [cs]/Mathematical Software [cs.MS] ,Mathematics ,021103 operations research ,Conjecture ,ACM: G.: Mathematics of Computing/G.1: NUMERICAL ANALYSIS/G.1.4: Quadrature and Numerical Differentiation/G.1.4.1: Automatic differentiation ,Applied Mathematics ,Checkpointing ,Asymptotic optimality ,[INFO.INFO-NA]Computer Science [cs]/Numerical Analysis [cs.NA] ,Asymptotically optimal algorithm ,Adjoint equation ,Constant (mathematics) ,Software - Abstract
We reexamine the work of Aupy et al. on optimal algorithms for hierarchical adjoint computations, where two levels of memory are available. The previous optimal algorithm had a quadratic execution time. Here, with structural arguments on the optimal solution, namely periodicity, we provide an optimal algorithm in constant time and space, with appropriate pre-processing. We also provide an asymptotically optimal algorithm for the online problem, when the adjoint chain size is not known beforehand. Again, these algorithms rely on the proof that the optimal solution for hierarchical adjoint computations is weakly periodic. We conjecture a closed-form formula for the period. Finally, we assess the convergence speed of the approximation ratio for the online problem through simulations.
- Published
- 2016
21. The higher-order Levenberg–Marquardt method with Armijo type line search for nonlinear equations
- Author
-
Cuizhen Du, Liang Chen, and Yanfang Ma
- Subjects
Trust region ,021103 operations research ,Control and Optimization ,Line search ,Applied Mathematics ,0211 other engineering and technologies ,010103 numerical & computational mathematics ,02 engineering and technology ,01 natural sciences ,Local convergence ,Levenberg–Marquardt algorithm ,Nonlinear system ,Rate of convergence ,Convergence (routing) ,0101 mathematics ,Descent direction ,Algorithm ,Software ,Mathematics - Abstract
To save more Jacobian calculations and achieve a faster convergence rate, Yang [A higher-order Levenberg-Marquardt method for nonlinear equations, Appl. Math. Comput. 219(22) (2013), pp. 10682–10694, doi:10.1016/j.amc.2013.04.033] proposed a higher-order Levenberg–Marquardt (LM) method by computing the LM step and another two approximate LM steps for nonlinear equations. Under the local error bound condition, global and local convergence of this method are proved by using the trust region technique. However, the last two approximate LM steps are not necessarily descent directions, so the standard line search technique cannot be used directly to obtain the convergence properties of this higher-order LM method. Hence, in this paper, we employ the nonmonotone second-order Armijo line search proposed by Zhou [On the convergence of the modified Levenberg-Marquardt method with a nonmonotone second order Armijo type line search, J. Comput. Appl. Math. 239 (2013), pp. 152–161] to guarantee the global convergence of this higher-order LM method. Moreover, the local convergence is also preserved under the local error bound condition. Numerical results show that the new method is efficient.
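For orientation, the sketch below implements the classical Levenberg–Marquardt step, solving (J^T J + mu I) d = -J^T F and globalizing it with an Armijo backtracking line search on the merit function 0.5*||F||^2. It is the basic scheme, not the higher-order variant with extra approximate LM steps or the nonmonotone second-order Armijo search discussed in the abstract; the small 2x2 system is an assumed test case.

```python
import numpy as np

def F(x):                      # hypothetical nonlinear system F(x) = 0
    return np.array([x[0] ** 2 + x[1] ** 2 - 4.0, x[0] * x[1] - 1.0])

def J(x):                      # its Jacobian
    return np.array([[2.0 * x[0], 2.0 * x[1]], [x[1], x[0]]])

def lm_armijo(x, iters=50, sigma=1e-4, beta=0.5):
    """Classical LM step d solving (J^T J + mu I) d = -J^T F with mu = ||F||^2,
    globalized by Armijo backtracking on the merit function 0.5*||F||^2."""
    for _ in range(iters):
        Fx, Jx = F(x), J(x)
        merit = 0.5 * Fx @ Fx
        if merit < 1e-16:
            break
        mu = Fx @ Fx                                    # LM regularization parameter
        d = np.linalg.solve(Jx.T @ Jx + mu * np.eye(x.size), -Jx.T @ Fx)
        t, slope = 1.0, (Jx.T @ Fx) @ d                 # directional derivative of the merit
        while 0.5 * F(x + t * d) @ F(x + t * d) > merit + sigma * t * slope:
            t *= beta
        x = x + t * d
    return x

print(lm_armijo(np.array([2.0, 2.0])))    # approaches a root of the system
```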
- Published
- 2016
22. Accelerating techniques on nested decomposition
- Author
-
Georgios K. D. Saharidis and George Kolomvos
- Subjects
Mathematical optimization ,021103 operations research ,Control and Optimization ,Applied Mathematics ,0211 other engineering and technologies ,CPU time ,Sampling (statistics) ,Context (language use) ,02 engineering and technology ,01 natural sciences ,Upper and lower bounds ,Stochastic programming ,010104 statistics & probability ,Tree (data structure) ,Convergence (routing) ,Decomposition method (constraint satisfaction) ,0101 mathematics ,Algorithm ,Software ,Mathematics - Abstract
In this paper, we consider the nested decomposition method in the context of multistage stochastic problems with scenario trees, where we contribute improvements that accelerate convergence to the optimum. We first propose a hybrid protocol for transmitting cuts across nodes, aiming at a trade-off between the inherent benefits and drawbacks of the single aggregated versus the multi-cut disaggregated versions. The idea we aim to explore is to reduce the size of the problem solved at each iteration by applying the aggregated version of the cuts on the furthest nodes of the tree. We then turn our attention towards the selection of first-stage solutions by employing sampling and simulation, aiming to carefully choose those first-stage solutions that can potentially decrease drastically the gap between the upper and lower bound at each iteration. The problems on which the latter technique is applied present a special structure, according to which the first-stage variables are linked to all further stages. We test both accelerating techniques on a generic problem which displays a blend of two-stage and multistage structure and we report significant savings in terms of CPU time and number of iterations to convergence.
- Published
- 2016
23. On the convergence of the forward–backward splitting method with linesearches
- Author
-
Tran T. A. Nghia and Jose Yunier Bello Cruz
- Subjects
021103 operations research ,Control and Optimization ,Weak convergence ,Applied Mathematics ,010102 general mathematics ,Mathematical analysis ,0211 other engineering and technologies ,Hilbert space ,Fréchet derivative ,02 engineering and technology ,Lipschitz continuity ,01 natural sciences ,symbols.namesake ,Iterated function ,Bounded function ,Convergence (routing) ,symbols ,0101 mathematics ,Convex function ,Software ,Mathematics - Abstract
In this paper we focus on the convergence analysis of the forward–backward splitting method for solving nonsmooth optimization problems in Hilbert spaces when the objective function is the sum of two convex functions. Assuming that one of the functions is Fréchet differentiable and using two new linesearches, we establish weak convergence without any Lipschitz continuity assumption on the gradient. Furthermore, we obtain many complexity results for the cost values at the iterates when the stepsizes are bounded below by a positive constant.
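To make the forward–backward (proximal gradient) iteration with a linesearch concrete, here is a minimal Euclidean sketch for a lasso-type objective. The backtracking test used below is the common sufficient-decrease condition on the smooth part, not the two new linesearches analysed in the paper, and the random problem data are illustrative.

```python
import numpy as np

def forward_backward_l1(A, b, lam, x0, iters=200):
    """Forward-backward splitting for min 0.5*||Ax - b||^2 + lam*||x||_1,
    with a backtracking linesearch on the smooth part instead of assuming a
    globally Lipschitz step size."""
    f = lambda x: 0.5 * np.sum((A @ x - b) ** 2)
    grad = lambda x: A.T @ (A @ x - b)
    prox = lambda z, t: np.sign(z) * np.maximum(np.abs(z) - lam * t, 0.0)  # soft-threshold
    x, t = x0.copy(), 1.0
    for _ in range(iters):
        g = grad(x)
        while True:                                   # backtrack until a sufficient-decrease test holds
            x_new = prox(x - t * g, t)
            if f(x_new) <= f(x) + g @ (x_new - x) + np.sum((x_new - x) ** 2) / (2.0 * t):
                break
            t *= 0.5
        x = x_new
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((20, 50))
x_true = np.zeros(50); x_true[[3, 17, 40]] = [1.0, -2.0, 0.5]
b = A @ x_true
print(forward_backward_l1(A, b, lam=0.05, x0=np.zeros(50))[[3, 17, 40]])  # roughly recovers the spikes
```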
- Published
- 2016
24. A penalty PALM method for sparse portfolio selection problems
- Author
-
Yue Teng, Li Yang, Bo Yu, and Xiaoliang Song
- Subjects
Sequence ,Mathematical optimization ,050208 finance ,021103 operations research ,Control and Optimization ,Applied Mathematics ,05 social sciences ,0211 other engineering and technologies ,Structure (category theory) ,02 engineering and technology ,0502 economics and business ,Convergence (routing) ,Point (geometry) ,Penalty method ,Minification ,Projection (set theory) ,Software ,Selection (genetic algorithm) ,Mathematics - Abstract
In this paper, we propose a penalty proximal alternating linearized minimization method for large-scale sparse portfolio problems, in which a sequence of penalty subproblems is solved by utilizing the proximal alternating linearized minimization framework and sparse projection techniques. To exploit the structure of the problems and reduce the computational complexity, each penalty subproblem is solved by alternately solving two projection problems. The global convergence of the method to a Karush-Kuhn-Tucker point or a local minimizer of the problem can be proved under suitable characteristics of the problem. The computational results with practical problems demonstrate that our method can find suboptimal solutions of the problems efficiently and is competitive with some other local solution methods.
- Published
- 2016
25. A proximal partially parallel splitting method for separable convex programs
- Author
-
Jitamitra Desai, Hongjin He, and Kai Wang
- Subjects
Mathematical optimization ,021103 operations research ,Control and Optimization ,Augmented Lagrangian method ,Applied Mathematics ,0211 other engineering and technologies ,Regular polygon ,010103 numerical & computational mathematics ,02 engineering and technology ,01 natural sciences ,Separable space ,Constraint (information theory) ,Set (abstract data type) ,Rate of convergence ,Convex optimization ,Convergence (routing) ,0101 mathematics ,Software ,Mathematics - Abstract
In this paper, we propose a proximal partially parallel splitting method for solving convex minimization problems, where the objective function is separable into m individual operators without any coupled variables, and the structural constraint set comprises only linear functions. At each iteration of this algorithm, one selected subproblem is solved, and subsequently the remaining subproblems are solved in parallel, utilizing the new iterate information. Hence, the proposed method is a hybrid mechanism that combines the nice features of parallel decomposition methods and alternating direction methods, while simultaneously adopting the predictor–corrector strategy to ensure convergence of the algorithm. Our algorithmic framework is also amenable to admitting linearized versions of the subproblems, which frequently have closed-form solutions, thereby making the proposed method more implementable in practice. Furthermore, the worst-case convergence rate of the proposed method is obtained under both ergodic...
- Published
- 2016
26. Optimal design for 2D wave equations
- Author
-
Miguel Cea
- Subjects
Optimal design ,Control and Optimization ,Applied Mathematics ,010102 general mathematics ,Mathematical analysis ,Optimal control ,01 natural sciences ,Finite element method ,010101 applied mathematics ,symbols.namesake ,Adjoint equation ,Dirichlet boundary condition ,Convergence (routing) ,symbols ,Shape optimization ,Topological derivative ,0101 mathematics ,Software ,Mathematics - Abstract
In this paper we consider a problem of optimal design in 2D for the wave equation with Dirichlet boundary conditions. We introduce a finite element discrete version of this problem in which the domains under consideration are polygons defined on the numerical mesh. We prove that, as the mesh size tends to zero, any limit, in the sense of the complementary-Hausdorff convergence, of discrete optimal shapes is an optimal domain for the continuous optimal design problem. We work in the functional and geometric setting introduced by V. Sverak, in which the domains under consideration are assumed to have an a priori limited number of holes. We present in detail a numerical algorithm and show the efficiency of the method through various numerical experiments.
- Published
- 2016
27. Globally convergent homotopy algorithm for solving the KKT systems to the principal-agent bilevel programming
- Author
-
Zhichuan Zhu and Bo Yu
- Subjects
Mathematical optimization ,Control and Optimization ,Karush–Kuhn–Tucker conditions ,Applied Mathematics ,Homotopy ,05 social sciences ,MathematicsofComputing_NUMERICALANALYSIS ,010103 numerical & computational mathematics ,Function (mathematics) ,01 natural sciences ,Bilevel optimization ,Piecewise linear function ,0502 economics and business ,Convergence (routing) ,Piecewise ,0101 mathematics ,Software ,Homotopy analysis method ,050205 econometrics ,Mathematics - Abstract
In this paper, a constraint set swelling homotopy (CSSH) algorithm is proposed for solving the single-level non-convex programming problem with a piecewise linear contractual function, which is equivalent to the principal-agent model with integral operator, and its existence and global convergence are proven under some mild conditions. As a comparison, a piecewise constant contract is also designed for solving the single-level non-convex programming problem with the corresponding discrete distributions. Some numerical tests are done by the proposed homotopy algorithm as well as by using fmincon in Matlab, LOQO and MINOS. The numerical results show that the CSSH algorithm is robust, feasible and effective.
- Published
- 2016
28. Diagonal quasi-Newton method via variational principle under generalized Frobenius norm
- Author
-
Wah June Leong, Mahboubeh Farid, and Sharareh Enshaei
- Subjects
021103 operations research ,Control and Optimization ,Applied Mathematics ,Diagonal ,Mathematical analysis ,0211 other engineering and technologies ,Matrix norm ,010103 numerical & computational mathematics ,02 engineering and technology ,01 natural sciences ,Standard line ,Variational principle ,Conjugate gradient method ,Convergence (routing) ,Standard test ,Quasi-Newton method ,0101 mathematics ,Software ,Mathematics - Abstract
In this work, we present a new class of diagonal quasi-Newton methods for solving large-scale unconstrained optimization problems. The methods are derived by means of a variational principle under the generalized Frobenius norm. We show global convergence of our methods under the standard line search with the Armijo condition. Numerical experiments are carried out on standard test problems and clearly indicate vast superiority over some classical conjugate gradient methods.
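A minimal sketch of the kind of update discussed here: the closest diagonal matrix (in the plain Frobenius norm, rather than the paper's generalized norm) satisfying the weak secant equation s^T D s = s^T y has a simple closed form. The diagonal quadratic test problem and the positivity safeguard are illustrative assumptions, not the authors' class of methods.

```python
import numpy as np

def diagonal_weak_secant_update(d, s, y):
    """Diagonal quasi-Newton update from a variational principle: the new
    diagonal D+ is the closest diagonal matrix to D (plain Frobenius norm)
    satisfying the weak secant equation s^T D+ s = s^T y. Closed form:
    D+ = D + mu * diag(s*s) with mu = (s^T y - s^T D s) / sum(s_i^4)."""
    mu = (s @ y - (d * s) @ s) / np.sum(s ** 4)
    return d + mu * s ** 2

# Tiny driver on a diagonal quadratic f(x) = 0.5 x^T A x
A = np.array([2.0, 5.0, 0.5])
grad = lambda x: A * x
x, d = np.array([1.0, 1.0, 1.0]), np.ones(3)      # d stores the diagonal Hessian estimate
for _ in range(25):
    x_new = x - grad(x) / d                        # quasi-Newton step with diagonal model
    s, y = x_new - x, grad(x_new) - grad(x)
    d = np.maximum(diagonal_weak_secant_update(d, s, y), 1e-3)  # keep the diagonal positive
    x = x_new
print(x, d)
```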
- Published
- 2016
29. Incremental subgradient method for nonsmooth convex optimization with fixed point constraints
- Author
-
Hideaki Iiduka
- Subjects
Sequence ,Mathematical optimization ,021103 operations research ,Control and Optimization ,Intersection (set theory) ,Applied Mathematics ,0211 other engineering and technologies ,020206 networking & telecommunications ,02 engineering and technology ,Subderivative ,Fixed point ,Projection (linear algebra) ,Convex optimization ,Convergence (routing) ,0202 electrical engineering, electronic engineering, information engineering ,Subgradient method ,Software ,Mathematics - Abstract
This paper proposes an incremental subgradient method for solving the problem of minimizing the sum of nondifferentiable, convex objective functions over the intersection of fixed point sets of nonexpansive mappings in a real Hilbert space. The proposed algorithm can work in nonsmooth optimization over constraint sets onto which projections cannot be always implemented, whereas the conventional incremental subgradient method can be applied only when a constraint set is simple in the sense that the projection onto it can be easily implemented. We first study its convergence for a constant step size. The analysis indicates that there is a possibility that the algorithm with a small constant step size approximates a solution to the problem. Next, we study its convergence for a diminishing step size and show that there exists a subsequence of the sequence generated by the algorithm which weakly converges to a solution to the problem. Moreover, we show the whole sequence generated by the algorithm with a diminishing step size strongly converges to the solution to the problem under certain assumptions. We also give examples of real applied problems which satisfy the assumptions in the convergence theorems and numerical examples to support the convergence analyses.
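As a Euclidean illustration of the scheme described above, the sketch below cycles through nondifferentiable components, takes a subgradient step for one component at a time, and then applies a nonexpansive mapping T whose fixed point set is the constraint (here simply projection onto a ball). The data, the diminishing step-size rule, and the choice of T are assumptions for the example, not the paper's Hilbert-space setting.

```python
import numpy as np

def project_ball(x, radius=1.0):
    """Projection onto the Euclidean ball: a nonexpansive mapping whose fixed
    point set is the ball itself (stands in for a more general mapping T)."""
    nrm = np.linalg.norm(x)
    return x if nrm <= radius else radius * x / nrm

def incremental_subgradient(A, b, x0, steps=2000):
    """Incremental subgradient iteration for min sum_i |a_i^T x - b_i| over Fix(T):
    take a subgradient step for one component per iteration, then apply T;
    the step size diminishes, as in the convergence analysis."""
    x = x0.copy()
    m = A.shape[0]
    for k in range(steps):
        i = k % m                                   # process one component per iteration
        g = np.sign(A[i] @ x - b[i]) * A[i]         # a subgradient of |a_i^T x - b_i|
        lam = 1.0 / (k // m + 1)                    # diminishing step size
        x = project_ball(x - lam * g)
    return x

rng = np.random.default_rng(2)
A = rng.standard_normal((30, 5))
b = A @ (0.2 * np.ones(5))                          # consistent data with a solution inside the ball
print(incremental_subgradient(A, b, np.zeros(5)))
```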
- Published
- 2016
30. On the differentiability check in gradient sampling methods
- Author
-
Lucas E. A. Simões, Sandra A. Santos, and Elias S. Helou
- Subjects
Mathematical optimization ,021103 operations research ,Control and Optimization ,Check-in ,Applied Mathematics ,Minimization problem ,Control (management) ,0211 other engineering and technologies ,Sampling (statistics) ,010103 numerical & computational mathematics ,02 engineering and technology ,01 natural sciences ,Perspective (geometry) ,OTIMIZAÇÃO COMBINATÓRIA ,Convergence (routing) ,Calculus ,Differentiable function ,0101 mathematics ,Software ,Mathematics - Abstract
The present study aims to carefully discuss the importance of differentiability checks during the execution of methods based on gradient sampling. We stress the significance of this procedure not only from the theoretical perspective, but also in the practical implementation. We support our claims exhibiting illustrative examples where the absence of the differentiability check in the method prevents the achievement of the minimization problem solution. As possible alternatives, this manuscript presents two procedures that suppress the differentiability check without affecting the convergence of the method both in theory and in practice. Lastly, by solving a difficult control problem, we show that besides the theoretical appeal our changes may also be useful to address real problems.
- Published
- 2016
31. Solving linear generalized Nash equilibrium problems numerically
- Author
-
Axel Dreves and Nathan Sudermann-Merx
- Subjects
Mathematical optimization ,021103 operations research ,Control and Optimization ,Linear programming ,Applied Mathematics ,0211 other engineering and technologies ,010103 numerical & computational mathematics ,02 engineering and technology ,Function (mathematics) ,01 natural sciences ,Reduction (complexity) ,Nonlinear system ,symbols.namesake ,Nash equilibrium ,Convergence (routing) ,symbols ,Penalty method ,0101 mathematics ,Subgradient method ,Software ,Mathematics - Abstract
This paper considers the numerical solution of linear generalized Nash equilibrium problems (LGNEPs). Since many methods for nonlinear problems require the nonsingularity of some second-order derivative, standard convergence conditions are not satisfied in our linear case. We provide new convergence criteria for a potential reduction algorithm (PRA) that allow its application to LGNEPs. Furthermore, we discuss a projected subgradient method (PSM) and a penalty method that exploit some known Nikaido–Isoda function-based constrained and unconstrained optimization reformulations of the LGNEP. Moreover, it is shown that normalized Nash equilibria of an LGNEP can be obtained by solving a single linear program. All proposed algorithms are tested on randomly generated instances of economic market models that are introduced and analysed in this paper and that lead to LGNEPs with shared and with non-shared constraints. It is shown that these problems have some favourable properties that can be exploited to obtain their solutions. With the PRA and in particular with the PSM we are able to compute solutions with satisfying precision even for problems with up to 10,000 variables.
- Published
- 2016
32. An exchange method with refined subproblems for convex semi-infinite programming problems
- Author
-
Kensuke Gomoto, Nobuo Yamashita, Shunsuke Hayashi, and Takayuki Okuno
- Subjects
0209 industrial biotechnology ,Mathematical optimization ,Sequence ,021103 operations research ,Control and Optimization ,Optimization problem ,Overlapping subproblems ,Applied Mathematics ,0211 other engineering and technologies ,Regular polygon ,Optimal substructure ,02 engineering and technology ,Semi-infinite programming ,020901 industrial engineering & automation ,Quadratic equation ,Convergence (routing) ,Software ,Mathematics - Abstract
In this paper, we propose a new exchange method for solving convex semi-infinite programming problems (SIPs). The traditional exchange method solves a sequence of finitely relaxed subproblems, that is, subproblems with finitely many constraints chosen from the original constraints. On the other hand, our exchange method solves a sequence of new subproblems, in which the traditional finite subproblems are refined by the quadratic approximation. Under mild assumptions, the refined subproblems approximate the original SIP more precisely than the traditional subproblems. Moreover, although those subproblems are still SIPs, they can be solved efficiently by reformulating them as certain optimization problems with finitely many constraints. We establish the global convergence property of the proposed algorithm. Finally, we examine the efficiency of the algorithm by some numerical experiments.
- Published
- 2016
33. Sequential threshold control in descent splitting methods for decomposable optimization problems
- Author
-
Igor Konnov
- Subjects
Mathematical optimization ,Control and Optimization ,Optimization problem ,Composite optimization ,Applied Mathematics ,Computation ,Convergence (routing) ,Control (linguistics) ,Software ,Descent (mathematics) ,Mathematics - Abstract
We suggest a modification of the descent splitting methods for decomposable composite optimization problems, which maintains the basic convergence properties, but enables one to reduce the computational expenses per iteration and to provide computations in a distributed manner. It consists of making coordinate-wise steps together with a special threshold control.
- Published
- 2015
34. Analytic centre stabilization of column generation algorithm for the capacitated vehicle routing problem
- Author
-
Abbas Seifi and Hadi Karimi
- Subjects
Mathematical optimization ,Control and Optimization ,Applied Mathematics ,Column (database) ,symbols.namesake ,Simplex algorithm ,Lagrangian relaxation ,Convergence (routing) ,Vehicle routing problem ,symbols ,Dantzig–Wolfe decomposition ,Column generation ,Software ,Cutting-plane method ,Mathematics - Abstract
We investigate various stabilization strategies for improving the column generation algorithm frequently used in recent exact methods and propose a novel stabilization technique specialized for the capacitated vehicle routing problem (CVRP). We start with exploring the relationship between Lagrangian relaxation and Dantzig–Wolfe (DW) decomposition for the CVRP. The proposed technique, called analytic centre stabilization, is based on the analytic centre cutting plane method. In particular, we generate a cutting plane in the Lagrangian dual space and use it as a new column within DW decomposition for the CVRP. In computational experiments, we compare the performance of standard column generation based on the Simplex method with those of the analytic centre stabilization, the bundle method, and a hybrid of the two approaches. The results show that the proposed method is quite competitive and the hybrid method has the best convergence properties among others.
- Published
- 2015
35. On the rate of convergence of projected Barzilai–Borwein methods
- Author
-
Hongwei Liu and Yakui Huang
- Subjects
Mathematical optimization ,Control and Optimization ,Line search ,Rate of convergence ,Applied Mathematics ,Convergence (routing) ,Constrained optimization ,Proper convex function ,Convergence tests ,Convex function ,Software ,Compact convergence ,Mathematics - Abstract
We study the rate of convergence of a projected Barzilai–Borwein method, which performs the Grippo–Lampariello–Lucidi (GLL) non-monotone line search along the feasible direction, for convex constrained optimization. Under mild conditions, we establish sublinear convergence of the considered method for general convex functions. Moreover, we show that the rate of convergence is R-linear for strongly convex functions. We extend the convergence results to different versions of the method based on a general GLL non-monotone line search, an adaptive non-monotone line search, and a non-monotone line search that requires that an average of the successive function values decreases.
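A compact sketch of a projected Barzilai–Borwein iteration with a GLL-style nonmonotone acceptance test is given below, in the spirit of spectral projected gradient methods. The box-constrained quadratic, the memory length M, and the simple step-halving safeguard are illustrative assumptions rather than the exact method analysed in the paper.

```python
import numpy as np

def projected_bb(grad, f, project, x0, iters=200, M=10, sigma=1e-4):
    """Projected Barzilai-Borwein method with a GLL-type nonmonotone check:
    a trial point is accepted if f(trial) <= max of the last M function values
    plus a sufficient-decrease term; otherwise the step is shrunk."""
    x = project(x0)
    g = grad(x)
    alpha = 1.0
    history = [f(x)]
    for _ in range(iters):
        step = alpha
        while True:
            x_trial = project(x - step * g)
            d = x_trial - x
            if f(x_trial) <= max(history[-M:]) + sigma * (g @ d) or np.linalg.norm(d) < 1e-12:
                break
            step *= 0.5
        g_new = grad(x_trial)
        s, y = x_trial - x, g_new - g
        alpha = (s @ s) / (s @ y) if s @ y > 1e-12 else 1.0   # BB1 step size for the next iteration
        x, g = x_trial, g_new
        history.append(f(x))
    return x

# Box-constrained convex quadratic: min 0.5*||x - c||^2 s.t. 0 <= x <= 1
c = np.array([1.5, -0.3, 0.4])
f = lambda x: 0.5 * np.sum((x - c) ** 2)
grad = lambda x: x - c
project = lambda x: np.clip(x, 0.0, 1.0)
print(projected_bb(grad, f, project, np.zeros(3)))   # expected: [1.0, 0.0, 0.4]
```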
- Published
- 2015
36. A smoothing majorization method for matrix minimization
- Author
-
Liwei Zhang, Yue Lu, and Jia Wu
- Subjects
Matrix (mathematics) ,Mathematical optimization ,Control and Optimization ,Matrix completion ,Optimization problem ,Applied Mathematics ,Convergence (routing) ,Limit point ,Minification ,Majorization ,Software ,Smoothing ,Mathematics - Abstract
In this paper, we consider the \(l_2^2\)–\(l_p^p\) (with \(p\in(0, 1)\)) matrix minimization for recovering the low-rank matrices. A smoothing approach for solving this non-smooth, non-Lipschitz and non-convex optimization problem is developed, in which the smoothing parameter is treated as a decision variable and a majorization method is adopted to solve the smoothing problem. The convergence theorem shows that any accumulation point of the sequence generated by the proposed approach satisfies the first-order necessary optimality condition of the problem. As an application, we use the proposed smoothing majorization method to solve the famous matrix completion problems. Numerical results indicate that our algorithm can solve the test problems efficiently.
- Published
- 2014
37. A constraint shifting homotopy method for finding a minimal efficient solution of nonconvex multiobjective programming
- Author
-
Huijuan Xiong and Zhichuan Zhu
- Subjects
Mathematical optimization ,Control and Optimization ,Applied Mathematics ,Homotopy ,Feasible region ,MathematicsofComputing_NUMERICALANALYSIS ,Boundary (topology) ,Constraint satisfaction ,Constraint (information theory) ,Set (abstract data type) ,Convergence (routing) ,Software ,Interior point method ,Mathematics - Abstract
In this paper, for finding a minimal efficient solution of nonconvex multiobjective programming, a constraint shifting combined homotopy is constructed and the global convergence is obtained under some mild conditions. The method requires that the initial point needs to be only in the shifted feasible set not necessarily in the original feasible set, and the normal cone condition need only be satisfied in the boundary of the shifted feasible set not the original constraint set. Some numerical tests are done and made comparison with a combined homotopy interior point method. The results show that our method is feasible and effective.
- Published
- 2014
38. A modified limited-memory BNS method for unconstrained minimization based on the conjugate directions idea
- Author
-
Jan Vlček and Ladislav Lukšan
- Subjects
Mathematical optimization ,Control and Optimization ,Applied Mathematics ,MathematicsofComputing_NUMERICALANALYSIS ,Quadratic equation ,Broyden–Fletcher–Goldfarb–Shanno algorithm ,Metric (mathematics) ,Convergence (routing) ,Quasi-Newton method ,Conjugate residual method ,Applied mathematics ,Minification ,Software ,Variable (mathematics) ,Mathematics - Abstract
A modification of the limited-memory variable metric BNS method for large-scale unconstrained optimization is proposed, which consists in corrections derived from the idea of conjugate directions of the used difference vectors for better satisfaction of the previous quasi-Newton (QN) conditions. In comparison with [Vlček and Lukšan, A conjugate directions approach to improve the limited-memory BFGS method, Appl. Math. Comput. 219 (2012), pp. 800–809], where a similar approach is used, correction vectors from more previous iterations can be applied here. For quadratic objective functions, the improvement of convergence is the best one in some sense, all stored corrected difference vectors are conjugate and the QN conditions with these vectors are satisfied. Global convergence of the algorithm is established for convex sufficiently smooth functions. Numerical experiments demonstrate the efficiency of the new method.
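For reference, the uncorrected limited-memory baseline that BNS-type corrections build on computes the direction -H_k g_k from stored difference pairs via the standard two-loop recursion; a minimal sketch follows. The tiny quadratic driver with a fixed step size is only an illustration, not the corrected method of the paper.

```python
import numpy as np

def two_loop_direction(g, s_list, y_list):
    """Standard limited-memory BFGS (two-loop recursion) direction -H*g built
    from stored pairs s_i = x_{i+1}-x_i, y_i = g_{i+1}-g_i."""
    q = g.copy()
    alphas = []
    for s, y in reversed(list(zip(s_list, y_list))):   # newest to oldest
        rho = 1.0 / (y @ s)
        a = rho * (s @ q)
        alphas.append((a, rho, s, y))
        q -= a * y
    if s_list:                                         # initial scaling gamma = s^T y / y^T y
        s, y = s_list[-1], y_list[-1]
        q *= (s @ y) / (y @ y)
    for a, rho, s, y in reversed(alphas):              # oldest to newest
        b = rho * (y @ q)
        q += (a - b) * s
    return -q

# Tiny driver on a convex quadratic with a fixed small step size
A = np.diag([1.0, 4.0, 9.0])
grad = lambda x: A @ x
x = np.array([3.0, -2.0, 1.0])
s_list, y_list = [], []
for _ in range(30):
    d = two_loop_direction(grad(x), s_list[-5:], y_list[-5:])
    x_new = x + 0.1 * d
    s_list.append(x_new - x); y_list.append(grad(x_new) - grad(x))
    x = x_new
print(np.linalg.norm(x))   # shrinks towards zero
```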
- Published
- 2014
39. An augmented Lagrangian trust region method for equality constrained optimization
- Author
-
Ya-xiang Yuan and Xiao Wang
- Subjects
Mathematical optimization ,Trust region ,Control and Optimization ,Augmented Lagrangian method ,Applied Mathematics ,MathematicsofComputing_NUMERICALANALYSIS ,Constrained optimization ,symbols.namesake ,CUTEr ,Lagrangian relaxation ,Lagrange multiplier ,Convergence (routing) ,symbols ,Penalty method ,Software ,Mathematics - Abstract
In this paper we propose an augmented Lagrangian trust region method for equality constrained optimization. Different from standard augmented Lagrangian methods which minimize the augmented Lagrangian function for fixed Lagrange multiplier and penalty parameter at each iteration, the proposed method tries to minimize its second-order approximation function. We propose a new strategy for adjusting the penalty parameter. With adaptive update of Lagrange multipliers, we prove the global convergence of the proposed method. Numerical results on test problems from the CUTEr collection are also reported.
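To fix ideas, here is a minimal sketch of the standard augmented Lagrangian loop for equality constraints, where the inner problem is minimized for fixed multipliers and the multipliers are then updated by lam <- lam + rho*c(x). The paper instead minimizes a second-order approximation within a trust region and adapts the penalty parameter; the inner solve below simply calls SciPy's BFGS, and the toy problem is an assumption.

```python
import numpy as np
from scipy.optimize import minimize

def augmented_lagrangian(f, c, x0, lam0, rho=10.0, outer=10):
    """Standard augmented Lagrangian loop for min f(x) s.t. c(x) = 0:
    minimize L_A(x; lam, rho) = f(x) + lam^T c(x) + (rho/2)*||c(x)||^2 for
    fixed multipliers, then apply the first-order multiplier update."""
    x, lam = np.asarray(x0, float), np.asarray(lam0, float)
    for _ in range(outer):
        LA = lambda z: f(z) + lam @ c(z) + 0.5 * rho * np.sum(c(z) ** 2)
        x = minimize(LA, x, method="BFGS").x        # inner (unconstrained) minimization
        lam = lam + rho * c(x)                      # multiplier update
    return x, lam

# Example: min x0 + x1  s.t.  x0^2 + x1^2 - 2 = 0  (solution (-1, -1))
f = lambda x: x[0] + x[1]
c = lambda x: np.array([x[0] ** 2 + x[1] ** 2 - 2.0])
x_star, lam_star = augmented_lagrangian(f, c, x0=[-0.5, 0.0], lam0=[0.0])
print(x_star, lam_star)
```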
- Published
- 2014
40. On the modified trust region algorithm for nonlinear equations
- Author
-
Na Lu and Jinyan Fan
- Subjects
Trust region ,Control and Optimization ,Applied Mathematics ,Computation ,Mathematical analysis ,MathematicsofComputing_NUMERICALANALYSIS ,Zero (complex analysis) ,law.invention ,Singular problems ,symbols.namesake ,Nonlinear system ,Mathematics::Algebraic Geometry ,Invertible matrix ,law ,ComputingMethodologies_SYMBOLICANDALGEBRAICMANIPULATION ,Jacobian matrix and determinant ,Convergence (routing) ,symbols ,Software ,Mathematics - Abstract
In this paper, we present a modified trust region algorithm for nonlinear equations with the trust region radii converging to zero. The algorithm calculates the Jacobian after every two computations of the step. It preserves global convergence in the same way as the traditional trust region algorithms. Moreover, it converges nearly q-cubically under the local error bound condition, which is weaker than the nonsingularity of the Jacobian at a solution. Numerical results show that the algorithm is very efficient for both singular and nonsingular problems.
- Published
- 2014
41. Twelve monotonicity conditions arising from algorithms for equilibrium problems
- Author
-
Giancarlo Bigi, Mauro Passacantando, Bigi, G, and Passacantando, M
- Subjects
Control and Optimization ,Applied Mathematics ,Monotonic function ,Computer Science::Computational Complexity ,Computer Science::Computational Geometry ,Equilibrium problems ,monotonicity ,variational inequalities ,Nash equilibria ,symbols.namesake ,Quadratic equation ,Nash equilibrium ,Convergence (routing) ,Variational inequality ,symbols ,variational inequalitie ,Computer Science::Data Structures and Algorithms ,Mathematical economics ,equilibrium problem ,Software ,Mathematics - Abstract
In recent years many solution methods for equilibrium problems (EPs) have been developed. Several different monotonicity conditions have been exploited to prove convergence. This paper investigates all the relationships between them in the framework of the so-called abstract EP. The analysis is further detailed for variational inequalities and linear EPs, which also include Nash EPs with quadratic payoffs.
- Published
- 2014
42. Damped techniques for enforcing convergence of quasi-Newton methods
- Author
-
Mehiddin Al-Baali
- Subjects
Mathematical optimization ,Control and Optimization ,Property (programming) ,Applied Mathematics ,MathematicsofComputing_NUMERICALANALYSIS ,Unconstrained optimization ,Broyden's method ,Nonlinear system ,Positive definiteness ,Broyden–Fletcher–Goldfarb–Shanno algorithm ,Convergence (routing) ,Convex function ,Software ,Mathematics - Abstract
This paper extends the technique used in the damped BFGS method of Powell [Algorithms for nonlinear constraints that use Lagrange functions, Math. Program. 14 (1978), pp. 224–248] to the Broyden family of quasi-Newton methods, with applications to unconstrained optimization problems. Appropriate conditions on the damping technique are proposed to safely enforce the positive definiteness property for all Broyden updates (a sketch of the damped update follows this entry). It is shown that this technique maintains the q-superlinear convergence property of the restricted Broyden family of methods for uniformly convex functions. It also extends the global convergence property to all members of the family. Preliminary numerical results show that appropriate ways of employing the proposed technique improve the performance of all members of the Broyden family, substantially so in certain cases, and enforce convergence of otherwise divergent quasi-Newton methods.
- Published
- 2014
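For reference, Powell's damping, which this paper extends to the whole Broyden family, replaces the gradient difference y by y_bar = theta*y + (1 - theta)*B*s with theta chosen so that s'y_bar >= sigma * s'Bs, which keeps the updated matrix positive definite. A minimal sketch of the damped BFGS update with the customary sigma = 0.2; names are illustrative.

```python
import numpy as np

def damped_bfgs_update(B, s, y, sigma=0.2):
    """BFGS update of the Hessian approximation B with Powell's damping."""
    Bs = B @ s
    sBs = s @ Bs
    sy = s @ y
    # damping factor: ensure s' y_bar >= sigma * s' B s
    if sy >= sigma * sBs:
        theta = 1.0
    else:
        theta = (1.0 - sigma) * sBs / (sBs - sy)
    y_bar = theta * y + (1.0 - theta) * Bs
    # standard BFGS formula applied with the damped vector y_bar
    return B - np.outer(Bs, Bs) / sBs + np.outer(y_bar, y_bar) / (s @ y_bar)
```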
43. A proximal alternating linearization method for nonconvex optimization problems
- Author
-
Li-Ping Pang, Shuang Chen, and Dan Li
- Subjects
Proximal point ,Mathematical optimization ,Control and Optimization ,Optimization problem ,Linearization ,Applied Mathematics ,Convergence (routing) ,Focus (optics) ,Stationary point ,Software ,Mathematics - Abstract
In this paper, we focus on minimizing the sum of two nonsmooth functions which are possibly nonconvex. Such problems arise in many applications of practical interest. We present a proximal alternating linearization algorithm which alternately generates two approximate proximal points of the original objective function (an illustrative proximal-step sketch follows this entry). It is proved that accumulation points of the iterates are stationary points of the problem. Numerical experiments validate the theoretical convergence analysis and verify the implementation of the proposed algorithm.
- Published
- 2013
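The building block of such schemes is the proximal point prox_{tg}(v) = argmin_x g(x) + (1/2t)||x - v||^2. As a self-contained illustration (not the paper's alternating linearization algorithm), the sketch below gives the proximal operator of the l1 norm and a single proximal step for a sum f + g in which the smooth part f is linearized:

```python
import numpy as np

def prox_l1(v, t):
    """Proximal point of g(x) = ||x||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_linearized_step(grad_f, prox_g, x, t):
    """One step for min f(x) + g(x): linearize f at x, keep g, take the proximal point."""
    return prox_g(x - t * grad_f(x), t)

# toy problem: min 0.5*||x - b||^2 + ||x||_1; the minimizer is the soft-thresholding of b at 1
b = np.array([3.0, -0.2, 0.5])
x = np.zeros_like(b)
for _ in range(100):
    x = prox_linearized_step(lambda z: z - b, prox_l1, x, t=0.5)
print(x)   # approaches [2.0, 0.0, 0.0]
```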
44. A new nonmonotone trust region method for unconstrained optimization equipped by an efficient adaptive radius
- Author
-
M. Reza Peyghami, H. Mesgarani, and D. Ataee Tarzanagh
- Subjects
Trust region ,Nonlinear system ,Mathematical optimization ,Control and Optimization ,Line search ,Rate of convergence ,Applied Mathematics ,Convergence (routing) ,Unconstrained optimization ,Radius ,New variant ,Software ,Mathematics - Abstract
In this paper, we present a new nonmonotone trust region method with an adaptive radius, equipped with a nonmonotone line search technique, for solving unconstrained optimization problems. The proposed method combines a modified version of Li and Fukushima's nonmonotone technique in D.H. Li and M. Fukushima [A derivative-free line search and global convergence of Broyden-like method for nonlinear equations, Optim. Methods Softw. 13 (2000), pp. 181–201] for solving nonlinear systems with a new variant of Shi and Guo's adaptive strategy in Z.J. Shi and J.H. Guo [A new trust region method for unconstrained optimization, J. Comput. Appl. Math. 213 (2008), pp. 509–520] for updating the trust region radius. The method performs a nonmonotone Armijo-type line search whenever the trial step is rejected (an illustrative sketch follows this entry). Under some standard assumptions, we provide the global convergence property as well as the superlinear and quadratic convergence rates for the new method. Numerical results show the efficiency and effectiveness of the proposed method in practice.
- Published
- 2013
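The nonmonotone acceptance idea behind such line searches compares the trial value with a reference larger than f(x_k), for instance the maximum of the last few function values. The sketch below uses that common max-type reference for illustration; the Li–Fukushima condition modified in the paper is a different (derivative-free, residual-based) variant, and all names here are illustrative.

```python
import numpy as np

def nonmonotone_armijo(f, x, d, g, f_history, beta=0.5, sigma=1e-4, max_backtracks=30):
    """Backtracking step size: accept when f(x + a d) <= max(recent f) + sigma * a * g'd."""
    f_ref = max(f_history)            # nonmonotone reference value
    slope = sigma * (g @ d)           # g: gradient at x, d: descent direction
    a = 1.0
    for _ in range(max_backtracks):
        if f(x + a * d) <= f_ref + a * slope:
            break
        a *= beta
    return a

# toy usage on f(x) = ||x||^2 with a steepest-descent direction
f = lambda z: float(z @ z)
x = np.array([2.0, -1.0])
g = 2.0 * x
step = nonmonotone_armijo(f, x, -g, g, f_history=[f(x)])
```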
45. On the behaviour of constrained optimization methods when Lagrange multipliers do not exist
- Author
-
Roberto Andreani, José Mario Martínez, Benar Fux Svaiter, and Lúcio T. Santos
- Subjects
Continuous optimization ,Mathematical optimization ,Control and Optimization ,Applied Mathematics ,Closeness ,Constrained optimization ,Constraint (information theory) ,symbols.namesake ,Iterated function ,Lagrange multiplier ,Convergence (routing) ,symbols ,Software ,Sequential quadratic programming ,Mathematics - Abstract
Sequential optimality conditions are related to stopping criteria for nonlinear programming algorithms. Local minimizers of continuous optimization problems satisfy these conditions without constraint qualifications. It is interesting to discover whether well-known optimization algorithms generate primal–dual sequences that allow one to detect that a sequential optimality condition holds. When this is the case, the algorithm stops with a 'correct' diagnostic of success ('convergence'). Otherwise, closeness to a minimizer is not detected and the algorithm ignores that a satisfactory solution has been found. In this paper it is shown that a straightforward version of the Newton–Lagrange sequential quadratic programming method fails to generate iterates for which a sequential optimality condition is satisfied. On the other hand, a Newtonian penalty–barrier Lagrangian method guarantees that the appropriate stopping criterion eventually holds.
- Published
- 2013
46. The combination stretching function technique with simulated annealing algorithm for global optimization
- Author
-
Huiya Dai, Junqi He, and Xueli Song
- Subjects
Mathematical optimization ,Control and Optimization ,Applied Mathematics ,Particle swarm optimization ,Function (mathematics) ,Standard deviation ,Maxima and minima ,Local optimum ,Convergence (routing) ,Simulated annealing ,Global optimization ,Algorithm ,Software ,Mathematics - Abstract
A stretching function technique is combined with the simulated annealing (SA) algorithm for global optimization problems. In the presented algorithm, the local optimum information obtained by the SA search is used to build a stretching function. To speed up convergence, SA is executed iteratively on the stretching function constructed from the previously found local minima instead of on the original objective function (a simplified sketch follows this entry). Furthermore, a new trial point generation scheme is designed to increase the diversity of the trial points and make the technique more effective. In the numerical experiments, we first compare the effectiveness and efficiency of the combination of the stretching technique with the newly designed SA (SSA), with particle swarm optimization and with a differential evolution algorithm on eight typical test functions, in terms of the average number of function evaluations, its standard deviation, and the success rate. Second, we systematically compare the numerical results of SSA with traditional SA with elitism and with SA using the new generation scheme on 37 benchmark problems. The numerical results show that the hybrid method is effective for global optimization.
- Published
- 2013
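As a rough illustration of the two ingredients, the sketch below combines a basic simulated annealing loop with Metropolis acceptance and a simplified 'stretched' objective that raises the landscape around points no better than an already-found local minimum. The stretching formula here is a placeholder for the idea only, not the exact transformation or trial-point generation scheme of the paper, and all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def stretched(f, found_minima, gamma=10.0):
    """Simplified stretching: penalize points no better than a known local minimum."""
    def g(x):
        val = f(x)
        for x_bar in found_minima:
            if val >= f(x_bar):
                val += gamma * np.linalg.norm(x - x_bar)
        return val
    return g

def simulated_annealing(f, x0, T0=1.0, cooling=0.95, iters=2000, step=0.5):
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    best, fbest, T = x.copy(), fx, T0
    for _ in range(iters):
        trial = x + step * rng.standard_normal(x.size)        # random neighbour
        ft = f(trial)
        if ft < fx or rng.random() < np.exp(-(ft - fx) / T):  # Metropolis acceptance
            x, fx = trial, ft
            if fx < fbest:
                best, fbest = x.copy(), fx
        T *= cooling                                          # geometric cooling
    return best, fbest

# first SA run on a multimodal test function, second run on the stretched objective
f = lambda x: float(np.sum(x**2) + 10.0 * np.sum(1.0 - np.cos(2.0 * np.pi * x)))
x1, f1 = simulated_annealing(f, [3.0, -2.0])
x2, f2 = simulated_annealing(stretched(f, [x1]), x1)
```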
47. An implicit discretization scheme for linear-quadratic control problems with bang–bang solutions
- Author
-
Martin Seydenschwanz and Walter Alt
- Subjects
Set (abstract data type) ,Class (set theory) ,Control and Optimization ,Discretization ,Control theory ,Applied Mathematics ,Convergence (routing) ,Structure (category theory) ,Optimal control ,Bang–bang control ,Measure (mathematics) ,Software ,Mathematics - Abstract
We analyse an implicit discretization scheme for a class of linear-quadratic optimal control problems. First, we show convergence of order O(h) for the optimal values of the objective function, where h is the mesh size. Under the additional assumption that the optimal control has bang–bang structure, we show that the discrete and the continuous controls coincide except on a set of measure O(√h). (A minimal sketch of an implicit discretization follows this entry.)
- Published
- 2013
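The implicit (backward Euler) discretization referred to here replaces the dynamics x' = Ax + Bu by x_{k+1} = x_k + h(Ax_{k+1} + Bu_{k+1}), i.e. x_{k+1} = (I - hA)^{-1}(x_k + hBu_{k+1}). A minimal sketch of propagating the state under a given bang–bang control; it illustrates only the discretization, not the convergence estimates of the paper, and all names are illustrative.

```python
import numpy as np

def implicit_euler_trajectory(A, B, x0, controls, h):
    """Propagate x_{k+1} = (I - h A)^{-1} (x_k + h B u_{k+1})."""
    n = len(x0)
    M = np.linalg.inv(np.eye(n) - h * A)     # factor once for the whole horizon
    xs = [np.asarray(x0, dtype=float)]
    for u in controls:                       # controls[k] plays the role of u_{k+1}
        xs.append(M @ (xs[-1] + h * (B @ np.atleast_1d(u))))
    return np.array(xs)

# double integrator driven by a bang-bang control that switches at the midpoint
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
N, h = 100, 0.01
u = np.where(np.arange(N) < N // 2, 1.0, -1.0)
trajectory = implicit_euler_trajectory(A, B, [0.0, 0.0], u, h)
```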
48. Feasible Barzilai–Borwein-like methods for extreme symmetric eigenvalue problems
- Author
-
Bo Jiang and Yu-Hong Dai
- Subjects
Set (abstract data type) ,Mathematical optimization ,Control and Optimization ,Line search ,Applied Mathematics ,Convergence (routing) ,Order (group theory) ,Point (geometry) ,Divide-and-conquer eigenvalue algorithm ,Software ,Eigenvalues and eigenvectors ,Mathematics ,Sparse matrix - Abstract
This paper studies feasible Barzilai–Borwein (BB)-like methods for extreme symmetric eigenvalue problems. For the two-dimensional case, we establish local superlinear convergence of the FLBB, FSBB, FABB and FADBB algorithms. A counter-example is also given, showing that the algorithms may cycle or stop at a non-stationary point. To circumvent this difficulty, we propose a safeguard in choosing the stepsize and adopt an adaptive nonmonotone line search to ensure the global convergence of the AFBB-like methods (a minimal BB sketch follows this entry). Numerical experiments on a set of test problems from the UF Sparse Matrix Collection demonstrate that, compared with several available codes including eigs, irbleigs and jdcg, AFBB-like methods are very useful for large-scale sparse extreme symmetric eigenvalue problems.
- Published
- 2013
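As background, a feasible BB-like iteration for the smallest eigenvalue takes gradient steps on the Rayleigh quotient, renormalizes to stay on the unit sphere, and chooses the Barzilai–Borwein stepsize alpha_k = s's / s'y from the latest differences. The sketch below shows only this bare iteration with a crude stepsize safeguard; it is not one of the FLBB/FSBB/FABB/FADBB variants or the globalized AFBB method of the paper.

```python
import numpy as np

def bb_smallest_eigenpair(A, iters=500, tol=1e-8, seed=1):
    """Feasible BB-like gradient iteration on the Rayleigh quotient over the unit sphere."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(A.shape[0])
    x /= np.linalg.norm(x)
    grad = lambda v: 2.0 * (A @ v - (v @ A @ v) * v)   # gradient of the Rayleigh quotient on the sphere
    g, alpha = grad(x), 1e-2                           # small initial stepsize
    for _ in range(iters):
        x_new = x - alpha * g
        x_new /= np.linalg.norm(x_new)                 # renormalize to keep the iterate feasible
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        if abs(s @ y) > 1e-16:
            alpha = (s @ s) / abs(s @ y)               # BB stepsize, absolute value as a crude safeguard
        x, g = x_new, g_new
        if np.linalg.norm(g) < tol:
            break
    return x @ A @ x, x                                # (eigenvalue estimate, eigenvector estimate)

lam, v = bb_smallest_eigenpair(np.diag([1.0, 2.0, 5.0, 10.0]))   # smallest eigenvalue is 1.0
```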
49. Sampling average approximation method for a class of stochastic Nash equilibrium problems
- Author
-
Zhi-Feng He, Pei-Yu Li, and Gui-Hua Lin
- Subjects
Class (set theory) ,Mathematical optimization ,Control and Optimization ,Applied Mathematics ,Sampling (statistics) ,symbols.namesake ,Sample average approximation ,Nash equilibrium ,Convergence (routing) ,symbols ,Mixed complementarity problem ,Newton's method ,Software ,Mathematics - Abstract
We consider a class of stochastic Nash equilibrium problems (SNEPs). Under some mild conditions, we reformulate the SNEP as a stochastic mixed complementarity problem (SMCP) and apply the well-known sample average approximation (SAA) method to solve it (a minimal SAA sketch follows this entry). We further introduce a semismooth Newton method to solve the SAA problems, and a comprehensive convergence analysis is given. In addition, we demonstrate the proposed approach on a stochastic Nash equilibrium model of the wholesale gas–oil markets.
- Published
- 2013
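The SAA step itself simply replaces an expectation E[F(x, xi)] by the average over N drawn samples, producing a deterministic problem that a standard solver can handle. A minimal scalar illustration of that reduction (not the SMCP reformulation or the semismooth Newton solver of the paper), with illustrative names:

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)

# stochastic objective E_xi[(x - xi)^2] with xi ~ N(3, 1); the true minimizer is x = 3
samples = rng.normal(loc=3.0, scale=1.0, size=1000)

def saa_objective(x):
    """Sample average approximation of the expectation."""
    return float(np.mean((x - samples) ** 2))

result = minimize_scalar(saa_objective)
print(result.x)   # close to the sample mean, hence close to 3
```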
50. A hybrid splitting method for variational inequality problems with separable structure
- Author
-
Yannan Chen, Deren Han, Hongjin He, and Wenyu Sun
- Subjects
Mathematical optimization ,Nonlinear system ,Control and Optimization ,Series (mathematics) ,Applied Mathematics ,Variational inequality ,Convergence (routing) ,Structure (category theory) ,Strongly monotone ,Software ,Mathematics ,Variable (mathematics) ,Separable space - Abstract
The alternating direction method (ADM), which decomposes a large-scale variational inequality (VI) problem into a series of smaller subproblems, is very attractive for solving a class of VI problems with a separable structure. This type of method can greatly improve efficiency, but cannot avoid solving VI subproblems. In this paper, we propose a hybrid splitting method with variable parameters for separable VI problems. Specifically, the proposed method solves only one strongly monotone VI subproblem and one well-posed system of nonlinear equations in each iteration (a sketch of the related alternating direction scheme follows this entry). The global convergence of the new method is established under the same standard assumptions as in classical ADMs. Finally, some preliminary numerical results show that the proposed method performs favourably in practice.
- Published
- 2013
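For orientation, the optimization counterpart of an alternating direction method splits min f(x) + g(z) subject to Ax + Bz = c into an x-subproblem, a z-subproblem and a multiplier update. The sketch below is the classical ADMM applied to a small lasso-type instance; it is given only as background and is not the hybrid VI splitting scheme with variable parameters proposed in the paper. All names are illustrative.

```python
import numpy as np

def admm_lasso(D, b, lam=0.1, rho=1.0, iters=200):
    """Classical ADMM for min 0.5||Dx - b||^2 + lam*||z||_1  subject to  x - z = 0."""
    n = D.shape[1]
    x = z = u = np.zeros(n)
    M = np.linalg.inv(D.T @ D + rho * np.eye(n))      # factor once, reuse every iteration
    Dtb = D.T @ b
    soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
    for _ in range(iters):
        x = M @ (Dtb + rho * (z - u))                 # x-subproblem (smooth quadratic)
        z = soft(x + u, lam / rho)                    # z-subproblem (soft-thresholding)
        u = u + x - z                                 # scaled multiplier update
    return z

rng = np.random.default_rng(0)
D = rng.standard_normal((50, 10))
b = D @ np.r_[np.ones(3), np.zeros(7)] + 0.01 * rng.standard_normal(50)
print(admm_lasso(D, b))                               # approximately sparse: first three entries near 1
```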