77 results for "Linear optimization"
Search Results
2. Generating linear, semidefinite, and second-order cone optimization problems for numerical experiments.
- Author
-
Mohammadisiahroudi, Mohammadhossein, Fakhimi, Ramin, Augustino, Brandon, and Terlaky, Tamás
- Subjects
INTERIOR-point methods, SEMIDEFINITE programming, ALGORITHMS
- Abstract
The numerical performance of algorithms can be studied using test sets or procedures that generate such problems. This paper proposes various methods for generating linear, semidefinite, and second-order cone optimization problems. Specifically, we are interested in problem instances with a known optimal solution, a known optimal partition, a specific interior solution, or all of these together. In the proposed problem generators, different characteristics of optimization problems, including dimension, size, condition number, degeneracy, optimal partition, and sparsity, can be chosen to facilitate comprehensive computational experiments. We also develop procedures that generate instances with a maximally complementary optimal solution and a predetermined optimal partition, yielding challenging semidefinite and second-order cone optimization problems. The generated instances enable us to evaluate efficient interior-point methods for conic optimization problems. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
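Not the authors' generators, but the basic construction such generators rely on can be sketched in a few lines of numpy: fix an optimal partition (B, N), choose x_B > 0, s_N > 0 and an arbitrary y, then back out b and c so that optimality holds by construction. All names below are illustrative.

```python
import numpy as np

def generate_lp_with_known_solution(m, n, seed=0):
    """Sketch only: build a standard-form LP  min c^T x  s.t. Ax = b, x >= 0
    whose optimal solution and optimal partition are known by construction.
    This illustrates the general idea, not the paper's generators."""
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((m, n))            # random constraint matrix
    B = rng.choice(n, size=m, replace=False)   # indices of positive primal variables
    N = np.setdiff1d(np.arange(n), B)          # complementary index set

    x = np.zeros(n); x[B] = rng.uniform(1.0, 2.0, size=m)        # x_B > 0, x_N = 0
    y = rng.standard_normal(m)                                    # free dual variables
    s = np.zeros(n); s[N] = rng.uniform(1.0, 2.0, size=N.size)    # s_N > 0, s_B = 0

    b = A @ x               # primal feasibility holds by construction
    c = A.T @ y + s         # dual feasibility holds by construction
    # x^T s = 0, so (x, y, s) is optimal and (B, N) is the optimal partition
    return A, b, c, x, y, s, (B, N)

A, b, c, x, y, s, partition = generate_lp_with_known_solution(5, 12)
print("duality gap:", x @ s)   # prints 0.0
```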
3. Efficient Use of Quantum Linear System Algorithms in Inexact Infeasible IPMs for Linear Optimization.
- Author
-
Mohammadisiahroudi, Mohammadhossein, Fakhimi, Ramin, and Terlaky, Tamás
- Subjects
INTERIOR-point methods, CONJUGATE gradient methods, QUANTUM computing, ALGORITHMS, RESEARCH personnel, LINEAR systems
- Abstract
Quantum computing has attracted significant interest in the optimization community because it can potentially solve classes of optimization problems faster than conventional supercomputers. Several researchers have proposed quantum computing methods, especially quantum interior point methods (QIPMs), to solve convex conic optimization problems. Most of them apply a quantum linear system algorithm (QLSA) at each iteration to compute a Newton step. However, using quantum linear solvers in QIPMs comes with many challenges, such as having ill-conditioned systems and the considerable error of quantum solvers. This paper investigates in detail the use of quantum linear solvers in QIPMs. Accordingly, an Inexact Infeasible Quantum Interior Point Method (II-QIPM) is developed to solve linear optimization problems. We also discuss how an exact solution can be obtained by iterative refinement (IR) without excessive use of the quantum solvers. The proposed IR-II-QIPM shows exponential speed-up with respect to precision compared to previous II-QIPMs. Additionally, we present a quantum-inspired classical variant of the proposed IR-II-QIPM in which the QLSAs are replaced by conjugate gradient methods. This classical IR-II-IPM has some advantages over its quantum counterpart, as well as over previous classical inexact infeasible IPMs. Finally, computational results with a QISKIT implementation of our QIPM using quantum simulators are presented and analyzed. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
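The iterative refinement (IR) idea the abstract leans on can be illustrated independently of quantum hardware. A minimal sketch, with a deliberately noisy solver standing in for the inexact (e.g. quantum) linear-system solver; the names and the 1e-2 noise level are assumptions for illustration only.

```python
import numpy as np

def noisy_solve(A, r, noise=1e-2, rng=np.random.default_rng(1)):
    """Stand-in for an inexact (e.g. quantum) linear-system solver:
    returns an approximate solution corrupted by relative noise."""
    d = np.linalg.solve(A, r)
    return d * (1.0 + noise * rng.standard_normal(d.size))

def iterative_refinement(A, b, tol=1e-12, max_iter=50):
    """Classical residual-correction loop: each pass needs only a
    low-accuracy solve, yet the refined solution reaches high accuracy."""
    x = np.zeros_like(b)
    for _ in range(max_iter):
        r = b - A @ x                  # current residual
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        x = x + noisy_solve(A, r)      # correct with an inexact solve of A d = r
    return x

A = np.random.default_rng(0).standard_normal((20, 20)) + 20 * np.eye(20)
b = np.ones(20)
x = iterative_refinement(A, b)
print(np.linalg.norm(A @ x - b))       # far smaller than the 1e-2 per-solve accuracy
```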
4. A full-Newton step infeasible interior point algorithm and its parameters analysis.
- Author
-
Zhang, Lipu and Xu, Yinghong
- Subjects
INTERIOR decoration, ALGORITHMS
- Abstract
This article utilizes a simple univariate function to formulate the search direction for the full-Newton step and constructs a metric to estimate the distance between the iterate and the central path. By doing so, it designs an infeasible interior-point algorithm for linear optimization problems that employs only one kind of search step. As demonstrated in the complexity analysis, this straightforward univariate function proves highly beneficial in analyzing the algorithm. Additionally, we discuss the update parameters and threshold parameters of the algorithm, present three methods for parameter updates, and compare the efficiency of these three methods using numerical examples. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
5. Neighborhood persistency of the linear optimization relaxation of integer linear optimization.
- Author
-
Kimura, Kei and Nakayama, Kotaro
- Subjects
LINEAR programming, INTEGERS, NEIGHBORHOODS, ALGORITHMS, HEURISTIC
- Abstract
For an integer linear optimization (ILO) problem, persistency of its linear optimization (LO) relaxation is the property that, for every optimal solution of the relaxation that assigns integer values to some variables, there exists an optimal solution of the ILO problem in which these variables retain the same values. Although persistency has been used to develop heuristic, approximation, and fixed-parameter algorithms for special cases of ILO, the extent of its applicability has remained unknown in the literature. In this paper, we reveal a maximal subclass of ILO whose LO relaxation has persistency. Specifically, we show that the LO relaxation of ILO on unit-two-variable-per-inequality (UTVPI) systems has persistency and is (in a certain sense) maximal among such ILO. Our result generalizes the persistency results of Nemhauser and Trotter, Hochbaum et al., and Fiorini et al. Moreover, we propose a stronger property called neighborhood persistency and show that the LO relaxation of ILO on UTVPI systems in general has this property. Using this stronger result, we obtain a fixed-parameter algorithm (where the parameter is the optimal value) and another proof of 2-approximability for ILO on UTVPI systems in which objective functions and variables are non-negative. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
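Persistency can be seen on a tiny instance. A hedged sketch (not from the paper) using scipy: the LO relaxation of a vertex-cover ILO, whose constraints form a UTVPI system covered by the Nemhauser-Trotter result cited above, is solved and the variables that come out integral are reported; persistency guarantees that some optimal ILO solution agrees with those values. The graph and tolerances are made up for illustration.

```python
import numpy as np
from scipy.optimize import linprog

# Toy ILO: vertex cover  min 1^T x,  x_u + x_v >= 1 for every edge,  x in {0,1}.
# The LO relaxation's constraints are a UTVPI system, so persistency applies.
edges = [(1, 2), (2, 3), (1, 3), (0, 1), (0, 4), (0, 5), (0, 6)]  # triangle {1,2,3} plus hub 0
n = 7

A_ub = np.zeros((len(edges), n))        # linprog uses A_ub @ x <= b_ub, so encode
for k, (u, v) in enumerate(edges):      # x_u + x_v >= 1 as -x_u - x_v <= -1
    A_ub[k, u] = A_ub[k, v] = -1.0
b_ub = -np.ones(len(edges))

res = linprog(np.ones(n), A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1)] * n, method="highs")
x = res.x
fixable = {i: int(round(x[i])) for i in range(n) if abs(x[i] - round(x[i])) < 1e-6}
print("LO relaxation:", np.round(x, 3))     # the hub and its leaves come out integral
print("values some optimal ILO solution is guaranteed to share:", fixable)
```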
6. Fast Security-Constrained Optimal Power Flow Through Low-Impact and Redundancy Screening.
- Author
-
Weinhold, Richard and Mieth, Robert
- Subjects
REDUNDANCY in engineering, ELECTRICITY markets, ALGORITHMS, COMPUTER network security, CONSTRAINT satisfaction, MARKETING research, SCALABILITY
- Abstract
Determining contingency-aware dispatch decisions by solving a security-constrained optimal power flow (SCOPF) is challenging for real-world power systems, as the high problem dimensionality often leads to impractical computational requirements. This problem becomes more severe when the SCOPF has to be solved not only for a single instance, but for multiple periods, e.g. in the context of electricity market analyses. This paper proposes an algorithm that identifies the minimal set of constraints that exactly define the space of feasible nodal injections for a given network and set of contingency scenarios. By internalizing the technical limits of the nodal injections and enforcing a minimal worst-case impact of contingencies on line flows, the computational effort can be further reduced. The case study applies and analyzes the methods on the IEEE 118 and A&M 2000 bus systems, as well as the German and European transmission systems. In all tested cases the proposed algorithm identifies at least 95% of the network and security constraints as redundant, leading to significant SCOPF solve time reductions. Scalability and practical implementation are explicitly discussed. The code and input data of the case study are published as supplementary material to the paper under an open-source license. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
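The paper's screening procedure is not reproduced here, but the textbook redundancy test it accelerates is easy to sketch: a constraint a_i^T x <= b_i is redundant if maximizing a_i^T x over the remaining constraints cannot exceed b_i. A minimal scipy sketch with a made-up toy system follows.

```python
import numpy as np
from scipy.optimize import linprog

def redundant_rows(A, b, bounds, tol=1e-9):
    """Generic LP redundancy test (not the paper's accelerated procedure):
    row i of A x <= b is redundant if max a_i^T x over the other rows and
    the variable bounds stays below b_i."""
    m, _ = A.shape
    redundant = []
    for i in range(m):
        keep = [j for j in range(m) if j != i]
        # linprog minimizes, so maximize a_i^T x by minimizing -a_i^T x
        res = linprog(-A[i], A_ub=A[keep], b_ub=b[keep], bounds=bounds, method="highs")
        if res.status == 0 and -res.fun <= b[i] + tol:
            redundant.append(i)
    return redundant

# Toy example: the third constraint is implied by the first two.
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
b = np.array([1.0, 1.0, 3.0])
print(redundant_rows(A, b, bounds=[(0, None), (0, None)]))   # -> [2]
```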
7. A full-Newton step infeasible interior-point method based on a trigonometric kernel function without centering steps.
- Author
-
Kheirfam, Behrouz and Haghighi, Masoumeh
- Subjects
INTERIOR-point methods, KERNEL functions, TRIGONOMETRIC functions, ALGORITHMS
- Abstract
In this paper, a full-Newton step infeasible interior-point method for solving linear optimization problems is presented. In each iteration, the algorithm uses only one so-called feasibility step and computes the feasibility search directions by using a trigonometric kernel function with a double barrier term. Convergence of the algorithm is proved and it is shown that the complexity bound of the algorithm matches the currently best known iteration bound for infeasible interior-point methods. Finally, some numerical results are provided to illustrate the performance of the proposed algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
8. A corrector–predictor interior-point method with new search direction for linear optimization.
- Author
-
Darvay, Zs., Illés, T., Kheirfam, B., and Rigó, P. R.
- Subjects
INTERIOR-point methods, SQUARE root, ALGORITHMS
- Abstract
We introduce a feasible corrector–predictor interior-point algorithm (CP IPA) for solving linear optimization problems which is based on a new search direction. The search directions are obtained by using the algebraic equivalent transformation (AET) of the Newton system which defines the central path. The AET of the Newton system is based on the map given by the difference of the identity function and the square root function. We prove global convergence of the method and derive an iteration bound that matches the best iteration bounds known for these types of methods. Furthermore, we demonstrate the practical efficiency of the new algorithm by presenting numerical results. This is the first CP IPA which is based on the above-mentioned search direction. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
9. Minimum Cost Reconfigurable Network Template Design With Guaranteed QoS.
- Author
-
Xu, Xiaoli, Veitch, Darryl, Li, Yonghui, and Vucetic, Branka
- Subjects
DESIGN templates, GREEDY algorithms, CONSTRUCTION costs, ALGORITHMS, SIGNAL processing, LINEAR network coding
- Abstract
Conventional networks are based on layered protocols with intensive cross-layer interactions and complex signal processing at every node, making it difficult to meet the ultra-low latency requirement of mission critical applications in future communication systems. In this paper, we address this issue by proposing the concept of network template, which allows data to flow through it at the transmission symbol level, with minimal node processing. This is achieved by carefully calibrating the inter-connecting links among the nodes and pre-calculating the routing/network coding actions for each node, according to a set of preconfigured flows. In this paper, we focus on the minimum cost network template design to minimize the connections within the template, while ensuring that all the pre-defined configurations are feasible with the guaranteed throughput, latency and reliability. We show that the minimum cost network template design problem is difficult to solve optimally in general. We thus propose an efficient greedy algorithm to find a close-to-optimal solution. Simulation results show that the construction cost of the templates obtained by the proposed algorithm is very close to a lower bound. Furthermore, the construction cost increases only slightly with the number of pre-defined configurations, which confirms the flexibility of the network template design. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
10. DEVELOPMENT OF A FORMAL ALGORITHM FOR THE FORMULATION OF A DUAL LINEAR OPTIMIZATION PROBLEM.
- Author
-
Chernova, Ld., Titov, S., Chernov, S., Kolesnikova, K., Chernova, Lb., and Gogunskii, V.
- Subjects
PROFIT maximization, ALGORITHMS, DUALITY theory (Mathematics)
- Abstract
A rigorous formal algorithm is proposed for formulating the dual problem for different forms (general, basic, standard, and canonical) of a primal linear programming problem. First, definitions of a pair of dual problems are given for the standard form of the primal linear programming problem. This approach reflects the fact that this pair was the first to be identified, since it has a substantive interpretation: the economic interpretation of the standard problem is profit maximization in the production and sale of some types of products. This interpretation indicates the existence of the primal problem (I) and a strictly corresponding dual (conjugate) problem (II); the accompanying problem of the primal is cost minimization. The basic concept of duality theory in linear programming is that the two problems of a pair are mutually conjugate: taking the dual of the dual leads back to the primal problem. The rigorous approach to deriving an algorithm for formulating a dual problem is therefore based on the statement that the dual of the dual is the primal (original) problem, and this approach is used in the paper. For different pairs of dual problems, this statement is rigorously proved. The existing schemes for converting a primal problem to its dual are substantive (interpretation-based) rather than formal. Given this, an algorithm for the general approach to formulating pairs of conjugate problems is proposed and rigorously justified. Formalization of the developed scheme makes it easy to obtain the known pairs of dual problems. This allows, for the first time, an algorithm for constructing the dual problem for an arbitrary form of the primal problem to be proposed and validated. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
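For reference, the standard symmetric primal-dual pair the abstract starts from, and the involution (the dual of the dual is the primal) that the formal algorithm is built on, written in generic notation rather than the paper's:

```latex
% Standard (symmetric) primal-dual pair for linear programming.
\begin{align*}
\text{(I)}\quad  &\max_{x}\; c^{\mathsf T}x  &&\text{s.t. } Ax \le b,\; x \ge 0,\\
\text{(II)}\quad &\min_{y}\; b^{\mathsf T}y  &&\text{s.t. } A^{\mathsf T}y \ge c,\; y \ge 0.
\end{align*}
% Rewriting (II) as a maximization, \max_y (-b)^{\mathsf T} y \text{ s.t. } (-A)^{\mathsf T} y \le -c,\ y \ge 0,
% and applying the same primal-to-dual rule again recovers (I): the pair is mutually
% conjugate, which is the property the formal algorithm above relies on.
```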
11. A Utility Theory Based Interactive Approach to Robustness in Linear Optimization.
- Author
-
Karimi, Mehdi, Moazeni, Somayeh, and Tunçel, Levent
- Subjects
ROBUST control, MATHEMATICAL optimization, DECISION making, LINEAR programming, ALGORITHMS
- Abstract
We treat uncertain linear programming problems by utilizing the notion of weighted analytic centers and notions from the area of multi-criteria decision making. After introducing our approach, we develop interactive cutting-plane algorithms for robust optimization, based on concave and quasi-concave utility functions. In addition to practical advantages, due to the flexibility of our approach, we prove that under the theoretical framework proposed by Bertsimas and Sim (Oper Res 52:35-53, 2004), which establishes the existence of a certain convex formulation of robust optimization problems, the robust optimal solutions generated by our algorithms are at least as desirable to the decision maker as any solution generated by many other robust optimization algorithms in that framework. We present some probabilistic bounds for the feasibility of robust solutions and evaluate our approach by means of computational experiments. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
12. COMPLEXITY ANALYSIS OF INTERIOR POINT METHODS FOR LINEAR PROGRAMMING BASED ON A PARAMETERIZED KERNEL FUNCTION.
- Author
-
BOUAFIA, MOUSAAB, BENTERKI, DJAMEL, and YASSINE, ADNAN
- Subjects
KERNEL functions, INTEGER programming, INTERIOR-point methods, LOGARITHMIC functions, ALGORITHMS
- Abstract
Kernel functions play an important role in defining new search directions for primal-dual interior-point algorithms for solving linear optimization problems. This problem has attracted the attention of many researchers for some years, and the goal of these works is to find kernel functions that improve the algorithmic complexity. In this paper, we introduce a real parameter p > 0 to generalize the kernel function (5) given by Bai et al. in [Y.Q. Bai, M. El Ghami and C. Roos, SIAM J. Optim. 15 (2004) 101-128] and give the corresponding primal-dual interior-point methods for linear optimization. This parameterized kernel function yields a complexity bound similar to the one given in [Y.Q. Bai, M. El Ghami and C. Roos, SIAM J. Optim. 15 (2004) 101-128] for both large-update and small-update methods. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
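The role a kernel function plays in this line of work can be stated generically (this is the standard framework, not the specific parameterized kernel of the paper): the barrier is assembled coordinate-wise from a univariate kernel psi of the scaled variable v, and the search direction comes from a scaled Newton system whose right-hand side is driven by psi'.

```latex
% Generic kernel-function framework for primal-dual IPMs (standard setup,
% not the specific parameterized kernel introduced in the paper above).
\begin{align*}
v_i &= \sqrt{\frac{x_i s_i}{\mu}}, \qquad
\Psi(v) = \sum_{i=1}^{n} \psi(v_i), \qquad
\psi(t) = \frac{t^2 - 1}{2} - \log t \ \ \text{(the classical logarithmic kernel, as one example)},\\
A\,\Delta x &= 0, \qquad
A^{\mathsf T}\Delta y + \Delta s = 0, \qquad
s_i\,\Delta x_i + x_i\,\Delta s_i = -\mu\, v_i\, \psi'(v_i), \quad i = 1,\dots,n.
\end{align*}
% Replacing \psi (e.g. by a parameterized kernel as above) changes the right-hand
% side of the scaled Newton system and hence the resulting iteration bound.
```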
13. Secure Overlay Routing Using Key Pre-Distribution: A Linear Distance Optimization Approach.
- Author
-
Gharib, Mohammed, Yousefi'zadeh, Homayoun, and Movaghar, Ali
- Subjects
ALGORITHMS, ROUTING (Computer network management), CRYPTOGRAPHY, LINEAR programming, NETWORK performance, POLYNOMIALS
- Abstract
Key pre-distribution algorithms have recently emerged as efficient alternatives for key management in today's secure communications landscape. Secure routing techniques using key pre-distribution algorithms require special algorithms capable of finding optimal secure overlay paths. To the best of our knowledge, the literature on key pre-distribution systems still lacks optimal overlay routing algorithms. In existing work, traditional routing algorithms are typically used twice: first to find a network-layer path from the source node to the destination, and then to find the required cryptographic paths. In this paper, we model the problem of secure routing using weighted directed graphs and propose a Boolean linear programming (LP) problem to find the optimal path. Although solutions to Boolean LP problems are in general of much higher complexity, we propose a method for solving our problem in polynomial time. In order to evaluate its performance and security measures, we apply our proposed algorithm to a number of recently proposed symmetric and asymmetric key pre-distribution methods. The results show that our proposed algorithm offers significant network performance improvements as well as security enhancements when augmenting baseline techniques. [ABSTRACT FROM PUBLISHER]
- Published
- 2016
- Full Text
- View/download PDF
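The structural point that a Boolean path problem can be solved through its LP relaxation can be illustrated on a plain shortest-path instance (a generic illustration, not the paper's secure-routing model): the node-arc incidence matrix is totally unimodular, so the LP optimum is already 0/1.

```python
import numpy as np
from scipy.optimize import linprog

# Generic illustration (not the paper's secure-routing model): shortest s-t path
# written as a Boolean LP.  Because the node-arc incidence matrix is totally
# unimodular, solving the LP relaxation already returns a 0/1 path.
nodes = ["s", "a", "b", "t"]
arcs = [("s", "a", 1.0), ("s", "b", 4.0), ("a", "b", 1.0), ("a", "t", 5.0), ("b", "t", 1.0)]

A_eq = np.zeros((len(nodes), len(arcs)))          # flow conservation: out - in
for k, (u, v, _) in enumerate(arcs):
    A_eq[nodes.index(u), k] = 1.0
    A_eq[nodes.index(v), k] = -1.0
b_eq = np.array([1.0, 0.0, 0.0, -1.0])            # one unit leaves s and enters t

cost = np.array([w for _, _, w in arcs])
res = linprog(cost, A_eq=A_eq, b_eq=b_eq, bounds=[(0, 1)] * len(arcs), method="highs")
print("optimal cost:", res.fun)                   # 3.0 via s -> a -> b -> t
print("arc usage:", np.round(res.x))              # integral even though we solved an LP
```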
14. A full-Newton step infeasible interior-point method for linear optimization based on a trigonometric kernel function.
- Author
-
Kheirfam, B. and Haghighi, M.
- Subjects
INTERIOR-point methods, ITERATIVE methods (Mathematics), MATHEMATICAL optimization, ALGORITHMS, TRIGONOMETRIC functions, KERNEL functions
- Abstract
An infeasible interior-point method (IIPM) for solving linear optimization problems based on a kernel function with a trigonometric barrier term is analysed. In each iteration, the algorithm involves a feasibility step and several centring steps. The centring steps are based on the classical Newton direction, while the kernel function with a trigonometric barrier term is used to induce the feasibility step. The complexity result coincides with the best-known iteration bound for IIPMs. To our knowledge, this is the first full-Newton step IIPM based on a kernel function with a trigonometric barrier term. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
15. Efficient edge-skeleton computation for polytopes defined by oracles.
- Author
-
Emiris, Ioannis Z., Fisikopoulos, Vissarion, and Gärtner, Bernd
- Subjects
POLYTOPES, ALGORITHMS, MATHEMATICAL optimization, POLYNOMIALS, MACHINE molding (Founding), MATHEMATICAL bounds
- Abstract
In general dimension, there is no known total polynomial algorithm for either convex hull or vertex enumeration, i.e. an algorithm whose complexity depends polynomially on the input and output sizes. It is thus important to identify problems and polytope representations for which total polynomial-time algorithms can be obtained. We offer the first total polynomial-time algorithm for computing the edge-skeleton—including vertex enumeration—of a polytope given by an optimization or separation oracle, where we are also given a superset of its edge directions. We also offer a space-efficient variant of our algorithm by employing reverse search. All complexity bounds refer to the (oracle) Turing machine model. There are a number of polytope classes naturally defined by oracles; for some of them neither a vertex nor a facet representation is obvious. We consider two main applications, where we obtain (weakly) total polynomial-time algorithms: signed Minkowski sums of convex polytopes, where polytopes can be subtracted provided the signed sum is a convex polytope, and computation of secondary, resultant, and discriminant polytopes. Further applications include convex combinatorial optimization and convex integer programming, where we offer a new approach, thus removing the complexity's exponential dependence on the dimension. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
16. A modified PSO algorithm for linear optimization problem subject to the generalized fuzzy relational inequalities with fuzzy constraints (FRI-FC).
- Author
-
Ghodousian, Amin and Parvari, Maryam Raeisian
- Subjects
PARTICLE swarm optimization, ALGORITHMS, FUZZY logic, MATHEMATICAL inequalities, PROBLEM solving
- Abstract
In this paper, optimization of a linear objective function subject to generalized fuzzy relational inequalities is investigated, in which an arbitrary continuous t-norm is considered as the fuzzy composition and fuzzy inequality replaces ordinary inequality in the constraints. Unlike most optimization algorithms, for fuzzy relational inequalities with fuzzy constraints (FRI-FC) we find a near-feasible solution having a better objective value than those resulting from solving the similar problems with crisp (ordinary) inequality constraints. Such solutions are called super-optima in this paper. Subsequently, an algorithm is proposed to find a super-optimum with a pre-specified, desirable level of infeasibility. For this purpose, we first study some structural properties of the FRI-FC problem and present a new formulation that is independent of the t-norms used in the constraints of the problem. This new formulation converts the original problem into an equivalent problem with simple constraints without introducing any penalty parameters. Moreover, it is proved that the transformed equivalent problem enables our algorithm to distinguish unfavorable points and to generate a sequence of solutions converging to a super-optimum under a certain sufficient condition. Finally, a modified PSO is presented in which the ability of PSO to solve unconstrained problems with continuous domains, the structure of the transformed problem, and some fuzzy structural modifications are combined to form an efficient algorithm for solving the generalized FRI-FC problems. The modified PSO algorithm has been applied to the generalized FRI-FC problem defined with ten well-known continuous t-norms. Additionally, the idea of using FRI-FC problems as outer approximations of FRI problems with ordinary inequalities is also investigated. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
17. ANALYSIS OF COMPLEXITY OF PRIMAL-DUAL INTERIOR-POINT ALGORITHMS BASED ON A NEW KERNEL FUNCTION FOR LINEAR OPTIMIZATION.
- Author
-
SIQI LI and WEIYI QIAN
- Subjects
ALGORITHMS, KERNEL functions
- Abstract
Kernel functions play an important role in defining new search directions for primal-dual interior-point algorithms. In this paper, a new kernel function whose barrier term is of integral type is proposed. We study the properties of the new kernel function and give a primal-dual interior-point algorithm for solving linear optimization problems based on it. The polynomial complexity of the algorithm is analyzed, and iteration bounds for both large-update and small-update methods are obtained. The iteration bound for the small-update method matches the best known complexity bound. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
18. AN IMPROVED AND SIMPLIFIED FULL-NEWTON STEP O(N) INFEASIBLE INTERIOR-POINT METHOD FOR LINEAR OPTIMIZATION.
- Author
-
ROOS, C.
- Subjects
MATHEMATICAL optimization, NEWTON-Raphson method, FUNCTION algebras, ALGORITHMS, PERTURBATION theory, MATHEMATICAL inequalities
- Abstract
We present an improved version of an infeasible interior-point method for linear optimization published in 2006. In the earlier version each iteration consisted of one so-called feasibility step and a few (at most three) centering steps. In this paper each iteration consists of only a feasibility step, whereas the iteration bound improves the earlier bound by a factor of 2√2. The improvements are due to a new lemma that gives a much tighter upper bound for the proximity after the feasibility step. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
19. AN INFEASIBLE FULL-NEWTON STEP ALGORITHM FOR LINEAR OPTIMIZATION WITH ONE CENTERING STEP IN MAJOR ITERATION.
- Author
-
DARVAY, ZSOLT, PAPP, INGRID-MAGDOLNA, and TAKÁCS, PETRA-RENÁTA
- Subjects
INTERIOR-point methods, MATHEMATICAL optimization, POLYNOMIAL time algorithms, COMPUTATIONAL complexity, ALGORITHMS
- Abstract
Recently, Roos proposed a full-Newton step infeasible interior-point method (IIPM) for solving linear optimization (LO) problems. Later on, more variants of this algorithm were published. However, each main step of these methods is composed of one feasibility step and several centering steps. The purpose of this paper is to prove that by using a new search direction it is enough to take only one centering step in order to obtain a polynomial-time method. This algorithm has the same complexity as the best known IIPMs. [ABSTRACT FROM AUTHOR]
- Published
- 2014
20. Copositivity detection by difference-of-convex decomposition and ω-subdivision.
- Author
-
Bomze, Immanuel and Eichfelder, Gabriele
- Subjects
CONVEX functions, MATHEMATICAL decomposition, ALGORITHMS, MATHEMATICAL optimization, MATHEMATICAL analysis, REAL variables
- Abstract
We present three new copositivity tests based upon difference-of-convex (d.c.) decompositions and combine them into a branch-and-bound algorithm of ω-subdivision type. The tests employ LP or convex QP techniques, but can also be used heuristically with appropriate test points. We also discuss the selection of efficient d.c. decompositions and propose some preprocessing ideas based on the spectral d.c. decomposition. We report on first numerical experience with this procedure, which is very promising. [ABSTRACT FROM AUTHOR]
- Published
- 2013
- Full Text
- View/download PDF
21. Foundations of Discrete Optimization: In Transition from Linear to Non-linear Models and Methods.
- Author
-
Loera, Jesús, Hemmecke, Raymond, and Köppe, Matthias
- Abstract
Optimization is a vibrant, growing area of Applied Mathematics. Its many successful applications depend on efficient algorithms, and this has pushed the development of theory and software. In recent years there has been a resurgence of interest in using 'non-standard' techniques to estimate the complexity of computation and to guide algorithm design. New interactions with fields like algebraic geometry, representation theory, number theory, combinatorial topology, algebraic combinatorics, and convex analysis have contributed non-trivially to the foundations of computational optimization. In this expository survey we give three example areas of optimization where 'algebraic-geometric thinking' has been successful. One key point is that these new tools are suitable for studying models that use non-linear constraints together with combinatorial conditions. [ABSTRACT FROM AUTHOR]
- Published
- 2012
- Full Text
- View/download PDF
22. A constraint-reduced variant of Mehrotra's predictor-corrector algorithm.
- Author
-
Winternitz, Luke, Nicholls, Stacey, Tits, André, and O'Leary, Dianne
- Subjects
LINEAR differential equations, ALGORITHMS, STOCHASTIC convergence, LINEAR systems, NUMERICAL analysis
- Abstract
Consider linear programs in dual standard form with n constraints and m variables. When typical interior-point algorithms are used for the solution of such problems, updating the iterates, using direct methods for solving the linear systems and assuming a dense constraint matrix A, requires $\mathcal{O}(nm^{2})$ operations per iteration. When n ≫ m it is often the case that at each iteration most of the constraints are not very relevant for the construction of a good update and could be ignored to achieve computational savings. This idea was considered in the 1990s by Dantzig and Ye, Tone, Kaliski and Ye, den Hertog et al. and others. More recently, Tits et al. proposed a simple 'constraint-reduction' scheme and proved global and local quadratic convergence for a dual-feasible primal-dual affine-scaling method modified according to that scheme. In the present work, similar convergence results are proved for a dual-feasible constraint-reduced variant of Mehrotra's predictor-corrector algorithm, under less restrictive nondegeneracy assumptions. These stronger results extend to primal-dual affine scaling as a limiting case. Promising numerical results are reported. As a special case, our analysis applies to standard (unreduced) primal-dual affine scaling. While we do not prove polynomial complexity, our algorithm allows for much larger steps than in previous convergence analyses of such algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2012
- Full Text
- View/download PDF
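The constraint-reduction idea itself is easy to sketch (a hedged illustration of the selection rule only, not the convergence machinery analyzed in the paper): at each iteration keep the q constraints that currently look most relevant, e.g. those with the smallest dual slacks, and assemble the normal-equations matrix from that subset. Function and variable names below are illustrative.

```python
import numpy as np

def reduced_normal_matrix(A, x, s, q):
    """Hedged sketch of constraint reduction for a dual-standard-form LP
    (n constraints, m variables): keep only the q constraints with the
    smallest slacks s_i (the ones the iterate is closest to) and build the
    normal-equations matrix from them, dropping the per-iteration cost of
    this step from O(n m^2) to O(q m^2)."""
    Q = np.argsort(s)[:q]                 # indices of the q most "active-looking" constraints
    D2 = x[Q] / s[Q]                      # primal-dual scaling restricted to Q
    M_Q = (A[:, Q] * D2) @ A[:, Q].T      # A_Q diag(x_Q / s_Q) A_Q^T  (m x m)
    return Q, M_Q

rng = np.random.default_rng(0)
m, n = 10, 2000
A = rng.standard_normal((m, n))
x, s = rng.uniform(0.1, 1.0, n), rng.uniform(0.1, 1.0, n)
Q, M_Q = reduced_normal_matrix(A, x, s, q=50)
print(M_Q.shape)   # (10, 10), built from 50 of the 2000 constraints
```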
23. A new infeasible interior-point algorithm with full step for linear optimization based on a simple function.
- Author
-
Zhang, Lipu and Xu, Yinghong
- Subjects
MATHEMATICAL optimization, UNIVALENT functions, ALGORITHMS, COMPUTATIONAL complexity, FEASIBILITY studies, MATHEMATICAL analysis
- Abstract
In this paper, we design and analyse an infeasible interior-point algorithm based on a simple function for linear optimization. The infeasible algorithm contains two types of search directions: the feasibility search direction and the centrality search direction. Both directions are determined by the simple function. The algorithm uses full steps, so there is no need to perform a line-search procedure. Although the proposed function is simple, as will be shown, the induced infeasible algorithm enjoys the best-known iteration complexity for infeasible interior-point algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
24. An interior-point algorithm for linear optimization based on a new barrier function
- Author
-
Cho, Gyeong-Mi
- Subjects
CONSTRAINED optimization, ALGORITHMS, INTERIOR-point methods, COMPUTATIONAL complexity, LINEAR systems, ITERATIVE methods (Mathematics)
- Abstract
Abstract: Primal–dual interior-point methods (IPMs) are the most efficient methods from a computational point of view. At present the theoretical iteration bound for small-update IPMs is better than the one for large-update IPMs. However, in practice, large-update IPMs are much more efficient than small-update IPMs. Peng et al. proposed new variants of IPMs based on self-regular barrier functions and proved the so far best known complexity bounds for large-update IPMs with some specific self-regular barrier functions. Recently, Roos et al. proposed new primal–dual IPMs based on various barrier functions to improve the iteration bound for large-update methods. Motivated by their works, we define a new barrier function and propose a new primal–dual interior-point algorithm based on this function for linear optimization (LO) problems. We show that the new algorithm attains the currently best known iteration bounds for large-update and small-update methods, respectively. [Copyright © Elsevier]
- Published
- 2011
- Full Text
- View/download PDF
25. COUNTEREXAMPLE TO A CONJECTURE ON AN INFEASIBLE INTERIOR-POINT METHOD.
- Author
-
GU, G. and ROOS, C.
- Subjects
ALGORITHMS, COMPUTATIONAL complexity, LINEAR programming, BOUNDARY value problems, MATHEMATICAL optimization
- Abstract
In [SIAM J. Optim., 16 (2006), pp. 1110-1136], Roos proved that the devised full-step infeasible algorithm has O(n) worst-case iteration complexity. This complexity bound depends linearly on a parameter κ̄(ζ), which is proved to be less than √(2n). Based on extensive computational evidence (hundreds of thousands of randomly generated problems), Roos conjectured that κ̄(ζ) = 1 (Conjecture 5.1 in the above-mentioned paper), which would yield an O(√n) iteration full-Newton step infeasible interior-point algorithm. In this paper we present an example showing that κ̄(ζ) is of the order of √n, the same order as that proved in Roos's original paper. In other words, the conjecture is false. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
26. A new kernel function yielding the best known iteration bounds for primal-dual interior-point algorithms.
- Author
-
Yan Qin Bai, Jin Li Guo, and Roos, Cornelis
- Subjects
KERNEL functions, ALGORITHMS, MATHEMATICAL optimization, NUMERICAL analysis, MATHEMATICAL analysis
- Abstract
Kernel functions play an important role in defining new search directions for primal-dual interior-point algorithms for solving linear optimization problems. In this paper we present a new kernel function which yields an algorithm with the best known complexity bound for both large- and small-update methods. [ABSTRACT FROM AUTHOR]
- Published
- 2009
- Full Text
- View/download PDF
27. A SELF-REGULAR NEWTON BASED ALGORITHM FOR LINEAR OPTIMIZATION.
- Author
-
Salahi, M.
- Subjects
ALGORITHMS, MATHEMATICAL optimization, NEWTON-Raphson method, MATHEMATICAL analysis, SIMULATION methods & models, SYSTEM analysis, COMBINATORIAL optimization
- Abstract
In this paper, using the framework of self-regularity, we propose a hybrid adaptive algorithm for the linear optimization problem. If the current iterate is far from the central path, the algorithm employs a self-regular search direction; otherwise, the classical Newton search direction is employed. This feature of the algorithm allows us to prove a worst-case iteration bound. Our result matches the best iteration bound obtained by the pure self-regular approach and improves on the worst-case iteration bound of the classical algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2009
- Full Text
- View/download PDF
28. Towards auction algorithms for large dense assignment problems.
- Author
-
Buš, Libor and Tvrdík, Pavel
- Subjects
CASE-based reasoning, ALGORITHMS, BID price, PROBLEM solving, COMMERCIAL law, AUCTIONS, PRICE quotations
- Abstract
In this paper, we focus on the problem of solving large-scale instances of the linear sum assignment problem by auction algorithms. We introduce a modified auction algorithm, called the look-back auction algorithm, which extends the forward auction algorithm with the ability to reuse information from previous bids. We show that it is able to reuse information from previous bids with high efficiency for all tested types of input instances. We then discuss the design and implementation of a suite of sequential and distributed-memory auction algorithms on a Linux cluster, with an evaluation on several types of input instances of the linear sum assignment problem. Our results show that, run sequentially, the look-back auction algorithm solves nearly all types of dense instances faster than the other evaluated algorithms, and it is more stable than the forward-reverse auction algorithm for sparse instances. Our distributed-memory auction algorithms are fully memory scalable. [ABSTRACT FROM AUTHOR]
- Published
- 2009
- Full Text
- View/download PDF
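A minimal forward auction sketch (the classical Bertsekas-style scheme that the look-back variant extends; this is the baseline, not the paper's algorithm):

```python
import numpy as np

def forward_auction(benefit, eps=1e-3):
    """Classical forward auction for the (maximization) assignment problem.
    An unassigned person bids for its best object; the price rises by the bid
    increment, and eps-complementary slackness yields a near-optimal matching.
    Baseline scheme only, not the paper's look-back variant."""
    n = benefit.shape[0]
    prices = np.zeros(n)
    owner = -np.ones(n, dtype=int)          # owner[j] = person currently holding object j
    assigned = -np.ones(n, dtype=int)       # assigned[i] = object held by person i
    while (assigned < 0).any():
        i = int(np.where(assigned < 0)[0][0])       # some unassigned person
        values = benefit[i] - prices                # net value of each object for i
        j = int(np.argmax(values))                  # best object
        second = np.partition(values, -2)[-2]       # second-best net value
        prices[j] += values[j] - second + eps       # raise the price by the bid increment
        if owner[j] >= 0:
            assigned[owner[j]] = -1                 # previous owner loses the object
        owner[j], assigned[i] = i, j
    return assigned, prices

benefit = np.array([[10.0, 5.0, 8.0], [7.0, 9.0, 6.0], [8.0, 7.0, 4.0]])
assignment, prices = forward_auction(benefit)
print(assignment, benefit[np.arange(3), assignment].sum())   # optimal total benefit 25.0
```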
29. Optimization of linear objective function with max-t fuzzy relation equations.
- Author
-
Thapar, Antika, Pandey, Dhaneshwar, and Gaur, S.K.
- Subjects
MATHEMATICAL programming, ALGORITHMS, FUNCTIONAL equations, MATHEMATICAL optimization
- Abstract
Abstract: An optimization model with a linear objective function subject to max-t fuzzy relation equations as constraints is presented, where t is an Archimedean t-norm. Since the non-empty solution set of the fuzzy relation equations is in general a non-convex set, conventional linear programming methods are not suitable for solving such problems. The concept of the covering problem is applied to establish a 0–1 integer programming problem equivalent to the linear programming problem, and a binary-coded genetic algorithm is proposed to obtain the optimal solution. An example is given to illustrate the method. [Copyright © Elsevier]
- Published
- 2009
- Full Text
- View/download PDF
30. On convergent numerical algorithms for unsymmetric collocation.
- Author
-
Lee, Cheng-Feng, Ling, Leevan, and Schaback, Robert
- Subjects
RADIAL basis functions, STOCHASTIC convergence, ALGORITHMS, COLLOCATION methods, MATHEMATICS
- Abstract
In this paper, we are interested in some convergent formulations for the unsymmetric collocation method, also known as Kansa's method. We review some newly developed theories on solvability and convergence. The rates of convergence of these variations of Kansa's method are examined and verified in arbitrary-precision computations. Numerical examples confirm the theories that the modified Kansa's method converges faster than the interpolant to the solution; that is, exponential convergence for the multiquadric and Gaussian radial basis functions (RBFs). Some numerical algorithms are proposed for efficiency and accuracy in practical applications of Kansa's method. In double precision, even for very large RBF shape parameters, we show that the modified Kansa's method, through a subspace selection using a greedy algorithm, can produce acceptable approximate solutions. A benchmark algorithm is used to verify the optimality of the selection process. [ABSTRACT FROM AUTHOR]
- Published
- 2009
- Full Text
- View/download PDF
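A bare-bones 1-D Kansa (unsymmetric collocation) sketch with multiquadric RBFs, solving u'' = f with Dirichlet data; the shape parameter and node count are arbitrary choices, and the greedy subspace selection discussed in the abstract is omitted.

```python
import numpy as np

# Minimal 1-D Kansa (unsymmetric collocation) sketch: solve u'' = f on [0, 1],
# u(0) = u(1) = 0, with multiquadric RBFs.  Illustration only; no greedy
# subspace selection or shape-parameter tuning as discussed in the abstract.
c = 0.5                                        # multiquadric shape parameter (arbitrary)
phi   = lambda x, xj: np.sqrt((x - xj) ** 2 + c ** 2)
phixx = lambda x, xj: c ** 2 / ((x - xj) ** 2 + c ** 2) ** 1.5   # d^2/dx^2 of phi

u_exact = lambda x: np.sin(np.pi * x)
f       = lambda x: -np.pi ** 2 * np.sin(np.pi * x)              # so that u'' = f

centers = np.linspace(0.0, 1.0, 25)
interior, boundary = centers[1:-1], centers[[0, -1]]

# Unsymmetric collocation matrix: PDE rows at interior points, BC rows at the boundary.
A = np.vstack([phixx(interior[:, None], centers[None, :]),
               phi(boundary[:, None], centers[None, :])])
rhs = np.concatenate([f(interior), u_exact(boundary)])           # boundary values are 0
lam = np.linalg.solve(A, rhs)                                    # may be ill-conditioned

x_test = np.linspace(0.0, 1.0, 200)
u_num = phi(x_test[:, None], centers[None, :]) @ lam
print("max error:", np.abs(u_num - u_exact(x_test)).max())
```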
31. A HYBRID ADAPTIVE ALGORITHM FOR LINEAR OPTIMIZATION.
- Author
-
Salahi, Maziar and Terlaky, Tamás
- Subjects
ALGORITHMS, COMPUTER programming, MATHEMATICAL optimization, OPERATIONS research, LAGRANGIAN functions, PROXIMITY spaces, MANAGEMENT science, NETWORK analysis (Planning), TOPOLOGY
- Abstract
Recently, using the framework of self-regularity, Salahi in his Ph.D. thesis proposed an adaptive single-step algorithm which takes advantage of the current iterate information to find an appropriate barrier parameter rather than using a fixed fraction of the current duality gap. However, his algorithm might take at most one bad step after each good step in order to keep the iterate in a certain neighborhood of the central path. In this paper, using the same framework, we propose a hybrid adaptive algorithm. Depending on the position of the current iterate, our new algorithm uses either the classical Newton search direction or a self-regular search direction. The larger the distance from the central path, the larger the barrier degree of the self-regular search direction is. Unlike the classical approach, here we control the iterates by guaranteeing a certain reduction of the proximity measure. This itself leads to a one-dimensional equation which determines the target barrier parameter at each iteration. This allows us to have a large-update algorithm without any need for safeguards or special steps. Finally, we prove that our hybrid adaptive algorithm has an O(n^{71/100} log((x^0)^T s^0/ε)) worst-case iteration complexity. [ABSTRACT FROM AUTHOR]
- Published
- 2009
- Full Text
- View/download PDF
32. Mehrotra-type predictor-corrector algorithm revisited.
- Author
-
Salahi, Maziar and Terlaky, Tamás
- Subjects
NUMERICAL analysis, ALGORITHMS, MATHEMATICAL optimization, LINEAR statistical models, POLYNOMIALS
- Abstract
Motivated by a numerical example that shows that a feasible version of Mehrotra's original predictor-corrector algorithm might be inefficient in practice, Salahi et al. [M. Salahi, J. Peng and T. Terlaky, On Mehrotra-type predictor-corrector algorithms, to appear in SIAM J. Optim.] proposed a so-called safeguard-based variant of the algorithm that enjoys polynomial iteration complexity while its practical efficiency is preserved. In this paper, we analyse the same Mehrotra algorithm from a different perspective. We give a condition on the maximum step size in the predictor direction, the violation of which might imply a very small or zero step size in the corrector direction of the algorithm. This might explain the reason for the occasional ill behaviour of the feasible version of Mehrotra's original algorithm. We propose to cut the maximum step size in the predictor direction if it is above a certain threshold. If this cut does not give a desirable step size, then we cut it a second time, which allows us to give a lower bound for the step size in the corrector direction. This enables us to prove an O(n^{5/2} log(n/ε)) worst-case iteration complexity bound for the new algorithm. By slightly modifying the Newton system in the corrector step, we reduce the iteration complexity to O(n^{3/2} log(n/ε)). Finally, we report some illustrative computational results using the McIPM software package. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
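The quantities being safeguarded can be made concrete with a small sketch (a hypothetical helper, simplified; not the authors' exact rule): compute Mehrotra's adaptive centering parameter from the affine-scaling direction and cap the predictor step size at a threshold before the corrector step.

```python
import numpy as np

def mehrotra_parameters(x, s, dx_aff, ds_aff, alpha_cap=0.95):
    """Sketch of the quantities discussed above (hypothetical helper, simplified):
    the maximum feasible predictor step, Mehrotra's heuristic centering
    parameter sigma = (mu_aff / mu)^3, and a capped ("cut") predictor step
    in the spirit of the safeguard described in the abstract."""
    mu = x @ s / x.size

    # largest alpha keeping x + alpha*dx_aff >= 0 and s + alpha*ds_aff >= 0
    def max_step(v, dv):
        neg = dv < 0
        return 1.0 if not neg.any() else min(1.0, np.min(-v[neg] / dv[neg]))
    alpha = min(max_step(x, dx_aff), max_step(s, ds_aff))

    mu_aff = ((x + alpha * dx_aff) @ (s + alpha * ds_aff)) / x.size
    sigma = (mu_aff / mu) ** 3            # Mehrotra's adaptive centering heuristic
    alpha_used = min(alpha, alpha_cap)    # cut an overly long predictor step
    return alpha_used, sigma

rng = np.random.default_rng(0)
x, s = rng.uniform(0.5, 2.0, 6), rng.uniform(0.5, 2.0, 6)
dx, ds = rng.standard_normal(6), rng.standard_normal(6)
print(mehrotra_parameters(x, s, dx, ds))
```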
33. WATERMILL: An Optimized Fingerprinting System for Databases under Constraints.
- Author
-
Lafaye, Julien, Gross-Amblard, David, Constantin, Camelia, and Guerrouani, Meryem
- Subjects
DIGITAL watermarking, HUMAN fingerprints, DATA encryption, DATABASES, LINEAR programming, PROBABILITY theory, ALGORITHMS, OPEN source software, COMPUTER science
- Abstract
This paper presents a watermarking/fingerprinting system for relational databases. It features a built-in declarative language to specify usability constraints that watermarked data sets must comply with. For a subset of these constraints, namely, weight-independent constraints, we propose a novel watermarking strategy that consists of translating them into an integer linear program. We show two watermarking strategies: an exhaustive one based on integer linear programming constraint solving and a scalable pairing heuristic. Fingerprinting applications, for which several distinct watermarks need to be computed, benefit from the reduced computation time of our method that precomputes the watermarks only once. Moreover, we show that our method enables practical collusion-secure fingerprinting since the precomputed watermarks are based on binary alterations located at exactly the same positions. The paper includes an in-depth analysis of false-hit and false-miss occurrence probabilities for the detection algorithm. Experiments performed on our open source software WATERMILL assess the watermark robustness against common attacks and show that our method outperforms the existing ones concerning the watermark embedding speed. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
34. Primal-dual Interior-point Algorithms for Second-order Cone Optimization Based on a New Parametric Kernel Function.
- Author
-
Yan Qin Bai and Guo Qiang Wang
- Subjects
KERNEL functions, COMPLEX variables, GEOMETRIC function theory, MATHEMATICAL optimization, ALGORITHMS
- Abstract
A class of polynomial primal-dual interior-point algorithms for second-order cone optimization based on a new parametric kernel function, with parameters p and q, is presented. Its growth term is between linear and quadratic. Some new tools for the analysis of the algorithms are proposed. The complexity bounds of $O(\sqrt{N}\log N\log\frac{N}{\varepsilon})$ for large-update methods and $O(\sqrt{N}\log\frac{N}{\varepsilon})$ for small-update methods match the best known complexity bounds obtained for these methods. Numerical tests demonstrate the behavior of the algorithms for different values of the parameters p and q. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
35. ON MEHROTRA-TYPE PREDICTOR-CORRECTOR ALGORITHMS.
- Author
-
Salahi, M., Peng, J., and Terlaky, T.
- Subjects
ALGORITHMS, FOUNDATIONS of arithmetic, MATHEMATICAL optimization, MATHEMATICAL analysis, NONSMOOTH optimization, CONSTRAINED optimization, LINEAR algebra, STOCHASTIC convergence
- Abstract
In this paper we discuss the polynomiality of a feasible version of Mehrotra's predictor-corrector algorithm, whose variants have been widely used in several interior point method (IPM)-based optimization packages. A numerical example is given that shows that the adaptive choice of centering parameter and correction terms in this algorithm may lead to small steps being taken in order to keep the iterates in a large neighborhood of the central path, which is important for proving polynomial complexity properties of this method. Motivated by this example, we introduce a safeguard in Mehrotra's algorithm that keeps the iterates in the prescribed neighborhood and allows us to obtain a positive lower bound on the step size. This safeguard strategy is also used when the affine scaling direction performs poorly. We prove that the safeguarded algorithm will terminate after at most O(n^2 log((x^0)^T s^0/ε)) iterations. By modestly modifying the corrector direction, we reduce the iteration complexity to O(n log((x^0)^T s^0/ε)). To ensure fast asymptotic convergence of the algorithm, we change Mehrotra's updating scheme of the centering parameter slightly while keeping the safeguard. The new algorithms have the same order of iteration complexity as the safeguarded algorithms but enjoy superlinear convergence as well. Numerical results using the McIPM and LIPSOL software packages are reported. [ABSTRACT FROM AUTHOR]
- Published
- 2007
36. A finite termination Mehrotra-type predictor–corrector algorithm
- Author
-
Salahi, Maziar
- Subjects
ALGORITHMS, TECHNICAL literature, UNIVERSITIES & colleges
- Abstract
Abstract: Mehrotra-type predictor–corrector algorithms are the backbone of most interior point method based software packages. Salahi et al. [M. Salahi, J. Peng, T. Terlaky, On Mehrotra-type predictor–corrector algorithms, Technical Report 2005/4, Advanced Optimization Lab., McMaster University, Hamilton, ON, Canada. http://www.cas.mcmaster.ca/~oplab/publication] in their recent works have shown some ill behaviors of Mehrotra's original algorithm, which motivated them to modify it in order to achieve polynomial iteration complexity while preserving its practical efficiency. In this paper we analyze the same algorithm from a different perspective and give a condition on the maximum feasible step size in the predictor step, violation of which might lead to a very small or even zero step size in the corrector step. If the maximum step size in the predictor step is above a certain threshold, then we cut it to satisfy the derived condition. This enables us to prove that the algorithm terminates finitely. [Copyright © Elsevier]
- Published
- 2007
- Full Text
- View/download PDF
37. Simplified O(nL) infeasible interior-point algorithm for linear optimization using full-Newton steps.
- Author
-
Mansouri, H. and Roos, C.
- Subjects
MATHEMATICAL optimization, NEWTON-Raphson method, ITERATIVE methods (Mathematics), ALGORITHMS, POLYNOMIALS, NUMERICAL analysis, MATHEMATICAL analysis, MATHEMATICS, MAXIMA & minima
- Abstract
Roos [Roos, C., 2006, A full-Newton step O(n) infeasible interior-point algorithm for linear optimization. SIAM Journal on Optimization, 16(4), 1110-1136] presented a new primal-dual infeasible interior-point algorithm that uses full-Newton steps and whose iteration bound coincides with the best-known bound for infeasible interior-point algorithms. Each iteration consists of a step that restores the feasibility for an intermediate problem (the so-called feasibility step) and a few (usual) centering steps. No more than O(n log(n/ε)) iterations are required for getting an ε-solution of the problem at hand, which coincides with the best-known bound for infeasible interior-point algorithms. In this article, we introduce a different feasibility step and show that the same complexity result can be obtained with a relatively simpler analysis. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
38. Adaptive Large-Neighborhood Self-Regular Predictor-Corrector Interior-Point Methods for Linear Optimization.
- Author
-
Salahi, M. and Terlaky, T.
- Subjects
INTERIOR-point methods, MATHEMATICAL programming, MATHEMATICAL optimization, LINEAR programming, ITERATIVE methods (Mathematics), FUNCTIONAL equations, ALGORITHMS, PROXIMITY spaces, AFFINE algebraic groups
- Abstract
It is known that predictor-corrector methods in a large neighborhood of the central path are among the most efficient interior-point methods (IPMs) for linear optimization (LO) problems. The best iteration bound based on the classical logarithmic barrier function is O(n log(n/ϵ)). In this paper, we propose a family of self-regular proximity-based predictor-corrector (SRPC) IPMs for LO in a large neighborhood of the central path. In the predictor step, we use either an affine scaling or a self-regular direction; in the corrector step, we always use a self-regular direction. Our new algorithms use a special proximity function with different search directions and thus allow us to improve the so far best theoretical iteration complexity for a family of SRPC IPMs. An O(√n exp((1 - q + log n)/2) log n log(n/ϵ)) worst-case iteration bound with superlinear convergence is established, where q is the barrier degree of the SR proximity function. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
39. Polynomial time second order Mehrotra-type predictor–corrector algorithms
- Author
-
Salahi, Maziar and Mahdavi-Amiri, Nezam
- Subjects
ALGORITHMS, TECHNICAL literature, SOCIAL groups, INTEGRATED software
- Abstract
Abstract: Salahi et al. [M. Salahi, J. Peng, T. Terlaky, On Mehrotra-type predictor–corrector algorithms, Technical Report 2005/4, Advanced Optimization Lab, Department of Computing and Software, McMaster University, http://www.cas.mcmaster.ca/~oplab/publication, SIAM Journal on Optimization, submitted for publication] give a numerical example showing that Mehrotra's original predictor–corrector algorithm, which is the basis of interior point methods software packages, may be very inefficient in practice. This motivated Salahi et al. to come up with a safeguarded algorithm that enjoys a polynomial iteration complexity and is efficient in practice. Here we discuss a variation of Mehrotra's second-order predictor–corrector algorithm [S. Mehrotra, On the implementation of a (primal–dual) interior point method, SIAM Journal on Optimization 2 (1992) 575–601] and use the example of Salahi et al. to show that the algorithm may have to take very small steps in order to remain in a certain neighborhood of the central path, and subsequently needs an excessively large number of iterations to stop. We then introduce a safeguard that guarantees a lower bound for the maximum step size in the corrector step of the algorithm and subsequently a polynomial number of iterations. A second modification of the algorithm is proposed which enjoys an even better iteration complexity. Some limited but encouraging computational results are reported. [Copyright © Elsevier]
- Published
- 2006
- Full Text
- View/download PDF
40. A FULL-NEWTON STEP O(n) INFEASIBLE INTERIOR-POINT ALGORITHM FOR LINEAR OPTIMIZATION.
- Author
-
Roos, C.
- Subjects
MATHEMATICAL optimization, INTERIOR-point methods, ALGORITHMS, ITERATIVE methods (Mathematics), PERTURBATION theory
- Abstract
We present a primal-dual infeasible interior-point algorithm. As usual, the algorithm decreases the duality gap and the feasibility residuals at the same rate. Assuming that an optimal solution exists, it is shown that at most O(n) iterations suffice to reduce the duality gap and the residuals by the factor 1/e. This implies an O(n log(n/ε)) iteration bound for getting an ε-solution of the problem at hand, which coincides with the best known bound for infeasible interior-point algorithms. The algorithm constructs strictly feasible iterates for a sequence of perturbations of the given problem and its dual problem. A special feature of the algorithm is that it uses only full-Newton steps. Two types of full-Newton steps are used, so-called feasibility steps and usual (centering) steps. Starting at strictly feasible iterates of a perturbed pair, (very) close to its central path, feasibility steps serve to generate strictly feasible iterates for the next perturbed pair. By accomplishing a few centering steps for the new perturbed pair we obtain strictly feasible iterates close enough to the central path of the new perturbed pair. The algorithm finds an optimal solution or detects infeasibility or unboundedness of the given problem. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
41. A Very Fast Learning Method for Neural Networks Based on Sensitivity Analysis.
- Author
-
Castillo, Enrique, Guijarro-Berdiñas, Bertha, Fontenla-Romero, Oscar, Alonso-Betanzos, Amparo, and Bengio, Yoshua
- Subjects
LEARNING, ALGORITHMS, EQUATIONS, LEAST squares, STOCHASTIC convergence
- Abstract
This paper introduces a learning method for two-layer feedforward neural networks based on sensitivity analysis, which uses a linear training algorithm for each of the two layers. First, random values are assigned to the outputs of the first layer; later, these initial values are updated based on sensitivity formulas, which use the weights in each of the layers; the process is repeated until convergence. Since these weights are learnt by solving a linear system of equations, there is an important saving in computational time. The method also gives the local sensitivities of the least-squares errors with respect to input and output data, with no extra computational cost, because the necessary information becomes available without extra calculations. This method, called the Sensitivity-Based Linear Learning Method, can also be used to provide an initial set of weights, which significantly improves the behavior of other learning algorithms. The theoretical basis for the method is given, and its performance is illustrated by its application to several examples in which it is compared with several learning algorithms on well-known data sets. The results have shown a learning speed generally faster than other existing methods. In addition, it can be used as an initialization tool for other well-known methods, with significant improvements. [ABSTRACT FROM AUTHOR]
- Published
- 2006
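The layer-wise linear-solve idea can be sketched loosely in numpy (an illustration of "learn each layer by least squares" given assigned hidden outputs; the sensitivity-based update of those outputs, which is the paper's actual contribution, is omitted):

```python
import numpy as np

def fit_layers_given_hidden(X, Y, Z):
    """Loose sketch of the layer-wise linear solves described above (the
    sensitivity-based update of Z itself is omitted): given assigned
    hidden-layer outputs Z in (-1, 1), both weight matrices are obtained
    from ordinary linear least-squares problems, with no gradient descent."""
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])    # inputs with a bias column
    Zb = np.hstack([Z, np.ones((Z.shape[0], 1))])    # hidden outputs with a bias column

    # Layer 1: want tanh(Xb @ W1) ~= Z, i.e. Xb @ W1 ~= arctanh(Z)  -> linear in W1
    W1, *_ = np.linalg.lstsq(Xb, np.arctanh(Z), rcond=None)
    # Layer 2: want Zb @ W2 ~= Y                                    -> linear in W2
    W2, *_ = np.linalg.lstsq(Zb, Y, rcond=None)
    return W1, W2

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (200, 3))
Y = np.sin(X.sum(axis=1, keepdims=True))             # toy regression target
Z = rng.uniform(-0.9, 0.9, (200, 10))                # step 0: randomly assigned hidden outputs

W1, W2 = fit_layers_given_hidden(X, Y, Z)
pred = np.hstack([np.tanh(np.hstack([X, np.ones((200, 1))]) @ W1), np.ones((200, 1))]) @ W2
print("training MSE after one pair of linear solves:", np.mean((pred - Y) ** 2))
```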
42. Complexity analysis of interior-point methods for linear optimization based on some conditions on kernel function
- Author
-
Amini, K. and Peyghami, M.R.
- Subjects
MATHEMATICAL optimization, LINEAR systems, KERNEL functions, ALGORITHMS
- Abstract
Abstract: Interior point methods have shown their power in solving linear optimization problems and large classes of other optimization problems. However, at present there is still a gap between the practical behavior of these algorithms and their theoretical worst-case complexity. The so-called large-update interior point methods perform in practice much better than the small-update methods, which have the best known theoretical complexity. Recently, this gap was reduced by Peng, Roos and Terlaky by introducing new self-regular kernel functions. In this paper, focusing on the linear optimization problem and motivated by the self-regular family of kernel functions, we impose some mild conditions on kernel functions and give a new class of kernel functions. We also give a simple complexity analysis for large-update interior point methods based on the kernel functions in this new class. Finally, we apply our analysis to two families of kernel functions in our new class. We also explore the complexity of the algorithm and show that the so far best worst-case iteration bound can be achieved in a special case. [Copyright © Elsevier]
- Published
- 2006
- Full Text
- View/download PDF
43. The Complexity of Self-Regular Proximity Based Infeasible IPMs.
- Author
-
Maziar Salahi, Tamas Terlaky, and Guoqing Zhang
- Subjects
MATHEMATICAL optimization, INTERIOR-point methods, MATHEMATICAL programming, FUNCTIONAL equations, MATHEMATICAL analysis, NUMERICAL analysis, CALCULUS of variations, ALGORITHMS, POLYNOMIALS
- Abstract
Primal-Dual Interior-Point Methods (IPMs) have shown their power in solving large classes of optimization problems. In this paper a self-regular proximity based Infeasible Interior Point Method (IIPM) is proposed for linear optimization problems. First we mention some interesting properties of a specific self-regular proximity function, studied recently by Peng and Terlaky, and use it to define infeasible neighborhoods. These simple but interesting properties of the proximity function indicate that, when the current iterate is in a large neighborhood of the central path, large-update IIPMs emerge as the only natural choice. Then, we apply these results to design a specific self-regularity based dynamic large-update IIPM in a large neighborhood. The new dynamic IIPM always takes large updates and does not utilize any inner iterations to get centered. An O(n² log(n/ε)) worst-case iteration bound of the algorithm is established. Finally, we report the main results of our computational experiments. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
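The IIPM in the record above works in an infeasibility-aware neighborhood of the central path built from a self-regular proximity function. The sketch below is not that neighborhood; it uses a textbook-style infeasible neighborhood (residuals shrinking at least as fast as μ, products xᵢsᵢ bounded below by γμ) only to illustrate what such a membership test combines. The thresholds γ and β are illustrative parameters.

```python
import numpy as np

# Textbook-style membership test for an infeasible-IPM neighborhood.  This is
# NOT the self-regular neighborhood of the paper; it only illustrates the two
# ingredients such neighborhoods combine: centrality (x_i * s_i not too small
# relative to mu) and infeasibility shrinking proportionally to mu.

def in_infeasible_neighborhood(A, b, c, x, y, s, mu, mu0, r0_norm,
                               gamma=1e-3, beta=10.0):
    r_primal = A @ x - b            # primal residual
    r_dual = A.T @ y + s - c        # dual residual
    r_norm = np.linalg.norm(np.concatenate([r_primal, r_dual]))
    centered = np.all(x * s >= gamma * mu)              # centrality condition
    feasible_enough = r_norm <= beta * (mu / mu0) * r0_norm
    return bool(centered and feasible_enough)
```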
44. A Predictor-Corrector Algorithm for Linear Optimization Based on a Specific Self-Regular Proximity Function.
- Author
-
Jiming Peng, Terlaky, Tamas, and Yunbin Zhao
- Subjects
- *
ALGORITHMS , *MATHEMATICAL optimization , *PROXIMITY spaces , *MATHEMATICAL functions , *INTERIOR-point methods - Abstract
It is well known that the so-called first-order predictor-corrector methods working in a large neighborhood of the central path are among the most efficient interior-point methods (IPMs) for linear optimization (LO) problems. However, the best known iteration complexity of this type of method is $O(n \log\frac{(x^0)^Ts^0}{\varepsilon})$. It is of interest to investigate whether the complexity of first-order predictor-corrector type methods can be further improved. In this paper, based on a specific self-regular proximity function, we define a new large neighborhood of the central path. In particular, we show that the new neighborhood matches the standard large neighborhood that is defined by the infinity norm and widely used in the IPM literature. A new first-order predictor-corrector method for LO that uses a search direction induced by self-regularity in corrector steps is proposed. We prove that our predictor-corrector algorithm, working in a large neighborhood, has an $O(\sqrt{n}\log n \log\frac{(x^0)^Ts^0}{\varepsilon})$ iteration bound. Local superlinear convergence of the algorithm is also established. [ABSTRACT FROM AUTHOR]
- Published
- 2005
- Full Text
- View/download PDF
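A first-order predictor-corrector IPM, as in the record above, alternates affine-scaling (predictor) steps with centering (corrector) steps, each obtained from a Newton system. The sketch below computes a plain affine-scaling predictor direction for the pair min c'x subject to Ax = b, x >= 0; it is a generic textbook direction, not the self-regularity-induced corrector direction proposed in the paper.

```python
import numpy as np

# Generic affine-scaling (predictor) direction for linear optimization, from
# the Newton system
#   A dx = b - A x,  A^T dy + ds = c - A^T y - s,  S dx + X ds = -X S e.
# The paper's corrector step would replace the right-hand side of the last
# block by a self-regular centering term; that is not reproduced here.

def affine_scaling_direction(A, b, c, x, y, s):
    m, n = A.shape
    X = np.diag(x)
    S = np.diag(s)
    # Assemble the full Newton system in the unknowns (dx, dy, ds).
    K = np.block([
        [A,                np.zeros((m, m)), np.zeros((m, n))],
        [np.zeros((n, n)), A.T,              np.eye(n)],
        [S,                np.zeros((n, m)), X],
    ])
    rhs = np.concatenate([b - A @ x, c - A.T @ y - s, -x * s])
    d = np.linalg.solve(K, rhs)
    return d[:n], d[n:n + m], d[n + m:]       # dx, dy, ds
```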
45. An adaptive self-regular proximity based large-update IPM for LO.
- Author
-
Salahi, Maziar and Terlaky, Tamás
- Subjects
- *
ALGORITHMS , *MATHEMATICAL programming , *MATHEMATICAL optimization , *IRREGULARITIES of distribution (Number theory) , *PROXIMITY spaces , *MATHEMATICAL analysis - Abstract
Primal-dual interior-point methods (IPMs) have shown their power in solving large classes of optimization problems. However, there is still a discrepancy between the practical behavior of these algorithms and their theoretical worst-case complexity results with respect to the update strategies of the duality gap parameter in the algorithm. Recently, this discrepancy was significantly reduced by Peng, J., Roos, C. and Terlaky, T., 2002, Self-Regularity: A New Paradigm for Primal-Dual Interior-Point Algorithms (Princeton, NJ: Princeton University Press), who introduced a new family of IPMs based on self-regular (SR) proximity functions. In this paper, based on a class of SR proximities, we propose an adaptive single-step large-update SR-IPM that is very close to what is used in the McIPM software package. At each step, our algorithm chooses a large update of the target value adaptively, depending on the position of the current iterate. This adaptive choice of the target value differs from the one presented in Peng, Roos and Terlaky (2002), where the target μ value is reduced by a fixed factor when the iterate is sufficiently well centered. An O(qn^((q+1)/(2q)) log(n/ε)) worst-case iteration bound of the algorithm is established, where q is the barrier degree of the SR proximity. For q = log n, our algorithm achieves the best complexity known so far for large-update IPMs. Email: salahim@math.mcmaster.ca [ABSTRACT FROM AUTHOR]
- Published
- 2005
- Full Text
- View/download PDF
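The barrier degree q in the bound quoted above refers to the self-regular kernel family of Peng, Roos and Terlaky. The sketch below uses the commonly cited form of that family, treated here as an illustration; the exact parameterization behind the paper's class of SR proximities may differ.

```python
import numpy as np

# The commonly cited self-regular kernel family Upsilon_{p,q}(t) with growth
# degree p >= 1 and barrier degree q >= 1.  The barrier degree q is the
# parameter in the O(q * n^((q+1)/(2q)) * log(n/eps)) bound quoted above;
# choosing q = log(n) turns that bound into O(sqrt(n) * log(n) * log(n/eps)).
# Shown as an illustration only; the paper's parameterization may differ.

def upsilon(t, p=1.0, q=2.0):
    return ((t**(p + 1) - 1.0) / (p * (p + 1))
            + (t**(1.0 - q) - 1.0) / (q * (q - 1.0))
            + (p - q) / (p * q) * (t - 1.0))

def sr_proximity(x, s, mu, p=1.0, q=2.0):
    """Proximity of (x, s) to the mu-center, measured by the SR barrier."""
    v = np.sqrt(x * s / mu)
    return np.sum(upsilon(v, p, q))
```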
46. COMPARATIVE STUDY OF KERNEL FUNCTIONS FOR PRIMAL-DUAL INTERIOR-POINT ALGORITHMS IN LINEAR OPTIMIZATION.
- Author
-
Bai, Y. Q., El Ghami, M., and Roos, C.
- Subjects
- *
KERNEL functions , *MATHEMATICAL optimization , *INTERIOR-point methods , *ALGORITHMS , *ITERATIVE methods (Mathematics) - Abstract
Recently, so-called self-regular barrier functions for primal-dual interior-point methods (IPMs) for linear optimization were introduced. Each such barrier function is determined by its (univariate) self-regular kernel function. We introduce a new class of kernel functions. The class is defined by some simple conditions on the kernel function and its derivatives. These properties enable us to derive many new and tight estimates that greatly simplify the analysis of IPMs based on these kernel functions. In both the algorithm and its analysis we use a single neighborhood of the central path; the neighborhood naturally depends on the kernel function. An important conclusion is that inverse functions of suitable restrictions of the kernel function and its first derivative more or less determine the behavior of the corresponding IPMs. Based on the new estimates we present a simple and unified computational scheme for the complexity analysis of kernel functions in the new class. We apply this scheme to seven specific kernel functions. Some of these functions are self-regular and others are not. One of the functions differs from the others, and from all self-regular functions, in the sense that its growth term is linear. Iteration bounds for both large- and small-update methods are derived. It is shown that small-update methods based on the new kernel functions all have the same complexity as the classical primal-dual IPM, namely O(√n log(n/ε)). For large-update methods the best obtained bound is O(√n (log n) log(n/ε)), which until now has been the best known bound for such methods. [ABSTRACT FROM AUTHOR]
- Published
- 2004
- Full Text
- View/download PDF
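One takeaway of the abstract above is that inverse functions of suitable restrictions of the kernel function ψ and of its derivative ψ' drive the iteration-bound estimates. Since such inverses rarely have closed forms, they can be evaluated numerically; the sketch below does so for the classical logarithmic kernel, as an illustration of the general idea rather than a reproduction of the paper's scheme. The particular restrictions and interval endpoints used are illustrative choices.

```python
import numpy as np
from scipy.optimize import brentq

# Numerically invert restrictions of a kernel function and its derivative, as
# these inverses govern the iteration-bound estimates discussed above.  The
# logarithmic kernel and the interval endpoints are illustrative choices.

def psi(t):
    return (t**2 - 1.0) / 2.0 - np.log(t)

def dpsi(t):
    return t - 1.0 / t

def inv_psi_right(value):
    """Inverse of psi restricted to t >= 1, where psi is increasing."""
    return brentq(lambda t: psi(t) - value, 1.0, 1e6)

def inv_neg_dpsi_left(value):
    """Inverse of -psi' restricted to 0 < t <= 1, where -psi' is decreasing."""
    return brentq(lambda t: -dpsi(t) - value, 1e-12, 1.0)

print(inv_psi_right(psi(2.0)))          # ~2.0
print(inv_neg_dpsi_left(-dpsi(0.5)))    # ~0.5
```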
47. A polynomial-time algorithm for linear optimization based on a new simple kernel function.
- Author
-
Bai, Y. Q. and Roos, C.
- Subjects
- *
KERNEL functions , *MATHEMATICAL optimization , *ALGORITHMS - Abstract
We present a new barrier function, based on a kernel function with a linear growth term and an inverse linear barrier term. Existing kernel functions have a quadratic (or higher-degree) growth term and a barrier term that is either transcendental (e.g. logarithmic) or of a more complicated algebraic form. So the new kernel function has the simplest possible form among all existing kernel functions. It is shown that a primal-dual interior-point algorithm for linear optimization (LO) based on the new kernel function has the complexity bounds O(n log(n/ε)) and O(√n log(n/ε)) for large- and small-update methods, respectively. These complexity bounds are the same as those for the classical algorithm based on the logarithmic barrier function. [ABSTRACT FROM AUTHOR]
- Published
- 2003
- Full Text
- View/download PDF
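A kernel function with a linear growth term and an inverse linear barrier term, of the kind the abstract above describes, can be written as ψ(t) = (t − 1) + (1/t − 1) = t + 1/t − 2. Whether this is exactly the authors' function is an assumption here, but it illustrates the "simplest possible form": ψ(1) = 0, ψ'(1) = 0, and ψ(t) → ∞ both as t → 0 and as t → ∞.

```python
import numpy as np

# A kernel with a linear growth term (t) and an inverse linear barrier term
# (1/t).  Treat this specific formula as an assumption: it matches the
# description in the abstract, but may differ from the authors' exact
# function by constants or scaling.

def psi_simple(t):
    return t + 1.0 / t - 2.0

def barrier(v):
    """Barrier induced by the kernel: Psi(v) = sum_i psi(v_i)."""
    return np.sum(psi_simple(v))

t = np.array([0.25, 0.5, 1.0, 2.0, 4.0])
print(psi_simple(t))   # vanishes at t = 1 and grows on both sides
```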
48. New Complexity Analysis of the Primal-Dual Newton Method for Linear Optimization.
- Author
-
Peng, J., Roos, C., and Terlaky, T.
- Subjects
NEWTON-Raphson method ,ITERATIVE methods (Mathematics) ,ALGORITHMS ,MATHEMATICAL optimization ,LINEAR statistical models ,OPERATIONS research - Abstract
We deal with the primal-dual Newton method for linear optimization (LO). Nowadays, this method is the workhorse in all efficient interior-point algorithms for LO, and its analysis is the basic element in all polynomiality proofs of such algorithms. At present there is still a gap between the practical behavior of the algorithms and the theoretical performance results, in favor of the practical behavior. This is especially true for so-called large-update methods. We present some new analysis tools, based on a proximity measure introduced by Jansen et al. in 1994, that may help to close this gap. This proximity measure has not been used in the analysis of large-update methods before. The new analysis does not improve the known complexity results but provides a unified way to analyze both large-update and small-update methods. [ABSTRACT FROM AUTHOR]
- Published
- 2000
- Full Text
- View/download PDF
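The proximity measure attributed to Jansen et al. in the abstract above is commonly stated as δ(x, s; μ) = ½‖v − v⁻¹‖ with v = sqrt(xs/μ); the short sketch below computes it under that assumption (the paper may use a scaled variant).

```python
import numpy as np

# Proximity measure commonly attributed to Jansen et al. (1994):
#   delta(x, s; mu) = 0.5 * || v - v^{-1} ||_2,  v_i = sqrt(x_i * s_i / mu).
# delta = 0 exactly on the central path.  The exact scaling used in the paper
# is an assumption here.

def delta(x, s, mu):
    v = np.sqrt(x * s / mu)
    return 0.5 * np.linalg.norm(v - 1.0 / v)

x = np.array([1.5, 0.8, 1.0])
s = np.array([0.7, 1.2, 1.0])
print(delta(x, s, mu=1.0))
```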
49. Foundations of Discrete Optimization: In Transition from Linear to Non-linear Models and Methods
- Author
-
De Loera, Jesús A., Hemmecke, Raymond, and Köppe, Matthias
- Published
- 2012
- Full Text
- View/download PDF
50. A Space Decomposition-Based Deterministic Algorithm for Solving Linear Optimization Problems.
- Author
-
Febres, Gerardo L.
- Subjects
- *
DETERMINISTIC algorithms , *LINEAR equations , *LINEAR systems , *VECTOR spaces , *MATHEMATICAL optimization , *ALGORITHMS - Abstract
This document introduces a method to solve linear optimization problems. The method's strategy is based on the bounding condition that each constraint exerts over the dimensions of the problem. The solution of a linear optimization problem lies at the intersection of the constraints defining the extreme vertex. The method decomposes the n-dimensional linear problem into n−1 two-dimensional problems. After studying the role of the constraints in these two-dimensional problems, we identify the constraints intersecting at the extreme vertex. We then formulate a linear equation system that leads directly to the solution of the optimization problem. The algorithm is remarkably different from previously existing linear programming algorithms in the sense that it does not iterate; it is deterministic. A fully C#-coded implementation of the algorithm is made available. We believe this algorithm, and the methods applied to classify constraints according to their role, open up a useful framework for studying complex linear problems through feasible-space and constraint analysis. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
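Once the constraints intersecting at the extreme vertex have been identified, the final step described in the abstract above reduces to solving a square linear system formed from those binding constraints. The sketch below shows only that last step, with the set of binding constraints supplied by hand; the paper's decomposition procedure for finding them is not reproduced here.

```python
import numpy as np

# Last step of the approach described above: once the n constraints binding at
# the optimal extreme vertex are known, the vertex is the solution of a square
# linear system.  Identifying the binding constraints (via the paper's
# two-dimensional decompositions) is assumed to have been done already.

def vertex_from_binding_constraints(A, b, binding):
    """Solve A[binding] x = b[binding] for the vertex those constraints define."""
    return np.linalg.solve(A[binding], b[binding])

# Toy 2-D problem: maximize x + y subject to x <= 3, y <= 2, x >= 0, y >= 0.
A = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
b = np.array([3.0, 2.0, 0.0, 0.0])
print(vertex_from_binding_constraints(A, b, binding=[0, 1]))   # [3. 2.]
```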