316 results for "constrained optimization"
Search Results
2. Constrained optimization of rank-one functions with indicator variables.
- Author
- Shafiee, Soroosh and Kılınç-Karzan, Fatma
- Subjects
- *CONSTRAINED optimization, *LOGISTIC regression analysis, *MACHINE learning, *INDEPENDENT variables, *CONTINUOUS functions, *MATHEMATICAL reformulation
- Abstract
Optimization problems involving minimization of a rank-one convex function over constraints modeling restrictions on the support of the decision variables emerge in various machine learning applications. These problems are often modeled with indicator variables for identifying the support of the continuous variables. In this paper we investigate compact extended formulations for such problems through perspective reformulation techniques. In contrast to the majority of previous work that relies on support function arguments and disjunctive programming techniques to provide convex hull results, we propose a constructive approach that exploits a hidden conic structure induced by perspective functions. To this end, we first establish a convex hull result for a general conic mixed-binary set in which each conic constraint involves a linear function of independent continuous variables and a set of binary variables. We then demonstrate that extended representations of sets associated with epigraphs of rank-one convex functions over constraints modeling indicator relations naturally admit such a conic representation. This enables us to systematically give perspective formulations for the convex hull descriptions of these sets with nonlinear separable or non-separable objective functions, sign constraints on continuous variables, and combinatorial constraints on indicator variables. We illustrate the efficacy of our results on sparse nonnegative logistic regression problems. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
3. Short-step methods are not strongly polynomial-time.
- Author
- Zong, Manru, Lee, Yin Tat, and Yue, Man-Chung
- Subjects
- *CONSTRAINED optimization, *ALGORITHMS
- Abstract
Short-step methods are an important class of algorithms for solving convex constrained optimization problems. In this short paper, we show that under very mild assumptions on the self-concordant barrier and the width of the ℓ₂-neighbourhood, any short-step interior-point method is not strongly polynomial-time. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
4. Branch-and-bound performance estimation programming: a unified methodology for constructing optimal optimization methods.
- Author
- Das Gupta, Shuvomoy, Van Parys, Bart P. G., and Ryu, Ernest K.
- Subjects
- *PROBLEM solving, *CONSTRAINED optimization, *ALGORITHMS
- Abstract
We present the Branch-and-Bound Performance Estimation Programming (BnB-PEP), a unified methodology for constructing optimal first-order methods for convex and nonconvex optimization. BnB-PEP poses the problem of finding the optimal optimization method as a nonconvex but practically tractable quadratically constrained quadratic optimization problem and solves it to certifiable global optimality using a customized branch-and-bound algorithm. By directly confronting the nonconvexity, BnB-PEP offers significantly more flexibility and removes the many limitations of the prior methodologies. Our customized branch-and-bound algorithm, through exploiting specific problem structures, outperforms the latest off-the-shelf implementations by orders of magnitude, accelerating the solution time from hours to seconds and weeks to minutes. We apply BnB-PEP to several setups for which the prior methodologies do not apply and obtain methods with bounds that improve upon prior state-of-the-art results. Finally, we use the BnB-PEP methodology to find proofs with potential function structures, thereby systematically generating analytical convergence proofs. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
5. Constrained composite optimization and augmented Lagrangian methods.
- Author
- De Marchi, Alberto, Jia, Xiaoxi, Kanzow, Christian, and Mehlitz, Patrick
- Subjects
- *CONSTRAINED optimization, *NONSMOOTH optimization, *ALGORITHMS
- Abstract
We investigate finite-dimensional constrained structured optimization problems, featuring composite objective functions and set-membership constraints. Offering an expressive yet simple language, this problem class provides a modeling framework for a variety of applications. We study stationarity and regularity concepts, and propose a flexible augmented Lagrangian scheme. We provide a theoretical characterization of the algorithm and its asymptotic properties, deriving convergence results for fully nonconvex problems. It is demonstrated how the inner subproblems can be solved by off-the-shelf proximal methods, although any solver can be adopted insofar as it returns approximate stationary points. Finally, we describe our matrix-free implementation of the proposed algorithm and test it numerically. Illustrative examples show the versatility of constrained composite programs as a modeling tool and expose difficulties arising in this vast problem class. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
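For orientation alongside entry 5: a minimal sketch of the classical augmented Lagrangian iteration on an equality-constrained toy problem. The problem data, penalty value, and exact inner solve are illustrative assumptions; the paper's scheme handles composite objectives and set-membership constraints and only requires approximate stationary points from the inner solver.

```python
import numpy as np

# Toy problem: min 0.5*||x - c||^2  s.t.  A x = b.
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
c = np.array([2.0, 0.0])

x, lam, rho = np.zeros(2), np.zeros(1), 10.0
for _ in range(50):
    # Inner subproblem: min_x 0.5*||x - c||^2 + lam^T (A x - b) + (rho/2)*||A x - b||^2.
    # For this quadratic it reduces to a linear system; in general any solver
    # returning an approximate stationary point would do.
    H = np.eye(2) + rho * A.T @ A
    x = np.linalg.solve(H, c - A.T @ lam + rho * A.T @ b)
    lam = lam + rho * (A @ x - b)   # classical first-order multiplier update
print(x, lam)   # x -> (1.5, -0.5), the projection of c onto {x1 + x2 = 1}
```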
6. The augmented Lagrangian method can approximately solve convex optimization with least constraint violation.
- Author
- Dai, Yu-Hong and Zhang, Liwei
- Subjects
- *SET functions, *NONLINEAR equations, *VALUATION of real property, *POINT set theory, *CONSTRAINED optimization, *DIFFERENTIAL inclusions
- Abstract
There are many important practical optimization problems whose feasible regions are not known to be nonempty, and for which optimizers of the objective function with the least constraint violation are preferred. A natural way of dealing with such problems is to extend the nonlinear optimization problem to one that optimizes the objective function over the set of points with the least constraint violation. This leads to the study of the shifted problem. This paper focuses on the constrained convex optimization problem. A sufficient condition for the closedness of the set of feasible shifts is presented, and the continuity properties of the optimal value function and the solution mapping for the shifted problem are studied. Properties of the conjugate dual of the shifted problem are discussed through the relations between the dual function and the optimal value function. The solvability of the dual of the optimization problem with the least constraint violation is investigated. It is shown that, if the least violated shift is in the domain of the subdifferential of the optimal value function, then this dual problem has an unbounded solution set. Under this condition, the optimality conditions for the problem with the least constraint violation are established in terms of the augmented Lagrangian. It is shown that the augmented Lagrangian method has the properties that the sequence of shifts converges to the least violated shift and the sequence of multipliers is unbounded. Moreover, it is proved that the augmented Lagrangian method is able to find an approximate solution to the problem with the least constraint violation, and that it has a linear rate of convergence under an error bound condition. The augmented Lagrangian method is applied to an illustrative convex second-order cone constrained optimization problem with least constraint violation, and numerical results verify our theoretical results. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
7. Fast Augmented Lagrangian Method in the convex regime with convergence guarantees for the iterates.
- Author
- Boţ, Radu Ioan, Csetnek, Ernö Robert, and Nguyen, Dang-Khoa
- Subjects
- *DIFFERENTIAL equations, *DYNAMICAL systems, *CONTINUOUS functions, *DIFFERENTIABLE functions, *CONVEX functions, *CONSTRAINED optimization
- Abstract
This work aims to minimize a continuously differentiable convex function with Lipschitz continuous gradient under linear equality constraints. The proposed inertial algorithm results from the discretization of the second-order primal-dual dynamical system with asymptotically vanishing damping term addressed by Boţ and Nguyen (J. Differential Equations 303:369–406, 2021), and it is formulated in terms of the Augmented Lagrangian associated with the minimization problem. The general setting we consider for the inertial parameters covers the three classical rules by Nesterov, Chambolle–Dossal and Attouch–Cabot used in the literature to formulate fast gradient methods. For these rules, we obtain in the convex regime convergence rates of order O(1/k²) for the primal-dual gap, the feasibility measure, and the objective function value. In addition, we prove that the generated sequence of primal-dual iterates converges to a primal-dual solution in a general setting that covers the two latter rules. This is the first result which provides the convergence of the sequence of iterates generated by a fast algorithm for linearly constrained convex optimization problems without additional assumptions such as strong convexity. We also emphasize that all convergence results of this paper are compatible with the ones obtained in Boţ and Nguyen (J. Differential Equations 303:369–406, 2021) in the continuous setting. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
8. Probability maximization via Minkowski functionals: convex representations and tractable resolution.
- Author
- Bardakci, I. E., Jalilzadeh, A., Lagoa, C., and Shanbhag, U. V.
- Subjects
- *INTEGER approximations, *FUNCTIONALS, *STOCHASTIC approximation, *SMOOTHNESS of functions, *PROBABILITY theory, *CONSTRAINED optimization, *CONVEX sets
- Abstract
In this paper, we consider maximizing the probability P{ζ ∣ ζ ∈ K(x)} over a closed and convex set X, a special case of the chance-constrained optimization problem. Suppose K(x) ≜ {ζ ∈ K ∣ c(x, ζ) ≥ 0}, where ζ is uniformly distributed on a convex and compact set K and c(x, ζ) is defined as either c(x, ζ) ≜ 1 − (ζᵀx)^m with m ≥ 0 (Setting A) or c(x, ζ) ≜ Tx − ζ (Setting B). We show that in either setting, by leveraging recent findings in the context of non-Gaussian integrals of positively homogeneous functions, P{ζ ∣ ζ ∈ K(x)} can be expressed as the expectation of a suitably defined continuous function F(•, ξ) with respect to an appropriately defined Gaussian density (or its variant), i.e. E_p̃[F(x, ξ)]. Aided by a recent observation in convex analysis, we then develop a convex representation of the original problem requiring the minimization of g(E[F(•, ξ)]) over X, where g is an appropriately defined smooth convex function. Traditional stochastic approximation schemes cannot contend with the minimization of g(E[F(•, ξ)]) over X, since conditionally unbiased sampled gradients are unavailable. We then develop a regularized variance-reduced stochastic approximation (r-VRSA) scheme that obviates the need for such unbiasedness by combining iterative regularization with variance reduction. Notably, r-VRSA is characterized by almost-sure convergence guarantees, a convergence rate of O(1/k^{1/2−a}) in expected sub-optimality with a > 0, and a sample complexity of O(1/ε^{6+δ}) with δ > 0. To the best of our knowledge, this may be the first such scheme for probability maximization problems with convergence and rate guarantees. Preliminary numerics on a portfolio selection problem (Setting A) and a set-covering problem (Setting B) suggest that the scheme competes well with naive mini-batch SA schemes as well as integer programming approximation methods. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
9. Unifying mirror descent and dual averaging.
- Author
- Juditsky, Anatoli, Kwon, Joon, and Moulines, Éric
- Subjects
- *OPTIMIZATION algorithms, *CONSTRAINED optimization, *MIRRORS
- Abstract
We introduce and analyze a new family of first-order optimization algorithms which generalizes and unifies both mirror descent and dual averaging. Within the framework of this family, we define new algorithms for constrained optimization that combine the advantages of mirror descent and dual averaging. Our preliminary simulation study shows that these new algorithms significantly outperform available methods in some situations. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
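To make the two schemes unified in entry 9 concrete, here is a minimal sketch of entropic mirror descent and dual averaging on the probability simplex. The toy objective and the fixed step size are assumptions for illustration, not the paper's new family of algorithms.

```python
import numpy as np

def softmax(v):
    w = np.exp(v - v.max())
    return w / w.sum()

# Toy objective over the simplex: f(x) = 0.5*||x - p||^2, gradient x - p.
p = np.array([0.7, 0.2, 0.1])
grad = lambda x: x - p
eta, T, n = 0.5, 200, 3

# Mirror descent with the entropy mirror map: multiplicative-weights update.
x = np.ones(n) / n
for _ in range(T):
    x = softmax(np.log(x) - eta * grad(x))   # x_{k+1} ∝ x_k * exp(-eta * grad)

# Dual averaging: accumulate raw gradients, map the sum through the mirror map.
y, z = np.ones(n) / n, np.zeros(n)
for _ in range(T):
    z += grad(y)
    y = softmax(-eta * z)

print(x, y)   # both approach p, the minimizer over the simplex
```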
10. Riemannian Optimization via Frank-Wolfe Methods.
- Author
- Weber, Melanie and Sra, Suvrit
- Subjects
- *CONSTRAINED optimization, *MATRICES (Mathematics), *GEODESICS, *RIEMANNIAN manifolds
- Abstract
We study projection-free methods for constrained Riemannian optimization. In particular, we propose a Riemannian Frank-Wolfe (RFW) method that handles constraints directly, in contrast to prior methods that rely on (potentially costly) projections. We analyze non-asymptotic convergence rates of RFW to an optimum for geodesically convex problems, and to a critical point for nonconvex objectives. We also present a practical setting under which RFW can attain a linear convergence rate. As a concrete example, we specialize RFW to the manifold of positive definite matrices and apply it to two tasks: (i) computing the matrix geometric mean (Riemannian centroid); and (ii) computing the Bures-Wasserstein barycenter. Both tasks involve geodesically convex interval constraints, for which we show that the Riemannian "linear" oracle required by RFW admits a closed-form solution; this result may be of independent interest. We complement our theoretical results with an empirical comparison of RFW against state-of-the-art Riemannian optimization methods, and observe that RFW performs competitively on the task of computing Riemannian centroids. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
11. Bound-constrained global optimization of functions with low effective dimensionality using multiple random embeddings.
- Author
- Cartis, Coralia, Massart, Estelle, and Otemissov, Adilet
- Subjects
- *GLOBAL optimization, *CONSTRAINED optimization, *SPECIAL functions, *MATHEMATICAL optimization, *PROBLEM solving, *SCALABILITY
- Abstract
We consider the bound-constrained global optimization of functions with low effective dimensionality, which are constant along an (unknown) linear subspace and only vary over the effective (complement) subspace. We aim to implicitly explore the intrinsic low dimensionality of the constrained landscape using feasible random embeddings, in order to understand and improve the scalability of algorithms for the global optimization of these special-structure problems. A reduced subproblem formulation is investigated that solves the original problem over a random low-dimensional subspace subject to affine constraints, so as to preserve feasibility with respect to the given domain. Under reasonable assumptions, we show that the probability that the reduced problem is successful in solving the original, full-dimensional problem is positive. Furthermore, in the case when the objective's effective subspace is aligned with the coordinate axes, we provide an asymptotic bound on this success probability that captures its polynomial dependence on the effective and, surprisingly, ambient dimensions. We then propose X-REGO, a generic algorithmic framework that uses multiple random embeddings, solving the above reduced problem repeatedly, approximately, and possibly adaptively. Using the success probability of the reduced subproblems, we prove that X-REGO converges globally, with probability one, and linearly in the number of embeddings, to an ε-neighbourhood of a constrained global minimizer. Our numerical experiments on special-structure functions illustrate our theoretical findings and the improved scalability of X-REGO variants when coupled with state-of-the-art global (and even local) optimization solvers for the subproblems. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
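A minimal sketch of the reduced-subproblem idea behind entry 11 (X-REGO), under illustrative assumptions: a toy function with effective dimension 2, Gaussian embeddings, and crude clipping onto the box. The actual framework preserves feasibility through affine constraints in the reduced problem rather than clipping.

```python
import numpy as np
from scipy.optimize import minimize

D, d = 50, 2                                       # ambient vs. embedded dimension
f = lambda x: (x[0] - 1.0)**2 + (x[1] + 0.5)**2    # constant along 48 directions

rng = np.random.default_rng(0)
best, x_best = np.inf, None
for _ in range(5):                                 # multiple random embeddings
    A = rng.standard_normal((D, d))                # random map R^d -> R^D
    x0 = np.zeros(D)                               # feasible anchor in [-5, 5]^D
    g = lambda y: f(np.clip(x0 + A @ y, -5.0, 5.0))  # reduced subproblem g(y) = f(x0 + A y)
    res = minimize(g, np.zeros(d), method="Nelder-Mead")
    if res.fun < best:
        best, x_best = res.fun, np.clip(x0 + A @ res.x, -5.0, 5.0)
print(best)                                        # near 0 with high probability
```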
12. Stochastic first-order methods for convex and nonconvex functional constrained optimization.
- Author
- Boob, Digvijay, Deng, Qi, and Lan, Guanghui
- Subjects
- *CONSTRAINED optimization, *OPERATIONS research, *INTERIOR-point methods, *ROBUST optimization, *MACHINE learning, *SUPERVISED learning, *EXTRAPOLATION
- Abstract
Functional constrained optimization is becoming more and more important in machine learning and operations research. Such problems have potential applications in risk-averse machine learning, semisupervised learning, and robust optimization, among others. In this paper, we first present a novel Constraint Extrapolation (ConEx) method for solving convex functional constrained problems, which utilizes linear approximations of the constraint functions to define the extrapolation (or acceleration) step. We show that this method is a unified algorithm that achieves the best-known rate of convergence for solving different functional constrained convex composite problems, including convex or strongly convex, and smooth or nonsmooth problems with stochastic objective and/or stochastic constraints. Many of these rates of convergence were in fact obtained for the first time in the literature. In addition, ConEx is a single-loop algorithm that does not involve any penalty subproblems. Contrary to existing primal-dual methods, it does not require the projection of Lagrangian multipliers into a (possibly unknown) bounded set. Second, for nonconvex functional constrained problems, we introduce a new proximal point method which transforms the initial nonconvex problem into a sequence of convex problems by adding quadratic terms to both the objective and constraints. Under a certain MFCQ-type assumption, we establish the convergence and rate of convergence of this method to KKT points when the convex subproblems are solved exactly or inexactly. For large-scale and stochastic problems, we present a more practical proximal point method in which the approximate solutions of the subproblems are computed by the aforementioned ConEx method. Under a strong feasibility assumption, we establish the total iteration complexity of ConEx required by this inexact proximal point method for a variety of problem settings, including nonconvex smooth or nonsmooth problems with stochastic objective and/or stochastic constraints. To the best of our knowledge, most of these convergence and complexity results of the proximal point method for nonconvex problems also seem to be new in the literature. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
13. An outer-approximation guided optimization approach for constrained neural network inverse problems.
- Author
- Cheon, Myun-Seok
- Subjects
- *INVERSE problems, *CONVEX domains, *CONSTRAINED optimization, *APPROXIMATION algorithms
- Abstract
This paper discusses an outer-approximation guided optimization method for constrained neural network inverse problems with rectified linear units. A constrained neural network inverse problem refers to an optimization problem that finds the best set of input values of a given trained neural network in order to produce a predefined desired output in the presence of constraints on the input values. This paper analyzes the characteristics of optimal solutions of neural network inverse problems with rectified activation units and proposes an outer-approximation algorithm that exploits these characteristics. The proposed outer-approximation guided optimization comprises primal and dual phases. The primal phase incorporates neighbor curvatures with neighbor outer-approximations to expedite the process. The dual phase identifies and utilizes the structure of local convex regions to improve convergence to a local optimal solution. Finally, computational experiments demonstrate the superiority of the proposed algorithm compared to a projected gradient method. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
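The projected gradient baseline that entry 13 compares against, sketched for a one-hidden-layer ReLU network. The random weights, box constraint, and step size are assumptions purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((8, 3)), rng.standard_normal(8)   # "trained" weights,
W2, b2 = rng.standard_normal((1, 8)), rng.standard_normal(1)   # assumed given

def forward(x):
    z = W1 @ x + b1
    return W2 @ np.maximum(z, 0.0) + b2, z

def grad(x, y_star):
    y, z = forward(x)
    r = y - y_star                          # gradient of the loss 0.5*||y - y*||^2
    return W1.T @ ((W2.T @ r) * (z > 0))    # backprop through the ReLU layer

# Inverse problem: find inputs in the box [-1, 1]^3 producing the desired output.
y_star, x = np.array([0.5]), np.zeros(3)
for _ in range(500):
    x = np.clip(x - 0.05 * grad(x, y_star), -1.0, 1.0)   # projected gradient step
print(forward(x)[0])   # close to y_star whenever it is attainable over the box
```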
14. Semidefinite programming hierarchies for constrained bilinear optimization.
- Author
- Berta, Mario, Borderi, Francesco, Fawzi, Omar, and Scholz, Volkher B.
- Subjects
- *ERROR correction (Information theory), *SEMIDEFINITE programming, *CONSTRAINED optimization, *CHANNEL coding, *BILINEAR forms
- Abstract
We give asymptotically converging semidefinite programming hierarchies of outer bounds on bilinear programs of the form Tr[H(D ⊗ E)], maximized with respect to semidefinite constraints on D and E. Applied to the problem of approximate error correction in quantum information theory, this gives hierarchies of efficiently computable outer bounds on the success probability of approximate quantum error correction codes in any dimension. The first level of our hierarchies corresponds to a previously studied relaxation (Leung and Matthews in IEEE Trans Inf Theory 61(8):4486, 2015), and positive partial transpose constraints can be added to give a sufficient criterion for exact convergence at a given level of the hierarchy. To quantify the worst-case convergence speed of our sum-of-squares hierarchies, we derive novel quantum de Finetti theorems that allow imposing linear constraints on the approximating state. In particular, we give finite de Finetti theorems for quantum channels, quantifying closeness to the convex hull of product channels as well as closeness to local operations and classical forward communication assisted channels. As a special case this constitutes a finite version of Fuchs-Schack-Scudo's asymptotic de Finetti theorem for quantum channels. Finally, our proof methods answer a question of Brandão and Harrow (Proceedings of the forty-fourth annual ACM symposium on theory of computing, STOC'12, p 307, 2012) by improving the approximation factor of de Finetti theorems with no symmetry from O(d^{k/2}) to poly(d, k), where d denotes the local dimension and k the number of copies. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
15. A primal–dual algorithm for risk minimization.
- Author
- Kouri, Drew P. and Surowiec, Thomas M.
- Subjects
- *NONSMOOTH optimization, *NUMERICAL functions, *PARTIAL differential equations, *CONSTRAINED optimization, *ALGORITHMS, *BANACH spaces
- Abstract
In this paper, we develop an algorithm to efficiently solve risk-averse optimization problems posed in reflexive Banach space. Such problems often arise in many practical applications as, e.g., optimization problems constrained by partial differential equations with uncertain inputs. Unfortunately, for many popular risk models including the coherent risk measures, the resulting risk-averse objective function is nonsmooth. This lack of differentiability complicates the numerical approximation of the objective function as well as the numerical solution of the optimization problem. To address these challenges, we propose a primal–dual algorithm for solving large-scale nonsmooth risk-averse optimization problems. This algorithm is motivated by the classical method of multipliers and by epigraphical regularization of risk measures. As a result, the algorithm solves a sequence of smooth optimization problems using derivative-based methods. We prove convergence of the algorithm even when the subproblems are solved inexactly and conclude with numerical examples demonstrating the efficiency of our method. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
16. Existence of efficient and properly efficient solutions to problems of constrained vector optimization.
- Author
- Kim, Do Sang, Mordukhovich, Boris S., Phạm, Tiến-Sơn, and Van Tuyen, Nguyen
- Subjects
- *CONSTRAINED optimization, *VECTOR data, *EXISTENCE theorems
- Abstract
The paper is devoted to the existence of global optimal solutions for a general class of nonsmooth problems of constrained vector optimization without boundedness assumptions on the constraint set. The main attention is paid to the two major notions of optimality in vector problems: Pareto efficiency and proper efficiency in the sense of Geoffrion. Employing adequate tools of variational analysis and generalized differentiation, we first establish relationships between the notions of properness, M-tameness, and the Palais–Smale conditions formulated for the restriction of the vector cost mapping to the constraint set. These results are instrumental in deriving verifiable necessary and sufficient conditions for the existence of Pareto efficient solutions in vector optimization. Furthermore, the developed approach allows us to obtain new sufficient conditions for the existence of Geoffrion-properly efficient solutions to such constrained vector problems. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
17. Characterizing quasiconvexity of the pointwise infimum of a family of arbitrary translations of quasiconvex functions, with applications to sums and quasiconvex optimization.
- Author
- Flores-Bazán, F., García, Y., and Hadjisavvas, N.
- Subjects
- *CONSTRAINED optimization, *CONVEXITY spaces, *CONVEX functions
- Abstract
It is well-known that the sum of two quasiconvex functions is not quasiconvex in general, and the same occurs with the minimum. Although apparently these two statements (for the sum or minimum) have nothing in common, they are related, as we show in this paper. To develop our study, the notion of quasiconvex family is introduced, and we establish various characterizations of such a concept: one of them being the quasiconvexity of the pointwise infimum of arbitrary translations of quasiconvex functions in the family; another is the convexity of the union of any two of their sublevel sets; a third one is the quasiconvexity of the sum of the quasiconvex functions, composed with arbitrary nondecreasing functions. As a by-product, any of the aforementioned characterizations, besides providing quasiconvexity of the sum, also implies the semistrict quasiconvexity of the sum if every function in the family has the same property. Three concrete applications in quasiconvex optimization are presented: First, we establish the convexity of the (Benson) proper efficient solution set to a quasiconvex vector optimization problem; second, we derive conditions that allow us to reduce a constrained optimization problem to one with a single inequality constraint, and finally, we show a class of quasiconvex minimization problems having zero duality gap. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
18. On distributionally robust chance constrained programs with Wasserstein distance.
- Author
- Xie, Weijun
- Subjects
- *CONSTRAINED optimization, *DISTRIBUTION (Probability theory), *DISTANCES, *VALUE at risk, *INTEGERS
- Abstract
This paper studies a distributionally robust chance constrained program (DRCCP) with a Wasserstein ambiguity set, where the uncertain constraints must be satisfied with probability at least a given threshold for all probability distributions of the uncertain parameters within a chosen Wasserstein distance from an empirical distribution. In this work, we investigate equivalent reformulations and approximations of such problems. We first show that a DRCCP can be reformulated as a conditional value-at-risk constrained optimization problem, and thus admits tight inner and outer approximations. We also show that a DRCCP with a bounded feasible region is mixed-integer representable by introducing big-M coefficients and additional binary variables. For a DRCCP with pure binary decision variables, by exploring the submodular structure, we show that it admits a big-M-free formulation, which can be solved by a branch-and-cut algorithm. Finally, we present a numerical study to illustrate the effectiveness of the proposed formulations. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
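For reference on the conditional value-at-risk reformulation mentioned in entry 18, the classical Rockafellar-Uryasev representation, stated here for a fixed distribution (the paper works with the worst case over a Wasserstein ambiguity set):

```latex
% CVaR at level 1-\epsilon of the constraint function c(x,\xi):
\mathrm{CVaR}_{1-\epsilon}\bigl(c(x,\xi)\bigr)
  = \min_{t \in \mathbb{R}} \Bigl\{\, t + \tfrac{1}{\epsilon}\,
    \mathbb{E}\bigl[c(x,\xi) - t\bigr]_+ \Bigr\},
\qquad
\mathrm{CVaR}_{1-\epsilon}\bigl(c(x,\xi)\bigr) \le 0
\;\Longrightarrow\;
\mathbb{P}\bigl\{c(x,\xi) \le 0\bigr\} \ge 1-\epsilon .
```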
19. Iteration complexity of inexact augmented Lagrangian methods for constrained convex programming.
- Author
- Xu, Yangyang
- Subjects
- *QUADRATIC programming, *CONSTRAINED optimization, *NONLINEAR equations, *INTERIOR-point methods, *CONVEX programming
- Abstract
The augmented Lagrangian method (ALM) has been popularly used for solving constrained optimization problems. Practically, subproblems for updating primal variables in the framework of ALM usually can only be solved inexactly. The convergence and local convergence speed of ALM have been extensively studied. However, the global convergence rate of the inexact ALM is still open for problems with nonlinear inequality constraints. In this paper, we work on general convex programs with both equality and inequality constraints. For these problems, we establish the global convergence rate of the inexact ALM and estimate its iteration complexity in terms of the number of gradient evaluations needed to produce a primal and/or primal-dual solution with a specified accuracy. We first establish an ergodic convergence rate result for the inexact ALM with constant penalty parameters or geometrically increasing penalty parameters. Based on this convergence rate result, we then apply Nesterov's optimal first-order method to each primal subproblem and estimate the iteration complexity of the inexact ALM. We show that if the objective is convex, then O(ε^{-1}) gradient evaluations are sufficient to guarantee a primal ε-solution in terms of both the primal objective and feasibility violation. If the objective is strongly convex, the result can be improved to O(ε^{-1/2} |log ε|). To produce a primal-dual ε-solution, more gradient evaluations are needed in the convex case, namely O(ε^{-4/3}), while for the strongly convex case the number is still O(ε^{-1/2} |log ε|). Finally, we establish a nonergodic convergence rate result for the inexact ALM with geometrically increasing penalty parameters. This result is established only for the primal problem. We show that the nonergodic iteration complexity result is of the same order as the ergodic result. Numerical experiments on quadratically constrained quadratic programming are conducted to compare the performance of the inexact ALM with different settings. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
20. Optimality condition and complexity analysis for linearly-constrained optimization without differentiability on the boundary.
- Author
- Haeser, Gabriel, Liu, Hongcheng, and Ye, Yinyu
- Subjects
- *NONCONVEX programming, *NONLINEAR programming, *NONLINEAR equations, *CONTINUOUS functions, *INTERIOR-point methods
- Abstract
In this paper we consider the minimization of a continuous function that is potentially not differentiable or not twice differentiable on the boundary of the feasible region. By exploiting an interior point technique, we present first- and second-order optimality conditions for this problem that reduce to the classical ones when the derivative on the boundary is available. For this type of problem, existing necessary conditions often rely on the notion of the subdifferential or become non-trivially weaker than the KKT condition in the (twice-)differentiable counterpart problems. In contrast, this paper presents a new set of first- and second-order necessary conditions that are derived without the use of subdifferentials and reduce to exactly the KKT condition when (twice-)differentiability holds. As a result, these conditions are stronger than some existing ones considered for the discussed minimization problem when only non-negativity constraints are present. To solve for these optimality conditions in the special but important case of linearly constrained problems, we present two novel interior point trust-region algorithms and show that their worst-case computational efficiency in achieving the potentially stronger optimality conditions matches the best known complexity bounds. Since this work considers a more general problem than those in the literature, our results also indicate that the best known existing complexity bounds actually hold for a wider class of nonlinear programming problems. This new development is significant since optimality conditions play a fundamental role in computational optimization, and more and more nonlinear and nonconvex problems need to be solved in practice. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
21. Improved local convergence results for augmented Lagrangian methods in C2-cone reducible constrained optimization.
- Author
- Kanzow, Christian and Steck, Daniel
- Subjects
- *SEMIDEFINITE programming, *NONLINEAR programming, *CONSTRAINED optimization, *CONES
- Abstract
This paper deals with a class of cone-reducible constrained optimization problems which encompasses nonlinear programming, semidefinite programming, second-order cone programming, and any combination thereof. Using the second-order sufficient condition and a strict version of the Robinson constraint qualification, we provide a (semi-)local error bound which generalizes known results from the literature. Moreover, under the same assumptions, we prove that an augmented Lagrangian method is locally convergent with rate proportional to 1/ρ_k, where ρ_k is the penalty parameter, and that {ρ_k} remains bounded. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
22. Perturbed proximal primal–dual algorithm for nonconvex nonsmooth optimization.
- Author
- Hajinezhad, Davood and Hong, Mingyi
- Subjects
- *NONSMOOTH optimization, *CONSTRAINED optimization, *ALGORITHMS, *MACHINE learning
- Abstract
In this paper, we propose a perturbed proximal primal–dual algorithm (PProx-PDA) for an important class of linearly constrained optimization problems, whose objective is the sum of smooth (possibly nonconvex) and convex (possibly nonsmooth) functions. This family of problems can be used to model many statistical and engineering applications, such as high-dimensional subspace estimation and distributed machine learning. The proposed method is of the Uzawa type, in which a primal gradient descent step is performed followed by an (approximate) dual gradient ascent step. One distinctive feature of the proposed algorithm is that the primal and dual steps are both perturbed appropriately using past iterates so that a number of asymptotic convergence and rate of convergence results (to first-order stationary solutions) can be obtained. Finally, we conduct extensive numerical experiments to validate the effectiveness of the proposed algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
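The unperturbed Uzawa skeleton underlying entry 22, as a minimal sketch on a strongly convex quadratic with a linear equality constraint. The fixed step sizes and problem data are illustrative assumptions; PProx-PDA itself adds proximal terms and perturbs both steps using past iterates.

```python
import numpy as np

# min 0.5*x^T Q x - q^T x  s.t.  A x = b  (toy linearly constrained problem)
Q = np.diag([2.0, 4.0]); q = np.array([1.0, 1.0])
A = np.array([[1.0, 1.0]]); b = np.array([1.0])

x, lam = np.zeros(2), np.zeros(1)
alpha, beta = 0.2, 0.5                     # primal and dual step sizes
for _ in range(400):
    x -= alpha * (Q @ x - q + A.T @ lam)   # primal: gradient descent on L(., lam)
    lam += beta * (A @ x - b)              # dual: gradient ascent on L(x, .)
print(x, lam)                              # -> x = (2/3, 1/3), lam = -1/3
```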
23. Distributed nonconvex constrained optimization over time-varying digraphs.
- Author
- Scutari, Gesualdo and Sun, Ying
- Subjects
- *CONSTRAINED optimization, *STATISTICAL learning, *NONSMOOTH optimization, *MACHINE learning, *DESIGN techniques, *ALGORITHMS
- Abstract
This paper considers nonconvex distributed constrained optimization over networks, modeled as directed (possibly time-varying) graphs. We introduce the first algorithmic framework for the minimization of the sum of a smooth nonconvex (nonseparable) function—the agent's sum-utility—plus a difference-of-convex function (with nonsmooth convex part). This general formulation arises in many applications, from statistical machine learning to engineering. The proposed distributed method combines successive convex approximation techniques with a judiciously designed perturbed push-sum consensus mechanism that aims to track locally the gradient of the (smooth part of the) sum-utility. Sublinear convergence rate is proved when a fixed step-size (possibly different among the agents) is employed whereas asymptotic convergence to stationary solutions is proved using a diminishing step-size. Numerical results show that our algorithms compare favorably with current schemes on both convex and nonconvex problems. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
24. Approximations and solution estimates in optimization.
- Author
- Royset, Johannes O.
- Subjects
- *MATHEMATICAL optimization, *APPROXIMATION theory, *CONSTRAINED optimization, *CONTINUOUS functions, *METRIC spaces, *STOCHASTIC convergence
- Abstract
Approximation is central to many optimization problems and the supporting theory provides insight as well as foundation for algorithms. In this paper, we lay out a broad framework for quantifying approximations by viewing finite- and infinite-dimensional constrained minimization problems as instances of extended real-valued lower semicontinuous functions defined on a general metric space. Since the Attouch-Wets distance between such functions quantifies epi-convergence, we are able to obtain estimates of optimal solutions and optimal values through bounds of that distance. In particular, we show that near-optimal and near-feasible solutions are effectively Lipschitz continuous with modulus one in this distance. Under additional assumptions on the underlying metric space, we construct approximating functions involving only a finite number of parameters that still are close to an arbitrary extended real-valued lower semicontinuous function. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
25. On univariate function identification problems.
- Author
- Royset, Johannes O. and Wets, Roger J.-B.
- Subjects
- *CONSTRAINED optimization, *STOCHASTIC convergence, *TOPOLOGY, *NONPARAMETRIC statistics, *VARIOGRAMS
- Abstract
Applications in the areas of data fitting, forecasting, and estimation naturally lead to a rich class of constrained infinite-dimensional optimization problems over extended real-valued semicontinuous functions. We discuss a framework for dealing with such applications, even in the presence of nearly arbitrary constraints on the functions. We formulate computationally tractable approximating problems relying on piecewise polynomial semicontinuous approximations of the actual functions. The approximations enable the study of evolving problems caused by incrementally arriving data and other information. In contrast to an earlier more general treatment, we focus on optimization over functions defined on a compact interval of the real line, which still addresses a large number of applications. The paper provides an introduction to the subject through simplified derivations and illustrative examples. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
26. An interior-point trust-funnel algorithm for nonlinear optimization.
- Author
- Curtis, Frank, Gould, Nicholas, Robinson, Daniel, and Toint, Philippe
- Subjects
- *MATHEMATICAL optimization, *INTERIOR-point methods, *ALGORITHMS, *HESSIAN matrices, *ITERATIVE methods (Mathematics)
- Abstract
We present an interior-point trust-funnel algorithm for solving large-scale nonlinear optimization problems. The method is based on an approach proposed by Gould and Toint (Math Prog 122(1):155-196, 2010) that focused on solving equality constrained problems. Our method is similar in that it achieves global convergence guarantees by combining a trust-region methodology with a funnel mechanism, but has the additional capability of being able to solve problems with both equality and inequality constraints. The prominent features of our algorithm are that (i) the subproblems that define each search direction may be solved with matrix-free methods so that derivative matrices need not be formed or factorized so long as matrix-vector products with them can be performed; (ii) the subproblems may be solved approximately in all iterations; (iii) in certain situations, the computed search directions represent inexact sequential quadratic optimization steps, which may be desirable for fast local convergence; (iv) criticality measures for feasibility and optimality aid in determining whether only a subset of computations need to be performed during a given iteration; and (v) no merit function or filter is needed to ensure global convergence. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
27. On the multiplier-penalty-approach for quasi-variational inequalities.
- Author
- Kanzow, Christian
- Subjects
- *CONSTRAINED optimization, *VARIATIONAL inequalities (Mathematics), *LAGRANGIAN functions, *NASH equilibrium, *CONSTRAINT programming
- Abstract
The multiplier-penalty approach is one of the classical methods for the solution of constrained optimization problems. This method was generalized to the solution of quasi-variational inequalities by Pang and Fukushima (Comput Manag Sci 2:21-56, 2005). Based on the recent improvements achieved for the multiplier-penalty approach for optimization, we generalize the method by Pang and Fukushima for quasi-variational inequalities in several respects: (a) we allow the computation of inexact KKT points of the resulting subproblems; (b) we improve the existing convergence theory; (c) we investigate some special classes of quasi-variational inequalities for which the resulting subproblems turn out to be easy to solve. Some numerical results indicate that the corresponding method works quite reliably in practice. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
28. Newton's method may fail to recognize proximity to optimal points in constrained optimization.
- Author
- Andreani, R., Martínez, J., and Santos, L.
- Subjects
- *CONSTRAINED optimization, *LAGRANGE multiplier, *NEWTON-Raphson method, *MATHEMATICAL optimization, *NONLINEAR programming
- Abstract
We will show examples in which the primal sequence generated by the Newton-Lagrange method converges to a strict local minimizer of a constrained optimization problem but the gradient of the Lagrangian does not tend to zero, independently of the choice of the dual sequence. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
29. On conic QPCCs, conic QCQPs and completely positive programs.
- Author
- Bai, Lijie, Mitchell, John, and Pang, Jong-Shi
- Subjects
- *QUADRATIC programming, *COMPLEMENTARITY constraints (Mathematics), *CONSTRAINED optimization, *NONCONVEX programming, *CONVEX functions
- Abstract
This paper studies several classes of nonconvex optimization problems defined over convex cones, establishing connections between them and demonstrating that they can be equivalently formulated as convex completely positive programs. The problems being studied include: a conic quadratically constrained quadratic program (QCQP), a conic quadratic program with complementarity constraints (QPCC), and rank-constrained semidefinite programs. Our results do not make any boundedness assumptions on the feasible regions of the various problems considered. The first stage in the reformulation is to cast the problem as a conic QCQP with just one nonconvex constraint q(x) ≤ 0, where q(x) is nonnegative over the (convex) conic and linear constraints, so the problem fails the Slater constraint qualification. A conic QPCC has such a structure; we prove the converse, namely that any conic QCQP satisfying a constraint qualification can be expressed as an equivalent conic QPCC. The second stage of the reformulation lifts the problem to a completely positive program, and exploits and generalizes a result of Burer. We also show that a Frank-Wolfe type result holds for a subclass of this class of conic QCQPs. Further, we derive necessary and sufficient optimality conditions for nonlinear programs where the only nonconvex constraint is a quadratic constraint of the structure considered elsewhere in the paper. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
30. New analysis and results for the Frank-Wolfe method.
- Author
- Freund, Robert and Grigas, Paul
- Subjects
- *ALGORITHMS, *CONSTRAINED optimization, *ITERATIVE methods (Mathematics), *DUALITY theory (Mathematics), *APPROXIMATION theory
- Abstract
We present new results for the Frank-Wolfe method (also known as the conditional gradient method). We derive computational guarantees for arbitrary step-size sequences, which are then applied to various step-size rules, including simple averaging and constant step-sizes. We also develop step-size rules and computational guarantees that depend naturally on the warm-start quality of the initial (and subsequent) iterates. Our results include computational guarantees for both duality/bound gaps and the so-called FW gaps. Lastly, we present complexity bounds in the presence of approximate computation of gradients and/or linear optimization subproblem solutions. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
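A minimal sketch of the Frank-Wolfe method analyzed in entry 30, on the probability simplex, using the classical step-size rule γ_k = 2/(k+2) and the FW gap as a stopping certificate. The toy objective is an assumption for illustration.

```python
import numpy as np

# min f(x) = 0.5*||x - p||^2 over the simplex; the linear minimization
# oracle (LMO) over the simplex simply returns a vertex.
p = np.array([0.6, 0.3, 0.1])
grad = lambda x: x - p

x = np.array([1.0, 0.0, 0.0])
for k in range(200):
    g = grad(x)
    s = np.eye(3)[np.argmin(g)]      # LMO: argmin over the simplex of <g, s>
    fw_gap = g @ (x - s)             # FW gap: certified upper bound on f(x) - f*
    if fw_gap < 1e-8:
        break
    x += (2.0 / (k + 2)) * (s - x)   # classical step-size rule gamma_k = 2/(k+2)
print(x, fw_gap)
```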
31. Distributionally robust chance constraints for non-linear uncertainties.
- Author
- Yang, Wenzhuo and Xu, Huan
- Subjects
- *CONSTRAINED optimization, *NONLINEAR theories, *MATHEMATICAL variables, *MATHEMATICAL equivalence, *APPROXIMATION theory
- Abstract
This paper investigates the computational aspects of distributionally robust chance constrained optimization problems. In contrast to previous research that mainly focused on the linear case (with a few exceptions discussed in detail below), we consider the case where the constraints can be non-linear in the decision variables and, in particular, in the uncertain parameters. This formulation is of great interest as it can model non-linear uncertainties that are ubiquitous in applications. Our main result shows that distributionally robust chance constrained optimization is tractable, provided that the uncertainty is characterized by its mean and variance, and the constraint function is concave in the decision variables and quasi-convex in the uncertain parameters. En route, we establish an equivalence relationship between distributionally robust chance constraints and the robust optimization framework that models uncertainty in a deterministic manner. This links two broadly applied paradigms in decision making under uncertainty and extends previous results of the same spirit in the linear case to more general cases. We then consider probabilistic envelope constraints, a generalization of distributionally robust chance constraints first proposed in Xu et al. (Oper Res 60:682-700) for the linear case. We extend this framework to the non-linear case and derive sufficient conditions that guarantee its tractability. Finally, we investigate tractable approximations of joint probabilistic envelope constraints, and provide the conditions under which these approximation formulations are tractable. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
32. Mini-batch stochastic approximation methods for nonconvex stochastic composite optimization.
- Author
- Ghadimi, Saeed, Lan, Guanghui, and Zhang, Hongchao
- Subjects
- *CONSTRAINED optimization, *STOCHASTIC processes, *ALGORITHMS, *COMPUTATIONAL complexity, *STOCHASTIC programming
- Abstract
This paper considers a class of constrained stochastic composite optimization problems whose objective function is given by the summation of a differentiable (possibly nonconvex) component, together with a certain non-differentiable (but convex) component. In order to solve these problems, we propose a randomized stochastic projected gradient (RSPG) algorithm, in which a proper mini-batch of samples is taken at each iteration depending on the total budget of stochastic samples allowed. The RSPG algorithm also employs a general distance function to allow taking advantage of the geometry of the feasible region. Complexity of this algorithm is established in a unified setting, which shows nearly optimal complexity of the algorithm for convex stochastic programming. A post-optimization phase is also proposed to significantly reduce the variance of the solutions returned by the algorithm. In addition, based on the RSPG algorithm, a stochastic gradient-free algorithm, which only uses stochastic zeroth-order information, has also been discussed. Some preliminary numerical results are also provided. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
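In the spirit of entry 32, a minimal mini-batch stochastic projected gradient sketch for a stochastic least-squares objective over a box. The data model, batch size, and step size are illustrative assumptions; RSPG additionally randomizes the returned iterate and supports general (non-Euclidean) distance functions.

```python
import numpy as np

rng = np.random.default_rng(0)
x_true = np.array([1.0, -2.0, 0.5])

def minibatch_grad(x, m=32):
    # Unbiased gradient estimator of E[0.5*(a^T x - y)^2] from m fresh samples.
    A = rng.standard_normal((m, 3))
    y = A @ x_true + 0.1 * rng.standard_normal(m)
    return A.T @ (A @ x - y) / m

project = lambda v: np.clip(v, -3.0, 3.0)   # Euclidean projection onto the box
x = np.zeros(3)
for _ in range(1000):
    x = project(x - 0.05 * minibatch_grad(x))
print(x)   # approaches x_true, which lies inside the box
```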
33. Constraint aggregation for rigorous global optimization.
- Author
- Domes, Ferenc and Neumaier, Arnold
- Subjects
- *CONSTRAINED optimization, *APPROXIMATION theory, *LAGRANGE multiplier, *ALGORITHMS, *FACTORIZATION
- Abstract
In rigorous constrained global optimization, upper bounds on the objective function help to reduce the search space. Obtaining a rigorous upper bound on the objective requires finding a narrow box around an approximately feasible solution, which then must be verified to contain a feasible point. Approximations are easily found by local optimization, but the verification often fails. In this paper we show that even when the verification of an approximate feasible point fails, the information extracted from the results of the local optimization can still be used in many cases to reduce the search space. This is done by a rigorous filtering technique called constraint aggregation. It forms an aggregated redundant constraint, based on approximate Lagrange multipliers or on a vector valued measure of constraint violation. Using the optimality conditions, two-sided linear relaxations, the Gauss-Jordan algorithm and a directed modified Cholesky factorization, the information in the redundant constraint is turned into powerful bounds on the feasible set. Constraint aggregation is especially useful since it also works in a tiny neighborhood of the global optimizer, thereby reducing the cluster effect. A simple introductory example demonstrates how our new method works. Extensive tests show the performance on a large benchmark. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
35. An adaptive augmented Lagrangian method for large-scale constrained optimization.
- Author
- Curtis, Frank, Jiang, Hao, and Robinson, Daniel
- Subjects
- *LAGRANGIAN functions, *ADAPTIVE control systems, *LARGE scale systems, *CONSTRAINED optimization, *PROBLEM solving
- Abstract
We propose an augmented Lagrangian algorithm for solving large-scale constrained optimization problems. The novel feature of the algorithm is an adaptive update for the penalty parameter motivated by recently proposed techniques for exact penalty methods. This adaptive updating scheme greatly improves the overall performance of the algorithm without sacrificing the strengths of the core augmented Lagrangian framework, such as its ability to be implemented matrix-free. This is important as this feature of augmented Lagrangian methods is responsible for renewed interest in employing such methods for solving large-scale problems. We provide convergence results from remote starting points and illustrate by a set of numerical experiments that our method outperforms traditional augmented Lagrangian methods in terms of critical performance measures. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
36. The two-level diameter constrained spanning tree problem.
- Author
- Gouveia, Luis, Leitner, Markus, and Ljubić, Ivana
- Subjects
- *SPANNING trees, *CONSTRAINED optimization, *INTEGER programming, *GRAPH theory, *PARALLEL algorithms
- Abstract
In this article, we introduce the two-level diameter constrained spanning tree problem (2-DMSTP), which generalizes the classical DMSTP by considering two sets of nodes with different latency requirements. We first observe that any feasible solution to the 2-DMSTP can be viewed as a DMST that contains a diameter constrained Steiner tree. This observation allows us to prove graph theoretical properties related to the centers of each tree which are then exploited to develop mixed integer programming formulations, valid inequalities, and symmetry breaking constraints. In particular, we propose a novel modeling approach based on a three-dimensional layered graph. In an extensive computational study we show that a branch-and-cut algorithm based on the latter model is highly effective in practice. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
37. Dual consistent systems of linear inequalities and cardinality constrained polytopes.
- Author
- Fujishige, Satoru and Maßberg, Jens
- Subjects
- *LINEAR matrix inequalities, *POLYTOPES, *POLYHEDRA, *COMBINATORIAL optimization, *CARDINAL numbers, *CONSTRAINED optimization
- Abstract
We introduce a concept of dual consistency of systems of linear inequalities with full generality. We show that a cardinality constrained polytope is represented by a certain system of linear inequalities if and only if the systems of linear inequalities associated with the cardinalities are dual consistent. Typical dual consistent systems of inequalities are those which describe polymatroids, generalized polymatroids, and dual greedy polyhedra with certain choice functions. We show that the systems of inequalities for cardinality-constrained ordinary bipartite matching polytopes are not dual consistent in general, and give additional inequalities to make them dual consistent. Moreover, we show that ordinary systems of inequalities for the cardinality-constrained (poly)matroid intersection are not dual consistent, which disproves a conjecture of Maurras, Spiegelberg, and Stephan about a linear representation of the cardinality-constrained polymatroid intersection. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
38. A Frank-Wolfe type theorem for nondegenerate polynomial programs.
- Author
- Dinh, Si, Ha, Huy, and Pham, Tien
- Subjects
- *POLYNOMIALS, *MATHEMATICAL programming, *EXISTENCE theorems, *CONSTRAINED optimization, *SET theory
- Abstract
In this paper, we study the existence of optimal solutions to a constrained polynomial optimization problem. More precisely, let f_0 and f_1, …, f_p : ℝ^n → ℝ be convenient polynomial functions, and let S := {x ∈ ℝ^n : f_i(x) ≤ 0, i = 1, …, p} ≠ ∅. Under the assumption that the map (f_0, f_1, …, f_p) : ℝ^n → ℝ^{p+1} is non-degenerate at infinity, we show that if f_0 is bounded from below on S, then f_0 attains its infimum on S. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
39. Distance majorization and its applications.
- Author
- Chi, Eric, Zhou, Hua, and Lange, Kenneth
- Subjects
- *ALGORITHMS, *MATHEMATICAL programming, *QUASI-Newton methods, *CONVEX programming, *MACHINE learning
- Abstract
The problem of minimizing a continuously differentiable convex function over an intersection of closed convex sets is ubiquitous in applied mathematics. It is particularly interesting when it is easy to project onto each separate set, but nontrivial to project onto their intersection. Algorithms based on Newton's method such as the interior point method are viable for small to medium-scale problems. However, modern applications in statistics, engineering, and machine learning are posing problems with potentially tens of thousands of parameters or more. We revisit this convex programming problem and propose an algorithm that scales well with dimensionality. Our proposal is an instance of a sequential unconstrained minimization technique and revolves around three ideas: the majorization-minimization principle, the classical penalty method for constrained optimization, and quasi-Newton acceleration of fixed-point algorithms. The performance of our distance majorization algorithms is illustrated in several applications. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
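A minimal numerical sketch of the distance-majorization idea, assuming a quadratic objective (projecting a point $a$ onto the intersection of a box and a halfspace) so that each majorization step has a closed form; the function names and penalty schedule below are our own illustration, not the authors' implementation.

```python
import numpy as np

def proj_box(x, lo=0.0, hi=1.0):
    """Euclidean projection onto the box [lo, hi]^n."""
    return np.clip(x, lo, hi)

def proj_halfspace(x, w, b):
    """Euclidean projection onto the halfspace {x : w @ x <= b}."""
    viol = w @ x - b
    return x if viol <= 0 else x - (viol / (w @ w)) * w

def distance_majorization(a, w, b, rho=1.0, grow=1.2, iters=300):
    """Closest point to `a` in [0,1]^n  intersected with {x : w@x <= b},
    via MM on the penalty objective
        0.5*||x - a||^2 + (rho/2) * sum_i dist(x, C_i)^2 .
    Each dist(x, C_i)^2 is majorized at x_k by ||x - P_i(x_k)||^2,
    so the surrogate is a quadratic minimized in closed form."""
    x = np.asarray(a, float).copy()
    for _ in range(iters):
        p1, p2 = proj_box(x), proj_halfspace(x, w, b)
        x = (a + rho * (p1 + p2)) / (1.0 + 2.0 * rho)
        rho *= grow   # classical penalty schedule: rho -> infinity
    return x

a, w = np.array([2.0, -1.0]), np.array([1.0, 1.0])
print(distance_majorization(a, w, b=1.0))   # ~ [1, 0]
```

The surrogate is tangent to the penalized objective at the current iterate and majorizes it everywhere, which is exactly what makes the closed-form update a descent step; driving $\rho$ upward is the classical penalty mechanism the abstract refers to.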
40. Improved local convergence results for augmented Lagrangian methods in $C^2$-cone reducible constrained optimization.
- Author
-
Steck, Daniel and Kanzow, Christian
- Subjects
Discrete mathematics, Semidefinite programming, Operations research, Augmented Lagrangian method, General Mathematics, Numerical analysis, Constrained optimization, Numerical and computational mathematics, Local convergence, Nonlinear programming, Cone (topology), Bounded function, Software, Mathematics - Abstract
This paper deals with a class of cone-reducible constrained optimization problems that encompasses nonlinear programming, semidefinite programming, second-order cone programming, and any combination thereof. Using the second-order sufficient condition and a strict version of the Robinson constraint qualification, we provide a (semi-)local error bound which generalizes known results from the literature. Moreover, under the same assumptions, we prove that an augmented Lagrangian method is locally convergent with rate proportional to $1/\rho_k$, where $\rho_k$ is the penalty parameter, and that $\{\rho_k\}$ remains bounded. A code sketch of the basic augmented Lagrangian iteration follows this record.
- Published
- 2018
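For orientation, a minimal sketch of a first-order augmented Lagrangian iteration on a toy equality-constrained problem; the inner solver, penalty schedule, and problem data are our illustrative choices, not the method analysed in the paper.

```python
import numpy as np
from scipy.optimize import minimize

# Toy equality-constrained problem:  min ||x||^2  s.t.  x1 + x2 - 1 = 0.
# Solution: x* = (0.5, 0.5) with multiplier lambda* = -1.
f = lambda x: x @ x
c = lambda x: np.array([x[0] + x[1] - 1.0])

def augmented_lagrangian(x0, lam0, rho=10.0, outer=15, grow=2.0):
    x, lam = np.asarray(x0, float), np.asarray(lam0, float)
    for _ in range(outer):
        # Inner step: minimize L_rho(., lam) = f + lam^T c + (rho/2)||c||^2.
        L = lambda z, lam=lam, rho=rho: (
            f(z) + lam @ c(z) + 0.5 * rho * c(z) @ c(z))
        x = minimize(L, x).x
        lam = lam + rho * c(x)   # first-order multiplier update
        rho *= grow              # the cited local rate scales like 1/rho_k
    return x, lam

x, lam = augmented_lagrangian([0.0, 0.0], [0.0])
print(x, lam)   # ~ [0.5 0.5] [-1.]
```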
41. Uniform boundedness of the inverse of a Jacobian matrix arising in regularized interior-point methods.
- Author
-
Armand, Paul and Benoist, Joël
- Subjects
- *
JACOBIAN matrices , *INVERSE problems , *MATHEMATICAL regularization , *INTERIOR-point methods , *NONLINEAR programming , *STOCHASTIC convergence , *CONSTRAINED optimization - Abstract
This short communication analyses a boundedness property of the inverse of a Jacobian matrix that arises in regularized primal-dual interior-point methods for linear and nonlinear programming. This result should be a useful tool for the convergence analysis of such methods. [ABSTRACT FROM AUTHOR] A generic regularized system of this kind is sketched after this record.
- Published
- 2013
- Full Text
- View/download PDF
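The flavour of the object analysed can be conveyed by a generic regularized primal-dual Newton system (a sketch in our notation; the paper's precise regularization and block structure may differ):

```latex
\[
\begin{pmatrix}
  \nabla^2_{xx} L(x, \lambda) + \delta I & \nabla c(x) \\
  \nabla c(x)^{T}                        & -\gamma I
\end{pmatrix}
\begin{pmatrix} d_x \\ d_\lambda \end{pmatrix}
= -\begin{pmatrix} \nabla f(x) + \nabla c(x)\,\lambda \\ c(x) \end{pmatrix},
\qquad \delta, \gamma > 0 .
\]
% The regularization blocks (delta*I and -gamma*I) keep the matrix
% nonsingular even under rank deficiency; the question studied is
% whether its inverse admits a bound independent of the iterate.
```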
42. Global optimization of mixed-integer quadratically-constrained quadratic programs (MIQCQP) through piecewise-linear and edge-concave relaxations.
- Author
-
Misener, Ruth and Floudas, Christodoulos
- Subjects
- *
MATHEMATICAL optimization , *INTEGERS , *QUADRATIC programming , *CONSTRAINED optimization , *LINEAR systems , *GENERALIZATION , *COMBINATORICS , *MATHEMATICAL bounds - Abstract
We propose a deterministic global optimization approach, whose novel contributions are rooted in the edge-concave and piecewise-linear underestimators, to address nonconvex mixed-integer quadratically-constrained quadratic programs (MIQCQP) to $\epsilon$-global optimality. The facets of low-dimensional ($n \le 3$) edge-concave aggregations dominating the termwise relaxation of MIQCQP are introduced at every node of a branch-and-bound tree. Concave multivariable terms and sparsely distributed bilinear terms that do not participate in connected edge-concave aggregations are addressed through piecewise-linear relaxations. Extensive computational studies are presented for point packing problems, standard and generalized pooling problems, and examples from GLOBALLib (Meeraus, GLOBALLib). [ABSTRACT FROM AUTHOR] The termwise (McCormick) inequalities are reproduced after this record.
- Published
- 2012
- Full Text
- View/download PDF
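For context, the termwise relaxation that the edge-concave facets are shown to dominate replaces each bilinear term $w = xy$ over a box $[x^L, x^U] \times [y^L, y^U]$ by its McCormick envelope — standard material, reproduced here for reference:

```latex
\begin{align*}
w &\ge x^L y + y^L x - x^L y^L, &
w &\ge x^U y + y^U x - x^U y^U,\\
w &\le x^U y + y^L x - x^U y^L, &
w &\le x^L y + y^U x - x^L y^U.
\end{align*}
```

The paper's edge-concave facets, built over aggregations of up to three variables, dominate the relaxation obtained by applying these inequalities term by term.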
43. Stabilized SQP revisited.
- Author
-
Izmailov, A. and Solodov, M.
- Subjects
- *
QUADRATIC programming , *ALGORITHMS , *SEQUENTIAL analysis , *STOCHASTIC convergence , *LAGRANGE multiplier , *MATHEMATICAL variables - Abstract
The stabilized version of the sequential quadratic programming algorithm (sSQP) was developed in order to achieve superlinear convergence in situations when the Lagrange multipliers associated with a solution are not unique. Within the framework of Fischer (Math Program 94:91-124, 2002), the keys to local superlinear convergence of sSQP are the following two properties: upper Lipschitzian behavior of solutions of the Karush-Kuhn-Tucker (KKT) system under canonical perturbations and local solvability of sSQP subproblems with the associated primal-dual step being of the order of the distance from the current iterate to the solution set of the unperturbed KKT system. According to Fernández and Solodov (Math Program 125:47-73, 2010), both of these properties are ensured by the second-order sufficient optimality condition (SOSC) without any constraint qualification assumptions. In this paper, we state precise relationships between the upper Lipschitzian property of solutions of KKT systems, error bounds for KKT systems, the notion of critical Lagrange multipliers (a subclass of multipliers that violate SOSC in a very special way), the second-order necessary condition for optimality, and solvability of sSQP subproblems. Moreover, for the problem with equality constraints only, we prove superlinear convergence of sSQP under the assumption that the dual starting point is close to a noncritical multiplier. Since noncritical multipliers include all those satisfying SOSC but are not limited to them, we believe this gives the first superlinear convergence result for any Newtonian method for constrained optimization under assumptions that do not include any constraint qualifications and are weaker than SOSC. In the general case when inequality constraints are present, we show that such a relaxation of assumptions is not possible. We also consider applying sSQP to the problem where inequality constraints are reformulated into equalities using slack variables, and discuss the assumptions needed for convergence in this approach. We conclude with consequences for local regularization methods proposed in (Izmailov and Solodov, SIAM J. Optim. 16:210-228, 2004; Wright, SIAM J. Optim. 15:673-676, 2005). In particular, we show that these methods are still locally superlinearly convergent under the noncritical multiplier assumption, weaker than SOSC employed originally. [ABSTRACT FROM AUTHOR] The stabilized subproblem is sketched after this record.
- Published
- 2012
- Full Text
- View/download PDF
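In the equality-constrained case $\min f(x)$ subject to $h(x) = 0$, the stabilized subproblem is commonly written as the min-max below (our rendering of the standard construction; the paper's setting also covers inequalities):

```latex
\[
\min_{d} \; \max_{\lambda} \;\;
\nabla f(x_k)^T d
+ \tfrac{1}{2}\, d^T \nabla^2_{xx} L(x_k, \lambda_k)\, d
+ \lambda^T \big( h(x_k) + h'(x_k)\, d \big)
- \tfrac{\sigma_k}{2}\, \| \lambda - \lambda_k \|^2 ,
\]
% whose stationarity conditions give the regularized linear system
\[
\begin{pmatrix}
  \nabla^2_{xx} L(x_k, \lambda_k) & h'(x_k)^T \\
  h'(x_k)                         & -\sigma_k I
\end{pmatrix}
\begin{pmatrix} d \\ \lambda^{+} - \lambda_k \end{pmatrix}
= -\begin{pmatrix} \nabla f(x_k) + h'(x_k)^T \lambda_k \\ h(x_k) \end{pmatrix}.
\]
```

Setting $\sigma_k = 0$ recovers the ordinary SQP step; the stabilization term keeps the subproblem well behaved even when the multiplier set is not a singleton.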
44. On attraction of linearly constrained Lagrangian methods and of stabilized and quasi-Newton SQP methods to critical multipliers.
- Author
-
Izmailov, A. F. and Solodov, M. V.
- Subjects
- *
CONSTRAINED optimization , *NEWTON-Raphson method , *LAGRANGE equations , *MULTIPLIERS (Mathematical analysis) , *STOCHASTIC convergence , *MATHEMATICAL programming - Abstract
It has been previously demonstrated that in the case when a Lagrange multiplier associated with a given solution is not unique, Newton iterations [e.g., those of sequential quadratic programming (SQP)] have a tendency to converge to special multipliers, called critical multipliers (when such critical multipliers exist). This fact is important because critical multipliers violate the second-order sufficient optimality conditions, and this was shown to be the reason for the slow convergence typically observed for problems with degenerate constraints (convergence to noncritical multipliers results in a superlinear rate despite degeneracy). Some theoretical and numerical validation of this phenomenon can be found in Izmailov and Solodov (Comput Optim Appl 42:231-264, 2009; Math Program 117:271-304, 2009). However, previous studies concerned the basic forms of Newton iterations. The question remained whether the attraction phenomenon still persists for relevant modifications, as well as in professional implementations. In this paper, we answer this question in the affirmative by presenting numerical results for the well-known MINOS and SNOPT software packages applied to a collection of degenerate problems. We also extend previous theoretical considerations to the linearly constrained Lagrangian methods and to the quasi-Newton SQP, on which MINOS and SNOPT are based. Experiments also show that in the stabilized version of SQP the attraction phenomenon still exists but appears less persistent. [ABSTRACT FROM AUTHOR] A one-dimensional example of a critical multiplier follows this record.
- Published
- 2011
- Full Text
- View/download PDF
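A one-dimensional illustration of the phenomenon (a standard textbook-style example in our notation, not taken from the paper):

```latex
\[
\min_{x \in \mathbb{R}} \; x^2
\quad \text{s.t.} \quad h(x) = x^2 = 0,
\qquad L(x, \lambda) = x^2 + \lambda x^2 .
\]
% The unique solution is x* = 0, and because h'(0) = 0 every real
% lambda is a Lagrange multiplier.  The Hessian of the Lagrangian,
% 2(1 + lambda), satisfies SOSC iff lambda > -1 and degenerates
% exactly at lambda = -1: the critical multiplier to which
% Newton-type iterations tend to be attracted on degenerate problems.
```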
45. Convex relaxations of non-convex mixed integer quadratically constrained programs: extended formulations.
- Author
-
Saxena, Anureet, Bonami, Pierre, and Lee, Jon
- Subjects
- *
RELAXATION methods (Mathematics) , *INTEGER programming , *QUADRATIC programming , *CONSTRAINED optimization , *MATHEMATICAL variables , *MATHEMATICAL inequalities , *EQUATIONS , *ALGORITHMS , *MATHEMATICAL analysis - Abstract
This paper addresses the problem of generating strong convex relaxations of Mixed Integer Quadratically Constrained Programming (MIQCP) problems. MIQCP problems are very difficult because they combine two kinds of non-convexities: integer variables and non-convex quadratic constraints. To produce strong relaxations of MIQCP problems, we use techniques from disjunctive programming and the lift-and-project methodology. In particular, we propose new methods for generating valid inequalities from the equation $Y = xx^T$. We use the non-convex constraint $Y - xx^T \preccurlyeq 0$ to derive disjunctions of two types. The first are derived directly from the eigenvectors of the matrix $Y - xx^T$ with positive eigenvalues; the second type of disjunctions are obtained by combining several eigenvectors in order to minimize the width of the disjunction. We also use the convex SDP constraint $Y - xx^T \succcurlyeq 0$ to derive convex quadratic cuts, and we combine both approaches in a cutting plane algorithm. We present computational results to illustrate our findings. [ABSTRACT FROM AUTHOR] A sketch of the eigenvector disjunction idea follows this record.
- Published
- 2010
- Full Text
- View/download PDF
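Roughly, the mechanism can be sketched as follows (our paraphrase; the precise disjunctions in the paper are more refined):

```latex
\[
Y = xx^T \;\Longrightarrow\; v^T Y v = (v^T x)^2
\quad \text{for every } v \in \mathbb{R}^n .
\]
% The convex side  v^T Y v >= (v^T x)^2  (i.e. Y - xx^T >= 0 in the
% SDP sense) yields convex quadratic cuts.  The nonconvex side
% v^T Y v <= (v^T x)^2 involves only the scalar t = v^T x, so a
% disjunction such as (v^T x <= theta) OR (v^T x >= theta) permits
% secant underestimation of t^2 on each piece; eigenvectors of
% Y - xx^T with positive eigenvalues identify directions v in which
% the current relaxation point violates this nonconvex side.
```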
46. An inexact Newton method for nonconvex equality constrained optimization.
- Author
-
Byrd, Richard H., Curtis, Frank E., and Nocedal, Jorge
- Subjects
- *
CONSTRAINED optimization , *CONVEX sets , *MATHEMATICAL optimization , *QUADRATIC programming , *SET theory , *NONLINEAR programming - Abstract
We present a matrix-free line search algorithm for large-scale equality constrained optimization that allows for inexact step computations. For strictly convex problems, the method reduces to the inexact sequential quadratic programming approach proposed by Byrd et al. [SIAM J. Optim. 19(1):351–369, 2008]. For nonconvex problems, the methodology developed in this paper allows for the presence of negative curvature without requiring information about the inertia of the primal–dual iteration matrix. Negative curvature may arise from second-order information of the problem functions, but in fact exact second derivatives are not required in the approach. The complete algorithm is characterized by its emphasis on sufficient reductions in a model of an exact penalty function. We analyze the global behavior of the algorithm and present numerical results on a collection of test problems. [ABSTRACT FROM AUTHOR] The underlying primal–dual system is sketched after this record.
- Published
- 2010
- Full Text
- View/download PDF
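The primal–dual system whose inexact solves drive such a method has the familiar form below (a generic sketch for $\min f(x)$ subject to $c(x) = 0$, with $W_k$ a symmetric approximation of $\nabla^2_{xx} L(x_k, \lambda_k)$; the paper's actual residual tests are more detailed):

```latex
\[
\begin{pmatrix}
  W_k             & \nabla c(x_k) \\
  \nabla c(x_k)^T & 0
\end{pmatrix}
\begin{pmatrix} d_k \\ \delta_k \end{pmatrix}
= -\begin{pmatrix} \nabla f(x_k) + \nabla c(x_k)\,\lambda_k \\ c(x_k) \end{pmatrix}
+ r_k ,
\]
% with the residual r_k required only to be sufficiently small
% relative to the right-hand side.  Steps are accepted through
% sufficient decrease in a model of an exact penalty function, which
% is what lets the method tolerate negative curvature in W_k without
% inertia information.
```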
47. Relative Pareto minimizers for multiobjective problems: existence and optimality conditions.
- Author
-
Bao, Truong Q. and Mordukhovich, Boris S.
- Subjects
- *
PARETO analysis , *CONSTRAINED optimization , *TOPOLOGICAL spaces , *PARETO principle , *CALCULUS of variations , *MATHEMATICAL optimization - Abstract
In this paper we introduce and study enhanced notions of relative Pareto minimizers for constrained multiobjective problems that are defined via several kinds of relative interiors of ordering cones and occupy intermediate positions between the classical notions of Pareto and weak Pareto efficiency/minimality. Using advanced tools of variational analysis and generalized differentiation, we establish the existence of relative Pareto minimizers for general multiobjective problems under a refined version of the subdifferential Palais-Smale condition for set-valued mappings with values in partially ordered spaces and then derive necessary optimality conditions for these minimizers (as well as for conventional efficient and weak efficient counterparts) that are new in both finite-dimensional and infinite-dimensional settings. Our proofs are based on variational and extremal principles of variational analysis; in particular, on new versions of the Ekeland variational principle and the subdifferential variational principle for set-valued and single-valued mappings in infinite-dimensional spaces. [ABSTRACT FROM AUTHOR] A schematic of this hierarchy follows this record.
- Published
- 2010
- Full Text
- View/download PDF
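In cone notation (ours, for orientation only), with feasible set $\Omega$, objective $f$, and an ordering cone $\Theta$ that is not a linear subspace, the hierarchy these notions refine can be sketched as:

```latex
\begin{align*}
\text{Pareto:}          \quad & \big(f(\Omega) - f(\bar x)\big) \cap \big({-}\Theta \setminus \{0\}\big) = \emptyset,\\
\text{relative Pareto:} \quad & \big(f(\Omega) - f(\bar x)\big) \cap \big({-}\operatorname{ri}\Theta\big)  = \emptyset,\\
\text{weak Pareto:}     \quad & \big(f(\Omega) - f(\bar x)\big) \cap \big({-}\operatorname{int}\Theta\big) = \emptyset.
\end{align*}
% Since int(Theta) is contained in ri(Theta), which is contained in
% Theta \ {0}, the implications run Pareto => relative => weak.  The
% paper's several notions replace ri by other relative-interior
% constructions suited to infinite dimensions, where int(Theta) is
% often empty.
```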
48. A generalization of column generation to accelerate convergence.
- Author
-
Liang, Dong and Wilhelm, Wilbert E.
- Subjects
- *
STOCHASTIC convergence , *CONSTRAINED optimization , *MATHEMATICAL reformulation , *MATHEMATICAL functions , *INTEGER programming , *MATHEMATICS - Abstract
This paper proposes a generalization of column generation, reformulating the master problem with fewer variables at the expense of adding more constraints; the sub-problem structure does not change. It shows both analytically and computationally that the reformulation promotes faster convergence to an optimal solution in application to a linear program and to the relaxation of an integer program at each node in the branch-and-bound tree. Further, it shows that this reformulation subsumes and generalizes prior approaches that have been shown to improve the rate of convergence in special cases. [ABSTRACT FROM AUTHOR] The classical scheme being generalized is sketched after this record.
- Published
- 2010
- Full Text
- View/download PDF
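For reference, the classical column generation scheme being generalized alternates between a restricted master over a column subset $J$ and a pricing step (standard material, in our notation):

```latex
\[
\min \; \sum_{j \in J} c_j \lambda_j
\quad \text{s.t.} \quad \sum_{j \in J} a_j \lambda_j = b,
\qquad \lambda \ge 0 .
\]
% Given optimal duals pi of this restricted master, pricing seeks a
% column with negative reduced cost  cbar(x) = c(x) - pi^T a(x);
% such columns enter J, and the method stops once min_x cbar(x) >= 0.
% The proposed generalization reshapes the master (fewer variables,
% more constraints) while leaving this pricing structure unchanged.
```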
49. An integer programming approach for linear programs with probabilistic constraints.
- Author
-
Luedtke, James, Ahmed, Shabbir, and Nemhauser, George L.
- Subjects
- *
INTEGER programming , *LINEAR programming , *DYNAMIC programming , *CONSTRAINED optimization , *MATHEMATICAL programming , *MATHEMATICS - Abstract
Linear programs with joint probabilistic constraints (PCLP) are difficult to solve because the feasible region is not convex. We consider a special case of PCLP in which only the right-hand side is random and this random vector has a finite distribution. We give a mixed-integer programming formulation for this special case and study the relaxation corresponding to a single row of the probabilistic constraint. We obtain two strengthened formulations. As a byproduct of this analysis, we obtain new results for the previously studied mixing set, subject to an additional knapsack inequality. We present computational results that indicate that by using our strengthened formulations, instances considerably larger than those considered before can be solved to optimality. [ABSTRACT FROM AUTHOR] The basic scenario formulation is sketched after this record.
- Published
- 2010
- Full Text
- View/download PDF
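In the special case studied — only the right-hand side $\xi$ random, with scenarios $\xi^1, \dots, \xi^m$ of probabilities $p_1, \dots, p_m$ — a big-M style reformulation of $\mathbb{P}\{Ax \ge \xi\} \ge 1 - \epsilon$ can be sketched as below (our rendering, assuming $\xi \ge 0$; the paper then strengthens this formulation):

```latex
\begin{align*}
& y = Ax, \qquad y \ge \xi^i (1 - z_i), \quad i = 1, \dots, m,\\
& \sum_{i=1}^{m} p_i z_i \le \epsilon, \qquad z \in \{0,1\}^m .
\end{align*}
% z_i = 1 allows scenario i to be violated; with xi >= 0 the scenario
% vector itself plays the role of the big-M.  Restricting attention
% to a single row of y yields the mixing set with an additional
% knapsack inequality from which the strengthened formulations are
% derived.
```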
50. A bundle-filter method for nonsmooth convex constrained optimization.
- Author
-
Karas, Elizabeth, Ribeiro, Ademir, Sagastizábal, Claudia, and Solodov, Mikhail
- Subjects
- *
CONSTRAINED optimization , *NONSMOOTH optimization , *MATHEMATICAL optimization , *FILTERS (Mathematics) , *ALGORITHMS , *MATHEMATICAL programming - Abstract
For solving nonsmooth convex constrained optimization problems, we propose an algorithm that combines the ideas of proximal bundle methods with the filter strategy for evaluating candidate points. The resulting algorithm inherits some attractive features from both approaches. On the one hand, it allows effective control of the size of quadratic programming subproblems via the compression and aggregation techniques of proximal bundle methods. On the other hand, the filter criterion for accepting a candidate point as the new iterate is sometimes easier to satisfy than the usual descent condition in bundle methods. Some encouraging preliminary computational results are also reported. [ABSTRACT FROM AUTHOR] A sketch of the filter acceptance test follows this record.
- Published
- 2009
- Full Text
- View/download PDF
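A minimal sketch of the filter mechanism for accepting candidate points, assuming a generic margin rule (the margins and envelope tests in the actual method may differ):

```python
def acceptable(cand, filt, gamma=1e-4):
    """Generic filter test: a candidate pair (f, h) -- objective value
    f and constraint violation h -- is acceptable iff, against every
    pair (fi, hi) in the filter, it achieves sufficient decrease in
    the objective or in the infeasibility (margins vary across filter
    methods; this is one standard choice)."""
    f, h = cand
    return all(f <= fi - gamma * h or h <= (1.0 - gamma) * hi
               for fi, hi in filt)

def add_to_filter(cand, filt):
    """Insert a pair and drop the pairs it dominates in both f and h."""
    f, h = cand
    filt[:] = [(fi, hi) for fi, hi in filt if fi < f or hi < h]
    filt.append(cand)

# Hypothetical usage: initialize with an upper bound on infeasibility.
filt = [(float("inf"), 1.0)]
if acceptable((0.3, 0.05), filt):
    add_to_filter((0.3, 0.05), filt)
```

The appeal noted in the abstract is visible here: a candidate may be accepted by reducing infeasibility alone, which is sometimes easier to achieve than the descent condition of a pure bundle method.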