1,949 results
Search Results
2. COMPARISON RESULTS AND ESTIMATES ON THE GRADIENT WITHOUT STRICT CONVEXITY.
- Author
-
Cellina, A.
- Subjects
- *
PAPER arts , *INITIAL value problems , *MATHEMATICAL optimization , *MEASURE theory , *CONVEX domains , *ESTIMATES , *ESTIMATION theory , *MULTIPLE comparisons (Statistics) , *PROBLEM solving - Abstract
In this paper we establish a comparison result for solutions to the problem: minimize ∫_Ω f(∇u(x)) dx on {u : u − u₀ ∈ W₀^{1,1}(Ω)}. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
3. REDUCING NONNEGATIVITY OVER GENERAL SEMIALGEBRAIC SETS TO NONNEGATIVITY OVER SIMPLE SETS.
- Author
-
KURYATNIKOVA, OLGA, VERA, JUAN C., and ZULUAGA, LUIS F.
- Subjects
SEMIALGEBRAIC sets ,SIMPLEX algorithm ,MATHEMATICAL optimization ,POLYNOMIALS - Abstract
A nonnegativity certificate (NNC) is a way to write a polynomial so that its nonnegativity on a semialgebraic set becomes evident. Positivstellensätze (Psätze) guarantee the existence of NNCs. Both NNCs and Psätze underlie powerful algorithmic techniques for optimization. This paper proposes a universal approach to derive new Psätze for general semialgebraic sets from ones developed for simpler sets, such as a box, a simplex, or the nonnegative orthant. We provide several results illustrating the approach. First, by considering Handelman's Positivstellensatz (Psatz) over a box, we construct non-SOS Schmüdgen-type Psätze over any compact semialgebraic set, that is, a family of Psätze that follow the structure of the fundamental Schmüdgen's Psatz but where, instead of SOS polynomials, any class of polynomials containing the nonnegative constants can be used, such as SONC, DSOS/SDSOS, hyperbolic, or sums of AM/GM polynomials. Second, by considering the simplex as the simple set, we derive a sparse Psatz over general compact sets which does not rely on any structural assumptions of the set. Finally, by considering Pólya's Psatz over the nonnegative orthant, we derive a new non-SOS Psatz over unbounded sets which satisfy some generic conditions. All these results contribute to the literature regarding the use of non-SOS polynomials and sparse NNCs to derive Psätze over compact and unbounded sets. Throughout the article, we illustrate our results with relevant examples and numerical experiments. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
4. A POSITIVE AND MOMENT-PRESERVING FOURIER SPECTRAL METHOD.
- Author
-
ZHENNING CAI, BO LIN, and MEIXIA LIN
- Subjects
SEPARATION of variables ,BOLTZMANN'S equation ,MATHEMATICAL optimization ,PROBLEM solving ,NEWTON-Raphson method ,OPTIMISM - Abstract
This paper presents a novel Fourier spectral method that utilizes optimization techniques to ensure the positivity and conservation of moments in the space of trigonometric polynomials. We rigorously analyze the accuracy of the new method and prove that it maintains spectral accuracy. To solve the optimization problem, we propose an efficient Newton solver that has a quadratic convergence rate. Numerical examples are provided to demonstrate the high accuracy of the proposed method. Our method is also integrated into the spectral solver of the Boltzmann equation, showing the benefit of our approach in applications. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
5. EXPOSITORY RESEARCH PAPERS.
- Author
-
Ipsen, Ilse
- Subjects
- *
MATHEMATICAL optimization , *IMPLEMENTS, utensils, etc. - Abstract
An introduction to the article "Topological Optimization of Rod-Stirring Devices," by Matthew Finn and Jean-Luc Thiffeault, published in this issue, is presented.
- Published
- 2011
- Full Text
- View/download PDF
6. REVISITING LANDSCAPE ANALYSIS IN DEEP NEURAL NETWORKS: ELIMINATING DECREASING PATHS TO INFINITY.
- Author
-
SHIYU LIANG, RUOYU SUN, and SRIKANT, R.
- Subjects
ARTIFICIAL neural networks ,MATHEMATICAL optimization - Abstract
Traditional landscape analysis of deep neural networks aims to show that no suboptimal local minima exist in some appropriate sense. From this, one may be tempted to conclude that descent algorithms which escape saddle points will reach a good local minimum. However, basic optimization theory tells us that it is also possible for a descent algorithm to diverge to infinity if there are paths leading to infinity, along which the loss function decreases. It is not clear whether for nonlinear neural networks there exists one setting such that "no bad local-min" and "no decreasing paths to infinity" can be simultaneously achieved. In this paper, we give the first positive answer to this question. More specifically, for a large class of overparameterized deep neural networks with appropriate regularizers, the loss function has no bad local minima and no decreasing paths to infinity. The key mathematical trick is to show that the set of regularizers that may be undesirable can be viewed as the image of a Lipschitz continuous mapping from a lower-dimensional Euclidean space to a higher-dimensional Euclidean space and thus has zero measure. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
7. A DIMENSION REDUCTION TECHNIQUE FOR LARGE-SCALE STRUCTURED SPARSE OPTIMIZATION PROBLEMS WITH APPLICATION TO CONVEX CLUSTERING.
- Author
-
YANCHENG YUAN, TSUNG-HUI CHANG, DEFENG SUN, and KIM-CHUAN TOH
- Subjects
MATHEMATICAL optimization ,SIEVES - Abstract
In this paper, we propose a novel adaptive sieving (AS) technique and an enhanced AS (EAS) technique, which are solver independent and can accelerate optimization algorithms for solving large-scale convex optimization problems with intrinsic structured sparsity. We establish the finite convergence property of the AS and EAS techniques with inexact solutions of the reduced subproblems. As an important application, we apply the AS and EAS techniques to the convex clustering model, which can accelerate the state-of-the-art algorithm SSNAL by more than 7 times and the algorithm ADMM by more than 14 times. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
8. ON THE COMPLEXITY OF AN INEXACT RESTORATION METHOD FOR CONSTRAINED OPTIMIZATION.
- Author
-
BUENO, LUÍS FELIPE and MARTÍNEZ, JOSÉ MARIO
- Subjects
CONSTRAINED optimization ,MATHEMATICAL optimization ,ALGORITHMS - Abstract
Recent papers indicate that some algorithms for constrained optimization may exhibit worst-case complexity bounds that are very similar to those of unconstrained optimization algorithms. A natural question is whether well-established practical algorithms, perhaps with small variations, may enjoy analogous complexity results. In the present paper we show that the answer is positive with respect to inexact restoration algorithms in which first-order approximations are employed for defining the subproblems. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
9. UNBIASED MLMC STOCHASTIC GRADIENT-BASED OPTIMIZATION OF BAYESIAN EXPERIMENTAL DESIGNS.
- Author
-
TAKASHI GODA, TOMOHIKO HIRONAKA, WATARU KITADE, and FOSTER, ADAM
- Subjects
OPTIMAL designs (Statistics) ,EXPERIMENTAL design ,MONTE Carlo method ,MATHEMATICAL optimization ,SEARCH algorithms - Abstract
In this paper we propose an efficient stochastic optimization algorithm to search for Bayesian experimental designs such that the expected information gain is maximized. The gradient of the expected information gain with respect to experimental design parameters is given by a nested expectation, for which the standard Monte Carlo method using a fixed number of inner samples yields a biased estimator. In this paper, applying the idea of randomized multilevel Monte Carlo methods, we introduce an unbiased Monte Carlo estimator for the gradient of the expected information gain with finite expected squared ℓ2-norm and finite expected computational cost per sample. Our unbiased estimator can be combined well with stochastic gradient descent algorithms, which results in our proposal of an optimization algorithm to search for an optimal Bayesian experimental design. Numerical experiments confirm that our proposed algorithm works well not only for a simple test problem but also for a more realistic pharmacokinetic problem. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
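As background for entry 9 above, one standard way to write the expected information gain U(ξ) of a design ξ as a nested expectation, assuming a prior p(θ) and likelihood p(y | θ, ξ) (generic textbook notation, not necessarily the authors' exact formulation):

```latex
U(\xi) = \mathbb{E}_{p(\theta)\,p(y \mid \theta, \xi)}\!\left[
  \log p(y \mid \theta, \xi)
  - \log \mathbb{E}_{p(\theta')}\big[\, p(y \mid \theta', \xi) \,\big]
\right].
```

The inner expectation over θ' is the term that a fixed-inner-sample Monte Carlo estimate biases; the randomized multilevel construction described in the abstract targets exactly this nesting.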
10. High-Dimensional Gaussian Sampling: A Review and a Unifying Approach Based on a Stochastic Proximal Point Algorithm.
- Author
-
Vono, Maxime, Dobigeon, Nicolas, and Chainais, Pierre
- Subjects
MARKOV chain Monte Carlo ,NUMERICAL solutions for linear algebra ,MATHEMATICAL optimization ,ALGORITHMS - Abstract
Efficient sampling from a high-dimensional Gaussian distribution is an old but high-stakes issue. Vanilla Cholesky samplers imply a computational cost and memory requirements that can rapidly become prohibitive in high dimensions. To tackle these issues, multiple methods have been proposed from different communities ranging from iterative numerical linear algebra to Markov chain Monte Carlo (MCMC) approaches. Surprisingly, no complete review and comparison of these methods has been conducted. This paper aims to review all these approaches by pointing out their differences, close relations, benefits, and limitations. In addition to reviewing the state of the art, this paper proposes a unifying Gaussian simulation framework by deriving a stochastic counterpart of the celebrated proximal point algorithm in optimization. This framework offers a novel and unifying revisiting of most of the existing MCMC approaches while also extending them. Guidelines to choosing the appropriate Gaussian simulation method for a given sampling problem in high dimensions are proposed and illustrated with numerical examples. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
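As a point of reference for entry 10 above, a minimal sketch of the vanilla Cholesky sampler whose cost the abstract contrasts with iterative and MCMC alternatives (a textbook construction, not the authors' proposed framework; variable names are illustrative):

```python
import numpy as np

def cholesky_sample(mu, Sigma, rng=None):
    """Draw one sample from N(mu, Sigma) via a Cholesky factorization.

    The factorization costs O(d^3) time and O(d^2) memory, which is the
    bottleneck that the reviewed iterative/MCMC samplers try to avoid.
    """
    rng = rng or np.random.default_rng()
    L = np.linalg.cholesky(Sigma)       # Sigma = L @ L.T
    z = rng.standard_normal(mu.shape)   # z ~ N(0, I)
    return mu + L @ z                   # mu + L z ~ N(mu, Sigma)

# tiny usage example
mu = np.zeros(3)
Sigma = np.array([[2.0, 0.5, 0.0],
                  [0.5, 1.0, 0.3],
                  [0.0, 0.3, 1.5]])
x = cholesky_sample(mu, Sigma)
```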
11. A NOISE-TOLERANT QUASI-NEWTON ALGORITHM FOR UNCONSTRAINED OPTIMIZATION.
- Author
-
SHI, HAO-JUN M., YUCHEN XIE, BYRD, RICHARD, and NOCEDAL, JORGE
- Subjects
MATHEMATICAL optimization ,ERROR functions ,NONLINEAR functions ,QUASI-Newton methods ,ARITHMETIC - Abstract
This paper describes an extension of the BFGS and L-BFGS methods for the minimization of a nonlinear function subject to errors. This work is motivated by applications that contain computational noise, employ low-precision arithmetic, or are subject to statistical noise. The classical BFGS and L-BFGS methods can fail in such circumstances because the updating procedure can be corrupted and the line search can behave erratically. The proposed method addresses these difficulties and ensures that the BFGS update is stable by employing a lengthening procedure that spaces out the points at which gradient differences are collected. A new line search, designed to tolerate errors, guarantees that the Armijo-Wolfe conditions are satisfied under most reasonable conditions, and works in conjunction with the lengthening procedure. The proposed methods are shown to enjoy convergence guarantees for strongly convex functions. Detailed implementations of the methods are presented, together with encouraging numerical results. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
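For background on entry 11 above, a minimal sketch of the classical inverse-Hessian BFGS update that the noise-tolerant variant builds on (standard textbook formula; the lengthening procedure and the noise-tolerant Armijo-Wolfe line search described in the abstract are not reproduced here):

```python
import numpy as np

def bfgs_update(H, s, y, eps=1e-10):
    """Classical BFGS update of the inverse-Hessian approximation H.

    s = x_{k+1} - x_k and y = g_{k+1} - g_k.  With noisy gradients the
    curvature y^T s can be tiny or negative, which is what corrupts the
    update and motivates the paper's lengthening procedure.
    """
    ys = float(y @ s)
    if ys <= eps:                       # skip when curvature is unreliable
        return H
    rho = 1.0 / ys
    I = np.eye(len(s))
    V = I - rho * np.outer(s, y)
    return V @ H @ V.T + rho * np.outer(s, s)
```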
12. POLICY MIRROR DESCENT FOR REGULARIZED REINFORCEMENT LEARNING: A GENERALIZED FRAMEWORK WITH LINEAR CONVERGENCE.
- Author
-
WENHAO ZHAN, SHICONG CEN, BAIHE HUANG, YUXIN CHEN, LEE, JASON D., and YUEJIE CHI
- Subjects
REINFORCEMENT learning ,MARKOV processes ,MIRRORS ,MATHEMATICAL optimization ,MATHEMATICS - Abstract
Policy optimization, which learns the policy of interest by maximizing the value function via large-scale optimization techniques, lies at the heart of modern reinforcement learning (RL). In addition to value maximization, other practical considerations arise commonly as well, including the need of encouraging exploration, and that of ensuring certain structural properties of the learned policy due to safety, resource, and operational constraints. These considerations can often be accounted for by resorting to regularized RL, which augments the target value function with a structure-promoting regularization term. Focusing on an infinite-horizon discounted tabular Markov decision process, this paper proposes a generalized policy mirror descent (GPMD) algorithm for solving regularized RL. As a generalization of policy mirror descent [G. Lan, Math. Program., 198 (2023), pp. 1059-1106], the proposed algorithm accommodates a general class of convex regularizers as well as a broad family of Bregman divergence in cognizance of the regularizer in use. We demonstrate that our algorithm converges linearly to the global solution over an entire range of learning rates, in a dimension-free fashion, even when the regularizer lacks strong convexity and smoothness. In addition, this linear convergence feature is provably stable in the face of inexact policy evaluation and imperfect policy updates. Numerical experiments are provided to corroborate the applicability and appealing performance of GPMD. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
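As background for entry 12 above, the generic mirror descent template that GPMD generalizes, assuming a step size η, a gradient (or policy-gradient) estimate g_k, and a Bregman divergence D_h generated by a convex function h (the standard scheme, not the paper's exact regularized policy update):

```latex
x_{k+1} = \arg\min_{x \in \mathcal{C}}
  \left\{ \eta \,\langle g_k, x \rangle + D_h(x, x_k) \right\},
\qquad
D_h(x, y) = h(x) - h(y) - \langle \nabla h(y),\, x - y \rangle .
```

In the regularized RL setting of the abstract, the regularization term is added to the objective and the Bregman divergence is chosen in cognizance of that regularizer.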
13. OPTIMAL CONTROL OF THE THERMISTOR PROBLEM IN THREE SPATIAL DIMENSIONS, PART 2: OPTIMALITY CONDITIONS.
- Author
-
MEINLSCHMIDT, H., MEYER, C., and REHBERG, J.
- Subjects
OPTIMAL control theory ,MATHEMATICAL optimization ,THERMISTORS ,DIRECT currents ,PARTIAL differential equations - Abstract
This paper is concerned with the state-constrained optimal control of the three-dimensional thermistor problem, a fully quasilinear coupled system of a parabolic and elliptic PDE with mixed boundary conditions. This system models the heating of a conducting material by means of direct current. Local existence, uniqueness, and continuity for the state system, as well as existence of optimal solutions (admitting global-in-time solutions) to the optimization problem, were shown in the companion paper of this work. In this part, we address further properties of the set of controls whose associated solutions exist globally, such as openness, which includes analysis of the linearized state system via maximal parabolic regularity. The adjoint system involving measures is investigated using a duality argument. These results allow us to derive first-order necessary conditions for the optimal control problem in the form of a qualified optimality system in which we do not need to refer to the set of controls admitting global solutions. The theoretical findings are illustrated by numerical results. This work is the second of two papers on the three-dimensional thermistor problem. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
14. CONCURRENT SHAPE OPTIMIZATION OF THE PART AND SCANNING PATH FOR POWDER BED FUSION ADDITIVE MANUFACTURING.
- Author
-
BOISSIER, MATHILDE, ALLAIRE, GRÉGOIRE, and TOURNIER, CHRISTOPHE
- Subjects
STRUCTURAL optimization ,MANUFACTURING processes ,MATHEMATICAL optimization ,TWO-dimensional models ,CONSTRUCTION planning - Abstract
This paper investigates the concurrent path planning optimization and the built part structural optimization for powder bed fusion additive manufacturing processes. State-of-the-art studies rely on existing trajectory patterns for a fixed built shape. The shape is often optimized for its mechanical performance but rarely in a combined way with its path planning building process. In this work, a two-dimensional model (in the layer plane) of the process is proposed under a steady state assumption. Then a systematic path optimization approach, free from a priori restrictions and previously developed in [M. Boissier, G. Allaire, and C. Tournier, Struct. Multidiscip. Optim., 61 (2020), pp. 2437--2466], is coupled to a structural optimization tool, both of them based on shape optimization theory. This multiphysics optimization leads to innovative and promising results. First, they confirm that it is essential to take into account the part shape in the scanning path optimization. Second, they also give hints to some design recipes: the material and the source parameters must be related to the thickness of the bars that compose the structure. Indeed, the thickness of a bar is a key ingredient which determines the type of path pattern to scan it: straight line, Omega-pattern, and Wave-pattern. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
15. RIEMANNIAN CONJUGATE GRADIENT METHODS: GENERAL FRAMEWORK AND SPECIFIC ALGORITHMS WITH CONVERGENCE ANALYSES.
- Author
-
HIROYUKI SATO
- Subjects
RIEMANNIAN manifolds ,CONJUGATE gradient methods ,EUCLIDEAN algorithm ,ALGORITHMS ,MATHEMATICAL optimization - Abstract
Conjugate gradient methods are important first-order optimization algorithms both in Euclidean spaces and on Riemannian manifolds. However, while various types of conjugate gradient methods have been studied in Euclidean spaces, there are relatively fewer studies for those on Riemannian manifolds (i.e., Riemannian conjugate gradient methods). This paper proposes a novel general framework that unifies existing Riemannian conjugate gradient methods such as the ones that utilize a vector transport or inverse retraction. The proposed framework also develops other methods that have not been covered in previous studies. Furthermore, conditions for the convergence of a class of algorithms in the proposed framework are clarified. Moreover, the global convergence properties of several specific types of algorithms are extensively analyzed. The analysis provides the theoretical results for some algorithms in a more general setting than the existing studies and new developments for other algorithms. Numerical experiments are performed to confirm the validity of the theoretical results. The experimental results are used to compare the performances of several specific algorithms in the proposed framework. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
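To fix ideas for entry 15 above, a minimal sketch of the Euclidean nonlinear conjugate gradient iteration (Fletcher-Reeves coefficient) that the Riemannian framework generalizes; the quadratic objective and exact line search are illustrative assumptions, and the retractions/vector transports central to the paper are not shown:

```python
import numpy as np

def cg_fletcher_reeves(Q, b, x0, iters=50, tol=1e-10):
    """Minimize f(x) = 0.5 x^T Q x - b^T x (Q symmetric positive definite)
    with nonlinear CG using the Fletcher-Reeves coefficient."""
    x = x0.copy()
    g = Q @ x - b                         # gradient of f
    d = -g                                # initial search direction
    for _ in range(iters):
        if np.linalg.norm(g) < tol:
            break
        alpha = -(g @ d) / (d @ Q @ d)    # exact line search along d
        x = x + alpha * d
        g_new = Q @ x - b
        beta = (g_new @ g_new) / (g @ g)  # Fletcher-Reeves coefficient
        d = -g_new + beta * d             # on a manifold, g and d would be
        g = g_new                         # transported between tangent spaces
    return x

Q = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(cg_fletcher_reeves(Q, b, np.zeros(2)))
```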
16. PROBABILISTICALLY CONSTRAINED LINEAR PROGRAMS AND RISK-ADJUSTED CONTROLLER DESIGN.
- Author
-
Lagoa, Constantino M., Xiang Li, and Sznaier, Mario
- Subjects
DISCRETE-time systems ,LINEAR time invariant systems ,PROBABILITY theory ,MATHEMATICAL analysis ,MATHEMATICAL optimization ,LINEAR algebra - Abstract
The focal point of this paper is the probabilistically constrained linear program (PCLP) and how it can be applied to control system design under risk constraints. The PCLP is the counterpart of the classical linear program, where it is assumed that there is random uncertainty in the constraints and, therefore, the deterministic constraints are replaced by probabilistic ones. It is shown that for a wide class of probability density functions, called log-concave symmetric densities, the PCLP is a convex program. An equivalent formulation of the PCLP is also presented which provides insight into numerical implementation. This concept is applied to control system design. It is shown how the results in this paper can be applied to the design of controllers for discrete-time systems to obtain a closed loop system with a well-defined risk of violating the so-called property of superstability. Furthermore, we address the problem of risk-adjusted pole placement. [ABSTRACT FROM AUTHOR]
- Published
- 2005
- Full Text
- View/download PDF
17. GENERALIZING THE OPTIMIZED GRADIENT METHOD FOR SMOOTH CONVEX MINIMIZATION.
- Author
-
DONGHWAN KIM and FESSLER, JEFFREY A.
- Subjects
CONVEX functions ,MATHEMATICAL bounds ,SMOOTHNESS of functions ,ALGORITHMS ,MATHEMATICAL optimization - Abstract
This paper generalizes the optimized gradient method (OGM) [Y. Drori and M. Teboulle, Math. Program., 145 (2014), pp. 451-482], [D. Kim and J. A. Fessler, Math. Program., 159 (2016), pp. 81-107], [D. Kim and J. A. Fessler, J. Optim. Theory Appl., 172 (2017), pp. 187-205] that achieves the optimal worst-case cost function bound of first-order methods for smooth convex minimization [Y. Drori, J. Complexity, 39 (2017), pp. 1-16]. Specifically, this paper studies a generalized formulation of OGM and analyzes its worst-case rates in terms of both the function value and the norm of the function gradient. This paper also develops a new algorithm called OGM-OG that is in the generalized family of OGM and that has the best known analytical worst-case bound with rate O(1/N^1.5) on the decrease of the gradient norm among fixed-step first-order methods. This paper also proves that Nesterov's fast gradient method [Y. Nesterov, Dokl. Akad. Nauk. USSR, 269 (1983), pp. 543-547], [Y. Nesterov, Math. Program., 103 (2005), pp. 127-152] has an O(1/N^1.5) worst-case gradient norm rate but with a constant larger than that of OGM-OG. The proof is based on the worst-case analysis called the Performance Estimation Problem in [Y. Drori and M. Teboulle, Math. Program., 145 (2014), pp. 451-482]. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
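For entry 17 above, a minimal sketch of Nesterov's fast gradient method, one of the comparison methods named in the abstract (standard momentum form, assuming a known Lipschitz constant L of the gradient; OGM and OGM-OG use different, optimized coefficients that are not reproduced here):

```python
import numpy as np

def nesterov_fgm(grad, x0, L, iters=100):
    """Nesterov's fast gradient method for smooth convex minimization.

    grad: callable returning the gradient; L: Lipschitz constant of grad.
    """
    x = y = x0.copy()        # x is the extrapolated point, y the iterate
    t = 1.0
    for _ in range(iters):
        y_new = x - grad(x) / L                        # gradient step
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        x = y_new + ((t - 1.0) / t_new) * (y_new - y)  # momentum step
        y, t = y_new, t_new
    return y

# usage: minimize 0.5 * ||A x - b||^2 with L = largest eigenvalue of A^T A
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, -1.0])
L = np.linalg.eigvalsh(A.T @ A).max()
sol = nesterov_fgm(lambda x: A.T @ (A @ x - b), np.zeros(2), L)
```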
18. ERGODICITY OF REGIME-SWITCHING FUNCTIONAL DIFFUSIONS WITH INFINITE DELAY AND APPLICATION TO A NUMERICAL ALGORITHM FOR STOCHASTIC OPTIMIZATION.
- Author
-
BANBAN SHI, YA WANG, and FUKE WU
- Subjects
LAW of large numbers ,MATHEMATICAL optimization ,STOCHASTIC approximation ,INFINITE processes ,INVARIANT measures - Abstract
It is well known that ergodicity and the strong law of large numbers (SLLN) play important roles in stochastic control and stochastic approximations. For a class of regime-switching functional diffusion processes with infinite delay, this paper establishes exponential ergodicity in a Wasserstein distance under certain "averaging conditions." It follows from such an ergodicity property that an SLLN for additive functionals of the regime-switching diffusions is obtained based on the property of uniform mixing. Finally, an example is presented to illustrate the application of ergodicity and the SLLN to a numerical algorithm for stochastic optimization. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
19. IMPROVED DISCRETE BOUNDARY TYPE SHAPE GRADIENTS FOR PDE-CONSTRAINED SHAPE OPTIMIZATION.
- Author
-
WEI GONG, JIAJIE LI, and SHENGFENG ZHU
- Subjects
- *
STRUCTURAL optimization , *STOKES flow , *NEUMANN problem , *DIRICHLET problem , *MATHEMATICAL optimization , *STOKES equations - Abstract
We propose in this paper two kinds of continuity-preserving discrete shape gradients of boundary type for PDE-constrained shape optimization. First, a modified boundary shape gradient formula for shape optimization problems governed by elliptic Dirichlet problems was proposed recently based on the discrete variational outward normal derivatives. The advantages of this new formula over the previous one lie in the improved numerical accuracy and the continuity along the boundary. In the current paper we generalize this new formula to other shape optimization problems, including the Laplace and Stokes eigenvalue optimization problems, the shape optimization of Stokes or Navier--Stokes flows, and the interface identification problems. We verify this new formula's numerical accuracy in different shape optimization problems and investigate its performance in several popular shape optimization algorithms. The second contribution of this paper is to propose a continuous discrete shape gradient of boundary type for Neumann problems, using the ideas of gradient recovery techniques. The continuity property of the discrete boundary shape gradient is helpful in certain shape optimization algorithms and provides certain flexibility compared to the previous discontinuous ones, which are extensively discussed in the current paper. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
20. A GLOBAL DUAL ERROR BOUND AND ITS APPLICATION TO THE ANALYSIS OF LINEARLY CONSTRAINED NONCONVEX OPTIMIZATION.
- Author
-
JIAWEI ZHANG and ZHI-QUAN LUO
- Subjects
CONSTRAINED optimization ,MATHEMATICAL optimization ,POINT set theory - Abstract
Error bound analysis, which estimates the distance of a point to the solution set of an optimization problem using the optimality residual, is a powerful tool for the analysis of first-order optimization algorithms. In this paper, we use global error bound analysis to study the iteration complexity of a first-order algorithm for a linearly constrained nonconvex minimization problem. We develop a global dual error bound analysis for a regularized version of this nonconvex problem by using a novel "decomposition" technique. Equipped with this global dual error bound, we prove that a suitably designed primal-dual first-order method can generate an ε-stationary solution of the linearly constrained nonconvex minimization problem within O(1/ε²) iterations, which is the best known iteration complexity for this class of nonconvex problems. The iteration complexity of our algorithm for finding an ε-stationary solution is O(1/ε²), which improves the best known complexity of O(1/ε³) for the problem under consideration. Furthermore, when the objective function is quadratic, we establish the linear convergence of the algorithm. Our proof is based on a new potential function and a novel use of error bounds. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
21. Directional Compactly Supported Tensor Product Complex Tight Framelets with Applications to Image Denoising and Inpainting.
- Author
-
Bin Han, Qun Mo, Zhenpeng Zhao, and Xiaosheng Zhuang
- Subjects
TENSOR products ,INPAINTING ,FILTER banks ,IMAGE denoising ,MATHEMATICAL optimization ,IMAGE processing - Abstract
Compactly supported tight framelets are of great interest and importance in both theory and application. In this paper we discuss how to construct directional compactly supported tensor product complex tight framelets having varied directionality and good performance for applications in image processing. Our construction algorithms employ optimization techniques and put extensive emphasis on frequency response and spatial localization of their underlying one-dimensional tight framelet filter banks. Several concrete examples of directional compactly supported tensor product complex tight framelet filter banks are provided in this paper. Our numerical experiments show that such constructed directional compactly supported tensor product complex tight framelets have good performance for applications such as image denoising and inpainting compared with several other state-of-the-art transform-based methods. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
22. STOCHASTIC MULTILEVEL COMPOSITION OPTIMIZATION ALGORITHMS WITH LEVEL-INDEPENDENT CONVERGENCE RATES.
- Author
-
BALASUBRAMANIAN, KRISHNAKUMAR, GHADIMI, SAEED, and NGUYEN, ANTHONY
- Subjects
MATHEMATICAL optimization ,PROBLEM solving ,ALGORITHMS ,MOVING average process - Abstract
In this paper, we study smooth stochastic multilevel composition optimization problems, where the objective function is a nested composition of T functions. We assume access to noisy evaluations of the functions and their gradients, through a stochastic first-order oracle. For solving this class of problems, we propose two algorithms using moving-average stochastic estimates, and analyze their convergence to an ε-stationary point of the problem. We show that the first algorithm, which is a generalization of [S. Ghadimi, A. Ruszczynski, and M. Wang, SIAM J. Optim., 30 (2020), pp. 960-979] to the T-level case, can achieve a sample complexity of O_T(1/ε⁶) by using minibatches of samples in each iteration, where O_T hides constants that depend on T. By modifying this algorithm using linearized stochastic estimates of the function values, we improve the sample complexity to O_T(1/ε⁴). This modification not only removes the requirement of having a minibatch of samples in each iteration, but also makes the algorithm parameter-free and easy to implement. To the best of our knowledge, this is the first time that such an online algorithm designed for the (un)constrained multilevel setting obtains the same sample complexity as the smooth single-level setting, under standard assumptions (unbiasedness and boundedness of the second moments) on the stochastic first-order oracle. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
23. DEGENERATE FIRST-ORDER QUASI-VARIATIONAL INEQUALITIES: AN APPROACH TO APPROXIMATE THE VALUE FUNCTION.
- Author
-
EL FAROUQ, NAÏMA
- Subjects
COST functions ,MATHEMATICAL optimization ,APPROXIMATION theory ,DERIVATIVES (Mathematics) ,PRODUCTION management (Manufacturing) - Abstract
The originality of this paper is to deal with the particular case of null infimum jump costs in the infinite horizon impulse control problem. The value function of such problems is a viscosity solution of the associated classic quasi-variational inequality (QVI), but not the unique one, which makes the value function difficult to characterize. In this paper, a new QVI for which the value function is the unique viscosity solution is given. This allows us to approximate the value function. So, we give some discrete approximations of the new QVI and prove that the approximate value function converges locally uniformly toward the value function of the impulse control problem with zero lower bound of impulse cost. We choose the classic example of continuous-time inventory control in Rⁿ to illustrate the results of this paper. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
24. MG-FIM: A MULTI-GPU FAST ITERATIVE METHOD USING ADAPTIVE DOMAIN DECOMPOSITION.
- Author
-
SUMIN HONG, GANGHEE JANG, and WON-KI JEONG
- Subjects
DOMAIN decomposition methods ,EIKONAL equation ,PARALLEL programming ,GRAPHICS processing units ,MATHEMATICAL optimization ,DECOMPOSITION method - Abstract
Applying the latest parallel computing technology has become a recent trend in Eikonal equation solvers. Many recent studies have focused on parallelization of Eikonal solvers for multithreaded CPUs or single GPU systems. However, multi-GPU Eikonal solvers are largely underresearched owing to their complexity in terms of data and task management. In this paper, we propose a novel adaptive domain decomposition method to realize an efficient implementation of the block-based fast iterative method on multiple GPUs. The proposed method progressively expands the computational domain assigned to each GPU to maximize load balancing and employs a locality-aware clustering algorithm to minimize inter-GPU communication overhead. We also propose various low- and high-level optimization techniques for GPU computing, such as overlapping CPU and GPU computation and inter-GPU data transfer using multiple CUDA streams. Thus, we effectively circumvent performance issues in the naïve parallelization using a regular decomposition method. The proposed method scales up to 6.6× for eight GPUs. We demonstrate that our efficient parallel implementation of the proposed method achieves an improvement in runtime performance under various experimental setups. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
25. Learnable Empirical Mode Decomposition based on Mathematical Morphology.
- Author
-
Velasco-Forero, Santiago, Pagès, R., and Angulo, Jesus
- Subjects
HILBERT-Huang transform ,MATHEMATICAL morphology ,MATHEMATICAL decomposition ,MATHEMATICAL optimization - Abstract
Empirical mode decomposition (EMD) is a fully data-driven method for the multiscale decomposition of signals into a set of components known as intrinsic mode functions. EMD is based on lower and upper envelopes of the signal in an iterated decomposition scheme. In this paper, we put forward a simple yet effective method to learn EMD from data by means of morphological operators. We propose an end-to-end framework by incorporating morphological EMD operators into deeply learned representations, trained using the standard backpropagation principle and gradient descent-based optimization algorithms. Three generalizations of morphological EMD are proposed: (a) by varying the family of structuring functions, (b) by varying the pair of morphological operators used to calculate the envelopes, and (c) by considering a convex sum of envelopes instead of the mean point used in classical EMD. We discuss in particular the invariances that are induced by the morphological EMD representation. Experimental results on supervised classification of hyperspectral images by one-dimensional convolutional networks demonstrate the interest of our method. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
26. MAXIMUM BLOCK IMPROVEMENT AND POLYNOMIAL OPTIMIZATION.
- Author
-
Bilian Chen, Simai He, Zhening Li, and Shuzhong Zhang
- Subjects
POLYNOMIALS ,MATHEMATICAL optimization ,ALGEBRA ,ALGORITHMS ,MATHEMATICAL inequalities - Abstract
In this paper we propose an efficient method for solving the spherically constrained homogeneous polynomial optimization problem. The new approach has the following three main ingredients. First, we establish a block coordinate descent type search method for nonlinear optimization, with the novelty being that we accept only a block update that achieves the maximum improvement, hence the name of our new search method: maximum block improvement (MBI). Convergence of the sequence produced by the MBI method to a stationary point is proved. Second, we establish that maximizing a homogeneous polynomial over a sphere is equivalent to its tensor relaxation problem; thus we can maximize a homogeneous polynomial function over a sphere by its tensor relaxation via the MBI approach. Third, we propose a scheme to reach a KKT point of the polynomial optimization, provided that a stationary solution for the relaxed tensor problem is available. Numerical experiments have shown that our new method works very efficiently: for a majority of the test instances that we have experimented with, the method finds the global optimal solution at a low computational cost. [ABSTRACT FROM AUTHOR]
- Published
- 2012
- Full Text
- View/download PDF
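To illustrate the maximum block improvement rule from entry 26 above on a deliberately simple problem (single-coordinate blocks on a convex quadratic, an illustrative setup; the paper's actual setting is spherically constrained homogeneous polynomial optimization):

```python
import numpy as np

def mbi_quadratic(Q, b, x0, iters=100, tol=1e-12):
    """Minimize f(x) = 0.5 x^T Q x - b^T x one coordinate at a time:
    every coordinate's exact minimizer is tried, and only the update
    giving the maximum improvement is accepted (the MBI rule)."""
    x = x0.astype(float).copy()
    f = lambda v: 0.5 * v @ Q @ v - b @ v
    for _ in range(iters):
        best_gain, best_i, best_val = tol, None, None
        for i in range(len(x)):
            xi_new = (b[i] - Q[i] @ x + Q[i, i] * x[i]) / Q[i, i]
            trial = x.copy()
            trial[i] = xi_new
            gain = f(x) - f(trial)
            if gain > best_gain:
                best_gain, best_i, best_val = gain, i, xi_new
        if best_i is None:              # no block improves: stationary point
            break
        x[best_i] = best_val
    return x

Q = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(mbi_quadratic(Q, b, np.zeros(2)))
```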
27. DISTRIBUTED SUBGRADIENT-FREE STOCHASTIC OPTIMIZATION ALGORITHM FOR NONSMOOTH CONVEX FUNCTIONS OVER TIME-VARYING NETWORKS.
- Author
-
YINGHUI WANG, WENXIAO ZHAO, YIGUANG HONG, and MOHSEN ZAMANI
- Subjects
NONSMOOTH optimization ,TIME-varying networks ,CONVEX functions ,DISTRIBUTED algorithms ,MATHEMATICAL optimization ,ALGORITHMS - Abstract
In this paper we consider a distributed stochastic optimization problem without gradient/subgradient information for local objective functions and subject to local convex constraints. Objective functions may be nonsmooth and observed with stochastic noises, and the network for the distributed design is time-varying. By adding stochastic dithers to local objective functions and constructing randomized differences motivated by the Kiefer--Wolfowitz algorithm, we propose a distributed subgradient-free algorithm for finding the global minimizer with local observations. Moreover, we prove that the consensus of estimates and global minimization can be achieved with probability one over the time-varying network, and we obtain the convergence rate of the mean average of estimates as well. Finally, we give numerical examples to illustrate the performance of the proposed algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
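As background for entry 27 above, a minimal sketch of a coordinate-wise two-point (Kiefer--Wolfowitz-style) gradient estimate built only from noisy function values; the dither/randomized-difference construction and the distributed, time-varying-network aspects of the paper are not reproduced, and the noise model below is an illustrative assumption:

```python
import numpy as np

def kw_gradient_estimate(f_noisy, x, delta=1e-2):
    """Estimate grad f(x) from 2n noisy evaluations via central differences."""
    n = len(x)
    g = np.zeros(n)
    for i in range(n):
        e = np.zeros(n)
        e[i] = delta
        g[i] = (f_noisy(x + e) - f_noisy(x - e)) / (2.0 * delta)
    return g

# usage on a noisy quadratic (illustrative noise model)
rng = np.random.default_rng(0)
f_noisy = lambda x: np.sum((x - 1.0) ** 2) + 1e-3 * rng.standard_normal()
x = np.zeros(3)
for k in range(200):                     # simple gradient-free descent loop
    x -= (0.5 / (k + 1)) * kw_gradient_estimate(f_noisy, x)
```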
28. OPTIMIZATION METHODS ON RIEMANNIAN MANIFOLDS VIA EXTREMUM SEEKING ALGORITHMS.
- Author
-
TARINGOO, FARZIN, DOWER, PETER M., NEŠIĆ, DRAGAN, and YING TAN
- Subjects
RIEMANNIAN manifolds ,MATHEMATICAL optimization ,VARIATIONAL principles ,EUCLIDEAN domains ,PERTURBATION theory - Abstract
This paper formulates the problem of extremum seeking for optimization of cost functions defined on Riemannian manifolds. We extend the conventional extremum seeking algorithms for optimization problems in Euclidean spaces to optimization of cost functions defined on smooth Riemannian manifolds. This problem falls within the category of online optimization methods. We introduce the notion of geodesic dithers, which is a perturbation of the optimizing trajectory in the tangent bundle of the ambient state manifolds, and obtain the extremum seeking closed loop as a perturbation of the averaged gradient system. The main results are obtained by applying closeness of solutions and averaging theory on Riemannian manifolds. The main results are further extended for optimization on Lie groups. Numerical examples on the Stiefel manifold V_{3,2} and the Lie group SE(3) are presented at the end of the paper. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
29. DELAY-INTEGRAL-QUADRATIC CONSTRAINTS AND ABSOLUTE STABILITY OF TIME-PERIODIC FEEDBACK SYSTEMS.
- Author
-
Altshuller, D. A.
- Subjects
INTEGRAL (Network analysis) ,NONLINEAR systems ,GEOMETRY ,MATHEMATICAL optimization ,MATHEMATICAL analysis ,QUADRATIC equations ,MATHEMATICAL inequalities ,PROBABILITY theory ,TIME delay systems - Abstract
The paper considers the problem of absolute stability of systems with time-periodic nonlinear blocks. The approach presented in this paper is based on the so-called quadratic criterion and relies on integral-quadratic inequalities (constraints), which also involve time delays. The results are given in the frequency domain. The paper also includes geometric interpretation and numerical treatment of the new stability criteria. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
30. A Subgradient Algorithm for Data-Rate Optimization in the Remote State Estimation Problem.
- Author
-
Kawan, Christoph, Hafstein, Sigurdur, and Giesl, Peter
- Subjects
KALMAN filtering ,MATHEMATICAL optimization ,RIEMANNIAN metric ,VECTOR fields ,METRIC spaces ,DIGITAL communications - Abstract
In the remote state estimation problem, an observer tries to reconstruct the state of a dynamical system at a remote location, where no direct sensor measurements are available. The observer only has access to information sent through a digital communication channel with a finite capacity. The recently introduced notion of restoration entropy provides a way to determine the smallest channel capacity above which an observer can be designed that observes the system without a degradation of the initial observation quality. In this paper, we propose a subgradient algorithm to estimate the restoration entropy via the computation of an appropriate Riemannian metric on the state space, which allows us to determine the approximate value of the entropy from the time-one map (in the discrete-time case) or the generating vector field (for ODE systems), respectively. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
31. LOCAL CONVERGENCE ANALYSIS OF AUGMENTED LAGRANGIAN METHODS FOR PIECEWISE LINEAR-QUADRATIC COMPOSITE OPTIMIZATION PROBLEMS.
- Author
-
HANG, NGUYEN T. V. and SARABI, M. EBRAHIM
- Subjects
MATHEMATICAL optimization ,LAGRANGE multiplier - Abstract
Second-order sufficient conditions for local optimality have been playing an important role in local convergence analysis of optimization algorithms. In this paper, we demonstrate that this condition alone suffices to justify the linear convergence of the primal-dual sequence, generated by the augmented Lagrangian method for piecewise linear-quadratic composite optimization problems, even when the Lagrange multiplier in this class of problems is not unique. Furthermore, we establish the equivalence between the second-order sufficient condition and the quadratic growth condition of the augmented Lagrangian problem for this class of composite optimization problems. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
32. On the Convergence Rate of Projected Gradient Descent for a Back-Projection Based Objective.
- Author
-
Tirer, Tom and Giryes, Raja
- Subjects
INVERSE problems ,LINEAR operators ,MATHEMATICAL optimization ,LENGTH measurement ,IMAGE reconstruction - Abstract
Ill-posed linear inverse problems appear in many scientific setups and are typically addressed by solving optimization problems, which are composed of data fidelity and prior terms. Recently, several works have considered a back-projection (BP) based fidelity term as an alternative to the common least squares (LS) and demonstrated excellent results for popular inverse problems. These works have also empirically shown that using the BP term, rather than the LS term, requires fewer iterations of optimization algorithms. In this paper, we examine the convergence rate of the projected gradient descent algorithm for the BP objective. Our analysis allows us to identify an inherent source for its faster convergence compared to using the LS objective, while making only mild assumptions. We also analyze the more general proximal gradient method under a relaxed contraction condition on the proximal mapping of the prior. This analysis further highlights the advantage of BP when the linear measurement operator is badly conditioned. Numerical experiments with both ℓ1-norm and GAN-based priors corroborate our theoretical results. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
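For entry 32 above, a minimal sketch of projected gradient descent with the usual least-squares fidelity term; the projection onto the nonnegative orthant and the 1/L step size are illustrative assumptions, and the back-projection objective analyzed in the paper is not reproduced here:

```python
import numpy as np

def projected_gradient_descent(A, b, project, step, iters=200):
    """Minimize 0.5 * ||A x - b||^2 over a convex set C via
    x <- P_C( x - step * A^T (A x - b) )."""
    x = project(np.zeros(A.shape[1]))
    for _ in range(iters):
        grad = A.T @ (A @ x - b)
        x = project(x - step * grad)
    return x

A = np.array([[1.0, 2.0], [0.0, 1.0], [1.0, -1.0]])
b = np.array([1.0, 2.0, 0.5])
step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L with L = ||A||_2^2
x_nonneg = projected_gradient_descent(A, b, lambda v: np.maximum(v, 0.0), step)
```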
33. EQUIVARIANT SEMIDEFINITE LIFTS AND SUM-OF-SQUARES HIERARCHIES.
- Author
-
FAWZI, HAMZA, SAUNDERSON, JAMES, and PARRILO, PABLO A.
- Subjects
SEMIDEFINITE programming ,POLYTOPES ,MATHEMATICAL optimization ,SUM of squares ,PROBLEM solving ,MATHEMATICAL decomposition - Abstract
A central question in optimization is to maximize (or minimize) a linear function over a given polytope P. To solve such a problem in practice one needs a concise description of the polytope P. In this paper we are interested in representations of P using the positive semidefinite cone: a positive semidefinite lift (PSD lift) of a polytope P is a representation of P as the projection of an affine slice of the positive semidefinite cone S^d_+. Such a representation allows linear optimization problems over P to be written as semidefinite programs of size d. Such representations can be beneficial in practice when d is much smaller than the number of facets of the polytope P. In this paper we are concerned with so-called equivariant PSD lifts (also known as symmetric PSD lifts) which respect the symmetries of the polytope P. We present a representation-theoretic framework to study equivariant PSD lifts of a certain class of symmetric polytopes known as orbitopes. Our main result is a structure theorem where we show that any equivariant PSD lift of size d of an orbitope is of sum-of-squares type where the functions in the sum-of-squares decomposition come from an invariant subspace of dimension smaller than d³. We use this framework to study two well-known families of polytopes, namely the parity polytope and the cut polytope, and we prove exponential lower bounds for equivariant PSD lifts of these polytopes. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
34. NONLINEAR POWER-LIKE AND SVD-LIKE ITERATIVE SCHEMES WITH APPLICATIONS TO ENTANGLED BIPARTITE RANK-1 APPROXIMATION.
- Author
-
CHU, MOODY T. and LIN, MATTHEW M.
- Subjects
EIGENVALUES ,KRONECKER products ,NONLINEAR equations ,CONVEX sets ,BIPARTITE graphs ,DENSITY matrices ,MATHEMATICAL optimization ,MATRIX multiplications - Abstract
Gauging the distance between a mixed state and the convex set of separable states in a bipartite quantum mechanical system over the complex field is an important but challenging task. As a first step toward this difficult problem, this paper investigates the rank-1 approximation of a bipartite system over the real field where the entanglement is characterized in terms of the Kronecker product of density matrices. The approximation is recast in the form of a nonlinear eigenvalue problem and a nonlinear singular value problem for which two iterative methods are proposed, respectively. This study offers insight into and might serve as the building block for the more complicated multipartite systems and higher-rank approximation problems. The main focus is on the convergence analysis. Numerical experiments seem to suggest that these easily constructed solvers have higher efficiency when comparing with some state-of-the-art optimization techniques. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
35. A LAGRANGE MULTIPLIER EXPRESSION METHOD FOR BILEVEL POLYNOMIAL OPTIMIZATION.
- Author
-
JIAWANG NIE, LI WANG, YE, JANE J., and ZHONG, SUHAN
- Subjects
BILEVEL programming ,ALGORITHMS ,MATHEMATICAL optimization ,LAGRANGE multiplier ,POLYNOMIALS - Abstract
This paper studies bilevel polynomial optimization. We propose a method to solve it globally by using polynomial optimization relaxations. Each relaxation is obtained from the Karush-Kuhn-Tucker (KKT) conditions for the lower level optimization and the exchange technique for semi-infinite programming. For KKT conditions, Lagrange multipliers are represented as polynomial or rational functions. The Moment-sum-of-squares relaxations are used to solve the polynomial optimization relaxations. Under some general assumptions, we prove the convergence of the algorithm for solving bilevel polynomial optimization problems. Numerical experiments are presented to show the efficiency of the method. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
36. NONLINEAR CONJUGATE GRADIENT METHODS FOR PDE CONSTRAINED SHAPE OPTIMIZATION BASED ON STEKLOV--POINCARÉ-TYPE METRICS.
- Author
-
BLAUTH, SEBASTIAN
- Subjects
CONJUGATE gradient methods ,STRUCTURAL optimization ,CONSTRAINED optimization ,PARTIAL differential equations ,MATHEMATICAL optimization ,BENCHMARK problems (Computer science) - Abstract
Shape optimization based on shape calculus has received a lot of attention in recent years, particularly regarding the development, analysis, and modification of efficient optimization algorithms. In this paper we propose and investigate nonlinear conjugate gradient methods based on Steklov--Poincaré-type metrics for the solution of shape optimization problems constrained by partial differential equations. We embed these methods into a general algorithmic framework for gradient-based shape optimization methods and discuss the numerical discretization of the algorithms. We numerically compare the proposed nonlinear conjugate gradient methods to the already established gradient descent and limited memory BFGS methods for shape optimization on several benchmark problems. The results show that the proposed nonlinear conjugate gradient methods perform well in practice and that they are an efficient and attractive addition to already established gradient-based shape optimization algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
37. CONVERGENCE OF TRUST-REGION METHODS BASED ON PROBABILISTIC MODELS.
- Author
-
BANDEIRA, A. S., SCHEINBERG, K., and VICENTE, L. N.
- Subjects
PROBABILISTIC generative models ,PROBABILITY theory ,MATHEMATICAL optimization ,STOCHASTIC processes ,INTERPOLATION - Abstract
In this paper we consider the use of probabilistic or random models within a classical trust-region framework for optimization of deterministic smooth general nonlinear functions. Our method and setting differ from many stochastic optimization approaches in two principal ways. First, we assume that the value of the function itself can be computed without noise, in other words, that the function is deterministic. Second, we use random models of higher quality than those produced by the usual stochastic gradient methods. In particular, a first order model based on random approximation of the gradient is required to provide sufficient quality of approximation with probability ≥ 1/2. This is in contrast with stochastic gradient approaches, where the model is assumed to be "correct" only in expectation. As a result of this particular setting, we are able to prove convergence, with probability one, of a trust-region method which is almost identical to the classical method. Moreover, the new method is simpler than its deterministic counterpart as it does not require a criticality step. Hence we show that a standard optimization framework can be used in cases when models are random and may or may not provide good approximations, as long as "good" models are more likely than "bad" models. Our results are based on the use of properties of martingales. Our motivation comes from using random sample sets and interpolation models in derivative-free optimization. However, our framework is general and can be applied with any source of uncertainty in the model. We discuss various applications for our methods in the paper. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
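For context on entry 37 above, a minimal sketch of the classical deterministic trust-region loop (Cauchy-point step, standard ratio test) that the probabilistic-model analysis builds on; the thresholds, the random model construction, and the criticality step mentioned in the abstract are not reproduced, and using the exact gradient and Hessian in the model is an illustrative assumption:

```python
import numpy as np

def trust_region_cauchy(f, grad, hess, x0, radius=1.0, iters=100, eta=0.1):
    """Classical trust-region method using the Cauchy point as the step."""
    x, Delta = x0.astype(float).copy(), radius
    for _ in range(iters):
        g, B = grad(x), hess(x)
        gnorm = np.linalg.norm(g)
        if gnorm < 1e-10:
            break
        gBg = g @ B @ g
        tau = 1.0 if gBg <= 0 else min(gnorm ** 3 / (Delta * gBg), 1.0)
        s = -tau * (Delta / gnorm) * g            # Cauchy point
        pred = -(g @ s + 0.5 * s @ B @ s)         # predicted model decrease
        rho = (f(x) - f(x + s)) / pred            # actual / predicted decrease
        if rho >= eta:
            x = x + s                             # accept the step
        if rho > 0.75:
            Delta *= 2.0                          # expand the trust region
        elif rho < 0.25:
            Delta *= 0.5                          # shrink the trust region
    return x

# usage on the Rosenbrock function (illustrative)
f = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
grad = lambda x: np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
                           200 * (x[1] - x[0]**2)])
hess = lambda x: np.array([[2 - 400 * (x[1] - 3 * x[0]**2), -400 * x[0]],
                           [-400 * x[0], 200.0]])
xstar = trust_region_cauchy(f, grad, hess, np.array([-1.0, 1.0]), iters=500)
```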
38. A LINESEARCH-BASED DERIVATIVE-FREE APPROACH FOR NONSMOOTH CONSTRAINED OPTIMIZATION.
- Author
-
FASANO, G., LIUZZI, G., LUCIDI, S., and RINALDI, F.
- Subjects
LIPSCHITZ spaces ,MATHEMATICAL optimization ,CONSTRAINED optimization ,FUNCTION spaces ,DIRECTIONAL derivatives - Abstract
In this paper, we propose new linesearch-based methods for nonsmooth constrained optimization problems when first-order information on the problem functions is not available. In the first part, we describe a general framework for bound-constrained problems and analyze its convergence toward stationary points, using the Clarke-Jahn directional derivative. In the second part, we consider inequality constrained optimization problems where both objective function and constraints can possibly be nonsmooth. In this case, we first split the constraints into two subsets: difficult general nonlinear constraints and simple bound constraints on the variables. Then, we use an exact penalty function to tackle the difficult constraints and we prove that the original problem can be reformulated as the bound-constrained minimization of the proposed exact penalty function. Finally, we use the framework developed for the bound-constrained case to solve the penalized problem. Moreover, we prove that every accumulation point, under standard assumptions on the search directions, of the generated sequence of iterates is a stationary point of the original constrained problem. In the last part of the paper, we report extended numerical results on both bound-constrained and nonlinearly constrained problems, showing that our approach is promising when compared to some state-of-the-art codes from the literature. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
39. THE CONVEX FEASIBLE SET ALGORITHM FOR REAL TIME OPTIMIZATION IN MOTION PLANNING.
- Author
-
CHANGLIU LIU, CHUNG-YEN LIN, and MASAYOSHI TOMIZUKA
- Subjects
COMPUTER programming ,MATHEMATICAL optimization ,ARTIFICIAL intelligence ,OPTIMAL control theory ,ALGORITHMS - Abstract
With the development of robotics, there are growing needs for real time motion planning. However, due to obstacles in the environment, the planning problem is highly nonconvex, which makes it difficult to achieve real time computation using existing nonconvex optimization algorithms. This paper introduces the convex feasible set algorithm, which is a fast algorithm for nonconvex optimization problems that have convex costs and nonconvex constraints. The idea is to find a convex feasible set for the original problem and iteratively solve a sequence of subproblems using the convex constraints. The feasibility and the convergence of the proposed algorithm are proved in the paper. The application of this method on motion planning for mobile robots is discussed. The simulations demonstrate the effectiveness of the proposed algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
40. ON DISCRETE SHAPE GRADIENTS OF BOUNDARY TYPE FOR PDE-CONSTRAINED SHAPE OPTIMIZATION.
- Author
-
WEI GONG and SHENGFENG ZHU
- Subjects
STRUCTURAL optimization ,ADJOINT differential equations ,BOUNDARY value problems ,MATHEMATICAL optimization ,EQUATIONS of state - Abstract
Shape gradients have been widely used in numerical shape gradient descent algorithms for shape optimization. The two types of shape gradients, i.e., the distributed one and the boundary type, are equivalent at the continuous level but exhibit different numerical behaviors after finite element discretization. To be more specific, the boundary type shape gradient is more popular in practice due to its concise formulation and convenience in combining with shape optimization algorithms but has lower numerical accuracy. In this paper we provide a simple yet useful boundary correction for the normal derivatives of the state and adjoint equations, motivated by their continuous variational forms, to increase the accuracy and possible effectiveness of the boundary shape gradient in PDE-constrained shape optimization. We consider particularly the state equation with Dirichlet boundary conditions and provide a preliminary error estimate for the correction. Numerical results show that the corrected boundary type shape gradient has comparable accuracy to that of the distributed one. Moreover, we give a theoretical explanation for the comparable numerical accuracy of the boundary type shape gradient with that of the distributed shape gradient for Neumann boundary value problems. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
41. CONSISTENT APPROXIMATIONS FOR THE OPTIMAL CONTROL OF CONSTRAINED SWITCHED SYSTEMS--PART 1: A CONCEPTUAL ALGORITHM.
- Author
-
VASUDEVAN, RAMANARAYAN, GONZALEZ, HUMBERTO, BAJCSY, RUZENA, and SASTRY, S. SHANKAR
- Subjects
OPTIMAL control theory ,SWITCHING theory ,MATHEMATICAL optimization ,APPROXIMATION theory ,MATHEMATICAL functions - Abstract
Switched systems, or systems whose control parameters include a continuous-valued input and a discrete-valued input which corresponds to the mode of the system that is active at a particular instant in time, have been shown to be highly effective in modeling a variety of physical phenomena. Unfortunately, the construction of an optimal control algorithm for such systems has proved difficult since it demands some form of optimal mode scheduling. In a pair of papers, we construct a first order optimization algorithm to address this problem. Our approach, which we prove in this paper converges to local minimizers of the constrained optimal control problem, first relaxes the discrete-valued input, performs traditional optimal control, and then projects the constructed relaxed discrete-valued input back to a pure discrete-valued input by employing an extension to the classical chattering lemma that we formalize. In the second part of this pair of papers, we describe how this conceptual algorithm can be recast in order to devise an implementable algorithm that constructs a sequence of points by recursive application that converge to local minimizers of the optimal control problem for switched systems. [ABSTRACT FROM AUTHOR]
- Published
- 2013
- Full Text
- View/download PDF
42. TUNING MULTIGRID METHODS WITH ROBUST OPTIMIZATION AND LOCAL FOURIER ANALYSIS.
- Author
-
BROWN, JED, YUNHUI HE, MACLACHLAN, SCOTT, MENICKELLY, MATT, and WILD, STEFAN M.
- Subjects
ROBUST optimization ,FOURIER analysis ,MULTIGRID methods (Numerical analysis) ,DOMAIN decomposition methods ,ANALYTICAL solutions ,NUMBER systems ,MATHEMATICAL optimization - Abstract
Local Fourier analysis is a useful tool for predicting and analyzing the performance of many efficient algorithms for the solution of discretized PDEs, such as multigrid and domain decomposition methods. The crucial aspect of local Fourier analysis is that it can be used to minimize an estimate of the spectral radius of a stationary iteration, or the condition number of a preconditioned system, in terms of a symbol representation of the algorithm. In practice, this is a "minimax" problem, minimizing with respect to solver parameters the appropriate measure of solver work, which involves maximizing over the Fourier frequency. Often, several algorithmic parameters may be determined by local Fourier analysis in order to obtain efficient algorithms. Analytical solutions to minimax problems are rarely possible beyond simple problems; the status quo in local Fourier analysis involves grid sampling, which is prohibitively expensive in high dimensions. In this paper, we propose and explore optimization algorithms to solve these problems efficiently. Several examples, with known and unknown analytical solutions, are presented to show the effectiveness of these approaches. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
43. What Tropical Geometry Tells Us about the Complexity of Linear Programming.
- Author
-
Allamigeon, Xavier, Benchimol, Pascal, Gaubert, Stéphane, and Joswig, Michael
- Subjects
GEOMETRY ,MATHEMATICAL optimization ,LINEAR programming ,GAME theory ,INTERIOR-point methods ,ALGORITHMS - Abstract
Tropical geometry has been recently used to obtain new complexity results in convex optimization and game theory. In this paper, we present an application of this approach to a famous class of algorithms for linear programming, i.e., log-barrier interior point methods. We show that these methods are not strongly polynomial by constructing a family of linear programs with 3r + 1 inequalities in dimension 2r for which the number of iterations performed is in Ω(2^r). The total curvature of the central path of these linear programs is also exponential in r, disproving a continuous analogue of the Hirsch conjecture proposed by Deza, Terlaky, and Zinchenko. These results are obtained by analyzing the tropical central path, which is the piecewise linear limit of the central paths of parameterized families of classical linear programs viewed through "logarithmic glasses." This allows us to provide combinatorial lower bounds for the number of iterations and the total curvature in a general setting. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
44. TRUST-REGION NEWTON-CG WITH STRONG SECOND-ORDER COMPLEXITY GUARANTEES FOR NONCONVEX OPTIMIZATION.
- Author
-
CURTIS, FRANK E., ROBINSON, DANIEL P., ROYER, CLÉMENT W., and WRIGHT, STEPHEN J.
- Subjects
NEWTON-Raphson method ,CONJUGATE gradient methods ,MATHEMATICAL optimization ,SURETYSHIP & guaranty ,QUASI-Newton methods ,LANCZOS method - Abstract
Worst-case complexity guarantees for nonconvex optimization algorithms have been a topic of growing interest. Multiple frameworks that achieve the best known complexity bounds among a broad class of first- and second-order strategies have been proposed. These methods have often been designed primarily with complexity guarantees in mind and, as a result, represent a departure from the algorithms that have proved to be the most effective in practice. In this paper, we consider trust-region Newton methods, one of the most popular classes of algorithms for solving nonconvex optimization problems. By introducing slight modifications to the original scheme, we obtain two methods--one based on exact subproblem solves and one exploiting inexact subproblem solves as in the popular "trust-region Newton-conjugate gradient" (trust-region Newton-CG) method--with iteration and operation complexity bounds that match the best known bounds for the aforementioned class of first- and second-order methods. The resulting trust-region Newton-CG method also retains the attractive practical behavior of classical trust-region Newton-CG, which we demonstrate with numerical comparisons on a standard benchmark test set. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
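The classical trust-region Newton-CG baseline referred to in entry 44 is available off the shelf, for example as SciPy's "trust-ncg" solver. The sketch below simply runs that solver on a standard nonconvex test function; it does not implement the modified methods proposed in the paper.

import numpy as np
from scipy.optimize import minimize, rosen, rosen_der, rosen_hess

# Classical trust-region Newton-CG on the (nonconvex) Rosenbrock function.
# Each iteration approximately solves the quadratic trust-region subproblem
# with a conjugate-gradient inner loop.
x0 = np.array([-1.2, 1.0, -0.5, 2.0])
res = minimize(rosen, x0, method="trust-ncg",
               jac=rosen_der, hess=rosen_hess,
               options={"gtol": 1e-8})
print("minimizer:", res.x)                      # expected: all ones
print("iterations:", res.nit, "objective:", res.fun)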
45. DUALITY AND STABILITY IN COMPLEX MULTIAGENT STATE-DEPENDENT NETWORK DYNAMICS.
- Author
-
ETESAMI, S. RASOUL
- Subjects
MULTIAGENT systems ,LYAPUNOV stability ,NEWTON-Raphson method ,SUBGRADIENT methods ,MATHEMATICAL optimization ,INTERIOR-point methods ,SEQUENTIAL analysis - Abstract
Despite significant progress on stability analysis of conventional multiagent networked systems with weakly coupled state-network dynamics, most of the existing results have shortcomings in addressing multiagent systems with highly coupled state-network dynamics. Motivated by numerous applications of such dynamics, in our previous work [SIAM J. Control Optim., 57 (2019), pp. 1757-1782], we initiated a new direction for stability analysis of such systems that uses a sequential optimization framework. Building upon that, in this paper, we extend our results by providing another angle on multiagent network dynamics from a duality perspective, which allows us to view the network structure as dual variables of a constrained nonlinear program. Leveraging that idea, we show that the evolution of the coupled state-network multiagent dynamics can be viewed as iterates of a primal-dual algorithm for a static constrained optimization/saddle-point problem. This view bridges the Lyapunov stability of state-dependent network dynamics and frequently used optimization techniques such as block coordinate descent, mirror descent, the Newton method, and the subgradient method. As a result, we develop a systematic framework for analyzing the Lyapunov stability of state-dependent network dynamics using techniques from nonlinear optimization. Finally, we support our theoretical results through numerical simulations from social science. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
46. NEW CONSTRAINT QUALIFICATIONS FOR OPTIMIZATION PROBLEMS IN BANACH SPACES BASED ON ASYMPTOTIC KKT CONDITIONS.
- Author
-
BÖRGENS, EIKE, KANZOW, CHRISTIAN, MEHLITZ, PATRICK, and WACHSMUTH, GERD
- Subjects
BANACH spaces ,MATHEMATICAL optimization ,ORDERED sets - Abstract
Optimization theory in Banach spaces suffers from a lack of available constraint qualifications. There exist very few constraint qualifications, and these are often violated even in simple applications. This is very much in contrast to finite-dimensional nonlinear programs, where a large number of constraint qualifications is known. Since these constraint qualifications are usually defined using the set of active inequality constraints, it is difficult to extend them to the infinite-dimensional setting. One exception is a recently introduced sequential constraint qualification based on asymptotic KKT conditions. This paper shows that this so-called asymptotic KKT regularity allows suitable extensions to the Banach space setting in order to obtain new constraint qualifications. The relation of these new constraint qualifications to existing ones is discussed in detail. Their usefulness is also shown by several examples as well as an algorithmic application to the class of augmented Lagrangian methods. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
47. OPTIMAL CONTROL OF NONSMOOTH, SEMILINEAR PARABOLIC EQUATIONS.
- Author
-
MEYER, CHRISTIAN and SUSU, LIVIA M.
- Subjects
OPTIMAL control theory ,DIFFERENTIAL equations ,HEAT equation ,CALCULUS of variations ,MATHEMATICAL optimization - Abstract
This paper is concerned with an optimal control problem governed by a semilinear, nonsmooth operator differential equation. The nonlinearity is locally Lipschitz-continuous and directionally differentiable but not Gateaux-differentiable. By employing the limited differentiability properties of the control-to-state map, first-order necessary optimality conditions in qualified form are established, which are equivalent to the purely primal condition saying that the directional derivative of the reduced objective in feasible directions is nonnegative. The paper ends with the application of the general results to a semilinear heat equation. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
48. PRIMAL-DUAL ALGORITHMS FOR OPTIMIZATION WITH STOCHASTIC DOMINANCE.
- Author
-
HASKELL, WILLIAM B., SHANTHIKUMAR, J. GEORGE, and SHEN, Z. MAX
- Subjects
ITERATIVE methods (Mathematics) ,MATHEMATICAL optimization ,ALGORITHMS ,STOCHASTIC dominance ,MATHEMATICAL bounds - Abstract
Stochastic dominance, a pairwise comparison between random variables, is an effective tool for expressing risk aversion in stochastic optimization. In this paper, we develop a family of primal-dual algorithms for optimization problems with stochastic dominance constraints. First, we develop an offline primal-dual algorithm and bound its optimality gap as a function of the number of iterations. Then, we extend this algorithm to the online setting where only one random sample is given in each decision epoch. We give probabilistic bounds on the optimality gap in this setting. This technique also yields an online algorithm for the stochastic dominance-constrained multiarmed bandit with partial feedback. The paper concludes by discussing a dual approach for a batch learning problem with robust stochastic dominance constraints. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
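The offline primal-dual template in entry 48 (descent on the primal variable, projected ascent on the multipliers) can be sketched on a small sample-based problem with a single shortfall-type constraint. Everything below (the toy portfolio data, the step size, the single benchmark level) is an illustrative assumption, not the algorithm or the setting of the paper.

import numpy as np

rng = np.random.default_rng(0)

# Toy sample-based problem: choose portfolio weights x to maximize the mean
# return subject to one second-order-dominance-style shortfall constraint
#     E[(t - r.x)_+] <= d,
# handled by a primal-dual (saddle-point) subgradient iteration.
R = rng.normal(0.05, 0.2, size=(500, 3))   # 500 return scenarios, 3 assets
t, d = 0.0, 0.02                           # benchmark level and shortfall budget
alpha = 0.05                               # step size

x = np.ones(3) / 3                         # primal iterate
lam = 0.0                                  # dual multiplier

for _ in range(2000):
    returns = R @ x
    active = (t - returns) > 0
    # Subgradients of the objective -mean(r.x) and of E[(t - r.x)_+] - d.
    g_obj = -R.mean(axis=0)
    g_con = -R[active].sum(axis=0) / len(R)
    constraint_val = np.mean(np.maximum(t - returns, 0.0)) - d
    # Primal descent on the Lagrangian, projected dual ascent on the multiplier.
    x = x - alpha * (g_obj + lam * g_con)
    lam = max(0.0, lam + alpha * constraint_val)

print("weights:", x, "multiplier:", lam)
print("mean return:", (R @ x).mean(),
      "expected shortfall:", np.mean(np.maximum(t - R @ x, 0.0)))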
49. RANDOM CONVEX PROGRAMS WITH L1-REGULARIZATION: SPARSITY AND GENERALIZATION.
- Author
-
CAMPI, M. C. and CARÈ, A.
- Subjects
CONVEX geometry ,MATHEMATICAL optimization ,MODULES (Algebra) ,MATHEMATICAL regularization ,RANDOM variables - Abstract
Random convex programs are convex optimization problems that are robust with respect to a finite number of randomly sampled instances of an uncertain variable δ. This paper studies random convex programs in which there is uncertainty in the objective function. Specifically, let L(x, δ) be a loss function that is convex in x, the optimization variable, while it has an arbitrary dependence on the random variable δ representing uncertainty in the optimization problem. After sampling N instances δ^(1), ..., δ^(N) of the random variable δ, the random convex program can be written as follows: min_x max_i L(x, δ^(i)). The fundamental feature of this program is that its value L_N^* = max_i L(x_N^*, δ^(i)), where x_N^* is the solution, remains guaranteed when x_N^* is applied to the vast majority of the other unseen instances of δ; that is, L(x_N^*, δ) ≤ L_N^* holds with high probability with respect to the uncertain variable δ. This generalization property has justified a systematic and rigorous use of randomization in robust optimization. In this paper, we introduce L1-regularization in random convex programs and show that L1-regularization boosts the above generalization property so that generalization is achieved with significantly fewer samples than in the standard convex program given above. Explicit bounds are derived that allow a rigorous and easy implementation of the method. [ABSTRACT FROM AUTHOR]
- Published
- 2013
- Full Text
- View/download PDF
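The L1-regularized scenario program from entry 49, min_x max_i L(x, δ^(i)) + λ‖x‖_1, is straightforward to prototype. The sketch below uses CVXPY (an assumed third-party modeling tool, not mentioned in the paper) on a small instance with a squared affine loss; the data and the value of λ are placeholders.

import numpy as np
import cvxpy as cp

rng = np.random.default_rng(1)

# Scenario (random convex) program with L1-regularization: sample N instances
# delta^(1), ..., delta^(N) of the uncertainty, then solve
#     min_x  max_i L(x, delta^(i)) + lam * ||x||_1.
# Here L(x, delta) = (delta[:-1] @ x - delta[-1])**2 is an illustrative loss.
n, N, lam = 10, 200, 0.1
deltas = rng.normal(size=(N, n + 1))       # each row is one sampled instance

x = cp.Variable(n)
losses = [cp.square(deltas[i, :n] @ x - deltas[i, n]) for i in range(N)]
objective = cp.Minimize(cp.max(cp.hstack(losses)) + lam * cp.norm1(x))
prob = cp.Problem(objective)
prob.solve()

print("optimal objective:", prob.value)
print("nonzeros in x:", int(np.sum(np.abs(x.value) > 1e-6)))   # L1 promotes sparsity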
50. STRONG DUALITY IN CONE CONSTRAINED NONCONVEX OPTIMIZATION.
- Author
-
FLORES-BAZÁN, FABIÁN and MASTROENI, GIANDOMENICO
- Subjects
MATHEMATICAL optimization ,NONCONVEX programming ,DUALITY theory (Mathematics) ,FINITE difference method ,MATHEMATICS problems & exercises - Abstract
In this paper we deepen the analysis of the conditions that ensure strong duality for a cone constrained nonconvex optimization problem. We first establish a necessary and sufficient condition for the validity of strong duality without convexity assumptions with a possibly empty solution set of the original problem, and second, via Slater-type conditions involving quasi interior or quasirelative interior notions, various results about strong duality are also obtained. Our conditions can be used where no previous result is applicable, even in a finite dimensional or convex setting. [ABSTRACT FROM AUTHOR]
- Published
- 2013
- Full Text
- View/download PDF