65 results for "Mathematics - Optimization and Control"
Search Results
2. Duality of optimization problems with gauge functions
- Author
-
Shota Yamanaka and Nobuo Yamashita
- Subjects
Control and Optimization, Optimization and Control (math.OC), Applied Mathematics, FOS: Mathematics, Management Science and Operations Research, Mathematics - Optimization and Control
- Abstract
Recently, Yamanaka and Yamashita proposed the so-called positively homogeneous optimization problem, which includes many important problems, such as absolute-value and gauge optimization. They presented a closed form of the dual formulation for the problem, and showed weak duality and the equivalence to the Lagrangian dual under some conditions. In this work, we focus on a special positively homogeneous optimization problem, whose objective function and constraints consist of some gauge and linear functions. We prove not only weak duality but also strong duality. We also study necessary and sufficient optimality conditions associated with the problem. Moreover, we give sufficient conditions under which we can recover a primal solution from a Karush-Kuhn-Tucker point of the dual formulation. Finally, we discuss how to extend the above results to general convex optimization problems by considering the so-called perspective functions., Comment: 24 pages
- Published
- 2022
3. On continuous selections of polynomial functions
- Author
-
Guo, Feng, Jiao, Liguo, and Kim, Do Sang
- Subjects
Control and Optimization, Optimization and Control (math.OC), Applied Mathematics, FOS: Mathematics, Mathematics::Metric Geometry, Management Science and Operations Research, Mathematics - Optimization and Control
- Abstract
A continuous selection of polynomial functions is a continuous function whose domain can be partitioned into finitely many pieces on which the function coincides with a polynomial. Given a set of finitely many polynomials, we show that there are only finitely many continuous selections of it and each one is semi-algebraic. Then, we establish some generic properties regarding the critical points, defined by the Clarke subdifferential, of these continuous selections. In particular, given a set of finitely many polynomials with generic coefficients, we show that the critical points of all continuous selections of it are finite in number and the critical values are all different, and we also derive the coercivity of those continuous selections which are bounded from below. We point out that some existing results about Łojasiewicz's inequality and error bounds for the maximum function of finitely many polynomials are also valid for all the continuous selections of them., Comment: 28 pages
- Published
- 2022
4. Integral Global Optimality Conditions and an Algorithm for Multiobjective Problems
- Author
-
Everton J. Silva, Elizabeth W. Karas, and Lucelina B. Santos
- Subjects
Control and Optimization, Optimization and Control (math.OC), Computer Science::Neural and Evolutionary Computation, Signal Processing, FOS: Mathematics, MathematicsofComputing_NUMERICALANALYSIS, Mathematics - Optimization and Control, Analysis, Computer Science Applications
- Abstract
In this work, we propose integral global optimality conditions for multiobjective problems that are not necessarily differentiable. The integral characterizations, already known for single-objective problems, are extended to multiobjective problems by weighted sum and weighted Chebyshev scalarizations. Using the latter scalarization, we propose an algorithm for obtaining an approximation of the weak Pareto front, whose effectiveness is illustrated by solving a collection of multiobjective test problems.
- Published
- 2022
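The weighted Chebyshev scalarization used in the entry above can be sketched in a few lines. This is an illustrative sketch only: the two quadratic objectives, the equal weights, and the zero reference (ideal) point are assumptions, not taken from the paper.

```python
# Weighted Chebyshev scalarization: collapse several objectives f_i into
# max_i w_i * (f_i(x) - z_i), where z is a reference (ideal) point.
# Minimizers of the scalarized function are weakly Pareto optimal.

def chebyshev_scalarization(objectives, weights, reference):
    """Return a single-objective function built from a list of objectives."""
    def scalarized(x):
        return max(w * (f(x) - z)
                   for f, w, z in zip(objectives, weights, reference))
    return scalarized

# Toy biobjective instance (an assumption for illustration).
f1 = lambda x: (x - 1.0) ** 2
f2 = lambda x: (x + 1.0) ** 2
g = chebyshev_scalarization([f1, f2], weights=[0.5, 0.5], reference=[0.0, 0.0])

# Crude grid search for a minimizer of the scalarized problem.
value, minimizer = min((g(k * 0.01), k * 0.01) for k in range(-300, 301))
```

Sweeping the weights and re-minimizing traces out an approximation of the weak Pareto front, which is the idea behind the algorithm described in the abstract.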
5. Necessary conditions for optimal control problems with sweeping systems and end point constraints
- Author
-
de Pinho, M. d. R., Ferreira, M. Margarida A., Smirnov, Georgi, and Universidade do Minho
- Subjects
optimal control, Science & Technology, Control and Optimization, Optimization and Control (math.OC), Applied Mathematics, Sweeping systems, FOS: Mathematics, necessary conditions, Management Science and Operations Research, Mathematics - Optimization and Control, Ciências Naturais::Matemáticas
- Abstract
We generalize the Maximum Principle for free end point optimal control problems involving sweeping systems derived in [de Pinho MdR, Ferreira MMA, Smirnov GV. Optimal control involving sweeping processes. Set-Valued Var Anal. 2019;27(2):523–548] to cover the case where the constraints are time dependent and the end point is constrained to a set. As in [de Pinho MdR, Ferreira MMA, Smirnov GV. Optimal control involving sweeping processes. Set-Valued Var Anal. 2019;27(2):523–548], an ingenious smooth approximating family of standard differential equations plays a crucial role., This work was supported by the Portuguese Foundation for Science and Technology (FCT) in the framework of the Strategic Funding UIDB/04650/2020. This work was also supported by the ERDF - European Regional Development Fund through the Operational Programme for Competitiveness and Internationalisation - COM PETE 2020, INCO.2030, under the Portugal 2020 Partnership Agreement and by National Funds, Norte 2020, through CCDRN and FCT, within projects To Chair (POCI-01-0145-FEDER-028247), Upwind (PTDC/EEI-AUT/31447/2017 - POCI-01-0145-FEDER-031447) and Systec R&D unit (UIDB/00147/2020).
- Published
- 2022
6. Biobjective optimization problems on matroids with binary costs
- Author
-
Jochen Gorski, Kathrin Klamroth, and Julia Sudhoff
- Subjects
FOS: Computer and information sciences, Control and Optimization, Optimization and Control (math.OC), Applied Mathematics, Computer Science - Data Structures and Algorithms, FOS: Mathematics, Mathematics - Combinatorics, Data Structures and Algorithms (cs.DS), Combinatorics (math.CO), Management Science and Operations Research, Mathematics - Optimization and Control, MathematicsofComputing_DISCRETEMATHEMATICS
- Abstract
Like most multiobjective combinatorial optimization problems, biobjective optimization problems on matroids are in general intractable and their corresponding decision problems are in general NP-hard. In this paper, we consider biobjective optimization problems on matroids where one of the objective functions is restricted to binary cost coefficients. We show that in this case the problem has a connected efficient set with respect to a natural definition of a neighborhood structure and hence, can be solved efficiently using a neighborhood search approach. This is, to the best of our knowledge, the first non-trivial problem on matroids where connectedness of the efficient set can be established. The theoretical results are validated by numerical experiments with biobjective minimum spanning tree problems (graphic matroids) and with biobjective knapsack problems with a cardinality constraint (uniform matroids). In the context of the minimum spanning tree problem, coloring all edges with cost 0 green and all edges with cost 1 red leads to an equivalent problem where we want to simultaneously minimize one general objective and the number of red edges (which defines the second objective) in a Pareto sense.
- Published
- 2022
7. Relaxed Lagrangian duality in convex infinite optimization: reducibility and strong duality
- Author
-
Dinh, Nguyen, Goberna, Miguel A., López, Marco A., Volle, Michel, Universidad de Alicante. Departamento de Matemáticas, and Laboratorio de Optimización (LOPT)
- Subjects
90C25, 49N15, 46N10, Control and Optimization, Optimization and Control (math.OC), Statistics and Operations Research, Applied Mathematics, Haar duality, FOS: Mathematics, Convex infinite programming, Management Science and Operations Research, Mathematics - Optimization and Control, Lagrangian duality, Reducibility
- Abstract
We associate a suitable Lagrangian-Haar dual problem with each convex optimization problem posed on some locally convex space with infinitely many constraints indexed by a set T, given a non-empty family H of finite subsets of T. We obtain necessary and sufficient conditions for H-reducibility, that is, equivalence to some subproblem obtained by replacing the whole index set T by some element of H. Special attention is paid to linear optimization, infinite and semi-infinite, and to convex problems with a countable family of constraints. Results on the zero H-duality gap and on H-(stable) strong duality are provided. Examples are given throughout the paper to illustrate the meaning of the results., 23 pages and 1 figure
- Published
- 2022
8. Nonconvex flexible sparsity regularization: theory and monotone numerical schemes
- Author
-
Daria Ghilli, Dirk A. Lorenz, and Elena Resmerita
- Subjects
Control and Optimization, 49XX, 65KXX, 90CXX, Applied Mathematics, MathematicsofComputing_NUMERICALANALYSIS, Numerical Analysis (math.NA), Management Science and Operations Research, Mathematics - Analysis of PDEs, Optimization and Control (math.OC), FOS: Mathematics, Mathematics - Numerical Analysis, Mathematics - Optimization and Control, Analysis of PDEs (math.AP)
- Abstract
Flexible sparsity regularization means stably approximating sparse solutions of operator equations by using coefficient-dependent penalizations. We propose and analyse a general nonconvex approach in this respect, from both theoretical and numerical perspectives. Namely, we show convergence of the regularization method and establish convergence properties of a couple of majorization approaches for the associated nonconvex problems. We also test a monotone algorithm for an academic example where the operator is an $M$-matrix, and on a time-dependent optimal control problem, pointing out the advantages of employing variable penalties over a fixed penalty.
- Published
- 2021
9. Accelerated differential inclusion for convex optimization
- Author
-
Hao Luo
- Subjects
Lyapunov function, Control and Optimization, 37M15, 34E10, 90C25, Applied Mathematics, Management Science and Operations Research, Differential inclusion, Optimization and Control (math.OC), Convex optimization, FOS: Mathematics, Trajectory, Applied mathematics, Proximal Gradient Methods, Exponential decay, Mathematics - Optimization and Control, Mathematics
- Abstract
This work introduces a second-order differential inclusion for unconstrained convex optimization. At the continuous level, solution existence in a proper sense is obtained, and exponential decay of a novel Lyapunov function along the solution trajectory is derived as well. Then, at the discrete level, based on numerical discretizations of the continuous differential inclusion, both an inexact accelerated proximal point algorithm and an inexact accelerated proximal gradient method are proposed, and some new convergence rates are established via a discrete Lyapunov function.
- Published
- 2021
10. A Priori Error Analysis for an Optimal Control Problem Governed by a Variational Inequality of the Second Kind
- Author
-
Christian Meyer and Monika Weymuth
- Subjects
Control and Optimization, Discretization, 49M25, 65G99, 65K15, 65N30, Numerical Analysis (math.NA), State (functional analysis), Optimal control, Finite element method, Computer Science Applications, Optimization and Control (math.OC), Error analysis, Signal Processing, Variational inequality, FOS: Mathematics, A priori and a posteriori, Applied mathematics, Mathematics - Numerical Analysis, Mathematics - Optimization and Control, Analysis, Mathematics
- Abstract
We consider an optimal control problem governed by an elliptic variational inequality of the second kind. The problem is discretized by linear finite elements for the state and a variational discrete approach for the control. Based on a quadratic growth condition we derive nearly optimal a priori error estimates. Moreover, we establish second order sufficient optimality conditions that ensure a quadratic growth condition. These conditions are rather restrictive, but allow us to construct a one-dimensional locally optimal solution with reduced regularity, which serves as an exact solution for numerical experiments.
- Published
- 2021
11. Quasi-Newton methods for machine learning: forget the past, just sample
- Author
-
Peter Richtárik, Majid Jahani, Albert S. Berahas, and Martin Takáč
- Subjects
FOS: Computer and information sciences, Computer Science - Machine Learning, Control and Optimization, Machine Learning (stat.ML), Sample (statistics), Machine learning, Machine Learning (cs.LG), Statistics - Machine Learning, FOS: Mathematics, Empirical risk minimization, Mathematics - Optimization and Control, Mathematics, Applied Mathematics, Deep learning, Sampling (statistics), Optimization and Control (math.OC), Artificial intelligence, Software
- Abstract
We present two sampled quasi-Newton methods (sampled LBFGS and sampled LSR1) for solving empirical risk minimization problems that arise in machine learning. Contrary to the classical variants of these methods that sequentially build Hessian or inverse Hessian approximations as the optimization progresses, our proposed methods sample points randomly around the current iterate at every iteration to produce these approximations. As a result, the approximations constructed make use of more reliable (recent and local) information, and do not depend on past iterate information that could be significantly stale. Our proposed algorithms are efficient in terms of accessed data points (epochs) and have enough concurrency to take advantage of parallel/distributed computing environments. We provide convergence guarantees for our proposed methods. Numerical tests on a toy classification problem as well as on popular benchmarking binary classification and neural network training tasks reveal that the methods outperform their classical variants., Comment: 50 pages, 33 figures
- Published
- 2021
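The sampling idea in the abstract above can be sketched as follows: curvature pairs are built from random points drawn around the current iterate rather than from past iterates, and fed into the standard L-BFGS two-loop recursion. The quadratic objective, memory size, and sampling radius below are illustrative assumptions, not the paper's setup.

```python
import random

def grad(w):                              # gradient of f(w) = 0.5*(w0^2 + 10*w1^2)
    return [w[0], 10.0 * w[1]]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def sampled_lbfgs_direction(w, m=5, radius=0.1, seed=0):
    """Build (s, y) curvature pairs from points sampled around the
    *current* iterate w, then apply the standard two-loop recursion."""
    rng = random.Random(seed)
    g0 = grad(w)
    pairs = []
    for _ in range(m):
        s = [radius * rng.gauss(0.0, 1.0) for _ in w]
        y = [gi - g0i for gi, g0i
             in zip(grad([wi + si for wi, si in zip(w, s)]), g0)]
        if dot(s, y) > 1e-12:             # keep positive-curvature pairs only
            pairs.append((s, y))
    q, alphas = list(g0), []
    for s, y in reversed(pairs):          # first loop: newest to oldest
        a = dot(s, q) / dot(s, y)
        alphas.append(a)
        q = [qi - a * yi for qi, yi in zip(q, y)]
    s, y = pairs[-1]
    r = [dot(s, y) / dot(y, y) * qi for qi in q]    # initial Hessian scaling
    for (s, y), a in zip(pairs, reversed(alphas)):  # second loop: oldest to newest
        b = dot(y, r) / dot(s, y)
        r = [ri + (a - b) * si for ri, si in zip(r, s)]
    return [-ri for ri in r]              # quasi-Newton direction -H * grad(w)

d = sampled_lbfgs_direction([1.0, 1.0])
```

Because every retained pair has positive curvature, the implicit Hessian approximation is positive definite and `d` is guaranteed to be a descent direction.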
12. Revisiting jointly firmly nonexpansive families of mappings
- Author
-
Andrei Sipos
- Subjects
Control and Optimization, Applied Mathematics, Mathematics - Logic, Management Science and Operations Research, Algebra, Optimization and Control (math.OC), 90C25, 46N10, 47J25, 47H09, 03F10, FOS: Mathematics, Logic (math.LO), Mathematics - Optimization and Control, Mathematics, Proof mining
- Abstract
Recently, the author, together with L. Leustean and A. Nicolae, introduced the notion of jointly firmly nonexpansive families of mappings in order to investigate in an abstract manner the convergence of proximal methods. Here, we further the study of this concept, by giving a characterization in terms of the classical resolvent identity, by improving on the rate of convergence previously obtained for the uniform case, and by giving a treatment of the asymptotic behaviour at infinity of such families., arXiv admin note: text overlap with arXiv:2108.13994
- Published
- 2021
13. Necessary conditions for non-intersection of collections of sets
- Author
-
Hoa T. Bui and Alexander Y. Kruger
- Subjects
Discrete mathematics, Control and Optimization, Transversality, Applied Mathematics, Management Science and Operations Research, Intersection, Optimization and Control (math.OC), FOS: Mathematics, Mathematics - Optimization and Control, Mathematics
- Abstract
This paper continues studies of non-intersection properties of finite collections of sets initiated 40 years ago by the extremal principle. We study elementary non-intersection properties of collections of sets, making the core of the conventional definitions of extremality and stationarity. In the setting of general Banach/Asplund spaces, we establish new primal (slope) and dual (generalized separation) necessary conditions for these non-intersection properties. The results are applied to convergence analysis of alternating projections., Comment: 26 pages
- Published
- 2021
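The alternating projections method mentioned at the end of the abstract above can be sketched on a toy pair of convex sets; the two sets (the closed unit disk and the vertical line x = 0.5) are an assumption for illustration, while the paper works in general Banach/Asplund spaces.

```python
# Alternating projections: repeatedly project onto each set in turn.
# When the sets intersect, the iterates converge to a common point.

def project_disk(p):
    """Euclidean projection onto the closed unit disk."""
    n = (p[0] ** 2 + p[1] ** 2) ** 0.5
    return p if n <= 1.0 else (p[0] / n, p[1] / n)

def project_line(p):
    """Euclidean projection onto the vertical line {x = 0.5}."""
    return (0.5, p[1])

p = (2.0, 2.0)
for _ in range(50):
    p = project_line(project_disk(p))
```

The limit lies in the intersection of the two sets: its first coordinate is 0.5 and it stays inside the unit disk.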
14. Variational inequalities governed by strongly pseudomonotone operators
- Author
-
Pham Tien Kha and Pham Duy Khanh
- Subjects
Control and Optimization, Applied Mathematics, Hilbert space, Management Science and Operations Research, Rate of convergence, Optimization and Control (math.OC), Variational inequality, FOS: Mathematics, Applied mathematics, Gradient projection, Mathematics - Optimization and Control, Global error, Mathematics
- Abstract
Qualitative and quantitative aspects of variational inequalities governed by strongly pseudomonotone operators on Hilbert spaces are investigated in this paper. First, we establish a global error bound for the solution set of the given problem with the residual function being the normal map. Second, we prove that the iterative sequences generated by the gradient projection method (GPM) with stepsizes forming a non-summable diminishing sequence of positive real numbers converge to the unique solution of the problem when the operator is bounded over the constraint set. Two counter-examples are given to show the necessity of the boundedness assumption and of the variation of stepsizes. We also analyze the convergence rate of the iterative sequences generated by this method. Finally, we give an in-depth comparison between our algorithm and a recent related algorithm through several numerical experiments., 22 pages
- Published
- 2021
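A minimal sketch of the gradient projection method with non-summable diminishing stepsizes described above. The instance is a toy assumption, not from the paper: F(x) = x (strongly monotone, hence strongly pseudomonotone) on C = [1, 2], whose unique solution is x* = 1.

```python
# VI(F, C): find x in C with <F(x), y - x> >= 0 for all y in C.
# GPM iteration: x_{k+1} = P_C(x_k - lambda_k * F(x_k)),
# with the diminishing, non-summable stepsizes lambda_k = 1/(k+1).

def project(x, lo=1.0, hi=2.0):
    """Projection onto the interval C = [lo, hi]."""
    return min(max(x, lo), hi)

x = 2.0
for k in range(1000):
    x = project(x - (1.0 / (k + 1)) * x)
```

At x* = 1 one checks the VI directly: F(1) = 1 and (y - 1) >= 0 for every y in [1, 2].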
15. Gradient methods with memory
- Author
-
Nesterov, Yurii, Florea, Mihai I., UCL - SST/ICTM/INMA - Pôle en ingénierie mathématique, and UCL - SSH/LIDAM/CORE - Center for operations research and econometrics
- Subjects
Mathematical optimization, Control and Optimization, Computer science, Applied Mathematics, Computation, Linear model, Oracle, Optimization and Control (math.OC), Convex optimization, FOS: Mathematics, Optimization methods, 68Q25, 65Y20, 90C25, Differentiable function, Convex function, Mathematics - Optimization and Control, Software
- Abstract
In this paper, we consider gradient methods for minimizing smooth convex functions, which employ the information obtained at the previous iterations in order to accelerate the convergence towards the optimal solution. This information is used in the form of a piecewise linear model of the objective function, which provides us with much better prediction abilities as compared with the standard linear model. To the best of our knowledge, this approach was never really applied in convex minimization to differentiable functions in view of the high complexity of the corresponding auxiliary problems. However, we show that all necessary computations can be done very efficiently. Consequently, we get new optimization methods, which are better than the usual gradient methods both in the number of oracle calls and in the computational time. Our theoretical conclusions are confirmed by preliminary computational experiments., Comment: This is an Accepted Manuscript of an article published by Taylor & Francis in Optimization Methods and Software on 13 Jan 2021, available at https://www.tandfonline.com/doi/10.1080/10556788.2020.1858831
- Published
- 2021
16. Diminishing stepsize methods for nonconvex composite problems via ghost penalties: from the general to the convex regular constrained case
- Author
-
Francisco Facchinei, Gesualdo Scutari, Vyacheslav Kungurtsev, and Lorenzo Lampariello
- Subjects
Mathematical optimization, Control and Optimization, Composite optimization, nonconvex optimization, Applied Mathematics, MathematicsofComputing_NUMERICALANALYSIS, Mathematics::Optimization and Control, Constrained optimization, diminishing stepsize, Optimization and Control (math.OC), FOS: Mathematics, Mathematics - Optimization and Control, Software, Mathematics
- Abstract
In this paper we first extend the diminishing stepsize method for nonconvex constrained problems presented in [4] to deal with equality constraints and a nonsmooth objective function of composite type. We then consider the particular case in which the constraints are convex and satisfy a standard constraint qualification and show that in this setting the algorithm can be considerably simplified, reducing the computational burden of each iteration., arXiv admin note: text overlap with arXiv:1709.03384
- Published
- 2020
17. A fully stochastic second-order trust region method
- Author
-
Rui Shi and Frank E. Curtis
- Subjects
Trust region, Mathematical optimization, Control and Optimization, Applied Mathematics, MathematicsofComputing_NUMERICALANALYSIS, Optimization and Control (math.OC), FOS: Mathematics, Deep neural networks, Stochastic optimization, Time series, Mathematics - Optimization and Control, Software, Mathematics
- Abstract
A stochastic second-order trust region method is proposed, which can be viewed as a second-order extension of the trust-region-ish (TRish) algorithm proposed by Curtis et al. (INFORMS J. Optim. 1(3) 200-220, 2019). In each iteration, a search direction is computed by (approximately) solving a trust region subproblem defined by stochastic gradient and Hessian estimates. The algorithm has convergence guarantees for stochastic minimization in the fully stochastic regime, meaning that guarantees hold when each stochastic gradient is required merely to be an unbiased estimate of the true gradient with bounded variance and when the stochastic Hessian estimates are bounded uniformly in norm. The algorithm is also equipped with a worst-case complexity guarantee in the nearly deterministic regime, i.e., when the stochastic gradient and Hessian estimates are very close in expectation to the true gradients and Hessians. The results of numerical experiments for training convolutional neural networks for image classification and training a recurrent neural network for time series forecasting are presented. These results show that the algorithm can outperform a stochastic gradient approach and the first-order TRish algorithm in practice.
- Published
- 2020
18. Incremental proximal gradient scheme with penalization for constrained composite convex optimization problems
- Author
-
Narin Petrot and Nimit Nimana
- Subjects
Mathematical optimization, Control and Optimization, Applied Mathematics, Management Science and Operations Research, Optimization and Control (math.OC), Convex optimization, FOS: Mathematics, Differentiable function, Convex conjugate, Convex function, Mathematics - Optimization and Control, Mathematics
- Abstract
We consider the problem of minimizing a finite sum of convex functions subject to the set of minimizers of a convex differentiable function. In order to solve the problem, an algorithm combining the incremental proximal gradient method with smooth penalization technique is proposed. We show the convergence of the generated sequence of iterates to an optimal solution of the optimization problems, provided that a condition expressed via the Fenchel conjugate of the constraint function is fulfilled. Finally, the functionality of the method is illustrated by some numerical experiments addressing image inpainting problems and generalized Heron problems with least squares constraints.
- Published
- 2020
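A one-dimensional sketch of the scheme described above, combining a proximal step on a nonsmooth summand with a gradient step on a smooth penalty for the constraint. The concrete functions and the stepsize/penalty schedules are assumptions for illustration, not the conditions of the paper: minimize f(x) = |x - 2| over the minimizers of g(x) = x^2/2, so the feasible set is {0} and the solution is x* = 0.

```python
# Incremental proximal gradient with smooth penalization (toy sketch):
#   v_k = x_k - lambda_k * beta_k * grad g(x_k)   (penalty gradient step)
#   x_{k+1} = prox_{lambda_k f}(v_k)              (proximal step on f)
# with diminishing steps lambda_k = 1/k and growing penalties beta_k = sqrt(k).

def prox_abs(v, lam, c=2.0):
    """Proximal operator of lam * |x - c|: soft thresholding toward c."""
    if v > c + lam:
        return v - lam
    if v < c - lam:
        return v + lam
    return c

x = 5.0
for k in range(1, 20001):
    lam, beta = 1.0 / k, k ** 0.5
    v = x - lam * beta * x        # gradient step on the penalty beta_k * g
    x = prox_abs(v, lam)          # proximal step on the summand |x - 2|
```

The iterates are pulled toward 2 by the proximal step and toward the feasible point 0 by the growing penalty; with these schedules the penalty wins and the iterates drift to 0 at roughly the rate 1/sqrt(k).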
19. On basic operations related to network induction of discrete convex functions
- Author
-
Kazuo Murota
- Subjects
TheoryofComputation_MISCELLANEOUS, Mathematical optimization, Control and Optimization, Applied Mathematics, MathematicsofComputing_GENERAL, MathematicsofComputing_NUMERICALANALYSIS, 90C27, 90C10, 90C25, Optimization and Control (math.OC), TheoryofComputation_ANALYSISOFALGORITHMSANDPROBLEMCOMPLEXITY, FOS: Mathematics, Mathematics - Combinatorics, Combinatorics (math.CO), Convex function, Mathematics - Optimization and Control, Game theory, Software, Mathematics
- Abstract
Discrete convex functions are used in many areas, including operations research, discrete-event systems, game theory, and economics. The objective of this paper is to investigate basic operations such as direct sum, splitting, and aggregation that are related to network induction of discrete convex functions as well as discrete convex sets. Various kinds of discrete convex functions in discrete convex analysis are considered such as integrally convex functions, L-convex functions, M-convex functions, multimodular functions, and discrete midpoint convex functions., Comment: 42 pages. arXiv admin note: text overlap with arXiv:1907.09161
- Published
- 2020
20. A partitioned scheme for adjoint shape sensitivity analysis of fluid–structure interactions involving non-matching meshes
- Author
-
Reza Najian Asl, Ihar Antonau, Roland Wüchner, Kai-Uwe Bletzinger, Aditya Ghantasala, and Wulf G. Dettmer
- Subjects
Mathematics - Differential Geometry, Control and Optimization, Applied Mathematics, Numerical Analysis (math.NA), Differential Geometry (math.DG), Optimization and Control (math.OC), Fluid–structure interaction, FOS: Mathematics, Applied mathematics, Polygon mesh, Mathematics - Numerical Analysis, Sensitivity (control systems), Mathematics - Optimization and Control, Software, Mathematics
- Abstract
This work presents a partitioned solution procedure to compute shape gradients in fluid-structure interaction (FSI) using black-box adjoint solvers. Special attention is paid to projecting the gradients onto the undeformed configuration, owing to the mixed Lagrangian-Eulerian formulation of large-displacement FSI in this work. The adjoint FSI problem is partitioned as an assembly of well-known adjoint fluid and structural problems, without requiring expensive cross-derivatives. The sub-adjoint problems are coupled with each other by augmenting the target functions with auxiliary functions, independent of the concrete choice of the underlying adjoint formulations. The auxiliary functions are linear force-based or displacement-based functionals which are readily available in well-established single-disciplinary adjoint solvers. Adjoint structural displacements, adjoint fluid displacements, and domain-based adjoint sensitivities of the fluid are the coupling fields to be exchanged between the adjoint solvers. A reduced formulation is also derived for the case of boundary-based adjoint shape sensitivity analysis for fluids. Numerical studies show that the complete formulation computes accurate shape gradients whereas inaccuracies appear in the reduced gradients, especially in regions of strong flow gradients and near singularities. Nevertheless, reduced gradient formulations are found to be a compromise between computational costs and accuracy. Mapping techniques including nearest element interpolation and the mortar method are studied in computational adjoint FSI. It is numerically shown that the mortar method does not introduce spurious oscillations in primal and sensitivity fields along non-matching interfaces, unlike the nearest element interpolation.
- Published
- 2020
21. Projections onto the canonical simplex with additional linear inequalities
- Author
-
Lukáš Adam and Václav Mácha
- Subjects
Control and Optimization, Simplex, Applied Mathematics, Robust optimization, Numerical Analysis (math.NA), Linear inequality, Projection (mathematics), Optimization and Control (math.OC), FOS: Mathematics, Applied mathematics, Mathematics - Numerical Analysis, Mathematics - Optimization and Control, Software, Mathematics
- Abstract
We consider distributionally robust optimization and show that computing the distributional worst case is equivalent to computing the projection onto the canonical simplex with an additional linear inequality. We consider several distance functions to measure the distance between distributions. We write the projections as optimization problems and show that they are equivalent to finding a zero of real-valued functions. We prove that these functions possess nice properties such as monotonicity or convexity. We design optimization methods with guaranteed convergence and derive their theoretical complexity. We demonstrate that our methods have (almost) linear observed complexity.
- Published
- 2020
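The reduction described above, computing a projection by finding the zero of a monotone real-valued function, can be sketched for the plain canonical simplex; the additional linear inequalities treated in the paper are omitted in this sketch, and bisection stands in for whatever root-finding method the authors use.

```python
def project_simplex(y, tol=1e-10):
    """Euclidean projection of y onto the canonical simplex
    {x : x >= 0, sum(x) = 1}. The projection is max(y - tau, 0), where
    tau is the unique zero of the monotone decreasing function
    phi(tau) = sum(max(y_i - tau, 0)) - 1, found here by bisection."""
    phi = lambda tau: sum(max(yi - tau, 0.0) for yi in y) - 1.0
    lo, hi = min(y) - 1.0, max(y)          # phi(lo) >= 0 and phi(hi) = -1 < 0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if phi(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    tau = 0.5 * (lo + hi)
    return [max(yi - tau, 0.0) for yi in y]

x = project_simplex([0.2, 0.8, 1.5])       # tau is about 0.65
```

For this input, the components 0.8 and 1.5 stay active and 0.2 is clipped to zero, giving approximately [0, 0.15, 0.85].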
22. Qualitative properties of the minimum sum-of-squares clustering problem
- Author
-
Jen-Chih Yao, Nguyen Dong Yen, and Tran Hung Cuong
- Subjects
Control and Optimization, Applied Mathematics, Explained sum of squares, Management Science and Operations Research, Optimization and Control (math.OC), FOS: Mathematics, Applied mathematics, Cluster analysis, Mathematics - Optimization and Control, Mathematics
- Abstract
A series of basic qualitative properties of the minimum sum-of-squares clustering problem is established in this paper. Among other things, we clarify solution existence, properties of the global solutions, characteristic properties of the local solutions, the locally Lipschitz property of the optimal value function, the locally upper Lipschitz property of the global solution map, and the Aubin property of the local solution map. We prove that the problem in question always has a global solution and, under a mild condition, the global solution set is finite and the components of each global solution can be computed by an explicit formula. Based on a newly introduced concept of nontrivial local solution, we obtain necessary conditions for a system of centroids to be a nontrivial local solution. Interestingly, we are able to show that these necessary conditions are also sufficient. Finally, it is proved that the optimal value function is locally Lipschitz, the global solution map is locally upper Lipschitz, and the local solution map has the Aubin property, provided that the original data points are pairwise distinct. Thanks to the obtained complete characterizations of the nontrivial local solutions, one can better understand the performance of not only the $k$-means algorithm, but also of other solution methods for the minimum sum-of-squares clustering problem.
- Published
- 2020
23. Gradient Projection and Conditional Gradient Methods for Constrained Nonconvex Minimization
- Author
-
Andrey Tremba, Boris T. Polyak, and Maxim Viktorovich Balashov
- Subjects
Control and Optimization ,Optimization problem ,010102 general mathematics ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,01 natural sciences ,Computer Science Applications ,law.invention ,010101 applied mathematics ,Optimization and Control (math.OC) ,law ,49J53, 90C26, 90C52 ,Signal Processing ,FOS: Mathematics ,Applied mathematics ,Minification ,0101 mathematics ,Gradient projection ,Mathematics - Optimization and Control ,Manifold (fluid mechanics) ,Analysis ,ComputingMethodologies_COMPUTERGRAPHICS ,Mathematics - Abstract
Minimization of a smooth function on a sphere or, more generally, on a smooth manifold, is the simplest non-convex optimization problem, and it has many applications. Our goal is to propose a version of the gradient projection algorithm for its solution and to obtain results that guarantee convergence of the algorithm under some minimal natural assumptions. We use the Lezanski-Polyak-Lojasiewicz condition on a manifold to prove the global linear convergence of the algorithm. Another method well fitted for the problem is the conditional gradient (Frank-Wolfe) algorithm. We examine some conditions which guarantee global convergence of the full-step version of the method with a linear rate.
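On the unit sphere the metric projection of a nonzero point is simply normalization, so the gradient projection iteration is very cheap. A minimal sketch (the step size and iteration count are illustrative assumptions, not the paper's parameter choices):

```python
import numpy as np

def gradient_projection_sphere(grad, x0, step=0.1, iters=500):
    """Gradient projection on the unit sphere: take a gradient step,
    then project back by normalizing.  Assumes the gradient step never
    lands exactly at the origin, where the projection is undefined."""
    x = x0 / np.linalg.norm(x0)
    for _ in range(iters):
        x = x - step * grad(x)
        x = x / np.linalg.norm(x)      # project back onto the sphere
    return x
```

On the toy quadratic $f(x) = x^\top A x$ the iterates converge to a unit eigenvector for the smallest eigenvalue of $A$.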
- Published
- 2020
24. A generic coordinate descent solver for non-smooth convex optimisation
- Author
-
Olivier Fercoq, Laboratoire Traitement et Communication de l'Information (LTCI), Institut Mines-Télécom [Paris] (IMT)-Télécom Paris, Signal, Statistique et Apprentissage (S2A), and Institut Mines-Télécom [Paris] (IMT)-Télécom Paris-Institut Mines-Télécom [Paris] (IMT)-Télécom Paris
- Subjects
Coordinate descent ,Mathematical optimization ,convex optimization ,021103 operations research ,Control and Optimization ,Applied Mathematics ,0211 other engineering and technologies ,Regular polygon ,010103 numerical & computational mathematics ,02 engineering and technology ,Solver ,Non smooth ,01 natural sciences ,Minimisation (clinical trials) ,efficient implementation ,Optimization and Control (math.OC) ,FOS: Mathematics ,generic solver ,[MATH.MATH-OC]Mathematics [math]/Optimization and Control [math.OC] ,0101 mathematics ,Mathematics - Optimization and Control ,Computer Science::Databases ,Software ,Mathematics - Abstract
We present a generic coordinate descent solver for the minimization of a nonsmooth convex objective with structure. The method can deal in particular with problems with linear constraints. The implementation makes use of efficient residual updates and automatically determines which dual variables should be duplicated. A list of basic functional atoms is pre-compiled for efficiency and a modelling language in Python allows the user to combine them at run time. The algorithm can thus be used to solve a large variety of problems including Lasso, sparse multinomial logistic regression, linear and quadratic programs.
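The residual-update idea can be illustrated on the Lasso, one of the problems listed. This is a plain cyclic coordinate descent sketch, not Fercoq's generic solver or its Python modelling language:

```python
import numpy as np

def lasso_cd(A, b, lam, iters=100):
    """Cyclic coordinate descent for min_x 0.5*||Ax - b||^2 + lam*||x||_1,
    maintaining the residual r = b - Ax so each coordinate move is O(m)."""
    n = A.shape[1]
    x = np.zeros(n)
    r = b.astype(float).copy()              # residual for the initial x = 0
    col_sq = (A ** 2).sum(axis=0)
    for _ in range(iters):
        for j in range(n):
            if col_sq[j] == 0.0:
                continue
            rho = A[:, j] @ r + col_sq[j] * x[j]       # partial correlation
            x_new = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
            r += A[:, j] * (x[j] - x_new)              # cheap residual update
            x[j] = x_new
    return x
```

Each coordinate update is a soft-thresholding step; keeping `r` up to date avoids recomputing `A @ x` from scratch.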
- Published
- 2019
25. Stochastic polynomial optimization
- Author
-
Jiawang Nie, Liu Yang, and Suhan Zhong
- Subjects
Polynomial ,021103 operations research ,Control and Optimization ,Applied Mathematics ,Mathematics::Optimization and Control ,0211 other engineering and technologies ,Sample (statistics) ,010103 numerical & computational mathematics ,02 engineering and technology ,01 natural sciences ,Optimization and Control (math.OC) ,ComputingMethodologies_SYMBOLICANDALGEBRAICMANIPULATION ,FOS: Mathematics ,Polynomial optimization ,Applied mathematics ,Stochastic optimization ,0101 mathematics ,Mathematics - Optimization and Control ,Software ,Mathematics - Abstract
This paper studies stochastic optimization problems with polynomials. We propose an optimization model with sample averages and perturbations. Lasserre-type Moment-SOS relaxations are used to solve the sample average optimization. Properties of the optimization and its relaxations are studied. Numerical experiments are presented., Comment: 17 pages
- Published
- 2019
26. An accelerated communication-efficient primal-dual optimization framework for structured machine learning
- Author
-
Nathan Srebro, Frank E. Curtis, Martin Jaggi, Martin Takáč, and Chenxin Ma
- Subjects
FOS: Computer and information sciences ,Control and Optimization ,0211 other engineering and technologies ,010103 numerical & computational mathematics ,02 engineering and technology ,Machine learning ,computer.software_genre ,nonlinear optimization ,01 natural sciences ,Machine Learning (cs.LG) ,Nonlinear programming ,accelerated methods ,FOS: Mathematics ,0101 mathematics ,Mathematics - Optimization and Control ,Mathematics ,021103 operations research ,Optimization algorithm ,business.industry ,Applied Mathematics ,nonsmooth optimization ,Primal dual ,Computer Science - Learning ,machine learning ,Optimization and Control (math.OC) ,Artificial intelligence ,business ,distributed optimization ,computer ,Software - Abstract
Distributed optimization algorithms are essential for training machine learning models on very large-scale datasets. However, they often suffer from communication bottlenecks. Confronting this issue, a communication-efficient primal-dual coordinate ascent framework (CoCoA) and its improved variant CoCoA+ have been proposed, achieving a convergence rate of $\mathcal{O}(1/t)$ for solving empirical risk minimization problems with Lipschitz continuous losses. In this paper, an accelerated variant of CoCoA+ is proposed and shown to possess a convergence rate of $\mathcal{O}(1/t^2)$ in terms of reducing suboptimality. The analysis of this rate is also notable in that the convergence rate bounds involve constants that, except in extreme cases, are significantly reduced compared to those previously provided for CoCoA+. The results of numerical experiments are provided to show that acceleration can lead to significant performance gains.
- Published
- 2019
27. Computational performance of a projection and rescaling algorithm
- Author
-
Negar Soheili and Javier Peña
- Subjects
021103 operations research ,Control and Optimization ,Applied Mathematics ,0211 other engineering and technologies ,010103 numerical & computational mathematics ,02 engineering and technology ,01 natural sciences ,Relative interior ,Optimization and Control (math.OC) ,FOS: Mathematics ,0101 mathematics ,Projection (set theory) ,Mathematics - Optimization and Control ,Algorithm ,Software ,Mathematics - Abstract
This paper documents a computational implementation of a {\em projection and rescaling algorithm} for finding most interior solutions to the pair of feasibility problems \[ \text{find} \; x\in L\cap\mathbb{R}^n_{+} \;\;\;\; \text{ and } \; \;\;\;\; \text{find} \; \hat x\in L^\perp\cap\mathbb{R}^n_{+}, \] where $L$ denotes a linear subspace in $\mathbb{R}^n$ and $L^\perp$ denotes its orthogonal complement. The projection and rescaling algorithm is a recently developed method that combines a {\em basic procedure} involving only low-cost operations with a periodic {\em rescaling step.} We give a full description of a MATLAB implementation of this algorithm and present multiple sets of numerical experiments on synthetic problem instances with varied levels of conditioning. Our computational experiments provide promising evidence of the effectiveness of the projection and rescaling algorithm. Our MATLAB code is publicly available. Furthermore, the simplicity of the algorithm makes a computational implementation in other environments completely straightforward., Comment: 19 pages
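The basic procedure alternates low-cost operations built on the orthogonal projections onto $L$ and $L^\perp$. A minimal sketch of forming such a projector from a spanning matrix (an illustration only, not the authors' MATLAB code):

```python
import numpy as np

def subspace_projector(B):
    """Orthogonal projector onto L = range(B) via a thin QR factorization;
    the projector onto the orthogonal complement is then I - P.
    Assumes B has full column rank."""
    Q, _ = np.linalg.qr(B)
    return Q @ Q.T
```

Idempotence (`P @ P == P`) and annihilation of vectors orthogonal to the subspace are quick correctness checks.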
- Published
- 2019
28. A representation of generalized convex polyhedra and applications
- Author
-
Nguyen Ngoc Luan and Nguyen Dong Yen
- Subjects
021103 operations research ,Control and Optimization ,Applied Mathematics ,0211 other engineering and technologies ,Regular polygon ,Representation (systemics) ,02 engineering and technology ,Management Science and Operations Research ,01 natural sciences ,010101 applied mathematics ,Combinatorics ,Polyhedron ,Optimization and Control (math.OC) ,49N10, 90C05, 90C29, 90C48 ,Convex polytope ,FOS: Mathematics ,Mathematics::Metric Geometry ,0101 mathematics ,Mathematics - Optimization and Control ,Mathematics - Abstract
It is well known that finite-dimensional polyhedral convex sets can be generated by finitely many points and finitely many directions. Representation formulas in this spirit are obtained for convex polyhedra and generalized convex polyhedra in locally convex Hausdorff topological vector spaces. Our results develop those of X. Y. Zheng (Set-Valued Anal., Vol. 17, 2009, 389-408), which were established in a Banach space setting. Applications of the representation formulas to proving solution existence theorems for generalized linear programming problems and generalized linear vector optimization problems are shown.
- Published
- 2019
29. Strong calmness of perturbed KKT system for a class of conic programming with degenerate solutions
- Author
-
Shaohua Pan and Yulan Liu
- Subjects
Control and Optimization ,Karush–Kuhn–Tucker conditions ,Mathematics::Optimization and Control ,0211 other engineering and technologies ,02 engineering and technology ,Management Science and Operations Research ,01 natural sciences ,Physics::Geophysics ,symbols.namesake ,Convergence (routing) ,FOS: Mathematics ,Applied mathematics ,49K40, 90C31, 49J53 ,0101 mathematics ,Mathematics - Optimization and Control ,Calmness ,Mathematics ,021103 operations research ,Applied Mathematics ,Stationary point ,010101 applied mathematics ,Relative interior ,Optimization and Control (math.OC) ,Lagrange multiplier ,symbols ,Multiplier (economics) ,Conic optimization - Abstract
This paper is concerned with the strong calmness of the KKT solution mapping for a class of canonically perturbed conic programming, which plays a central role in achieving fast convergence in situations where the Lagrange multiplier associated to a solution of these conic optimization problems is not unique. We show that the strong calmness of the KKT solution mapping is equivalent to a local error bound for solutions of the perturbed KKT system, and is also equivalent to the pseudo-isolated calmness of the stationary point mapping along with the calmness of the multiplier set map at the corresponding reference point. Sufficient conditions are also provided for the strong calmness by establishing the pseudo-isolated calmness of the stationary point mapping in terms of the noncriticality of the associated multiplier, and the calmness of the multiplier set mapping in terms of a relative interior condition for the multiplier set. These results cover and extend the existing ones in \cite{Hager99,Izmailov12} for nonlinear programming and in \cite{Cui16,Zhang17} for semidefinite programming., Comment: 21 pages
- Published
- 2019
30. Random Function Iterations for Consistent Stochastic Feasibility
- Author
-
Neal Hermer, Anja Sturm, and D. Russell Luke
- Subjects
021103 operations research ,Control and Optimization ,Markov chain ,010102 general mathematics ,0211 other engineering and technologies ,Random function ,02 engineering and technology ,Fixed point ,01 natural sciences ,Computer Science Applications ,Optimization and Control (math.OC) ,Iterated function ,Signal Processing ,Convergence (routing) ,FOS: Mathematics ,60J05, 52A22, 49J55 (Primary) 49J53, 65K05 (Secondary) ,Applied mathematics ,0101 mathematics ,Mathematics - Optimization and Control ,Projection algorithms ,Analysis ,Mathematics - Abstract
We study the convergence of stochastic fixed point iterations in the consistent case (in the sense of Butnariu and Flåm (1995)) in several different settings, under decreasingly restrictive regularity assumptions on the fixed point mappings. The iterations are Markov chains and, for the purposes of this study, convergence is understood in very restrictive terms. We show that the sufficient conditions for geometric (linear) convergence in expectation of stochastic projection algorithms presented in Nedić (2011) are in fact necessary, more generally, for geometric (linear) convergence in expectation of iterated random functions., Comment: 29 pages, 4 figures
- Published
- 2019
31. Re-examination of Bregman functions and new properties of their divergences
- Author
-
Daniel Reem, Alvaro R. De Pierro, and Simeon Reich
- Subjects
FOS: Computer and information sciences ,Computer Science::Machine Learning ,J.2 ,Control and Optimization ,Computer Science - Information Theory ,G.1.6 ,0211 other engineering and technologies ,FOS: Physical sciences ,Machine Learning (stat.ML) ,02 engineering and technology ,Management Science and Operations Research ,Bregman divergence ,01 natural sciences ,Measure (mathematics) ,Statistics::Machine Learning ,Statistics - Machine Learning ,FOS: Mathematics ,H.1.1 ,Applied mathematics ,0101 mathematics ,Mathematics - Optimization and Control ,Mathematics ,021103 operations research ,Information Theory (cs.IT) ,Applied Mathematics ,Probability and statistics ,Function (mathematics) ,Gauge (firearms) ,010101 applied mathematics ,Optimization and Control (math.OC) ,Physics - Data Analysis, Statistics and Probability ,52A41, 52B55, 46N10, 90C25, 90C30, 46T99, 47N10, 49M37, 26B25, 58C05 ,Convex function ,Data Analysis, Statistics and Probability (physics.data-an) - Abstract
The Bregman divergence (Bregman distance, Bregman measure of distance) is a certain useful substitute for a distance, obtained from a well-chosen function (the "Bregman function"). Bregman functions and divergences have been extensively investigated during the last decades and have found applications in optimization, operations research, information theory, nonlinear analysis, machine learning and more. This paper re-examines various aspects related to the theory of Bregman functions and divergences. In particular, it presents many sufficient conditions which allow the construction of Bregman functions in a general setting and introduces new Bregman functions (such as a negative iterated log entropy). Moreover, it sheds new light on several known Bregman functions such as quadratic entropies, the negative Havrda-Charvát-Tsallis entropy, and the negative Boltzmann-Gibbs-Shannon entropy, and it shows that the negative Burg entropy, which is not a Bregman function according to the classical theory but nevertheless is known to have "Bregmanian properties", can, by our re-examination of the theory, be considered as a Bregman function. Our analysis yields several by-products of independent interest such as the introduction of the concept of relative uniform convexity (a certain generalization of uniform convexity), new properties of uniformly and strongly convex functions, and results in Banach space theory., Comment: Correction of a few very minor inaccuracies; added journal details (volume, page numbers, etc.); some references were updated
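A minimal sketch of the divergence the abstract studies, $D_f(x,y)=f(x)-f(y)-\langle \nabla f(y), x-y\rangle$, with two classical Bregman functions: half the squared norm (giving the squared Euclidean distance) and the negative Boltzmann-Gibbs-Shannon entropy (giving the Kullback-Leibler divergence on the simplex):

```python
import numpy as np

def bregman(f, grad_f, x, y):
    """Bregman divergence D_f(x, y) = f(x) - f(y) - <grad f(y), x - y>."""
    return f(x) - f(y) - grad_f(y) @ (x - y)

# Two classical Bregman functions and their induced divergences:
sq = lambda x: 0.5 * (x @ x)               # half squared norm -> 0.5*||x - y||^2
sq_grad = lambda x: x
neg_ent = lambda x: (x * np.log(x)).sum()  # negative BGS entropy -> KL divergence
neg_ent_grad = lambda x: np.log(x) + 1.0
```

For vectors with equal coordinate sums (e.g. two probability vectors) the entropy-induced divergence reduces exactly to the KL divergence.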
- Published
- 2018
32. Simplified versions of the conditional gradient method
- Author
-
Igor Konnov
- Subjects
021103 operations research ,Control and Optimization ,Optimization problem ,Applied Mathematics ,0211 other engineering and technologies ,010103 numerical & computational mathematics ,02 engineering and technology ,Management Science and Operations Research ,01 natural sciences ,90C25, 90C30 ,Frank–Wolfe algorithm ,Optimization and Control (math.OC) ,Simple (abstract algebra) ,Convergence (routing) ,FOS: Mathematics ,Applied mathematics ,0101 mathematics ,Mathematics - Optimization and Control ,Mathematics - Abstract
We suggest simple modifications of the conditional gradient method for smooth optimization problems, which maintain the basic convergence properties but substantially reduce the implementation cost of each iteration. Namely, we propose a step-size procedure that requires no line search, and an inexact solution of the direction-finding subproblem. Preliminary results of computational tests confirm the efficiency of the proposed modifications., Comment: 20 pages
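For comparison, the classical line-search-free conditional gradient iteration over the probability simplex uses the open-loop step $\gamma_k = 2/(k+2)$; a minimal sketch of that general idea (not Konnov's specific step-size or inexact-subproblem rules):

```python
import numpy as np

def frank_wolfe_simplex(grad, x0, iters=200):
    """Conditional gradient (Frank-Wolfe) over the probability simplex with
    the line-search-free open-loop step gamma_k = 2/(k+2).  The linear
    subproblem min_{s in simplex} <grad, s> is solved at a vertex."""
    x = x0.copy()
    for k in range(iters):
        g = grad(x)
        s = np.zeros_like(x)
        s[np.argmin(g)] = 1.0          # best vertex of the simplex
        gamma = 2.0 / (k + 2.0)
        x = (1.0 - gamma) * x + gamma * s
    return x
```

The iterates stay feasible by construction, since each step is a convex combination of feasible points.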
- Published
- 2018
33. A Dynamical Approach to Two-Block Separable Convex Optimization Problems with Linear Constraints
- Author
-
Sandy Bitterlich, Ernö Robert Csetnek, and Gert Wanka
- Subjects
Control and Optimization ,Lyapunov analysis ,Duality (optimization) ,Subderivative ,Dynamical system ,01 natural sciences ,dynamical system ,Separable space ,37N40, 49N15, 90C25, 90C46 ,Block (programming) ,Saddle point ,FOS: Mathematics ,Applied mathematics ,0101 mathematics ,Proximal AMA ,Mathematics - Optimization and Control ,Lagrangian ,Mathematics ,structured convex minimization ,010102 general mathematics ,subdifferential ,saddle points ,Computer Science Applications ,Convex optimization ,010101 applied mathematics ,Optimization and Control (math.OC) ,Ordinary differential equation ,primal dual algorithm ,Signal Processing ,duality ,Analysis - Abstract
The aim of this manuscript is to approach, by means of first-order differential equations/inclusions, convex programming problems with two-block separable linear constraints and objectives, whereby (at least) one of the components of the latter is assumed to be strongly convex. Each block of the objective contains a further smooth convex function. We investigate the proposed dynamical system and prove that its trajectories asymptotically converge to a saddle point of the Lagrangian of the convex optimization problem. Time discretization of the dynamical system leads to the alternating minimization algorithm AMA and also to its proximal variant recently introduced in the literature., Comment: 30 pages, 2 figures
- Published
- 2021
34. On inexact solution of auxiliary problems in tensor methods for convex optimization
- Author
-
Grapiglia, G.N., Nesterov, Yu., UCL - SSH/LIDAM/CORE - Center for operations research and econometrics, and UCL - SST/ICTM/INMA - Pôle en ingénierie mathématique
- Subjects
Control and Optimization ,0211 other engineering and technologies ,Hölder condition ,010103 numerical & computational mathematics ,02 engineering and technology ,Type (model theory) ,tensor methods ,01 natural sciences ,Tensor (intrinsic definition) ,higher-order methods ,FOS: Mathematics ,Applied mathematics ,49M15, 49M37, 58C15, 90C25, 90C30 ,0101 mathematics ,Mathematics - Optimization and Control ,Mathematics
worst-case global complexity bounds ,021103 operations research ,Applied Mathematics ,Optimization and Control (math.OC) ,Convex optimization ,Minification ,Convex function ,Software ,unconstrained minimization - Abstract
In this paper we study the auxiliary problems that appear in $p$-order tensor methods for unconstrained minimization of convex functions with $\nu$-Hölder continuous $p$th derivatives. This type of auxiliary problem corresponds to the minimization of a $(p+\nu)$-order regularization of the $p$th-order Taylor approximation of the objective. For the case $p=3$, we consider the use of Gradient Methods with Bregman distance. When the regularization parameter is sufficiently large, we prove that the referred methods take at most $\mathcal{O}(\log(\epsilon^{-1}))$ iterations to find either a suitable approximate stationary point of the tensor model or an $\epsilon$-approximate stationary point of the original objective function.
- Published
- 2020
35. A universal modification of the linear coupling method
- Author
-
Alexander Gasnikov, Anton Anikin, Sergey Guminov, and Alexander Gornov
- Subjects
Mathematical optimization ,Control and Optimization ,Line search ,Smoothness (probability theory) ,Degree (graph theory) ,Applied Mathematics ,010102 general mathematics ,01 natural sciences ,90C25, 68Q25 ,010101 applied mathematics ,Optimization and Control (math.OC) ,Convex optimization ,FOS: Mathematics ,Auxiliary line ,A priori and a posteriori ,0101 mathematics ,Parametric family ,Mathematics - Optimization and Control ,Gradient method ,Software ,Mathematics - Abstract
In the late sixties, N. Shor and B. Polyak independently proposed optimal first-order methods for non-smooth convex optimization problems. In 1982 A. Nemirovski proposed optimal first-order methods for smooth convex optimization problems, which utilized auxiliary line search. In 1985 A. Nemirovski and Yu. Nesterov proposed a parametric family of optimal first-order methods for convex optimization problems with intermediate smoothness. In 2013 Yu. Nesterov proposed a universal gradient method which combined all the good properties of the previous methods, except the possibility of using auxiliary line search. One can typically observe that in practice auxiliary line search improves performance for many tasks. In this paper, we propose what is apparently the first method of non-smooth convex optimization that allows the use of a line search procedure. Moreover, it is based on the universal gradient method, which does not require any a priori information about the actual degree of smoothness of the problem. Numerical experiments demonstrate that the proposed method is, in some cases, considerably faster than Nesterov's universal gradient method.
- Published
- 2018
36. Optimal time delays in a class of reaction-diffusion equations
- Author
-
Eduardo Casas, Fredi Tröltzsch, and Mariano Mateos
- Subjects
Time delays ,Class (set theory) ,021103 operations research ,Control and Optimization ,Applied Mathematics ,0211 other engineering and technologies ,02 engineering and technology ,Management Science and Operations Research ,Time optimal ,01 natural sciences ,010101 applied mathematics ,State function ,49K20, 39M05, 35K58 ,Optimization and Control (math.OC) ,Reaction–diffusion system ,FOS: Mathematics ,Multiple time ,Applied mathematics ,Numerical tests ,Differentiable function ,0101 mathematics ,Mathematics - Optimization and Control ,Hardware_LOGICDESIGN ,Mathematics - Abstract
A class of semilinear parabolic reaction-diffusion equations with multiple time delays is considered. These time delays and corresponding weights are to be optimized such that the associated solution of the delay equation is the best approximation of a desired state function. We prove the differentiability of the mapping that associates to the vector of weights and delays the solution of the delay equation. Based on an adjoint calculus, first-order necessary optimality conditions are derived. Numerical test examples show the applicability of the concept of optimizing time delays., Comment: 17 pages, 2 figures
- Published
- 2018
37. A non-intrusive parallel-in-time approach for simultaneous optimization with unsteady PDEs
- Author
-
Stefanie Günther, Nicolas R. Gauger, and Jacob B. Schroder
- Subjects
Control and Optimization ,Partial differential equation ,Applied Mathematics ,MathematicsofComputing_NUMERICALANALYSIS ,010103 numerical & computational mathematics ,Supercomputer ,01 natural sciences ,010101 applied mathematics ,Optimization and Control (math.OC) ,ComputingMethodologies_SYMBOLICANDALGEBRAICMANIPULATION ,FOS: Mathematics ,Applied mathematics ,0101 mathematics ,Simultaneous optimization ,Mathematics - Optimization and Control ,Software ,Mathematics - Abstract
This paper presents a non-intrusive framework for integrating existing unsteady partial differential equation (PDE) solvers into a parallel-in-time simultaneous optimization algorithm. The time-parallelization is provided by the non-intrusive software library XBraid, which applies an iterative multigrid reduction technique to the time domain of existing time-marching schemes for solving unsteady PDEs. Its general user-interface has been extended for computing adjoint sensitivities such that gradients of output quantities with respect to design changes can be computed parallel-in-time alongside with the primal PDE solution. In this paper, the primal and adjoint XBraid iterations are embedded into a simultaneous optimization framework, namely the One-shot method. In this method, design updates towards optimality are employed after each state and adjoint update such that optimality and feasibility of the design and the PDE solution are reached simultaneously. The time-parallel optimization method is validated on an advection-dominated flow control problem which shows significant speedup over a classical time-serial optimization algorithm.
- Published
- 2018
38. Solving parameter estimation problems with discrete adjoint exponential integrators
- Author
-
Mahesh Narayanamurthi, Adrian Sandu, and Ulrich Römer
- Subjects
Control and Optimization ,Automatic differentiation ,Iterative method ,MathematicsofComputing_NUMERICALANALYSIS ,Context (language use) ,010103 numerical & computational mathematics ,Exponential integrator ,01 natural sciences ,symbols.namesake ,FOS: Mathematics ,Applied mathematics ,Mathematics - Numerical Analysis ,0101 mathematics ,Mathematics - Optimization and Control ,Mathematics ,Partial differential equation ,Applied Mathematics ,Numerical Analysis (math.NA) ,Inverse problem ,34H05, 34K29, 34K35 ,Exponential function ,010101 applied mathematics ,Optimization and Control (math.OC) ,Jacobian matrix and determinant ,symbols ,Software - Abstract
The solution of inverse problems in a variational setting finds best estimates of the model parameters by minimizing a cost function that penalizes the mismatch between model outputs and observations. The gradients required by the numerical optimization process are computed using adjoint models. Exponential integrators are a promising family of time discretizations for evolutionary partial differential equations. In order to allow the use of these discretizations in the context of inverse problems, adjoints of exponential integrators are required. This work derives the discrete adjoint formulae for W-type exponential propagation iterative methods of Runge-Kutta type (EPIRK-W). These methods allow arbitrary approximations of the Jacobian while maintaining the overall accuracy of the forward integration. The use of Jacobian approximation matrices that do not depend on the model state avoids the complex calculation of Hessians in the discrete adjoint formulae, and allows efficient adjoint code generation via algorithmic differentiation. We use the discrete EPIRK-W adjoints to solve inverse problems with the Lorenz-96 model and a computational magnetics benchmark test. Numerical results validate our theoretical derivations.
- Published
- 2018
39. Two stochastic optimization algorithms for convex optimization with fixed point constraints
- Author
-
Hideaki Iiduka
- Subjects
Mathematical optimization ,021103 operations research ,Control and Optimization ,Optimization algorithm ,Applied Mathematics ,0211 other engineering and technologies ,010103 numerical & computational mathematics ,02 engineering and technology ,Fixed point ,01 natural sciences ,Stochastic programming ,Constraint (information theory) ,65K05, 90C15, 90C25 ,Optimization and Control (math.OC) ,Convex optimization ,FOS: Mathematics ,Stochastic optimization ,0101 mathematics ,Stochastic gradient method ,Convex function ,Mathematics - Optimization and Control ,Software ,Mathematics - Abstract
Two optimization algorithms are proposed for solving a stochastic programming problem for which the objective function is given in the form of the expectation of convex functions and the constraint set is defined by the intersection of fixed point sets of nonexpansive mappings in a real Hilbert space. This setting of fixed point constraints enables consideration of the case in which the projection onto each of the constraint sets cannot be computed efficiently. Both algorithms use a convex function and a nonexpansive mapping determined by a certain probabilistic process at each iteration. One algorithm blends a stochastic gradient method with the Halpern fixed point algorithm. The other is based on a stochastic proximal point algorithm and the Halpern fixed point algorithm; it can be applied to nonsmooth convex optimization. Convergence analysis showed that, under certain assumptions, any weak sequential cluster point of the sequence generated by either algorithm almost surely belongs to the solution set of the problem. Convergence rate analysis illustrated their efficiency, and the numerical results of convex optimization over fixed point sets demonstrated their effectiveness.
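One of the two building blocks combined by both algorithms is the Halpern fixed point iteration $x_{k+1} = \alpha_k x_0 + (1-\alpha_k) T(x_k)$. A minimal scalar sketch with the illustrative anchor weights $\alpha_k = 1/(k+2)$ (an assumption for the example, not the paper's parameter choice):

```python
def halpern(T, x0, anchor, iters=2000):
    """Halpern iteration x_{k+1} = a_k*anchor + (1 - a_k)*T(x_k) with
    a_k = 1/(k+2); for a nonexpansive T it converges to the fixed point
    of T nearest the anchor."""
    x = x0
    for k in range(iters):
        a = 1.0 / (k + 2.0)
        x = a * anchor + (1.0 - a) * T(x)
    return x
```

With the contraction $T(x) = x/2$ (unique fixed point 0), the iterates approach 0 at roughly an $O(1/k)$ rate, reflecting the slow decay of the anchor weights.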
- Published
- 2018
40. Gradient methods with regularization for constrained optimization problems and their complexity estimates
- Author
-
Igor Konnov
- Subjects
TheoryofComputation_MISCELLANEOUS ,Mathematical optimization ,Control and Optimization ,Weak convergence ,G.1.6 ,Applied Mathematics ,Management Science and Operations Research ,Regularization (mathematics) ,90C25, 65K05, 65J20 ,Frank–Wolfe algorithm ,Rate of convergence ,Optimization and Control (math.OC) ,Convex optimization ,FOS: Mathematics ,Proximal gradient methods for learning ,Convergence tests ,Proximal Gradient Methods ,Mathematics - Optimization and Control ,Mathematics - Abstract
We suggest simple implementable modifications of conditional gradient and gradient projection methods for smooth convex optimization problems in Hilbert spaces. Usually, the customary methods attain only weak convergence. We prove strong convergence of the new versions and establish their complexity estimates, which appear similar to the convergence rate of the weakly convergent versions., Comment: 17 pages
- Published
- 2017
41. Performance bounds for Nash equilibria in submodular utility systems with user groups
- Author
-
Ali Pezeshki, Yajing Liu, and Edwin K. P. Chong
- Subjects
FOS: Computer and information sciences ,0209 industrial biotechnology ,Mathematical optimization ,Control and Optimization ,Computer Networks and Communications ,Computer science ,02 engineering and technology ,Type (model theory) ,Submodular set function ,Social group ,symbols.namesake ,020901 industrial engineering & automation ,Computer Science - Computer Science and Game Theory ,Artificial Intelligence ,FOS: Mathematics ,0202 electrical engineering, electronic engineering, information engineering ,Mathematics - Optimization and Control ,Group (mathematics) ,TheoryofComputation_GENERAL ,020206 networking & telecommunications ,Function (mathematics) ,Term (time) ,Human-Computer Interaction ,Interpersonal ties ,Optimization and Control (math.OC) ,Control and Systems Engineering ,Nash equilibrium ,Signal Processing ,symbols ,Computer Science and Game Theory (cs.GT) ,Information Systems - Abstract
In this paper, we consider variations of the utility system considered by Vetta, in which users are grouped together. Our aim is to establish how grouping and cooperation among users affect performance bounds. We consider two types of grouping. The first type is from \cite{Zhang2014}, where each user belongs to a group of users having social ties with it. For this type of utility system, each user's strategy maximizes its social group utility function, giving rise to the notion of \emph{social-aware Nash equilibrium}. We prove that this social utility system satisfies Vetta's bounding results for non-cooperative systems, thus establishing provable performance guarantees for the social-aware Nash equilibrium. For the second type of grouping, the set of users is partitioned into $l$ disjoint groups, where the users within a group cooperate to maximize their group utility function, giving rise to the notion of \emph{group Nash equilibrium}. In this case, each group can be viewed as a new user with vector-valued actions, and a 1/2 bound for the performance of group Nash equilibrium follows from the result of Vetta. But, as we show, tighter bounds involving curvature can be established. By defining the group curvature $c_{k_i}$ associated with group $i$ with $k_i$ users, we show that if the social utility function is nondecreasing and submodular, then any group Nash equilibrium achieves at least $1/(1+\max_{1\leq i\leq l}c_{k_i})$ of the optimal social utility, which is tighter than that for the case without grouping. As a special case, if each user has the same action space, then we have that any group Nash equilibrium achieves at least $1/(1+c_{k^*})$ of the optimal social utility, where $k^*$ is the least number of users among the $l$ groups. Finally, we present an example of a utility system for database-assisted spectrum access to illustrate our results., Comment: This paper was accepted by Journal of Control and Decision
- Published
- 2017
42. A unifying theory of exactness of linear penalty functions II: parametric penalty functions
- Author
-
M.V. Dolgopolik
- Subjects
Class (set theory) ,021103 operations research ,Control and Optimization ,Duality gap ,Applied Mathematics ,0211 other engineering and technologies ,Zero (complex analysis) ,010103 numerical & computational mathematics ,02 engineering and technology ,Management Science and Operations Research ,01 natural sciences ,Stationary point ,65K05, 90C30 ,Optimization and Control (math.OC) ,Convergence (routing) ,FOS: Mathematics ,Applied mathematics ,Penalty method ,0101 mathematics ,Mathematics - Optimization and Control ,Smoothing ,Parametric statistics ,Mathematics - Abstract
In this article we develop a general theory of exact parametric penalty functions for constrained optimization problems. The main advantage of the method of parametric penalty functions is that a parametric penalty function can be both smooth and exact, unlike the standard (i.e., non-parametric) exact penalty functions, which are always nonsmooth. We obtain several necessary and/or sufficient conditions for the exactness of parametric penalty functions, and for the zero duality gap property to hold true for these functions. We also prove some convergence results for the method of parametric penalty functions, and derive necessary and sufficient conditions for a parametric penalty function to have no stationary points outside the set of feasible points of the constrained optimization problem under consideration. In the second part of the paper, we apply the general theory of exact parametric penalty functions to a class of parametric penalty functions introduced by Huyer and Neumaier, and to smoothing approximations of nonsmooth exact penalty functions. The general approach adopted in this article allows us to unify and significantly sharpen many existing results on parametric penalty functions., This is a slightly edited version of the Accepted Manuscript of an article published by Taylor & Francis in Optimization on 06/07/2017
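To make the smooth-versus-nonsmooth distinction concrete, here is a toy sketch (Python assumed; this is a generic smoothing of the exact $\ell_1$ penalty, not the Huyer-Neumaier class itself, and all names are invented): for $\min x^2$ s.t. $x \geq 1$, the smoothed penalty minimizer approaches the constrained solution $x^* = 1$ as the smoothing parameter shrinks, while remaining differentiable throughout.

```python
import math

def smoothed_penalty(x, c, eps):
    # Smooth surrogate of the exact l1 penalty for:  min x^2  s.t.  x >= 1.
    # max(0, t) is replaced by 0.5 * (t + sqrt(t^2 + eps^2)).
    t = 1.0 - x                      # constraint violation: g(x) = 1 - x <= 0
    return x * x + c * 0.5 * (t + math.sqrt(t * t + eps * eps))

def argmin_1d(phi, lo, hi, iters=200):
    # Ternary search; phi is convex on [lo, hi] for this example.
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if phi(m1) < phi(m2):
            hi = m2
        else:
            lo = m1
    return 0.5 * (lo + hi)

# c = 3 exceeds the Lagrange multiplier (which equals 2 here), so the
# underlying l1 penalty is exact; shrinking eps drives the smooth
# minimizer toward the constrained minimizer x* = 1.
xs = [argmin_1d(lambda x: smoothed_penalty(x, 3.0, eps), 0.0, 2.0)
      for eps in (1e-1, 1e-2, 1e-3)]
```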
- Published
- 2017
43. The pseudomonotone polar for multivalued operators
- Author
-
John Cotrina and Orestes Bueno
- Subjects
Work (thermodynamics) ,Pure mathematics ,021103 operations research ,Control and Optimization ,Polarity (physics) ,47H04, 47H05, 49J53 ,Applied Mathematics ,0211 other engineering and technologies ,010103 numerical & computational mathematics ,02 engineering and technology ,Management Science and Operations Research ,Characterization (mathematics) ,01 natural sciences ,Monotone polygon ,Optimization and Control (math.OC) ,Variational inequality ,FOS: Mathematics ,Polar ,Point (geometry) ,0101 mathematics ,Mathematics - Optimization and Control ,Mathematics - Abstract
In this work, we study the pseudomonotonicity of multivalued operators from the point of view of polarity, in an analogous way to the well-known monotone polar due to Mart\'inez-Legaz and Svaiter, and the quasimonotone polar recently introduced by Bueno and Cotrina. We show that this new polar, adapted for pseudomonotonicity, possesses properties analogous to those of the monotone and quasimonotone polars, among which are a characterization of pseudomonotonicity, maximality and pre-maximality. Furthermore, we characterize the notion of $D$-maximal pseudomonotonicity introduced by Hadjisavvas. We conclude this work by studying the connections between pseudomonotonicity and variational inequality problems., Comment: 14 pages
- Published
- 2017
44. A priori stopping rule for an iterative Bregman method for optimal control problems
- Author
-
Frank Pörner
- Subjects
Pointwise ,Control and Optimization ,Optimization problem ,Applied Mathematics ,MathematicsofComputing_NUMERICALANALYSIS ,Stopping rule ,49M15, 49N45, 65K10 ,010103 numerical & computational mathematics ,Bregman divergence ,Optimal control ,01 natural sciences ,Regularization (mathematics) ,010101 applied mathematics ,Bregman method ,Optimization and Control (math.OC) ,ComputingMethodologies_SYMBOLICANDALGEBRAICMANIPULATION ,FOS: Mathematics ,Applied mathematics ,A priori and a posteriori ,0101 mathematics ,Mathematics - Optimization and Control ,Software ,Mathematics - Abstract
In this article we continue our investigation of the iterative regularization method for optimization problems based on Bregman distances. The optimization problems are subject to pointwise inequality constraints in $L^2(\Omega)$. We provide an estimate of the noise error for perturbed data, which can be used to construct an a priori stopping rule. Furthermore, we show how to implement our method with a semi-smooth Newton method using finite elements, and we present numerical results for the stopping rule.
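A finite-dimensional toy version of such an iteration can be sketched as follows (Python assumed; with the squared-norm Bregman distance the scheme reduces to iterated Tikhonov regularization, a simplified stand-in for the paper's $L^2$ control setting with pointwise constraints, and all names are invented). The point of a stopping rule is that the iteration count itself acts as the regularization parameter for noisy data:

```python
import numpy as np

def bregman_iterations(A, y, u_ref, alpha, k_max):
    # Bregman/proximal-point iteration with the squared-norm Bregman distance:
    #   u_{k+1} = argmin_u ||A u - y||^2 + alpha * ||u - u_k||^2,
    # i.e. iterated Tikhonov; each step solves the subproblem's normal equations.
    M = A.T @ A + alpha * np.eye(A.shape[1])
    u = np.zeros(A.shape[1])
    errs = []
    for _ in range(k_max):
        u = np.linalg.solve(M, A.T @ y + alpha * u)
        errs.append(float(np.linalg.norm(u - u_ref)))
    return errs

rng = np.random.default_rng(0)
n = 12
A = np.array([[1.0 / (i + j + 1.0) for j in range(n)]
              for i in range(n)])          # Hilbert matrix: severely ill-conditioned
u_true = np.ones(n)
delta = 1e-3                               # noise level, assumed known a priori
y_noisy = A @ u_true + delta * rng.standard_normal(n)

errs = bregman_iterations(A, y_noisy, u_true, alpha=1e-2, k_max=100)
k_star = int(np.argmin(errs))   # oracle stop; an a priori rule would pick k from delta
```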
- Published
- 2017
45. High-order adaptive Gegenbauer integral spectral element method for solving non-linear optimal control problems
- Author
-
Kareem T. Elgindy
- Subjects
Smoothness ,Control and Optimization ,Gegenbauer polynomials ,Adaptive algorithm ,Discretization ,Applied Mathematics ,Spectral element method ,010103 numerical & computational mathematics ,Management Science and Operations Research ,Optimal control ,01 natural sciences ,010101 applied mathematics ,Nonlinear system ,Optimization and Control (math.OC) ,FOS: Mathematics ,Orthogonal collocation ,Applied mathematics ,0101 mathematics ,Mathematics - Optimization and Control ,Mathematics - Abstract
In this work, we propose an adaptive spectral element algorithm for solving nonlinear optimal control problems. The method employs orthogonal collocation at the shifted Gegenbauer-Gauss points combined with very accurate and stable numerical quadratures to fully discretize the multiple-phase integral form of the optimal control problem. The proposed algorithm relies on exploiting the underlying smoothness properties of the solutions for computing approximate solutions efficiently. In particular, the method brackets discontinuities and "points of nonsmoothness" through a novel local adaptive algorithm, which achieves a desired accuracy on the discrete dynamical system equations by adjusting both the mesh size and the degree of the approximating polynomials. A rigorous error analysis of the developed numerical quadratures is presented. Finally, the efficiency of the proposed method is demonstrated on three test examples from the open literature., 24 pages, 18 figures
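The efficiency of such methods for smooth solutions rests on the spectral accuracy of Gauss-type quadrature. A minimal sketch (Python assumed; Gauss-Legendre is the Gegenbauer family with parameter 1/2, used here as a simple stand-in for the paper's shifted Gegenbauer-Gauss rules, and the function names are invented) shows the error collapsing as the number of nodes grows:

```python
import numpy as np

def gauss_legendre_integral(f, n):
    # n-point Gauss-Legendre rule on [-1, 1]; exact for polynomials of
    # degree up to 2n - 1, and spectrally accurate for smooth integrands.
    x, w = np.polynomial.legendre.leggauss(n)
    return float(w @ f(x))

exact = 2.0 * np.sin(1.0)   # integral of cos(x) over [-1, 1]
errors = [abs(gauss_legendre_integral(np.cos, n) - exact) for n in (2, 4, 8)]
```

For this smooth integrand the 8-point rule is already accurate to roughly machine precision, which is the behavior adaptive spectral element methods exploit between the bracketed points of nonsmoothness.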
- Published
- 2017
46. Multi-haul quasi network flow model for vertical alignment optimization
- Author
-
Vahid Beiranvand, Warren Hare, Yves Lucet, and Shahadat Hossain
- Subjects
90C90 (Primary), 90C11 (Secondary) ,Mathematical optimization ,Control and Optimization ,Optimization problem ,Computer science ,G.1.6 ,Reliability (computer networking) ,0211 other engineering and technologies ,I.6.4 ,J.6 ,02 engineering and technology ,Management Science and Operations Research ,Industrial and Manufacturing Engineering ,Field (computer science) ,Approximation error ,0502 economics and business ,FOS: Mathematics ,Mathematics - Optimization and Control ,Integer programming ,050210 logistics & transportation ,021103 operations research ,Applied Mathematics ,05 social sciences ,Flow network ,Graph ,Computer Science Applications ,Vertical alignment ,Optimization and Control (math.OC) ,Graph (abstract data type) - Abstract
The vertical alignment optimization problem for road design aims to generate a vertical alignment of a new road with a minimum cost, while satisfying safety and design constraints. We present a new model called multi-haul quasi network flow (MH-QNF) for vertical alignment optimization that improves the accuracy and reliability of previous mixed integer linear programming models. We evaluate the performance of the new model compared to two state-of-the-art models in the field: the complete transportation graph (CTG) and the quasi network flow (QNF) models. The numerical results show that, within a 1% relative error, the proposed model is robust and solves more than 93% of test problems compared to 82% for the CTG and none for the QNF. Moreover, the MH-QNF model solves the problems approximately 8 times faster than the CTG model., Comment: 23 pages, 4 figures. The Version of Record of this manuscript has been published and is available in Engineering Optimization (GENO); estimated publication date: Jan 12, 2017 (online); http://dx.doi.org/10.1080/0305215X.2016.1271880
- Published
- 2017
47. Outer approximation methods for solving variational inequalities in Hilbert space
- Author
-
Rafał Zalas, Simeon Reich, and Aviv Gibali
- Subjects
Sequence ,021103 operations research ,Control and Optimization ,Applied Mathematics ,010102 general mathematics ,0211 other engineering and technologies ,Convex set ,Hilbert space ,02 engineering and technology ,Management Science and Operations Research ,Fixed point ,Strongly monotone ,Lipschitz continuity ,01 natural sciences ,symbols.namesake ,Optimization and Control (math.OC) ,Variational inequality ,FOS: Mathematics ,symbols ,Applied mathematics ,47H09, 47H10, 47J20, 47J25, 65K15 ,0101 mathematics ,Mathematics - Optimization and Control ,Finite set ,Mathematics - Abstract
In this paper we study variational inequalities in a real Hilbert space, which are governed by a strongly monotone and Lipschitz continuous operator $F$ over a closed and convex set $C$. We assume that the set $C$ can be outerly approximated by the fixed point sets of a sequence of certain quasi-nonexpansive operators called cutters. We propose an iterative method whose main idea is to project at each step onto a particular half-space constructed by using the input data. Our approach is based on a method presented by Fukushima in 1986, which has recently been extended by several authors. In the present paper we establish strong convergence in Hilbert space. We emphasize that, to the best of our knowledge, Fukushima's method has so far been considered only in the Euclidean setting with different conditions on $F$. We provide several examples for the case where $C$ is the common fixed point set of a finite number of cutters, with numerical illustrations of our theoretical results.
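For orientation, the classical projection method that such schemes generalize can be sketched in a few lines (Python assumed; this uses the exact projection onto a box $C$ rather than the paper's half-space outer approximations, and all names are invented). For strongly monotone, Lipschitz $F$ the iteration $x_{k+1} = P_C(x_k - \lambda F(x_k))$ is a contraction for small $\lambda$:

```python
import numpy as np

def project_box(x, lo=0.0, hi=1.0):
    # Euclidean projection onto the box C = [lo, hi]^n.
    return np.clip(x, lo, hi)

def vi_projection_method(F, x0, lam, iters=500):
    # Classical projected iteration x_{k+1} = P_C(x_k - lam * F(x_k)).
    # It converges for strongly monotone, Lipschitz F when lam < 2*eta/L^2;
    # the paper's method instead projects onto half-spaces built from cutters.
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = project_box(x - lam * F(x))
    return x

t = np.array([1.5, -0.3, 0.4])
F = lambda x: 2.0 * (x - t)     # strongly monotone (eta = 2), Lipschitz (L = 2)
x_star = vi_projection_method(F, np.zeros(3), lam=0.4)
# The VI solution is the projection of t onto the box: [1.0, 0.0, 0.4].
```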
- Published
- 2017
48. Convergence analysis of a proximal point algorithm for minimizing differences of functions
- Author
-
Nguyen Mau Nam and Nguyen Thai An
- Subjects
Class (set theory) ,021103 operations research ,Control and Optimization ,Optimization problem ,Property (programming) ,Applied Mathematics ,MathematicsofComputing_NUMERICALANALYSIS ,0211 other engineering and technologies ,010103 numerical & computational mathematics ,02 engineering and technology ,Function (mathematics) ,Management Science and Operations Research ,01 natural sciences ,Convexity ,Optimization and Control (math.OC) ,TheoryofComputation_ANALYSISOFALGORITHMSANDPROBLEMCOMPLEXITY ,Convergence (routing) ,Convex optimization ,FOS: Mathematics ,0101 mathematics ,Convex function ,Mathematics - Optimization and Control ,Algorithm ,Mathematics - Abstract
Several optimization schemes are known for convex optimization problems. However, numerical algorithms for solving nonconvex optimization problems are still underdeveloped. Significant progress in going beyond convexity was made by considering the class of functions representable as differences of convex functions. In this paper, we introduce a generalized proximal point algorithm to minimize the difference of a nonconvex function and a convex function. We also study convergence results for this algorithm under the main assumption that the objective function satisfies the Kurdyka–Łojasiewicz property.
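A one-dimensional sketch of a proximal DC-style step may help (Python assumed; this is the classical DCA toy where both components are convex, a simplified relative of the paper's generalized scheme, and all names are invented). Each iteration linearizes the subtracted term at a subgradient and solves a strongly convex subproblem in closed form:

```python
def dca_prox(x0, t=0.5, iters=50):
    # Proximal DCA-style iteration for f(x) = g(x) - h(x) with
    # g(x) = x^2 and h(x) = 2|x| (a classical DC toy; minima at x = +/-1).
    # Step: x_{k+1} = argmin_x  g(x) + (1/(2t)) * (x - x_k)^2 - v_k * x,
    # where v_k is a subgradient of h at x_k.
    x = x0
    for _ in range(iters):
        v = 2.0 if x > 0 else (-2.0 if x < 0 else 0.0)  # v in the subdifferential of h
        # Closed-form minimizer of the strongly convex quadratic subproblem:
        x = (x / t + v) / (2.0 + 1.0 / t)
    return x

x_star = dca_prox(0.3)   # iterates stay positive and converge to the critical point x = 1
```

The iteration settles on one of the two critical points depending on the starting sign, which illustrates why convergence results for such algorithms are stated in terms of critical points under the Kurdyka-Lojasiewicz property rather than global minimizers.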
- Published
- 2016
49. Functional A Posteriori Error Estimates for Time-Periodic Parabolic Optimal Control Problems
- Author
-
Sergey Repin, Ulrich Langer, and Monika Wolfmayr
- Subjects
Mathematical optimization ,Control and Optimization ,MathematicsofComputing_NUMERICALANALYSIS ,Finite element approximations ,010103 numerical & computational mathematics ,Type (model theory) ,01 natural sciences ,parabolic time-periodic optimal control problems ,Error analysis ,FOS: Mathematics ,Applied mathematics ,Mathematics - Numerical Analysis ,Numerical tests ,functional a posteriori error estimates ,0101 mathematics ,Mathematics - Optimization and Control ,49N20, 35Q61, 65M60, 65F08 ,Mathematics ,ta113 ,Time periodic ,ta111 ,Numerical Analysis (math.NA) ,State (functional analysis) ,Optimal control ,Computer Science Applications ,010101 applied mathematics ,Optimization and Control (math.OC) ,multiharmonic finite element methods ,Signal Processing ,A priori and a posteriori ,Analysis - Abstract
This article is devoted to the a posteriori error analysis of multiharmonic finite element approximations to distributed optimal control problems with time-periodic state equations of parabolic type. We derive a posteriori estimates of the functional type, which are easily computable and provide guaranteed upper bounds for the state and co-state errors as well as for the cost functional. These theoretical results are confirmed by several numerical tests that show high efficiency of the a posteriori error bounds.
- Published
- 2016
50. Weak minimizers, minimizers and variational inequalities for set-valued functions. A blooming wreath?
- Author
-
Carola Schrage and Giovanni Paolo Crespi
- Subjects
Control and Optimization ,0211 other engineering and technologies ,02 engineering and technology ,Management Science and Operations Research ,Type (model theory) ,Characterization (mathematics) ,01 natural sciences ,Set (abstract data type) ,symbols.namesake ,FOS: Mathematics ,Applied mathematics ,Set optimization, variational Inequalities, Dini derivatives ,0101 mathematics ,Special case ,Mathematics - Optimization and Control ,Set optimization ,Mathematics ,021103 operations research ,Dini derivatives ,Applied Mathematics ,010102 general mathematics ,Regular polygon ,variational Inequalities ,Dini derivative ,Vector optimization ,Optimization and Control (math.OC) ,Variational inequality ,symbols - Abstract
In the literature, necessary and sufficient conditions in terms of variational inequalities have been introduced to characterize minimizers of convex set-valued functions with values in a conlinear space. Similar results have been proved for a weaker concept of minimizers and weaker variational inequalities. The implications are proved using scalarization techniques that eventually provide original problems not fully equivalent to the set-valued counterparts. Therefore, in the course of this note, we try to close the network among the various notions proposed. More specifically, we prove that a minimizer is always a weak minimizer, and a solution to the stronger variational inequality is always also a solution to the weak variational inequality of the same type. As a special case we obtain a complete characterization of efficiency and weak efficiency in vector optimization by set-valued variational inequalities and their scalarizations. Indeed, this might eventually prove the usefulness of the set-optimization approach in renewing the study of vector optimization.
- Published
- 2016