12 results on "Emre Mengi"
Search Results
2. Nonsmooth algorithms for minimizing the largest eigenvalue with applications to inner numerical radius
- Author
-
Fatih Kangal and Emre Mengi (ORCID 0000-0003-0788-0066 & YÖK ID 113760), College of Sciences, Graduate School of Sciences and Engineering, and Department of Mathematics
- Subjects
Applied Mathematics, General Mathematics, Mathematical analysis, Radius, Computational Mathematics, Eigenvalue optimization, Nonsmooth optimization, Global optimization, Rate of convergence, Subspace projections, Field of values, Definite matrix pairs, Inner numerical radius, Mathematics, Eigenvalues and eigenvectors - Abstract
Nonsmoothness at optimal points is a common phenomenon in many eigenvalue optimization problems. We consider two recent algorithms to minimize the largest eigenvalue of a Hermitian matrix dependent on one parameter, both proven to be globally convergent regardless of nonsmoothness. One of these algorithms models the eigenvalue function with a piecewise quadratic function and is effective for nonconvex problems. The other projects the Hermitian matrix onto subspaces formed of eigenvectors and is effective for large-scale problems; we generalize it slightly to cope with nonsmoothness. For both algorithms we analyze the rate of convergence in the nonsmooth setting, when the largest eigenvalue is multiple at the minimizer and zero lies strictly in the interior of the generalized Clarke derivative, and prove that both algorithms converge rapidly. The algorithms are applied to, and the deduced results are illustrated on, the computation of the inner numerical radius: the modulus of the point on the boundary of the field of values closest to the origin. This quantity carries significance, for instance, for the numerical solution of a symmetric definite generalized eigenvalue problem and the iterative solution of a saddle point linear system.
- Published
- 2019
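As a rough illustration of the quantity in the abstract above (not the paper's algorithm), a dense sampling sketch can approximate the inner numerical radius: the boundary of the field of values is sampled via top eigenvectors of Hermitian parts of rotated matrices, and the boundary point of smallest modulus is kept. The function names and the sampling scheme are illustrative assumptions.

```python
import numpy as np

def fov_boundary_points(A, n_angles=720):
    """Sample the boundary of the field of values W(A): for each angle
    theta, the top eigenvector v of the Hermitian part of e^{-i theta} A
    gives the boundary point v^* A v."""
    pts = []
    for theta in np.linspace(0.0, 2 * np.pi, n_angles, endpoint=False):
        H = (np.exp(-1j * theta) * A + np.exp(1j * theta) * A.conj().T) / 2
        _, V = np.linalg.eigh(H)      # eigenvalues in ascending order
        v = V[:, -1]                  # eigenvector of the largest eigenvalue
        pts.append(v.conj() @ A @ v)
    return np.array(pts)

def inner_numerical_radius(A, n_angles=720):
    # modulus of the sampled boundary point of W(A) closest to the origin
    return np.min(np.abs(fov_boundary_points(A, n_angles)))
```

For a Hermitian matrix the field of values is the interval between the extreme eigenvalues, so the sampled minimum modulus is easy to verify by hand.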
3. Approximation of stability radii for large-scale dissipative Hamiltonian systems
- Author
-
Emre Mengi, Nicat Aliyev, and Volker Mehrmann, College of Sciences and Department of Mathematics
- Subjects
Lyapunov function, Applied Mathematics, Positive-definite matrix, Hermitian matrix, Hamiltonian system, Dissipative Hamiltonian system, Robust stability, Stability radius, Eigenvalue optimization, Subspace projection, Structure-preserving subspace framework, Hermite interpolation, Computational Mathematics, Stability theory, Dissipative system, Hamiltonian (quantum mechanics), Mathematics, applied, Mathematics, Mathematical physics - Abstract
A linear time-invariant dissipative Hamiltonian (DH) system $\dot{x} = (J - R)Qx$, with a skew-Hermitian $J$, a Hermitian positive semidefinite $R$, and a Hermitian positive definite $Q$, is always Lyapunov stable and, under further weak conditions, even asymptotically stable. By exploiting the characterizations from Mehl et al. (SIAM J. Matrix Anal. Appl. 37(4), 1625-1654, 2016), we focus on the estimation of two stability radii for large-scale DH systems: one with respect to non-Hermitian perturbations of $R$ in the form $R + B \Delta C^H$ for given matrices $B$, $C$, and another with respect to Hermitian perturbations in the form $R + B \Delta B^H$, $\Delta = \Delta^H$. We propose subspace frameworks for both stability radii that converge at a superlinear rate in theory. The one for the non-Hermitian stability radius benefits from DH structure-preserving model order reduction techniques, whereas for the Hermitian stability radius we derive subspaces yielding a Hermite interpolation property between the full and projected problems. With the proposed frameworks, we are able to estimate the two stability radii accurately and efficiently for large-scale systems, including a finite-element model of an industrial disk brake., Deutsche Forschungsgemeinschaft, Project ME 40-1 of Priority Program 1897, Calm, Smooth and Smart; Deutsche Forschungsgemeinschaft, Project A02 of Sonderforschungsbereich 910
- Published
- 2020
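The Lyapunov stability claim in the abstract above — that $(J-R)Q$ with skew-Hermitian $J$, positive semidefinite $R$, and positive definite $Q$ has no eigenvalue in the open right half-plane — can be checked numerically with a small sketch; the random construction below is illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6

# Build a random DH system x' = (J - R) Q x:
M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
J = (M - M.conj().T) / 2                 # skew-Hermitian
F = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
R = F @ F.conj().T                       # Hermitian positive semidefinite
G = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
Q = G @ G.conj().T + np.eye(n)           # Hermitian positive definite

# Spectral abscissa of (J - R) Q: nonpositive for every DH system, since
# (J - R) Q is similar to Q^{1/2} (J - R) Q^{1/2}, whose numerical range
# lies in the closed left half-plane.
spectral_abscissa = np.max(np.linalg.eigvals((J - R) @ Q).real)
```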
4. Large-Scale Computation of $\mathcal{L}_\infty$-Norms by a Greedy Subspace Method
- Author
-
Nicat Aliyev, Peter Benner, Paul Schwerdtner, Emre Mengi, and Matthias Voigt
- Subjects
Discrete mathematics, Dimension (graph theory), Inverse, 34K17, 65D05, 65F15, 90C06, 90C26, 93D03, Numerical Analysis (math.NA), Linear subspace, Combinatorics, Singular value, Projection (relational algebra), Hermite interpolation, FOS: Mathematics, Mathematics - Numerical Analysis, Analysis, Subspace topology, Meromorphic function, Mathematics - Abstract
We are concerned with the computation of the ${\mathcal L}_\infty$-norm for an ${\mathcal L}_\infty$-function of the form $H(s) = C(s) D(s)^{-1} B(s)$, where the middle factor is the inverse of a meromorphic matrix-valued function, and $C(s),\, B(s)$ are meromorphic functions mapping to short-and-fat and tall-and-skinny matrices, respectively. For instance, transfer functions of descriptor systems and delay systems fall into this family. We focus on the case where the middle factor is large-scale. We propose a subspace projection method to obtain approximations of the function $H$ in which the middle factor is of much smaller dimension. The ${\mathcal L}_\infty$-norms are computed for the resulting reduced functions; the subspaces are then refined by means of the optimal points on the imaginary axis where the ${\mathcal L}_\infty$-norm of the reduced function is attained. The subspace method is designed so that certain Hermite interpolation properties hold between the largest singular values of the original and reduced functions. This leads to a locally superlinearly convergent algorithm with respect to the subspace dimension, which we prove and illustrate on various numerical examples., Comment: 23 pages, 3 figures
- Published
- 2017
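For intuition, a naive dense baseline for the norm in the abstract above — specialized here to a standard state-space transfer function $H(s) = C(sI - A)^{-1}B$ rather than the general meromorphic setting, with an illustrative function name — simply samples the largest singular value along the imaginary axis. This is exactly the full-order cost the greedy subspace method avoids for large problems.

```python
import numpy as np

def linf_norm_sampled(A, B, C, omegas):
    """Crude dense baseline: supremum over sampled frequencies of the
    largest singular value of H(i w) = C (i w I - A)^{-1} B."""
    n = A.shape[0]
    return max(
        np.linalg.svd(C @ np.linalg.solve(1j * w * np.eye(n) - A, B),
                      compute_uv=False)[0]
        for w in omegas
    )
```

For the scalar system $H(s) = 1/(s+1)$ the maximum modulus on the imaginary axis is attained at $\omega = 0$ with value 1, which the sampled estimate reproduces.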
5. Subspace Methods For Three-Parameter Eigenvalue Problems
- Author
-
Karl Meerbergen, Michiel E. Hochstenbach, Bor Plestenjak, and Emre Mengi, Center for Analysis, Scientific Computing & Appl., College of Sciences, and Department of Mathematics
- Subjects
Algebra and Number Theory, Helmholtz equation, Multiparameter eigenvalue problem, Iterative method, Arnoldi method, Baer wave equation, Ellipsoidal wave equation, Jacobi-Davidson method, Tensor, Applied Mathematics, Separation of variables, Linear subspace, Ellipsoid, Applied mathematics, Boundary value problem, Mathematics, Subspace topology, Eigenvalues and eigenvectors - Abstract
We propose subspace methods for three-parameter eigenvalue problems. Such problems arise when separation of variables is applied to separable boundary value problems; a particular example is the Helmholtz equation in ellipsoidal and paraboloidal coordinates. While several subspace methods for two-parameter eigenvalue problems exist, their extensions to a three-parameter setting seem challenging. An inherent difficulty is that, while for two-parameter eigenvalue problems we can exploit a relation to Sylvester equations to obtain a fast Arnoldi-type method, such a relation does not seem to exist when there are three or more parameters. Instead, we introduce a subspace iteration method with projections onto generalized Krylov subspaces that are constructed from scratch at every iteration using certain Ritz vectors as the initial vectors. Another possibility is a Jacobi-Davidson-type method for three or more parameters, which we generalize from its two-parameter counterpart. For both approaches, we introduce a selection criterion for deflation that is based on the angles between left and right eigenvectors. The Jacobi-Davidson approach is devised to locate eigenvalues close to a prescribed target; yet, it often also performs well when eigenvalues are sought based on the proximity of one of the components to a prescribed target. The subspace iteration method is devised specifically for the latter task. The proposed approaches are suitable especially for problems where the computation of several eigenvalues is required with high accuracy. MATLAB implementations of both methods have been made available in the package MultiParEig (see https://www.mathworks.com/matlabcentral/fileexchange/47844-multipareig)., Scientific and Technological Research Council of Turkey (TÜBİTAK); Slovenian Research Agency; Slovenia and Turkey bilateral project; NWO Vidi research grant
- Published
- 2019
6. Large-Scale and Global Maximization of the Distance to Instability
- Author
-
Emre Mengi, College of Sciences and Department of Mathematics
- Subjects
Scale (ratio), Numerical Analysis (math.NA), Maximization, Dynamical system, Instability, Matrix (mathematics), 65F15, 90C26, 93D09, 93D15, 49K35, Eigenvalue optimization, Maximin optimization, Distance to instability, Robust stability, Subspace framework, Large-scale optimization, Global optimization, Eigenvalue perturbation theory, FOS: Mathematics, Statistical physics, Mathematics - Numerical Analysis, Mathematics, Analysis - Abstract
The larger the distance to instability of a matrix, the more robustly stable the associated autonomous dynamical system is in the presence of uncertainties, and typically the less severe the transient behavior its solution exhibits. Motivated by these issues, we consider the maximization of the distance to instability of a matrix dependent on several parameters, a nonconvex optimization problem that is likely to be nonsmooth. In the first part we propose a globally convergent algorithm when the matrix is of small size and depends on a few parameters. In the second part we deal with problems involving large matrices. We tailor a subspace framework that reduces the size of the matrix drastically. The strength of the tailored subspace framework is proven with a global convergence result as the subspaces grow, and a superlinear rate-of-convergence result with respect to the subspace dimension., 36 pages, 6 figures
- Published
- 2018
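The quantity being maximized in the abstract above can be illustrated with a dense baseline (not the paper's method): the complex, unstructured distance to instability of a stable matrix is the minimum over real frequencies $\omega$ of $\sigma_{\min}(A - i\omega I)$, sketched here by sampling. The function name is an assumption.

```python
import numpy as np

def distance_to_instability_sampled(A, omegas):
    """Dense baseline for the distance to instability of a stable A:
    min over sampled frequencies w of sigma_min(A - i w I)."""
    n = A.shape[0]
    return min(
        np.linalg.svd(A - 1j * w * np.eye(n), compute_uv=False)[-1]
        for w in omegas
    )
```

For $A = -I$ one has $\sigma_{\min}(A - i\omega I) = \sqrt{1 + \omega^2}$, minimized at $\omega = 0$ with value 1.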
7. A Support Function Based Algorithm For Optimization With Eigenvalue Constraints
- Author
-
Emre Mengi
- Subjects
Inverse iteration, Mathematical analysis, Support function, Function (mathematics), Theoretical Computer Science, Rate of convergence, Fixed-point iteration, Divide-and-conquer eigenvalue algorithm, Convex function, Algorithm, Software, Eigenvalues and eigenvectors, Mathematics - Abstract
Optimization of convex functions subject to eigenvalue constraints is intriguing because of peculiar analytical properties of eigenvalue functions and is of practical interest because of a wide range of applications in fields such as structural design and control theory. Here we focus on the optimization of a linear objective subject to a constraint on the smallest eigenvalue of an analytic and Hermitian matrix-valued function. We propose a numerical approach based on quadratic support functions that overestimate the smallest eigenvalue function globally. The quadratic support functions are derived by employing variational properties of the smallest eigenvalue function over a set of Hermitian matrices. We establish the local convergence of the algorithm under mild assumptions and deduce a precise rate-of-convergence result by viewing the algorithm as a fixed point iteration. The convergence analysis reveals that the algorithm is immune to the nonsmooth nature of the smallest eigenvalue. We illustrate the practical applicability of the algorithm on pseudospectral functions.
- Published
- 2017
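The variational property underlying the support functions in the abstract above can be seen in a few lines: since $\lambda_{\min}(A(\omega)) = \min_{\|v\|=1} v^* A(\omega) v$, fixing $v_0$ to be the eigenvector at a point $\omega_0$ yields a function that overestimates $\lambda_{\min}$ everywhere and touches it at $\omega_0$. The affine family below is an illustrative choice (for an affine family this overestimator is linear; the paper's quadratic support functions require more machinery).

```python
import numpy as np

# Illustrative affine Hermitian family A(w) = A0 + w*A1
A0 = np.diag([1.0, 3.0])
A1 = np.array([[0.0, 1.0], [1.0, 0.0]])
A = lambda w: A0 + w * A1

lam_min = lambda w: np.linalg.eigvalsh(A(w))[0]

# Support function: fix the eigenvector v0 at w0; then
#   lambda_min(A(w)) <= v0^T A(w) v0   for all w,  with equality at w0.
w0 = 0.5
_, V = np.linalg.eigh(A(w0))
v0 = V[:, 0]                          # eigenvector of the smallest eigenvalue
support = lambda w: v0 @ A(w) @ v0
```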
8. Computation Of Pseudospectral Abscissa For Large-Scale Nonlinear Eigenvalue Problems
- Author
-
Wim Michiels, Roel Van Beeumen, Karl Meerbergen, and Emre Mengi, College of Sciences and Department of Mathematics
- Subjects
Mathematical optimization, General Mathematics, Numerical & Computational Mathematics, Eigenvalue perturbation theory, Nonlinear eigenvalue problem, Applied mathematics, Eigenvalues and eigenvectors, Pseudospectrum, Applied Mathematics, Abscissa, Pseudospectra, Nonsmooth optimization, Subspace methods, Global optimization, Computational Mathematics, Singular value, Gauss pseudospectral method, Chebyshev pseudospectral method, Pseudospectral optimal control, Eigenvalue perturbation, Mathematics - Abstract
We present an algorithm to compute the pseudospectral abscissa for a nonlinear eigenvalue problem. The algorithm relies on global under-estimator and over-estimator functions for the eigenvalue and singular value functions involved. These global models follow from eigenvalue perturbation theory. The algorithm has three particular features. First, it converges to the globally rightmost point of the pseudospectrum, and it is immune to nonsmoothness. The global convergence assertion holds under the assumption that a global lower bound is available for the second derivative of a singular value function depending on one parameter. It may not be easy to deduce such a lower bound analytically, but assigning large negative values works robustly in practice. Second, it is applicable to large-scale problems since the dominant cost per iteration stems from computing the smallest singular value and associated singular vectors, for which efficient iterative solvers can be used. Furthermore, a significant increase in computational efficiency can be obtained by subspace acceleration, that is, by restricting the domains of the linear maps associated with the matrices involved to small but suitable subspaces, and solving the resulting reduced problems. Occasional restarts of these subspaces further enhance the efficiency for large-scale problems. Finally, in contrast to existing iterative approaches based on constructing low-rank perturbations and rightmost eigenvalue computations, the algorithm relies on computing only singular values of complex matrices. Hence, the algorithm does not require solutions of nonlinear eigenvalue problems, thereby further increasing efficiency and reliability.
This work is accompanied by a robust implementation of the algorithm that is publicly available., The Optimization in Engineering Center of the KU Leuven; Research Foundation-Flanders (FWO); Research Executive Agency of the European Union; Scientific and Technological Research Council of Turkey (TÜBİTAK); BAGEP Program of Science Academy
- Published
- 2017
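For orientation, the object computed in the abstract above can be sketched in the simplest (linear) case: the $\varepsilon$-pseudospectrum of a matrix $A$ consists of the points $z$ with $\sigma_{\min}(zI - A) \le \varepsilon$, and its abscissa is the largest real part of such a point. A crude grid search (illustrative name, far from the paper's globally convergent algorithm) makes the definition concrete.

```python
import numpy as np

def pseudospectral_abscissa_grid(A, eps, xs, ys):
    """Crude grid estimate of the eps-pseudospectral abscissa of A:
    largest Re(z) among grid points z with sigma_min(z I - A) <= eps."""
    n = A.shape[0]
    best = -np.inf
    for x in xs:
        for y in ys:
            z = x + 1j * y
            smin = np.linalg.svd(z * np.eye(n) - A, compute_uv=False)[-1]
            if smin <= eps:
                best = max(best, x)
    return best
```

For the scalar $A = [-1]$ and $\varepsilon = 0.5$, the pseudospectrum is the disk of radius 0.5 around $-1$, so the abscissa is $-0.5$ (up to the grid resolution).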
9. A Greedy Subspace Method for Computing the ℒ∞-Norm
- Author
-
Peter Benner, Emre Mengi, Paul Schwerdtner, Matthias Voigt, and Nicat Aliyev
- Subjects
Discrete mathematics, Norm (mathematics), Subspace topology, Mathematics - Published
- 2017
10. Erratum to: 'Computation of pseudospectral abscissa for large-scale nonlinear eigenvalue problems'
- Author
-
Roel Van Beeumen, Emre Mengi, Wim Michiels, and Karl Meerbergen
- Subjects
Scale (ratio), Applied Mathematics, General Mathematics, Computation, Abscissa, Computational Mathematics, Nonlinear system, Applied mathematics, Eigenvalues and eigenvectors, Mathematics - Published
- 2017
11. Matrix Polynomials with Specified Eigenvalues
- Author
-
Michael Karow and Emre Mengi
- Subjects
Polynomial, Optimization problem, Matrix polynomial, Matrix (mathematics), FOS: Mathematics, Discrete Mathematics and Combinatorics, Applied mathematics, Mathematics - Numerical Analysis, Eigenvalues and eigenvectors, Numerical Analysis, Algebra and Number Theory, 65F15, 65F18, 47A56, Numerical Analysis (math.NA), Singular value, Geometry and Topology, Sylvester equation, Complex plane, Mathematics - Abstract
This work concerns the distance in 2-norm from a matrix polynomial to a nearest polynomial with a specified number of its eigenvalues at specified locations in the complex plane. Perturbations are allowed only on the constant coefficient matrix. Singular value optimization formulas are derived for these distances, facilitating their computation. The singular value optimization problems, when the number of specified eigenvalues is small, can be solved numerically by exploiting the Lipschitzness and piecewise analyticity of the singular values with respect to the parameters., Comment: 19 pages, 2 figures
- Published
- 2012
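In the simplest instance of the problem in the abstract above — a single specified eigenvalue $z_0$, perturbing only the constant coefficient — the distance admits a closed form: the perturbed polynomial evaluated at $z_0$ is $P(z_0) + \Delta$, which is singular exactly when $\|\Delta\|_2 \ge \sigma_{\min}(P(z_0))$. The sketch below (illustrative function name) computes this; the paper's general formulas for several specified eigenvalues are far more involved.

```python
import numpy as np

def dist_to_specified_eigenvalue(coeffs, z0):
    """2-norm distance from P(z) = sum_k coeffs[k] * z^k to a nearest
    polynomial with eigenvalue z0, perturbing only the constant
    coefficient: sigma_min(P(z0))."""
    P = sum(c * z0 ** k for k, c in enumerate(coeffs))
    return np.linalg.svd(P, compute_uv=False)[-1]
```

For $P(z) = \mathrm{diag}(2+z,\, 3+z)$ the distance is 0 at $z_0 = -2$ (already an eigenvalue) and $\sigma_{\min}(\mathrm{diag}(2,3)) = 2$ at $z_0 = 0$.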
12. A Subspace Method for Large-Scale Eigenvalue Optimization
- Author
-
Wim Michiels, Emre Mengi, Fatih Kangal, and Karl Meerbergen, College of Sciences, Graduate School of Sciences and Engineering, and Department of Mathematics
- Subjects
Scale (ratio), Orthogonal projection, Function (mathematics), Compact operator, Hermitian matrix, Linear subspace, Eigenvalue optimization, Large scale, Eigenvalue perturbation theory, Parameter-dependent compact operator, Matrix-valued function, Applied mathematics, Analysis, Subspace topology, Eigenvalues and eigenvectors, Mathematics, applied, Mathematics - Abstract
We consider the minimization or maximization of the Jth largest eigenvalue of an analytic and Hermitian matrix-valued function, and build on Mengi, Yildirim, and Kilic [SIAM J. Matrix Anal. Appl., 35, pp. 699-724, 2014]. This work addresses the setting when the matrix-valued function involved is very large. We describe subspace procedures that convert the original problem into a small-scale one by means of orthogonal projections and restrictions to certain subspaces, and that gradually expand these subspaces based on the optimal solutions of small-scale problems. Global convergence and superlinear rate-of-convergence results with respect to the dimensions of the subspaces are presented in the infinite dimensional setting, where the matrix-valued function is replaced by a compact operator depending on parameters. In practice, it suffices to solve eigenvalue optimization problems involving matrices with sizes on the scale of tens, instead of the original problem involving matrices with sizes on the scale of thousands., OPTEC; Optimization in Engineering Center of KU Leuven; Research Foundation Flanders; project UCoCoS; European Union; European Commision; Scientific and Technological Research Council of Turkey (TÜBİTAK) - FWO (Scientific and Technological Research Council of Turkey (TÜBİTAK) - Belgian Research Foundation, Flanders); BAGEP program of The Science Academy of Turkey
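The project-optimize-expand loop described in the abstract above can be sketched on a toy affine family $A(\omega) = A_0 + \omega A_1$: minimize the largest eigenvalue of the projected matrix $V^T A(\omega) V$ over a grid, then expand $V$ with the full eigenvector at the reduced minimizer. This grid-based sketch (illustrative names, real symmetric matrices assumed) only mimics the structure of the method, not its convergence machinery.

```python
import numpy as np

def minimize_lmax_subspace(A0, A1, w_grid, n_iter=5):
    """Greedy subspace sketch for minimizing lambda_max(A0 + w*A1):
    optimize the projected problem over a grid, then expand the subspace
    with the eigenvector of the full problem at the reduced minimizer."""
    n = A0.shape[0]
    rng = np.random.default_rng(1)
    V = np.linalg.qr(rng.standard_normal((n, 1)))[0]   # initial 1-dim subspace
    w_star = w_grid[0]
    for _ in range(n_iter):
        reduced = lambda w: np.linalg.eigvalsh(V.T @ (A0 + w * A1) @ V)[-1]
        w_star = min(w_grid, key=reduced)              # optimize reduced problem
        _, X = np.linalg.eigh(A0 + w_star * A1)
        V = np.linalg.qr(np.column_stack([V, X[:, -1]]))[0]  # expand subspace
    return w_star, np.linalg.eigvalsh(A0 + w_star * A1)[-1]
```

On a small example the loop reaches the full space after a few expansions, so its answer matches a brute-force grid minimization of the full largest-eigenvalue function.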