32 results for "Hermitian matrix"
Search Results
2. Efficient Reduction of Banded Hermitian Positive Definite Generalized Eigenvalue Problems to Banded Standard Eigenvalue Problems
- Author
- Bruno Lang
- Subjects
Reduction (complexity), Computational Mathematics, Pure mathematics, Applied Mathematics, Positive-definite matrix, Spectral Theory, Performance model, Hermitian matrix, Eigenvalues and eigenvectors, Eigendecomposition of a matrix, Mathematics
- Abstract
We present a method for reducing the generalized eigenvalue problem $A x = B x \lambda$ with banded Hermitian matrices $A$ and $B$, $B$ positive definite, to an equivalent standard eigenvalue probl...
- Published
- 2019
3. Fast Computation of Spectral Densities for Generalized Eigenvalue Problems
- Author
- Ruipeng Li, Yousef Saad, and Yuanzhe Xi
- Subjects
Chebyshev polynomials, Applied Mathematics, Numerical Analysis (math.NA), Positive-definite matrix, Hermitian matrix, Computational Mathematics, Lanczos resampling, Matrix (mathematics), Matrix pencil, Applied mathematics, Gaussian quadrature, Eigenvalues and eigenvectors, Mathematics
- Abstract
The distribution of the eigenvalues of a Hermitian matrix (or of a Hermitian matrix pencil) reveals important features of the underlying problem, whether a Hamiltonian system in physics, or a social network in behavioral sciences. However, computing all the eigenvalues explicitly is prohibitively expensive for real-world applications. This paper presents two types of methods to efficiently estimate the spectral density of a matrix pencil $(A, B)$ when both $A$ and $B$ are Hermitian and, in addition, $B$ is positive definite. The first one is based on the Kernel Polynomial Method (KPM) and the second on Gaussian quadrature by the Lanczos procedure. By employing Chebyshev polynomial approximation techniques, we can avoid direct factorizations in both methods, making the resulting algorithms suitable for large matrices. Under some assumptions, we prove bounds that suggest that the Lanczos method converges twice as fast as the KPM method. Numerical examples further indicate that the Lanczos method can provide more accurate spectral densities when the eigenvalue distribution is highly non-uniform. As an application, we show how to use the computed spectral density to partition the spectrum into intervals that contain roughly the same number of eigenvalues. This procedure, which makes it possible to compute the spectrum by parts, is a key ingredient in the new breed of eigensolvers that exploit "spectrum slicing".
- Published
- 2018
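The kernel polynomial method (KPM) mentioned above can be sketched for the standard case where B is the identity: estimate Chebyshev moments of the spectral density with random probe vectors, damp them, and sum the series. This is a minimal illustration under our own assumptions, not the paper's algorithm for the pencil (A, B); the function name and parameter defaults are ours, and the spectrum of A is assumed to be pre-scaled into (-1, 1).

```python
import numpy as np

def kpm_spectral_density(A, n_moments=60, n_vec=20, n_pts=200, seed=None):
    """KPM sketch: estimate the spectral density of a real symmetric
    (Hermitian) matrix A whose spectrum lies in (-1, 1).  Chebyshev
    moments are estimated stochastically with Rademacher probe vectors;
    Jackson damping suppresses Gibbs oscillations."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    mu = np.zeros(n_moments)
    for _ in range(n_vec):
        v = rng.choice([-1.0, 1.0], size=n)      # random probe vector
        t_prev, t_curr = v, A @ v                # T_0(A) v and T_1(A) v
        mu[0] += v @ t_prev
        mu[1] += v @ t_curr
        for k in range(2, n_moments):            # Chebyshev three-term recurrence
            t_prev, t_curr = t_curr, 2.0 * (A @ t_curr) - t_prev
            mu[k] += v @ t_curr
    mu /= n_vec * n                              # averaged, trace-normalized moments
    k = np.arange(n_moments)
    M = n_moments
    g = ((M - k + 1) * np.cos(np.pi * k / (M + 1))          # Jackson damping
         + np.sin(np.pi * k / (M + 1)) / np.tan(np.pi / (M + 1))) / (M + 1)
    t = np.cos(np.pi * (np.arange(n_pts) + 0.5) / n_pts)    # Chebyshev nodes in (-1, 1)
    Tk = np.cos(np.outer(k, np.arccos(t)))                  # T_k(t) at the nodes
    w = np.full(M, 2.0)
    w[0] = 1.0
    dens = (w * g * mu) @ Tk / (np.pi * np.sqrt(1.0 - t ** 2))
    return t, dens
```

Since the zeroth moment is exact, the estimated density integrates to roughly 1 even with few probe vectors; only the finer features carry stochastic noise.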
4. Solving PhaseLift by Low-Rank Riemannian Optimization Methods for Complex Semidefinite Constraints
- Author
- Xiangxiong Zhang, Kyle A. Gallivan, and Wen Huang
- Subjects
Mathematical optimization, Rank (linear algebra), Applied Mathematics, Function (mathematics), Positive-definite matrix, Stationary point, Hermitian matrix, Set (abstract data type), Computational Mathematics, Effective method, Phase retrieval, Mathematics
- Abstract
A framework, PhaseLift, was recently proposed to solve the phase retrieval problem. In this framework, the problem is solved by optimizing a cost function over the set of complex Hermitian positive semidefinite matrices. This approach to phase retrieval motivates a more general consideration of optimizing cost functions on semidefinite Hermitian matrices where the desired minimizers are known to have low rank. This paper considers an approach based on an alternative cost function defined on a union of appropriate manifolds. It is related to the original cost function in a manner that preserves the ability to find a global minimizer and is significantly more efficient computationally. A rank-based optimality condition for stationary points is given and optimization algorithms based on state-of-the-art Riemannian optimization and dynamically reducing rank are proposed. Empirical evaluations are performed using the PhaseLift problem. The new approach is shown to be an effective method of phase retrieval with...
- Published
- 2017
5. Optimal Parameter in Hermitian and Skew-Hermitian Splitting Method for Certain Two-by-Two Block Matrices
- Author
- Zhong-Zhi Bai, Gene H. Golub, and Chi-Kwong Li
- Subjects
- HERMITIAN operators, ITERATIVE methods (Mathematics), LINEAR systems, NUMERICAL analysis, CONJUGATE gradient methods, INFINITE matrices, SYMMETRIC matrices
- Abstract
The optimal parameter of the Hermitian/skew-Hermitian splitting (HSS) iteration method for a real two-by-two linear system is obtained. The result is used to determine the optimal parameters for linear systems associated with certain two-by-two block matrices and to estimate the optimal parameters of the HSS iteration method for linear systems with n-by-n real coefficient matrices. Numerical examples are given to illustrate the results.
- Published
- 2006
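For context, the HSS iteration that the parameter analysis above applies to alternates solves with shifted Hermitian and skew-Hermitian parts. The sketch below uses dense solves and the classical default shift alpha = sqrt(lmin * lmax) of the Hermitian part, which minimizes the standard convergence-factor bound; the function name and defaults are ours, and it is illustrative only.

```python
import numpy as np

def hss_solve(A, b, alpha=None, tol=1e-10, maxit=500):
    """Hermitian/skew-Hermitian splitting (HSS) iteration for A x = b,
    with A non-Hermitian but positive definite.  Dense solves stand in
    for the inner solvers a real implementation would use."""
    H = (A + A.conj().T) / 2              # Hermitian part
    S = (A - A.conj().T) / 2              # skew-Hermitian part
    if alpha is None:
        ev = np.linalg.eigvalsh(H)
        alpha = np.sqrt(ev[0] * ev[-1])   # classical bound-minimizing shift
    I = np.eye(A.shape[0])
    x = np.zeros(A.shape[0], dtype=complex)
    for k in range(maxit):
        # first half-step: shifted Hermitian solve
        x_half = np.linalg.solve(alpha * I + H, (alpha * I - S) @ x + b)
        # second half-step: shifted skew-Hermitian solve
        x = np.linalg.solve(alpha * I + S, (alpha * I - H) @ x_half + b)
        if np.linalg.norm(b - A @ x) <= tol * np.linalg.norm(b):
            return x, k + 1
    return x, maxit
```

HSS converges unconditionally for positive-definite A; the shift only controls how fast.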
6. Computing Partial Spectra with Least-Squares Rational Filters
- Author
- Yuanzhe Xi and Yousef Saad
- Subjects
Mathematical optimization, Iterative method, Applied Mathematics, Linear system, Filter (signal processing), Least squares, Hermitian matrix, Computational Mathematics, Factorization, Step function, Algorithm, Subspace topology, Mathematics
- Abstract
We present a method for computing partial spectra of Hermitian matrices, based on a combination of subspace iteration with rational filtering. In contrast with classical rational filters derived from Cauchy integrals or from uniform approximations to a step function, we adopt a least-squares (LS) viewpoint for designing filters. One of the goals of the proposed approach is to build a filter that will lead to linear systems that are easier to solve by iterative methods. Among the main advantages of the proposed LS filters is their flexibility. Thus, we can place poles in more general locations than with the formulations mentioned above, and we can also repeat these poles a few times for better efficiency. This leads to a smaller number of required poles than in existing methods. As a consequence, factorization costs are reduced when direct solvers are used and the scheme is also beneficial for iterative solvers. The paper discusses iterative schemes to solve the linear systems resulting from the filtered s...
- Published
- 2016
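A toy version of rational-filtered subspace iteration: a single pole pair gives the filter f(lam) = h^2 / ((lam - c)^2 + h^2), which is 1 at the interval center c and 1/2 at the endpoints. The least-squares filters of the paper place and repeat poles far more carefully; this sketch (names and defaults ours) only illustrates why one shifted solve per iteration can isolate the spectrum in [c - h, c + h].

```python
import numpy as np

def rational_filter_eigs(A, c, h, m, n_iter=8, seed=0):
    """Subspace iteration with a one-pole-pair rational filter.  For
    real symmetric A and a real block V,
        f(A) V = h * Im((A - sigma I)^{-1} V),   sigma = c + i h,
    i.e. one complex shifted solve applies the filter."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    sigma = c + 1j * h
    V = rng.standard_normal((n, m))
    for _ in range(n_iter):
        V = h * np.linalg.solve(A - sigma * np.eye(n), V).imag  # f(A) V
        V, _ = np.linalg.qr(V)                                  # keep V orthonormal
    theta, Y = np.linalg.eigh(V.T @ A @ V)                      # Rayleigh-Ritz
    inside = np.abs(theta - c) <= h                             # keep [c-h, c+h]
    return theta[inside], V @ Y[:, inside]
```

In a serious implementation the shifted solves would use a sparse factorization or, as the paper emphasizes, an iterative inner solver whose difficulty the filter design can deliberately reduce.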
7. A Thick-Restart Lanczos Algorithm with Polynomial Filtering for Hermitian Eigenvalue Problems
- Author
- Ruipeng Li, Yousef Saad, Chao Yang, Yuanzhe Xi, and Eugene Vecharynski
- Subjects
Polynomial, Applied Mathematics, Spectrum (functional analysis), Lanczos algorithm, Numerical Analysis (math.NA), Interval (mathematics), Filter (signal processing), Hermitian matrix, Computational Mathematics, Matrix (mathematics), Applied mathematics, Eigenvalues and eigenvectors, Mathematics
- Abstract
Polynomial filtering can provide a highly effective means of computing all eigenvalues of a real symmetric (or complex Hermitian) matrix that are located in a given interval, anywhere in the spectrum. This paper describes a technique for tackling this problem by combining a Thick-Restart version of the Lanczos algorithm with deflation ("locking") and a new type of polynomial filter obtained from a least-squares technique. The resulting algorithm can be utilized in a "spectrum-slicing" approach whereby a very large number of eigenvalues and associated eigenvectors of the matrix are computed by extracting eigenpairs located in different sub-intervals independently from one another.
- Published
- 2016
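Polynomial filtering for an interval can be sketched with a Jackson-damped Chebyshev expansion of the interval's indicator function, applied through the three-term recurrence. The paper combines such filters with thick-restart Lanczos; the plain subspace iteration below is a simplification, with our own names and defaults, and assumes the spectrum of A already lies inside (-1, 1).

```python
import numpy as np

def cheb_filter_coeffs(a, b, deg):
    """Chebyshev coefficients (with Jackson damping) of the indicator
    function of [a, b] within [-1, 1], used as a spectrum-slicing filter."""
    k = np.arange(deg + 1)
    ta, tb = np.arccos(b), np.arccos(a)      # arccos is decreasing, so ta < tb
    c = np.empty(deg + 1)
    c[0] = (tb - ta) / np.pi
    c[1:] = 2.0 * (np.sin(k[1:] * tb) - np.sin(k[1:] * ta)) / (k[1:] * np.pi)
    M = deg + 1
    g = ((M - k + 1) * np.cos(np.pi * k / (M + 1))
         + np.sin(np.pi * k / (M + 1)) / np.tan(np.pi / (M + 1))) / (M + 1)
    return c * g

def filtered_subspace(A, a, b, m, deg=80, n_iter=4, seed=0):
    """Apply p(A) (a Chebyshev filter for eigenvalues of A in [a, b])
    inside subspace iteration, then extract eigenpairs by Rayleigh-Ritz."""
    c = cheb_filter_coeffs(a, b, deg)
    n = A.shape[0]
    V = np.random.default_rng(seed).standard_normal((n, m))
    for _ in range(n_iter):
        t_prev, t_curr = V, A @ V            # T_0(A) V and T_1(A) V
        W = c[0] * t_prev + c[1] * t_curr
        for ck in c[2:]:                     # three-term recurrence, no solves
            t_prev, t_curr = t_curr, 2.0 * (A @ t_curr) - t_prev
            W += ck * t_curr
        V, _ = np.linalg.qr(W)
    theta, Y = np.linalg.eigh(V.T @ A @ V)   # Rayleigh-Ritz on filtered space
    keep = (theta >= a) & (theta <= b)
    return theta[keep], V @ Y[:, keep]
```

Unlike rational filters, each application costs only matrix-vector products, which is the main attraction for very large sparse matrices.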
8. Generalized Preconditioned Locally Harmonic Residual Method for Non-Hermitian Eigenproblems
- Author
- Eugene Vecharynski, Fei Xue, and Chao Yang
- Subjects
Preconditioner, Applied Mathematics, Mathematical analysis, Harmonic (mathematics), Numerical Analysis (math.NA), Residual, Hermitian matrix, Computational Mathematics, Residual method, Transformation (function), Applied mathematics, Eigenvalues and eigenvectors, Block (data storage), Mathematics
- Abstract
We introduce the Generalized Preconditioned Locally Harmonic Residual (GPLHR) method for solving standard and generalized non-Hermitian eigenproblems. The method is particularly useful for computing a subset of eigenvalues, and their eigen- or Schur vectors, closest to a given shift. The proposed method is based on block iterations and can take advantage of a preconditioner if it is available. It does not need to perform exact shift-and-invert transformation. Standard and generalized eigenproblems are handled in a unified framework. Our numerical experiments demonstrate that GPLHR is generally more robust and efficient than existing methods, especially if the available memory is limited.
- Published
- 2016
9. Preconditioned Eigensolvers for Large-Scale Nonlinear Hermitian Eigenproblems with Variational Characterizations. II. Interior Eigenvalues
- Author
- Eugene Vecharynski, Daniel B. Szyld, and Fei Xue
- Subjects
Applied Mathematics, Mathematical analysis, Numerical Analysis (math.NA), Type (model theory), Linear subspace, Hermitian matrix, Projection (linear algebra), Computational Mathematics, Variational principle, Conjugate gradient method, Applied mathematics, Algebraic number, Eigenvalues and eigenvectors, Mathematics
- Abstract
We consider the solution of large-scale nonlinear algebraic Hermitian eigenproblems of the form $T(\lambda)v=0$ that admit a variational characterization of eigenvalues. These problems arise in a variety of applications and are generalizations of linear Hermitian eigenproblems $Av\!=\!\lambda Bv$. In this paper, we propose a preconditioned locally minimal residual (PLMR) method for efficiently computing interior eigenvalues of problems of this type. We discuss the development of search subspaces, preconditioning, and eigenpair extraction procedures based on the refined Rayleigh--Ritz projection. Extension to the block methods is presented, and a moving-window-style soft deflation is described. Numerical experiments demonstrate that PLMR methods provide a rapid and robust convergence toward interior eigenvalues. The approach is also shown to be efficient and reliable for computing a large number of extreme eigenvalues, dramatically outperforming standard preconditioned conjugate gradient methods.
- Published
- 2015
10. Weighted Toeplitz Regularized Least Squares Computation for Image Restoration
- Author
- Jianyu Pan and Michael K. Ng
- Subjects
Preconditioner, Applied Mathematics, Mathematical analysis, Linear system, Krylov subspace, Hermitian matrix, Regularization (mathematics), Toeplitz matrix, Computational Mathematics, Applied mathematics, Image restoration, Eigenvalues and eigenvectors, Mathematics
- Abstract
The main aim of this paper is to develop a fast algorithm for solving weighted Toeplitz regularized least squares problems arising from image restoration. Based on augmented system formulation, we develop new Hermitian and skew-Hermitian splitting (HSS) preconditioners for solving such linear systems. The advantage of the proposed preconditioner is that the blurring matrix, weighting matrix, and regularization matrix can be decoupled such that the resulting preconditioner is not expensive to use. We show that for a preconditioned system that is derived from a saddle point structure of size $(m+n)\times(m+n)$, the preconditioned matrix has an eigenvalue at 1 with multiplicity $n$ and the other $m$ eigenvalues of the form $1 - \lambda$ with $|\lambda| < 1$. We also study how to choose the HSS parameter to minimize the magnitude of $\lambda$, and therefore the Krylov subspace method applied to solving the preconditioned system converges very quickly. Experimental results for image restoration problems are re...
- Published
- 2014
11. A Plane-Wave Least-Squares Method for Time-Harmonic Maxwell's Equations in Absorbing Media
- Author
- Qiya Hu and Long Yuan
- Subjects
Discretization, Preconditioner, Applied Mathematics, Numerical analysis, Mathematical analysis, Domain decomposition methods, Basis function, Hermitian matrix, Computational Mathematics, Maxwell's equations, Stiffness matrix, Mathematics
- Abstract
In this paper we discuss numerical methods for solving time-harmonic Maxwell equations in three-dimensional absorbing media. We propose a new variational problem on the space spanned by the plane-wave basis functions for the discretization of these types of equations. The variational problem is derived mathematically from a minimization problem, and the resulting stiffness matrix is Hermitian positive definite. A simple domain decomposition preconditioner is introduced for the system generated by the proposed variational formula. Numerical results indicate that the approximate solution generated by the proposed discretization method is clearly more accurate than that generated by the ultra weak variational formulation method, and the proposed preconditioner is effective.
- Published
- 2014
12. A Filtered Lanczos Procedure for Extreme and Interior Eigenvalue Problems
- Author
- Haw-ren Fang and Yousef Saad
- Subjects
Group (mathematics), Applied Mathematics, Mathematical analysis, Lanczos algorithm, Hermitian matrix, Projection (linear algebra), Computational Mathematics, Matrix (mathematics), Lanczos resampling, Applied mathematics, Eigenvalues and eigenvectors, Sparse matrix, Mathematics
- Abstract
When combined with Krylov projection methods, polynomial filtering can provide a powerful method for extracting extreme or interior eigenvalues of large sparse matrices. This general approach can be quite efficient in the situation when a large number of eigenvalues is sought. However, its competitiveness depends critically on a good implementation. This paper presents a technique based on such a combination to compute a group of extreme or interior eigenvalues of a real symmetric (or complex Hermitian) matrix. The technique harnesses the effectiveness of the Lanczos algorithm with partial reorthogonalization and the power of polynomial filtering. Numerical experiments indicate that the method can be far superior to competing algorithms when a large number of eigenvalues and eigenvectors is to be computed.
- Published
- 2012
13. A General Interpolation Strategy for Algebraic Multigrid Using Energy Minimization
- Author
- Raymond S. Tuminaro, Jacob B. Schroder, and Luke N. Olson
- Subjects
Algebra, Computational Mathematics, Multigrid method, Transfer operator, Applied Mathematics, Norm (mathematics), Conjugate gradient method, Linear system, Positive-definite matrix, Generalized minimal residual method, Hermitian matrix, Mathematics
- Abstract
Algebraic multigrid methods solve sparse linear systems $Ax=b$ by automatic construction of a multilevel hierarchy. This hierarchy is defined by grid transfer operators that must accurately capture algebraically smooth error relative to the relaxation method. We propose a methodology to improve grid transfers through energy minimization. The proposed strategy is applicable to Hermitian, non-Hermitian, definite, and indefinite problems. Each column of the grid transfer operator $P$ is minimized in an energy-based norm while enforcing two types of constraints: a defined sparsity pattern and preservation of specified modes in the range of $P$. A Krylov-based strategy is used to minimize energy, which is equivalent to solving $A P_j = \boldsymbol{0}$ for each column $j$ of $P$, with the constraints ensuring a nontrivial solution. For the Hermitian positive definite case, a conjugate gradient (CG-)based method is utilized to construct grid transfers, while methods based on generalized minimum residual (GMRES) and CG on the normal equations (CGNR) are explored for the general case. The approach is flexible, allowing for arbitrary coarsenings, unrestricted sparsity patterns, straightforward long-distance interpolation, and general use of constraints, either user-defined or auto-generated. We conclude with numerical evidence in support of the proposed framework.
- Published
- 2011
14. On Efficient Numerical Approximation of the Bilinear Form $c^*A^{-1}b$
- Author
- Zdeněk Strakoš and Petr Tichý
- Subjects
Biconjugate gradient method, Pure mathematics, Applied Mathematics, Lanczos algorithm, Positive-definite matrix, Bilinear form, Hermitian matrix, Combinatorics, Computational Mathematics, Matrix (mathematics), Invertible matrix, Product (mathematics), Mathematics
- Abstract
Let $A\in\mathbb{C}^{N\times N}$ be a nonsingular complex matrix and $b$ and $c$ be complex vectors of length $N$. The goal of this paper is to investigate approaches for efficient approximations of the bilinear form $c^*A^{-1}b$. Equivalently, we wish to approximate the scalar value $c^*x$, where $x$ solves the linear system $Ax=b$. Here the matrix $A$ can be very large or its elements can be too costly to compute so that $A$ is not explicitly available and it is used only in the form of the matrix-vector product. Therefore a direct method is not an option. For $A$ Hermitian positive definite, $b^*A^{-1}b$ can be efficiently approximated as a by-product of the conjugate-gradient iterations, which is mathematically equivalent to the matching moment approximations computed via the Gauss-Christoffel quadrature. In this paper we propose a new method using the biconjugate gradient iterations which is applicable to the general complex case. The proposed approach will be compared with existing ones using analytic arguments and numerical experiments.
- Published
- 2011
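The CG by-product estimate mentioned in the abstract is easy to state: after k CG steps, the partial sum of gamma_j * ||r_j||^2 is the k-point Gauss quadrature approximation of b^T A^{-1} b. A sketch for the Hermitian positive definite case follows (the function name is ours); the paper's biconjugate-gradient generalization for the non-Hermitian bilinear form c^* A^{-1} b is not shown.

```python
import numpy as np

def cg_bilinear(A, b, tol=1e-12, maxit=1000):
    """Approximate b^T A^{-1} b for Hermitian positive definite A as a
    by-product of conjugate-gradient iterations: the running sum
    sum_j gamma_j * ||r_j||^2 is the matching-moment (Gauss-Christoffel
    quadrature) approximation of the bilinear form."""
    x = np.zeros_like(b)
    r = b.copy()
    p = r.copy()
    rr = r @ r
    est = 0.0
    for _ in range(maxit):
        Ap = A @ p
        gamma = rr / (p @ Ap)     # CG step length
        est += gamma * rr         # quadrature update: gamma_j * ||r_j||^2
        x += gamma * p
        r -= gamma * Ap
        rr_new = r @ r
        if np.sqrt(rr_new) <= tol * np.linalg.norm(b):
            break
        p = r + (rr_new / rr) * p
        rr = rr_new
    return est, x
```

The point of the identity is that the scalar converges in the A-norm of the error, i.e. roughly twice as fast (in digits) as the iterates themselves.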
15. Approximate Inverse Circulant-plus-Diagonal Preconditioners for Toeplitz-plus-Diagonal Matrices
- Author
- Michael K. Ng and Jianyu Pan
- Subjects
Mathematical optimization, Preconditioner, Applied Mathematics, Diagonal, Positive-definite matrix, Hermitian matrix, Toeplitz matrix, Computational Mathematics, Conjugate gradient method, Diagonal matrix, Applied mathematics, Circulant matrix, Mathematics
- Abstract
We consider the solution of Hermitian positive definite Toeplitz-plus-diagonal systems $(T+D)x=b$, where $T$ is a Toeplitz matrix and $D$ is diagonal and positive. Unlike the case of pure Toeplitz systems, no fast direct solvers have been developed for such systems. In this paper, we employ the preconditioned conjugate gradient method with approximate inverse circulant-plus-diagonal preconditioners to solve them. The proposed preconditioner can be constructed and implemented efficiently using fast Fourier transforms. We show that if the entries of $T$ decay away exponentially from the main diagonals, the preconditioned conjugate gradient method applied to the preconditioned system converges very quickly. Numerical examples, including spatial regularization for an image deconvolution application, are given to illustrate the effectiveness of the proposed preconditioner.
- Published
- 2010
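A simplified relative of the paper's preconditioner: replace T by its Strang circulant and D by mean(d) times the identity, so the preconditioner is itself circulant and can be applied with two FFTs. The paper's approximate-inverse circulant-plus-diagonal construction is more refined; this sketch (names and defaults ours) only shows the mechanics and assumes the circulant eigenvalues plus mean(d) stay positive.

```python
import numpy as np

def solve_toeplitz_plus_diag(t, d, b, tol=1e-10, maxit=400):
    """PCG sketch for (T + D) x = b, with T the symmetric Toeplitz matrix
    whose first column is t and D = diag(d) positive.  Preconditioner:
    Strang circulant of T plus mean(d)*I, applied by FFT in O(n log n)."""
    t, d, b = (np.asarray(v, dtype=float) for v in (t, d, b))
    n = len(t)
    # dense T just for the matvec in this sketch; a real code would
    # apply T by FFT as well
    T = np.array([[t[abs(i - j)] for j in range(n)] for i in range(n)])
    A = T + np.diag(d)
    c = t.copy()
    c[n // 2 + 1:] = t[1:n - n // 2][::-1]       # Strang: wrap the far diagonals
    lam = np.fft.fft(c).real + d.mean()          # preconditioner eigenvalues

    def prec(v):                                 # M^{-1} v via FFT diagonalization
        return np.fft.ifft(np.fft.fft(v) / lam).real

    x = np.zeros(n)
    r = b.copy()
    z = prec(r)
    p = z.copy()
    rz = r @ z
    for it in range(maxit):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            return x, it + 1
        z = prec(r)
        rz, rz_old = r @ z, rz
        p = z + (rz / rz_old) * p
    return x, maxit
```

With exponentially decaying Toeplitz entries, as in the paper's analysis, the circulant wrap-around is a small perturbation and PCG converges in a few iterations.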
16. Computing and Deflating Eigenvalues While Solving Multiple Right-Hand Side Linear Systems with an Application to Quantum Chromodynamics
- Author
- Andreas Stathopoulos and Kostas Orginos
- Subjects
Physics, Tridiagonal matrix, Applied Mathematics, Linear system, System of linear equations, Hermitian matrix, Computational Mathematics, Lanczos resampling, Matrix (mathematics), Conjugate gradient method, Applied mathematics, Eigenvalues and eigenvectors
- Abstract
We present a new algorithm that computes eigenvalues and eigenvectors of a Hermitian positive definite matrix while solving a linear system of equations with Conjugate Gradient (CG). Traditionally, all the CG iteration vectors could be saved and recombined through the eigenvectors of the tridiagonal projection matrix, which is equivalent theoretically to unrestarted Lanczos. Our algorithm capitalizes on the iteration vectors produced by CG to update only a small window of vectors that approximate the eigenvectors. While this window is restarted in a locally optimal way, the CG algorithm for the linear system is unaffected. Yet, in all our experiments, this small window converges to the required eigenvectors at a rate identical to unrestarted Lanczos. After the solution of the linear system, eigenvectors that have not accurately converged can be improved in an incremental fashion by solving additional linear systems. In this case, eigenvectors identified in earlier systems can be used to deflate, and thus accelerate, the convergence of subsequent systems. We have used this algorithm with excellent results in lattice QCD applications, where hundreds of right hand sides may be needed. Specifically, about 70 eigenvectors are obtained to full accuracy after solving 24 right hand sides. Deflating these from the large number of subsequent right hand sides removes the dreaded critical slowdown, where the conditioning of the matrix increases as the quark mass reaches a critical value. Our experiments show almost a constant number of iterations for our method, regardless of quark mass, and speedups of 8 over original CG for light quark masses.
- Published
- 2010
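The payoff of the eigenvector computation described above is deflation in later solves. The sketch below takes the approximate eigenvector block W as given and uses it only to form a Galerkin initial guess (so-called init-CG); the paper's eigCG additionally builds W during CG itself. Names and the test spectrum are ours.

```python
import numpy as np

def deflated_cg(A, b, W=None, tol=1e-10, maxit=2000):
    """CG with Galerkin deflation: project the known (approximate)
    eigenvector block W out of the initial error, so the small
    eigenvalues spanned by W no longer slow CG down."""
    x = np.zeros_like(b)
    if W is not None:
        # init-CG style deflation: x0 = W (W^T A W)^{-1} W^T b
        x = W @ np.linalg.solve(W.T @ (A @ W), W.T @ b)
    r = b - A @ x
    p = r.copy()
    rr = r @ r
    nb = np.linalg.norm(b)
    for it in range(maxit):
        if np.sqrt(rr) <= tol * nb:
            return x, it                  # iterations actually used
        Ap = A @ p
        alpha = rr / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rr_new = r @ r
        p = r + (rr_new / rr) * p
        rr = rr_new
    return x, maxit
```

The test below mimics the "critical slowdown" setting in miniature: a cluster of tiny eigenvalues that deflation removes from CG's effective spectrum.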
17. Nearly Optimal Preconditioned Methods for Hermitian Eigenproblems under Limited Memory. Part I: Seeking One Eigenvalue
- Author
- Andreas Stathopoulos
- Subjects
Mathematical optimization, Iterative method, Applied Mathematics, Numerical analysis, Jacobi method, Hermitian matrix, Computational Mathematics, Robustness (computer science), Conjugate gradient method, Symmetric matrix, Gradient method, Mathematics
- Abstract
Large, sparse, Hermitian eigenvalue problems are still some of the most computationally challenging tasks. Despite the need for a robust, nearly optimal preconditioned iterative method that can operate under severe memory limitations, no such method has surfaced as a clear winner. In this research we approach the eigenproblem from the nonlinear perspective, which helps us develop two nearly optimal methods. The first extends the recent Jacobi-Davidson conjugate gradient (JDCG) method to JDQMR, improving robustness and efficiency. The second method, generalized-Davidson+1 (GD+1), utilizes the locally optimal conjugate gradient recurrence as a restarting technique to achieve almost optimal convergence. We describe both methods within a unifying framework and provide theoretical justification for their near optimality. A choice between the most efficient of the two can be made at runtime. Our extensive experiments confirm the robustness, the near optimality, and the efficiency of our multimethod over other state-of-the-art methods.
- Published
- 2007
18. Iterative Validation of Eigensolvers: A Scheme for Improving the Reliability of Hermitian Eigenvalue Solvers
- Author
- Andreas Stathopoulos and James R. McCombs
- Subjects
Mathematical optimization, Iterative method, Applied Mathematics, Solver, Hermitian matrix, Computational Mathematics, Robustness (computer science), Algorithm, Block size, Eigenvalues and eigenvectors, Subspace topology, Sparse matrix, Mathematics
- Abstract
Iterative eigenvalue solvers for large, sparse matrices may miss some of the required eigenvalues that are of high algebraic multiplicity or tightly clustered. Block methods, locking, a posteriori validation, or simply increasing the required accuracy are often used to avoid missing or to detect a missed eigenvalue, but each has its own shortcomings in robustness or performance. To resolve these shortcomings, we have developed a postprocessing algorithm, iterative validation of eigensolvers (IVE), that combines the advantages of each technique. IVE detects numerically multiple eigenvalues among the approximate eigenvalues returned by a given solver, adjusts the block size accordingly, then calls the given solver using locking to compute a new approximation in the subspace orthogonal to the current approximate eigenvectors. This process is repeated until no additional missed eigenvalues can be identified. IVE is general and can be applied as a wrapper to any Rayleigh-Ritz-based, Hermitian eigensolver. Our experiments show that IVE is very effective in computing missed eigenvalues even with eigensolvers that lack locking or block capabilities, although such capabilities may further enhance robustness. By focusing on robustness in a postprocessing stage, IVE allows the user to decouple the notion of robustness from that of performance when choosing the block size or the convergence tolerance.
- Published
- 2006
19. Recycling Krylov Subspaces for Sequences of Linear Systems
- Author
- Michael L. Parks, Greg E. Mackey, Eric de Sturler, Spandan Maiti, and Duane D. Johnson
- Subjects
Mathematical optimization, Numerical linear algebra, Iterative method, Applied Mathematics, Linear system, Linear subspace, Hermitian matrix, Generalized minimal residual method, Matrix multiplication, Computational Mathematics, Matrix (mathematics), Mathematics
- Abstract
Many problems in science and engineering require the solution of a long sequence of slowly changing linear systems. We propose and analyze two methods that significantly reduce the total number of matrix-vector products required to solve all systems. We consider the general case where both the matrix and right-hand side change, and we make no assumptions regarding the change in the right-hand sides. Furthermore, we consider general nonsingular matrices, and we do not assume that all matrices are pairwise close or that the sequence of matrices converges to a particular matrix. Our methods work well under these general assumptions, and hence form a significant advancement with respect to related work in this area. We can reduce the cost of solving subsequent systems in the sequence by recycling selected subspaces generated for previous systems. We consider two approaches that allow for the continuous improvement of the recycled subspace at low cost. We consider both Hermitian and non-Hermitian problems, and we analyze our algorithms both theoretically and numerically to illustrate the effects of subspace recycling. We also demonstrate the effectiveness of our algorithms for a range of applications from computational mechanics, materials science, and computational physics.
- Published
- 2006
20. Block Triangular and Skew-Hermitian Splitting Methods for Positive-Definite Linear Systems
- Author
- Zhong-Zhi Bai, Jun-Feng Yin, Lin-Zhang Lu, and Gene H. Golub
- Subjects
Computational Mathematics, Matrix (mathematics), Matrix splitting, Applied Mathematics, Mathematical analysis, Triangular matrix, Block matrix, Applied mathematics, Positive-definite matrix, System of linear equations, Hermitian matrix, Eigenvalues and eigenvectors, Mathematics
- Abstract
By further generalizing the concept of Hermitian (or normal) and skew-Hermitian splitting for a non-Hermitian and positive-definite matrix, we introduce a new splitting, called positive-definite and skew-Hermitian splitting (PSS), and then establish a class of PSS methods similar to the Hermitian (or normal) and skew-Hermitian splitting (HSS or NSS) method for iteratively solving the positive-definite systems of linear equations. Theoretical analysis shows that the PSS method converges unconditionally to the exact solution of the linear system, with the upper bound of its convergence factor dependent only on the spectrum of the positive-definite splitting matrix and independent of the spectrum of the skew-Hermitian splitting matrix as well as the eigenvectors of all matrices involved. When we specialize the PSS to block triangular (or triangular) and skew-Hermitian splitting (BTSS or TSS), the PSS method naturally leads to a BTSS or TSS iteration method, which may be more practical and efficient than the HSS and NSS iteration methods. Applications of the BTSS method to the linear systems of block two-by-two structures are discussed in detail. Numerical experiments further show the effectiveness of our new methods.
- Published
- 2005
21. Preconditioning Strategies for Hermitian Indefinite Toeplitz Linear Systems
- Author
- Cristina Tablino-Possio, Stefano Serra-Capizzano, and Thomas Huckle
- Subjects
Applied Mathematics, Numerical analysis, Linear system, Connection (vector bundle), Toeplitz matrix, Hermitian matrix, Generalized minimal residual method, Algebra, Computational Mathematics, generating function, Biconjugate gradient stabilized method, preconditioning, Eigenvalues and eigenvectors, Mathematics
- Abstract
In this paper we propose and analyze preconditioning strategies for Hermitian indefinite linear systems by using indefinite preconditioners: under very elementary assumptions, we show that the eigenvalues of the preconditioned matrices are real. Moreover, in the case of multilevel Toeplitz structures, we prove distributional and localization results. These techniques, used in connection with the CG, GMRES, BICGstab, and QMR algorithms, allow us to solve the corresponding linear systems in an optimal way. Extensive numerical experimentation confirms the efficiency of the proposed procedures.
- Published
- 2004
22. IRBL: An Implicitly Restarted Block-Lanczos Method for Large-Scale Hermitian Eigenproblems
- Author
- Lothar Reichel, Daniela Calvetti, and James Baglama
- Subjects
Mathematical optimization, Applied Mathematics, Linear system, Hermitian matrix, Matrix multiplication, Computational Mathematics, Matrix (mathematics), Lanczos resampling, Singular value, Applied mathematics, Eigenvalues and eigenvectors, Sparse matrix, Mathematics
- Abstract
The irbleigs code is an implementation of an implicitly restarted block-Lanczos method for computing a few selected nearby eigenvalues and associated eigenvectors of a large, possibly sparse, Hermitian matrix A. The code requires only the evaluation of matrix-vector products with A; in particular, factorization of A is not demanded, nor is the solution of linear systems of equations with the matrix A. This, together with a fairly small storage requirement, makes the irbleigs code well suited for large-scale problems. Applications of the irbleigs code to certain generalized eigenvalue problems and to the computation of a few singular values and associated singular vectors are also discussed. Numerous computed examples illustrate the performance of the method and provide comparisons with other available codes.
- Published
- 2003
23. A Parallel Jacobi--Davidson-type Method for Solving Large Generalized Eigenvalue Problems in Magnetohydrodynamics
- Author
-
Margreet Nool and Auke van der Ploeg
- Subjects
Block LU decomposition ,Applied Mathematics ,Block matrix ,Jacobi method ,Domain decomposition methods ,Hermitian matrix ,Matrix decomposition ,Combinatorics ,Computational Mathematics ,symbols.namesake ,symbols ,Applied mathematics ,Eigenvalues and eigenvectors ,Cyclic reduction ,Mathematics - Abstract
We study the solution of generalized eigenproblems generated by a model which is used for stability investigation of tokamak plasmas. The eigenvalue problems are of the form $A x = \lambda B x$, in which the complex matrices A and B are block-tridiagonal, and B is Hermitian positive definite. The Jacobi--Davidson method appears to be an excellent method for parallel computation of a few selected eigenvalues because the basic ingredients are matrix-vector products, vector updates, and inner products. The method is based on solving projected eigenproblems of order typically less than 30. We apply a complete block LU decomposition in which reordering strategies based on a combination of block cyclic reduction and domain decomposition result in a well-parallelizable algorithm. One decomposition can be used for the calculation of several eigenvalues. Spectral transformations are presented to compute certain interior eigenvalues and their associated eigenvectors. The convergence behavior of several variants of the Jacobi--Davidson algorithm is examined. Special attention is paid to the parallel performance, memory requirements, and prediction of the speed-up. Numerical results obtained on a distributed memory Cray T3E are shown.
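The same task, a few interior eigenvalues of $A x = \lambda B x$ with B Hermitian positive definite, can be sketched with SciPy's ARPACK shift-invert interface. This is a hedged stand-in, not the paper's Jacobi--Davidson method, and the small tridiagonal (rather than block-tridiagonal) real pencil below is a made-up example.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

# Hypothetical test pencil (real symmetric for brevity, a special case of
# Hermitian): A Hermitian, B Hermitian positive definite, both banded.
n = 100
off = -np.ones(n - 1)
A = sp.diags([off, np.arange(1.0, n + 1.0), off], [-1, 0, 1], format="csc")
B = sp.diags([0.1 * off, 2.0 * np.ones(n), 0.1 * off], [-1, 0, 1], format="csc")

# Shift-invert spectral transformation: a few interior eigenvalues near sigma.
sigma = 10.3
vals, vecs = eigsh(A, k=4, M=B, sigma=sigma)
```

As in the paper, the expensive ingredient is a factorization of the shifted matrix, which ARPACK reuses for all requested eigenvalues near the target.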
- Published
- 2000
24. Fast Diagonalization of Large and Dense Complex Symmetric Matrices, with Applications to Quantum Reaction Dynamics
- Author
-
Ilan Bar-On and Victor Ryaboy
- Subjects
Applied Mathematics ,Hermitian matrix ,Schrödinger equation ,Householder transformation ,Algebra ,Computational Mathematics ,symbols.namesake ,EISPACK ,symbols ,Symmetric matrix ,Skew-symmetric matrix ,Quantum ,Eigenvalues and eigenvectors ,Mathematics - Abstract
We present a new fast and efficient algorithm for computing the eigenvalues and eigenvectors of large nondefective complex symmetric matrices. Our work was motivated by the emergence of this problem in recent methods for solving chemical reactive problems. The algorithm we present is similar to the QR (QL) algorithm for complex Hermitian matrices, but we use complex orthogonal (not unitary) transformations. The new algorithm is faster by an order of magnitude than the corresponding EISPACK routine, and is also more amenable to modern parallel and vector supercomputers. We further present improved perturbation bounds for the Householder transformation, which lies at the basis of the whole transformation.
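The key structural point, diagonalization by complex orthogonal rather than unitary transformations, can be checked numerically. The sketch below applies numpy's general-purpose eigensolver to a random nondefective complex symmetric matrix (it is not the paper's QL algorithm); the seed and size are arbitrary assumptions.

```python
import numpy as np

# A random complex symmetric matrix: A == A.T, but A != A.conj().T.
rng = np.random.default_rng(1)
n = 6
M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = (M + M.T) / 2

# For nondefective complex symmetric A, eigenvectors belonging to distinct
# eigenvalues satisfy u^T v = 0, so A = V diag(w) V^T with V complex
# orthogonal (V^T V = I), in contrast to the unitary similarity that
# diagonalizes a Hermitian matrix.
w, V = np.linalg.eig(A)
V = V / np.sqrt(np.sum(V * V, axis=0))   # normalize v^T v = 1 (not v^H v = 1)
```

Note that $v^T v$ can be arbitrarily close to zero for nearly defective matrices, which is why specialized algorithms and perturbation bounds (as in the paper) matter here.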
- Published
- 1997
25. Iterative Solution of the Eigenvalue Problem in Hopf Bifurcation for the Boussinesq Equations
- Author
-
Hans D. Mittelmann, G. P. Neitzel, K.-T. Chang, and D. F. Jankowski
- Subjects
Inverse iteration ,Hopf bifurcation ,Applied Mathematics ,Mathematical analysis ,Rayleigh quotient iteration ,Hermitian matrix ,Computational Mathematics ,Nonlinear system ,symbols.namesake ,symbols ,Divide-and-conquer eigenvalue algorithm ,Eigenvalues and eigenvectors ,Mathematics ,Linear stability - Abstract
Some of the most challenging eigenvalue problems arise in the stability analysis of solutions to parameter-dependent nonlinear partial differential equations. Linearized stability analysis requires the computation of a certain purely imaginary eigenvalue pair of a very large, sparse complex matrix pencil. A computational strategy, the core of which is a method of inverse iteration type with preconditioned conjugate gradients, is used to solve this problem for the stability of thermocapillary convection. This convection arises in the float-zone model of crystal growth governed by the Boussinesq equations. The results obtained complete the stability picture, augmenting the energy stability results [Mittelmann et al., SIAM J. Sci. Statist. Comput., 13 (1992), pp. 411–424] and recent experimental results. Here a real eigenvalue of a Hermitian eigenvalue problem had to be determined.
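The core idea, shifted inverse iteration toward the eigenvalue nearest a target shift, can be sketched as follows. This toy version uses dense direct solves on a made-up small matrix; the paper's strategy replaces those solves with preconditioned conjugate gradients on the large sparse pencil, and the function name is ours.

```python
import numpy as np

def inverse_iteration(A, shift, tol=1e-12, maxit=100):
    """Eigenpair of A closest to `shift` via v <- (A - shift*I)^{-1} v.
    Dense solves for brevity; in practice one factors (A - shift*I) once,
    or solves the shifted systems iteratively."""
    n = A.shape[0]
    M = A - shift * np.eye(n)
    v = np.ones(n, dtype=complex)
    v /= np.linalg.norm(v)
    lam = shift
    for _ in range(maxit):
        w = np.linalg.solve(M, v)          # amplifies the eigendirection nearest the shift
        v = w / np.linalg.norm(w)
        lam_new = v.conj() @ (A @ v)       # Rayleigh quotient estimate
        if abs(lam_new - lam) < tol:
            break
        lam = lam_new
    return lam_new, v
```

Placing the shift near the imaginary axis, as in the Hopf-bifurcation setting above, steers the iteration toward the critical purely imaginary eigenvalue pair.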
- Published
- 1994
26. On Adaptive Weighted Polynomial Preconditioning for Hermitian Positive Definite Matrices
- Author
-
Roland W. Freund and Bernd Fischer
- Subjects
Polynomial ,Preconditioner ,Applied Mathematics ,Computer Science::Numerical Analysis ,Hermitian matrix ,Mathematics::Numerical Analysis ,Matrix polynomial ,Algebra ,Computational Mathematics ,Stable polynomial ,Conjugate gradient method ,Chebyshev nodes ,Monic polynomial ,Mathematics - Abstract
The conjugate gradient algorithm for solving Hermitian positive definite linear systems is usually combined with preconditioning in order to speed up convergence. In recent years, there has been a revival of polynomial preconditioning, motivated by the attractive features of the method on modern architectures. Standard techniques for choosing the preconditioning polynomial are based only on bounds for the extreme eigenvalues. Here a different approach is proposed that aims at adapting the preconditioner to the eigenvalue distribution of the coefficient matrix. The technique is based on the observation that good estimates for the eigenvalue distribution can be derived after only a few steps of the Lanczos process. This information is then used to construct a weight function for a suitable Chebyshev approximation problem. The solution of this problem yields the polynomial preconditioner. In particular, polynomial preconditioners associated with Bernstein–Szego weights are studied. Results of numerical exper...
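The "standard technique" that the abstract contrasts with, a polynomial preconditioner built only from bounds on the extreme eigenvalues, can be sketched with a fixed-degree Chebyshev iteration used as the preconditioner inside CG. All problem data below are assumptions for illustration; the adaptive weighted construction of the paper is not reproduced here.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def chebyshev_apply(A, r, a, b, degree):
    """Approximate A^{-1} r by `degree` steps of Chebyshev iteration, using
    only bounds [a, b] on the extreme eigenvalues of the SPD matrix A.
    The result is p(A) r for a fixed positive polynomial p, so it defines a
    valid SPD preconditioner for CG."""
    theta, delta = (b + a) / 2.0, (b - a) / 2.0
    sigma1 = theta / delta
    rho = 1.0 / sigma1
    d = r / theta
    x = np.zeros_like(r)
    res = r.copy()
    for _ in range(degree):
        x = x + d
        res = res - A @ d
        rho_new = 1.0 / (2.0 * sigma1 - rho)
        d = rho_new * rho * d + (2.0 * rho_new / delta) * res
        rho = rho_new
    return x

# Hypothetical SPD test problem with spectrum in [1, 50].
rng = np.random.default_rng(3)
n = 300
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = Q @ np.diag(np.linspace(1.0, 50.0, n)) @ Q.T
b_vec = rng.standard_normal(n)
M = LinearOperator((n, n), matvec=lambda r: chebyshev_apply(A, r, 1.0, 50.0, 8))
x, info = cg(A, b_vec, M=M)
```

Each preconditioner application costs only matrix-vector products, which is exactly the property that makes polynomial preconditioning attractive on parallel architectures.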
- Published
- 1994
27. Fast Band-Toeplitz Preconditioners for Hermitian Toeplitz Systems
- Author
-
Ping-Tak Peter Tang and Raymond H. Chan
- Subjects
Discrete mathematics ,Approximation theory ,Iterative method ,Applied Mathematics ,Generating function ,Computer Science::Numerical Analysis ,Hermitian matrix ,Toeplitz matrix ,Mathematics::Numerical Analysis ,Computational Mathematics ,Conjugate gradient method ,Circulant matrix ,Condition number ,Mathematics - Abstract
This paper considers the solutions of Hermitian Toeplitz systems where the Toeplitz matrices are generated by nonnegative functions f. The preconditioned conjugate gradient method with well-known circulant preconditioners fails in the case when f has zeros. This paper employs Toeplitz matrices of fixed bandwidth as preconditioners. Their generating functions g are trigonometric polynomials of fixed degree and are determined by minimizing the maximum relative error $||(f - g)/f||_\infty $. It is shown that the condition numbers of the systems preconditioned by the band-Toeplitz matrices are $O(1)$ for f with or without zeros. When f is positive, the preconditioned systems converge at the same rate as other well-known circulant preconditioned systems. An a priori bound of the number of iterations required for convergence is also given.
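A simplified sketch of the idea: build a band-Toeplitz preconditioner from the generating function and apply it inside preconditioned CG via a banded solve. For brevity this truncates the symbol to a fixed bandwidth instead of computing the paper's minimax relative-error fit g, and the (positive) generating function is a made-up example.

```python
import numpy as np
from scipy.linalg import toeplitz, solve_banded
from scipy.sparse.linalg import LinearOperator, cg

n, m, bw = 256, 4096, 2
theta = 2 * np.pi * np.arange(m) / m
f = np.exp(np.cos(theta))            # assumed positive generating function
t = np.fft.fft(f).real / m           # its Fourier coefficients t_0, t_1, ...
T = toeplitz(t[:n])                  # symmetric Toeplitz matrix generated by f

# Band preconditioner: keep only the |k| <= bw diagonals of T (plain symbol
# truncation here; the paper instead minimizes ||(f - g)/f||_inf over
# trigonometric polynomials g of fixed degree).
ab = np.zeros((2 * bw + 1, n))       # banded storage for solve_banded
for d in range(-bw, bw + 1):
    ab[bw - d, :] = t[abs(d)]

M = LinearOperator((n, n), matvec=lambda r: solve_banded((bw, bw), ab, r))
b_vec = np.ones(n)
x, info = cg(T, b_vec, M=M)
```

The banded solve costs O(n) per iteration, so the preconditioner is as cheap to apply as the circulant alternatives while remaining usable when f has zeros.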
- Published
- 1994
28. Variants of BICGSTAB for Matrices with Complex Spectrum
- Author
-
Martin H. Gutknecht
- Subjects
Computational Mathematics ,Biconjugate gradient method ,Polynomial ,Complex conjugate ,Biconjugate gradient stabilized method ,Applied Mathematics ,Conjugate gradient method ,Mathematical analysis ,Applied mathematics ,Lanczos algorithm ,Hermitian matrix ,Eigenvalues and eigenvectors ,Mathematics - Abstract
Recently Van der Vorst [SIAM J. Sci. Statist. Comput., 13 (1992), pp. 631–644] proposed for solving nonsymmetric linear systems $Az = b$ a biconjugate gradient (BICG)-based Krylov space method called BICGSTAB that, like the biconjugate gradient squared (BICGS) method of Sonneveld, does not require matrix–vector multiplications with the transposed matrix $A^T $, and that typically has a much smoother convergence behavior than BICG and BICGS. Its nth residual polynomial is the product of that of BICG (i.e., the nth Lanczos polynomial) and a polynomial of the same degree with real zeros. Therefore, nonreal eigenvalues of A are not approximated well by the second polynomial factor. Here, the author presents for real nonsymmetric matrices a method BICGSTAB2 in which the second factor may have complex conjugate zeros. Moreover, versions suitable for complex matrices are given for both methods.
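For reference, van der Vorst's original BICGSTAB (not the BICGSTAB2 variant introduced in this paper) is available in SciPy. A usage sketch on a made-up real nonsymmetric system whose spectrum is genuinely complex, i.e. exactly the situation the paper discusses:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import bicgstab

# Real nonsymmetric tridiagonal matrix with complex eigenvalues
# 4 + 2*sqrt(-0.75)*cos(k*pi/(n+1)), since subdiagonal * superdiagonal < 0.
n = 400
A = diags([1.5 * np.ones(n - 1), 4.0 * np.ones(n), -0.5 * np.ones(n - 1)],
          [-1, 0, 1], format="csr")
b = np.ones(n)
x, info = bicgstab(A, b)   # transpose-free: only products with A are used
```

Like BICGS, this requires no products with $A^T$; the smoothing factor is the degree-one real-zero polynomial that BICGSTAB2 generalizes to complex conjugate zeros.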
- Published
- 1993
29. Fast Iterative Solvers for Toeplitz-Plus-Band Systems
- Author
-
Raymond H. Chan and Kwok-Po Ng
- Subjects
Combinatorics ,Computational Mathematics ,Band matrix ,Applied Mathematics ,Conjugate gradient method ,Mathematical analysis ,Uniform boundedness ,Positive-definite matrix ,Differential operator ,Binary logarithm ,Hermitian matrix ,Toeplitz matrix ,Mathematics - Abstract
The authors consider the solutions of Hermitian Toeplitz-plus-band systems $(A_n + B_n )x = b$, where $A_n $ are n-by-n Toeplitz matrices and $B_n $ are n-by-n band matrices with bandwidth independent of n. These systems appear in solving integrodifferential equations and signal processing. However, unlike the case of Toeplitz systems, no fast direct solvers have been developed for solving them. In this paper, the preconditioned conjugate gradient method with band matrices as preconditioners is used. The authors prove that if $A_n $ is generated by a nonnegative piecewise continuous function and $B_n $ is positive semidefinite, then there exists a band matrix $C_n $, with bandwidth independent of n, such that the spectra of $C_n^{ - 1} (A_n + B_n )$ are uniformly bounded by a constant independent of n. In particular, we show that the solution of $(A_n + B_n )x = b$ can be obtained in $O(n\log n)$ operations.
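The $O(n\log n)$ operation count rests on the standard trick of multiplying a Toeplitz matrix by a vector via circulant embedding and the FFT, so that each CG iteration on the Toeplitz-plus-band system stays cheap. A hedged sketch (the function name is ours):

```python
import numpy as np

def toeplitz_matvec(c, r, x):
    """y = T x for the n-by-n Toeplitz T with first column c and first row r,
    in O(n log n): embed T in a 2n-by-2n circulant and diagonalize the
    circulant with the FFT."""
    n = len(c)
    # First column of the circulant embedding: [c, filler, r_{n-1}, ..., r_1].
    col = np.concatenate([c, [0.0], r[:0:-1]])
    y = np.fft.ifft(np.fft.fft(col) * np.fft.fft(np.r_[x, np.zeros(n)]))
    return y[:n]    # complex dtype; take .real for real-valued data
```

Adding the band matrix's contribution costs only O(n) more per product, which is why the overall solve of $(A_n + B_n)x = b$ runs in $O(n\log n)$ operations.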
- Published
- 1993
30. Some Aspects of Circulant Preconditioners
- Author
-
Thomas Huckle
- Subjects
Algebra ,Combinatorics ,Computational Mathematics ,Approximations of π ,Applied Mathematics ,Conjugate gradient method ,Matrix norm ,Hermitian matrix ,Circulant matrix ,Toeplitz matrix ,Eigenvalues and eigenvectors ,Linear equation ,Mathematics - Abstract
This paper studies circulant approximations of Hermitian Toeplitz matrices that are solutions of certain minimization problems relative to the Frobenius norm, the $lub_1 $, or the $lub_\infty $ norms. The examinations supplement the family of the so-called optimal and superoptimal preconditioners that have been proposed for the preconditioned conjugate gradient method for solving Toeplitz systems $T_n x = b$. For the new Frobenius norm approximations it is shown that they can be computed in $O(n\log (n))$ arithmetic operations, and that the eigenvalues of the preconditioned linear equations are asymptotically clustered around 1 if $T_n $ is a leading principal submatrix of an infinite Toeplitz matrix connected with a positive function of the Wiener class.
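The Frobenius-norm ("optimal") circulant approximation referred to above has a well-known closed form for its first column, computable in O(n) from the Toeplitz entries. A sketch for the symmetric case, with a brute-force check that the formula really is the least-squares projection that averages T over each circulant diagonal (the function name is ours):

```python
import numpy as np
from scipy.linalg import toeplitz

def optimal_circulant_first_col(t):
    """First column of the circulant C minimizing ||C - T||_F, where T is the
    n-by-n symmetric Toeplitz matrix with first column t.  Circulant diagonal
    k collects Toeplitz diagonals k and k - n, with multiplicities n - k and
    k, so c_k = ((n - k) t_k + k t_{n-k}) / n."""
    n = len(t)
    k = np.arange(n)
    return ((n - k) * t + k * np.r_[t[0], t[:0:-1]]) / n
```

The eigenvalues of the resulting circulant are the FFT of this first column, so the preconditioner can be both built and applied in $O(n\log n)$ operations.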
- Published
- 1993
31. A Transpose-Free Quasi-Minimal Residual Algorithm for Non-Hermitian Linear Systems
- Author
-
Roland W. Freund
- Subjects
Computational Mathematics ,Biconjugate gradient method ,Applied Mathematics ,Conjugate gradient method ,Transpose ,Linear system ,Symmetric matrix ,Astrophysics::Cosmology and Extragalactic Astrophysics ,Residual ,Hermitian matrix ,Algorithm ,Sparse matrix ,Mathematics - Abstract
The biconjugate gradient method (BCG) for solving general non-Hermitian linear systems $Ax = b$ and its transpose-free variant, the conjugate gradients squared algorithm (CGS), both typically exhib...
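SciPy (version 1.8 and later) ships an implementation of this transpose-free quasi-minimal residual method as scipy.sparse.linalg.tfqmr. A usage sketch on a made-up non-Hermitian tridiagonal system; note that, unlike BCG, no products with $A^T$ are ever formed:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import tfqmr   # available in SciPy >= 1.8

# Hypothetical non-Hermitian test system with complex spectrum.
n = 300
A = diags([-np.ones(n - 1), 4.0 * np.ones(n), 1.5 * np.ones(n - 1)],
          [-1, 0, 1], format="csr")
b = np.ones(n)
x, info = tfqmr(A, b)   # only A @ v is required, never A.T @ v
```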
- Published
- 1993
32. A Note on Computing Eigenvalues of Banded Hermitian Toeplitz Matrices
- Author
-
William F. Trench
- Subjects
Combinatorics ,Computational Mathematics ,Applied Mathematics ,Bandwidth (signal processing) ,Condensed Matter::Strongly Correlated Electrons ,Hermitian matrix ,Eigenvalues and eigenvectors ,Toeplitz matrix ,Mathematics - Abstract
It is pointed out that the author’s $O(n^2 )$ algorithm for computing individual eigenvalues of an arbitrary $n \times n$ Hermitian Toeplitz matrix $T_n $ reduces to an $O(rn)$ algorithm if $T_n $ is banded, with bandwidth r.
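For comparison, LAPACK's banded Hermitian eigensolvers exploit the same bandedness (though not the Toeplitz structure, so this is not the author's $O(rn)$ per-eigenvalue method). A sketch via SciPy's banded interface; the bandwidth and entries below are made-up assumptions:

```python
import numpy as np
from scipy.linalg import toeplitz, eigvals_banded

# Banded Hermitian Toeplitz: first column t with t[k] = 0 for k > r.
n, r = 200, 3
t = np.zeros(n, dtype=complex)
t[0] = 4.0                              # diagonal of a Hermitian matrix is real
t[1:r + 1] = [1.0 - 0.5j, 0.25j, 0.1]

# Upper banded storage: row r - d holds superdiagonal d, whose entries are
# T[i, i + d] = conj(t[d]) when t is the first column.
ab = np.zeros((r + 1, n), dtype=complex)
for d in range(r + 1):
    ab[r - d, :] = np.conj(t[d])

w_banded = eigvals_banded(ab)   # O((r+1) n) storage instead of O(n^2)
```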
- Published
- 1993