25 results on "Bidiagonalization"
Search Results
2. LNLQ: An Iterative Method for Least-Norm Problems with an Error Minimization Property
- Author
- Dominique Orban, Michael A. Saunders, and Ron Estrin
- Subjects
Iterative method, Bidiagonalization, Norm (mathematics), Applied mathematics, Minification, Analysis, Mathematics
We describe LNLQ for solving the least-norm problem min $\|x\|$ subject to $Ax=b$, using the Golub--Kahan bidiagonalization of $[b\ A]$. Craig's method is known to be equivalent to applying the con...
- Published
- 2019
- Full Text
- View/download PDF
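LNLQ itself is not shipped with NumPy or SciPy, but the quantity it computes is easy to cross-check densely: np.linalg.lstsq returns the minimum-norm solution of a consistent underdetermined system. A minimal sketch (the random matrix and sizes are purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 20, 50                      # underdetermined: more unknowns than equations
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

# For a consistent underdetermined system, lstsq returns the solution of
# min ||x||_2 subject to Ax = b -- the problem LNLQ targets iteratively.
x, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.linalg.norm(A @ x - b))   # ~0: the constraint holds

# Perturbing x along a nullspace direction can only increase its norm.
z = np.linalg.svd(A)[2][-1]        # a right singular vector with A @ z ~ 0
print(np.linalg.norm(x), np.linalg.norm(x + z))
```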
3. SPMR: A Family of Saddle-Point Minimum Residual Solvers
- Author
- Ron Estrin and Chen Greif
- Subjects
Applied Mathematics, Monotonic function, Residual, Symmetry (physics), Computational Mathematics, Matrix (mathematics), Bidiagonalization, Saddle point, Saddle, Subspace topology, Mathematics
We introduce a new family of saddle-point minimum residual methods for iteratively solving saddle-point systems using a minimum or quasi-minimum residual approach. No symmetry assumptions are made. The basic mechanism underlying the method is a novel simultaneous bidiagonalization procedure that yields a simplified saddle-point matrix on a projected Krylov-like subspace and allows for a monotonic short-recurrence iterative scheme. We develop a few variants, demonstrate the advantages of our approach, derive optimality conditions, and discuss connections to existing methods. Numerical experiments illustrate the merits of this new family of methods.
- Published
- 2018
- Full Text
- View/download PDF
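SPMR is not available in SciPy. For a symmetric saddle-point system, MINRES is a standard baseline that needs only symmetry, not definiteness; a rough sketch on a made-up system K = [[H, G^T], [G, 0]] (all data here is synthetic):

```python
import numpy as np
from scipy.sparse.linalg import minres

rng = np.random.default_rng(1)
n, m = 40, 10
M = rng.standard_normal((n, n))
H = M @ M.T + n * np.eye(n)        # SPD (1,1) block
G = rng.standard_normal((m, n))    # full-rank constraint block

# Symmetric indefinite saddle-point matrix; MINRES assumes no definiteness.
K = np.block([[H, G.T], [G, np.zeros((m, m))]])
rhs = rng.standard_normal(n + m)

z, info = minres(K, rhs)
print(info, np.linalg.norm(K @ z - rhs))   # info == 0 on convergence
```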
4. LSMB: Minimizing the Backward Error for Least-Squares Problems
- Author
- Ming Gu and Eric Hallman
- Subjects
Iterative method, Bidiagonalization, Conjugate gradient method, Process (computing), Applied mathematics, Least squares, Analysis, Mathematics, Sparse matrix
An iterative method, LSMB, is given for solving $\min_x \|Ax-b\|_2$. LSMB is based on the Golub--Kahan bidiagonalization process and is constructed so that an objective function closely related to ...
- Published
- 2018
- Full Text
- View/download PDF
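LSMB is not in SciPy, but the related Golub--Kahan-based solvers LSQR and LSMR are. A quick comparison sketch on synthetic data:

```python
import numpy as np
from scipy.sparse.linalg import lsqr, lsmr

rng = np.random.default_rng(2)
A = rng.standard_normal((100, 30))
b = rng.standard_normal(100)

# Both solvers run the Golub--Kahan bidiagonalization process on A.
x_lsqr = lsqr(A, b)[0]    # minimizes ||b - A x||_2 over the Krylov subspace
x_lsmr = lsmr(A, b)[0]    # minimizes ||A^T (b - A x)||_2 over the same subspace

x_ref = np.linalg.lstsq(A, b, rcond=None)[0]
print(np.linalg.norm(x_lsqr - x_ref), np.linalg.norm(x_lsmr - x_ref))
```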
5. Numerical Equivalences among Krylov Subspace Algorithms for Skew-Symmetric Matrices
- Author
- Christopher C. Paige, David Titley-Peloquin, James M. Varah, and Chen Greif
- Subjects
Linear system, Krylov subspace, Matrix (mathematics), Lanczos resampling, Bidiagonalization, Conjugate gradient method, Skew-symmetric matrix, Equivalence (measure theory), Algorithm, Analysis, Mathematics
We briefly review Krylov subspace methods based on the Galerkin and minimum residual conditions for solving $Ax=b$ with real $A$ and $b$, followed by two implementations: the conjugate gradient (CG) based methods CGNE and CGNR. We then show the numerical equivalence of Lanczos tridiagonalization and Golub--Kahan bidiagonalization for any real skew-symmetric matrix $A$. We give short derivations of two algorithms for solving $Ax=b$ with skew-symmetric $A$ and use the above equivalence to show that these are numerically equivalent to the Golub--Kahan bidiagonalization variants of CGNE and CGNR. These last two numerical equivalences add to the theoretical equivalences in the work by Eisenstat [Equivalence of Krylov Subspace Methods for Skew-Symmetric Linear Systems, Department of Computer Science, Yale University, preprint, arXiv:1512.00311, 2015] that unified and extended earlier work. We next present a method based on the Lanczos tridiagonalization process for minimizing $\|A^T (b-Ax_k)\|_2$ when $A^T\! =-...
- Published
- 2016
- Full Text
- View/download PDF
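As a small companion to the CGNR method mentioned above: applying SciPy's cg to the normal equations A^T A x = A^T b through a LinearOperator (so A^T A is never formed) is the CGNR construction. A sketch with a synthetic skew-symmetric A:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

rng = np.random.default_rng(3)
n = 50
M = rng.standard_normal((n, n))
A = M - M.T                        # real skew-symmetric: A^T = -A
b = rng.standard_normal(n)

# CGNR: conjugate gradient on the SPD normal equations A^T A x = A^T b.
normal_op = LinearOperator((n, n), matvec=lambda x: A.T @ (A @ x))
x, info = cg(normal_op, A.T @ b)
print(info, np.linalg.norm(A @ x - b))
```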
6. Computation of Generalized Matrix Functions
- Author
- Francesca Arrigo, Michele Benzi, and Caterina Fenu
- Subjects
Multilinear algebra, Computation, Generalized matrix function, Golub-Kahan bidiagonalization, Numerical Analysis (math.NA), Gauss quadrature, Network communicability, Algebra, Bidiagonalization, Matrix function, Block (telecommunications), Gaussian quadrature, Mathematics - Numerical Analysis, Analysis, Mathematics
We develop numerical algorithms for the efficient evaluation of quantities associated with generalized matrix functions [J. B. Hawkins and A. Ben-Israel, Linear and Multilinear Algebra 1(2), 1973, pp. 163-171]. Our algorithms are based on Gaussian quadrature and Golub--Kahan bidiagonalization. Block variants are also investigated. Numerical experiments are performed to illustrate the effectiveness and efficiency of our techniques in computing generalized matrix functions arising in the analysis of networks. (Comment: 25 pages, 2 figures.)
- Published
- 2016
- Full Text
- View/download PDF
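For small dense matrices a generalized matrix function in the Hawkins--Ben-Israel sense can be formed directly from the SVD; the GKB-plus-quadrature machinery of the paper is what replaces this at scale. A dense reference sketch (function and sizes are illustrative):

```python
import numpy as np

def generalized_matrix_function(A, f, tol=1e-12):
    """Dense reference: f<>(A) = U_r f(Sigma_r) V_r^T, applying f only
    to the numerically nonzero singular values."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    r = s > tol * s[0]
    return (U[:, r] * f(s[r])) @ Vt[r]

rng = np.random.default_rng(4)
A = rng.standard_normal((8, 5))

# Sanity check: f(t) = t reproduces A itself.
print(np.linalg.norm(generalized_matrix_function(A, lambda t: t) - A))
# Hyperbolic functions of this kind appear in network communicability.
print(generalized_matrix_function(A, np.sinh).shape)
```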
7. Band Generalization of the Golub--Kahan Bidiagonalization, Generalized Jacobi Matrices, and the Core Problem
- Author
- Iveta Hnětynková, Zdeněk Strakoš, and Martin Plešinger
- Subjects
Combinatorics, Generalization, Bidiagonalization, Singular value decomposition, Core (graph theory), Zero (complex analysis), Extension (predicate logic), Total least squares, Approx, Analysis, Mathematics
The concept of the core problem in total least squares (TLS) problems with single right-hand side introduced in [C. C. Paige and Z. Strakos, SIAM J. Matrix Anal. Appl., 27 (2005), pp. 861--875] separates necessary and sufficient information for solving the problem from redundancies and irrelevant information contained in the data. It is based on orthogonal transformations such that the resulting problem decomposes into two independent parts. One of the parts has nonzero right-hand side and minimal dimensions and it always has the unique TLS solution. The other part has trivial (zero) right-hand side and maximal dimensions. Assuming exact arithmetic, the core problem can be obtained by the Golub--Kahan bidiagonalization. Extension of the core concept to the multiple right-hand sides case $AX\approx B$ in [I. Hnětynkova, M. Plesinger, and Z. Strakos, SIAM J. Matrix Anal. Appl., 34 (2013), pp. 917--931], which is highly nontrivial, is based on application of the singular value decomposition. In this paper we...
- Published
- 2015
- Full Text
- View/download PDF
8. A Preconditioned Hybrid SVD Method for Accurately Computing Singular Triplets of Large Matrices
- Author
- Andreas Stathopoulos and Lingfei Wu
- Subjects
Computational Mathematics, Matrix (mathematics), Singular value, Lanczos resampling, Bidiagonalization, Applied Mathematics, Singular value decomposition, Augmented matrix, Algorithm, Eigenvalues and eigenvectors, Sparse matrix, Mathematics
The computation of a few singular triplets of large, sparse matrices is a challenging task, especially when the smallest magnitude singular values are needed in high accuracy. Most recent efforts try to address this problem through variations of the Lanczos bidiagonalization method, but they are still challenged even for medium matrix sizes due to the difficulty of the problem. We propose a novel SVD approach that can take advantage of preconditioning and of any well-designed eigensolver to compute both largest and smallest singular triplets. Accuracy and efficiency are achieved through a hybrid, two-stage meta-method, PHSVDS. In the first stage, PHSVDS solves the normal equations up to the best achievable accuracy. If further accuracy is required, the method switches automatically to an eigenvalue problem with the augmented matrix. Thus it combines the advantages of the two stages, faster convergence and accuracy, respectively. For the augmented matrix, solving the interior eigenvalue is facilitated by pr...
- Published
- 2015
- Full Text
- View/download PDF
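PHSVDS is distributed with the PRIMME library rather than SciPy, but the augmented-matrix stage can be imitated with a shift-invert eigensolve: the symmetric matrix [[0, A], [A^T, 0]] has eigenvalues +/- sigma_i(A), so targeting the interior eigenvalues nearest zero recovers the smallest singular values. A sketch with a synthetic square sparse matrix:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

rng = np.random.default_rng(5)
n = 50
A = sp.csr_matrix(rng.standard_normal((n, n)))   # stand-in for a large sparse matrix

# Augmented (Jordan--Wielandt) matrix: eigenvalues are +/- the singular values of A.
K = sp.bmat([[None, A], [A.T, None]], format="csc")

# Shift-invert about 0 targets the interior eigenvalues nearest zero,
# i.e. +/- sigma_min -- the hard case discussed in the abstract.
vals = eigsh(K, k=2, sigma=0, return_eigenvectors=False)
print(np.sort(np.abs(vals)))                               # ~ sigma_min, twice
print(np.linalg.svd(A.toarray(), compute_uv=False).min())  # dense check
```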
9. Stability of Two Direct Methods for Bidiagonalization and Partial Least Squares
- Author
- Åke Björck
- Subjects
Sequence, Iterative method, Stability (learning theory), Least squares, Combinatorics, Partial least squares regression, Core problem, Stability, Regression, NIPALS, Householder reflector, Modified Gram-Schmidt orthogonalization, Bidiagonalization, Natural Sciences, Orthogonalization, Algorithm, Analysis, Moore–Penrose pseudoinverse, Mathematics
The partial least squares (PLS) method computes a sequence of approximate solutions $x^{(k)} \in \mathcal{K}_k(A^TA, A^Tb)$, $k = 1, 2, \ldots$, to the least squares problem $\min_x \|Ax - b\|_2$. If carried out to completion, the method always terminates with the pseudoinverse solution $x^\dagger = A^\dagger b$. Two direct PLS algorithms are analyzed. The first uses the Golub--Kahan Householder algorithm for reducing $A$ to upper bidiagonal form. The second is the NIPALS PLS algorithm, due to Wold et al., which is based on rank-reducing orthogonal projections. The Householder algorithm is known to be mixed forward-backward stable. Numerical results are given that support the conjecture that the NIPALS PLS algorithm shares this stability property. We draw attention to a flaw in some descriptions and implementations of this algorithm, related to a similar problem in Gram-Schmidt orthogonalization, that spoils its otherwise excellent stability. For large-scale sparse or structured problems, the iterative algorithm LSQR is an attractive alternative, provided an implementation with reorthogonalization is used.
- Published
- 2014
- Full Text
- View/download PDF
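A hedged empirical companion, assuming scikit-learn is available: its PLSRegression implements a NIPALS-style PLS, and for a single right-hand side the fit should approach the (intercept-included) least-squares fit as the number of components grows, mirroring the termination property described above:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(6)
X = rng.standard_normal((200, 10))
y = X @ rng.standard_normal(10) + 0.01 * rng.standard_normal(200)

# Reference: least-squares fit with an intercept (PLSRegression centers data).
Xc = X - X.mean(axis=0)
beta = np.linalg.lstsq(Xc, y - y.mean(), rcond=None)[0]
ls_fit = Xc @ beta + y.mean()

for k in (1, 3, 10):                     # k = number of PLS components
    pls = PLSRegression(n_components=k, scale=False).fit(X, y)
    gap = np.linalg.norm(pls.predict(X).ravel() - ls_fit)
    print(k, gap)                        # should shrink toward 0 at full rank
```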
10. Generalized Golub--Kahan Bidiagonalization and Stopping Criteria
- Author
- Mario Arioli
- Subjects
Mathematical optimization, Partial differential equation, Rate of convergence, Bidiagonalization, Conjugate gradient method, Singular value decomposition, Estimator, Applied mathematics, Upper and lower bounds, Analysis, Sparse matrix, Mathematics
The Golub--Kahan bidiagonalization algorithm has been widely used in solving least-squares problems and in the computation of the SVD of rectangular matrices. Here we propose an algorithm based on the Golub--Kahan process for the solution of augmented systems that minimizes the norm of the error and, in particular, we propose a novel estimator of the error similar to the one proposed by Hestenes and Stiefel for the conjugate gradient method and later developed by Golub, Meurant, and Strakos. This estimator gives a lower bound for the error, and can be used as a stopping criterion for the whole process. We also propose an upper bound of the error based on Gauss--Radau quadrature. Finally, we show how we can transform augmented systems arising from the mixed finite-element approximation of partial differential equations in order to achieve a convergence rate independent of the finite dimensional problem size.
- Published
- 2013
- Full Text
- View/download PDF
11. Divide and Conquer Low-Rank Preconditioners for Symmetric Matrices
- Author
- Ruipeng Li and Yousef Saad
- Subjects
Divide and conquer algorithms, Preconditioner, Applied Mathematics, Low-rank approximation, Krylov subspace, Incomplete LU factorization, Algebra, Computational Mathematics, Matrix (mathematics), Bidiagonalization, Symmetric matrix, Mathematics
This paper presents a preconditioning method based on an approximate inverse of the original matrix, computed recursively from a multilevel low-rank (MLR) expansion approach. The basic idea is to recursively divide the problem in two and apply a low-rank approximation to a matrix obtained from the Sherman--Morrison formula. The low-rank expansion is computed by a few steps of the Lanczos bidiagonalization procedure. The MLR preconditioner has been motivated by its potential for exploiting different levels of parallelism on modern high-performance platforms, though this feature is not yet tested in this paper. Numerical experiments indicate that, when combined with Krylov subspace accelerators, this preconditioner can be efficient and robust for solving symmetric sparse linear systems.
- Published
- 2013
- Full Text
- View/download PDF
12. Stable Computation of the CS Decomposition: Simultaneous Bidiagonalization
- Author
- Brian D. Sutton
- Subjects
Combinatorics, Singular value, Bidiagonalization, Computation, Singular value decomposition, Decomposition (computer science), Applied mathematics, Unitary matrix, Generalized singular value decomposition, Analysis, Eigenvalues and eigenvectors, Mathematics
Since its discovery in 1977, the CS decomposition (CSD) has resisted computation, even though it is a sibling of the well-understood eigenvalue and singular value decompositions. Several algorithms have been developed for the reduced 2-by-1 form of the decomposition, but none have been extended to the complete 2-by-2 form of the decomposition in Stewart's original paper. In this article, we present an algorithm for simultaneously bidiagonalizing the four blocks of a unitary matrix partitioned into a 2-by-2 block structure. This serves as the first, direct phase of a two-stage algorithm for the CSD, much as Golub-Kahan-Reinsch bidiagonalization serves as the first stage in computing the singular value decomposition. Backward stability is proved.
- Published
- 2012
- Full Text
- View/download PDF
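SciPy now exposes a complete 2-by-2 CSD as scipy.linalg.cossin (wrapping LAPACK's CSD routines, which grew out of this line of work). A small sketch on a random orthogonal matrix:

```python
import numpy as np
from scipy.linalg import cossin
from scipy.stats import ortho_group

# Random orthogonal matrix, partitioned with an upper-left p-by-q block.
X = ortho_group.rvs(8, random_state=7)
p = q = 4

u, cs, vdh = cossin(X, p=p, q=q)
print(np.linalg.norm(u @ cs @ vdh - X))   # ~0: X = U @ CS @ V^H

# The middle factor carries the cosine-sine structure of the decomposition.
print(np.round(cs, 3))
```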
13. A Refined Harmonic Lanczos Bidiagonalization Method and an Implicitly Restarted Algorithm for Computing the Smallest Singular Triplets of Large Matrices
- Author
- Zhongxiao Jia and Datian Niu
- Subjects
Applied Mathematics, Mathematical analysis, 65F15, 15A18, Harmonic (mathematics), Numerical Analysis (math.NA), Projection (linear algebra), Computational Mathematics, Lanczos resampling, Matrix (mathematics), Singular value, Bidiagonalization, Singular value decomposition, Mathematics - Numerical Analysis, Linear independence, Algorithm, Mathematics
The harmonic Lanczos bidiagonalization method can be used to compute the smallest singular triplets of a large matrix $A$. We prove that for good enough projection subspaces harmonic Ritz values converge if the columns of $A$ are strongly linearly independent. On the other hand, harmonic Ritz values may miss some desired singular values when the columns of $A$ are almost linearly dependent. Furthermore, harmonic Ritz vectors may converge irregularly and even may fail to converge. Based on the refined projection principle for large matrix eigenproblems due to the first author, we propose a refined harmonic Lanczos bidiagonalization method that takes the Rayleigh quotients of the harmonic Ritz vectors as approximate singular values and extracts the best approximate singular vectors, called the refined harmonic Ritz approximations, from the given subspaces in the sense of residual minimizations. The refined approximations are shown to converge to the desired singular vectors once the subspaces are sufficiently good and the Rayleigh quotients converge. An implicitly restarted refined harmonic Lanczos bidiagonalization algorithm (IRRHLB) is developed. We study how to select the best possible shifts, and suggest refined harmonic shifts that are theoretically better than the harmonic shifts used within the implicitly restarted Lanczos bidiagonalization algorithm (IRHLB). We propose a novel procedure that can numerically compute the refined harmonic shifts efficiently and accurately. Numerical experiments are reported that compare IRRHLB with five other algorithms based on the Lanczos bidiagonalization process. It appears that IRRHLB is at least competitive with them and can be considerably more efficient when computing the smallest singular triplets. (Comment: 31 pages, 14 figures; SIAM Journal on Scientific Computing, to appear.)
- Published
- 2010
- Full Text
- View/download PDF
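None of the restarted bidiagonalization codes compared in the paper ship with SciPy, but scipy.sparse.linalg.svds accepts which='SM' and so can serve as a baseline for the smallest triplets; as the abstract suggests, this regime is far harder than the largest ones. A sketch:

```python
import numpy as np
from scipy.sparse.linalg import svds

rng = np.random.default_rng(8)
A = rng.standard_normal((200, 80))

# Three smallest singular triplets (slow to converge compared with 'LM').
u, s, vt = svds(A, k=3, which='SM')
print(np.sort(s))
print(np.sort(np.linalg.svd(A, compute_uv=False))[:3])   # dense check
```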
14. A Useful Form of Unitary Matrix Obtained from Any Sequence of Unit 2-Norm n-Vectors
- Author
- Christopher C. Paige
- Subjects
Combinatorics, Algebra, Matrix (mathematics), Bidiagonalization, Singular value decomposition, Orthogonal matrix, Unitary matrix, Square matrix, Orthogonalization, Analysis, Mathematics, QR decomposition
Charles Sheffield pointed out that the modified Gram-Schmidt (MGS) orthogonalization algorithm for the QR factorization of $B\in\mathbb{R}^{n\times k}$ is mathematically equivalent to the QR factorization applied to the matrix $B$ augmented with a $k\times k$ matrix of zero elements on top. This is true in theory for any method of QR factorization, but for Householder's method it is true in the presence of rounding errors as well. This knowledge has been the basis for several successful but difficult rounding error analyses of algorithms which in theory produce orthogonal vectors but significantly fail to do so because of rounding errors. Here we show that the same results can be found more directly and easily without recourse to the MGS connection. It is shown that for any sequence of $k$ unit 2-norm $n$-vectors there is a special $(n+k)$-square unitary matrix which we call a unitary augmentation of these vectors and that this matrix can be used in the analyses without appealing to the MGS connection. We describe the connection of this unitary matrix to Householder matrices. The new approach is applied to an earlier analysis to illustrate both the improvement in simplicity and advantages for future analyses. Some properties of this unitary matrix are derived. The main theorem on orthogonalization is then extended to cover the case of biorthogonalization.
- Published
- 2009
- Full Text
- View/download PDF
15. A Projection‐Based Approach to General‐Form Tikhonov Regularization
- Author
- Malena I. Español, Per Christian Hansen, and Misha E. Kilmer
- Subjects
Standard form, Mathematical optimization, Numerical linear algebra, Iterative method, Applied Mathematics, Numerical analysis, Regularization (mathematics), Tikhonov regularization, Computational Mathematics, Bidiagonalization, A priori and a posteriori, Mathematics
We present a projection-based iterative algorithm for computing general-form Tikhonov regularized solutions to the problem $\min_x\{\| Ax-b \|_2^2+\lambda^2\| Lx \|_2^2\}$, where the regularization matrix $L$ is not the identity. Our algorithm is designed for the common case where $\lambda$ is not known a priori. It is based on a joint bidiagonalization algorithm and is appropriate for large-scale problems when it is computationally infeasible to transform the regularized problem to standard form. By considering the projected problem, we show how estimates of the corresponding optimal regularization parameter can be efficiently obtained. Numerical results illustrate the promise of our projection-based approach.
- Published
- 2007
- Full Text
- View/download PDF
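For problems small enough to handle densely, the general-form Tikhonov solution is an ordinary stacked least-squares problem; this is a convenient way to check a projected large-scale solver. A sketch with a first-difference regularizer L (sizes and data are illustrative):

```python
import numpy as np

rng = np.random.default_rng(9)
m, n = 50, 30
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)
L = np.diff(np.eye(n), axis=0)      # first-difference operator, shape (n-1, n)

def tikhonov(A, b, L, lam):
    """Solve min ||Ax - b||^2 + lam^2 ||Lx||^2 via the stacked system."""
    A_stack = np.vstack([A, lam * L])
    b_stack = np.concatenate([b, np.zeros(L.shape[0])])
    return np.linalg.lstsq(A_stack, b_stack, rcond=None)[0]

for lam in (0.01, 1.0, 100.0):
    x = tikhonov(A, b, L, lam)
    print(lam, np.linalg.norm(A @ x - b), np.linalg.norm(L @ x))
```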
16. Augmented Implicitly Restarted Lanczos Bidiagonalization Methods
- Author
- James Baglama and Lothar Reichel
- Subjects
Mathematical optimization, Iterative method, Applied Mathematics, Harmonic (mathematics), Lanczos approximation, Computational Mathematics, Lanczos resampling, Singular value, Matrix (mathematics), Bidiagonalization, Singular value decomposition, Mathematics
New restarted Lanczos bidiagonalization methods for the computation of a few of the largest or smallest singular values of a large matrix are presented. Restarting is carried out by augmentation of Krylov subspaces that arise naturally in the standard Lanczos bidiagonalization method. The augmenting vectors are associated with certain Ritz or harmonic Ritz vectors. Computed examples show the new methods to be competitive with available schemes.
- Published
- 2005
- Full Text
- View/download PDF
17. Core Problems in Linear Algebraic Systems
- Author
- Zdeněk Strakoš and Christopher C. Paige
- Subjects
Combinatorics, Bidiagonalization, Core (graph theory), Singular value decomposition, Linear system, Applied mathematics, Algebraic number, Approx, Total least squares, Least squares, Analysis, Mathematics
For any linear system $A x \approx b$ we define a set of core problems and show that the orthogonal upper bidiagonalization of $[b, A]$ gives such a core problem. In particular we show that these core problems have desirable properties such as minimal dimensions. When a total least squares problem is solved by first finding a core problem, we show the resulting theory is consistent with earlier generalizations, but much simpler and clearer. The approach is important for other related solutions and leads, for example, to an elegant solution to the data least squares problem. The ideas could be useful for solving ill-posed problems.
- Published
- 2005
- Full Text
- View/download PDF
18. Tikhonov Regularization with a Solution Constraint
- Author
- Lothar Reichel and Daniela Calvetti
- Subjects
Applied Mathematics, Numerical analysis, Mathematical analysis, Regularization perspectives on support vector machines, Backus–Gilbert method, Regularization (mathematics), Tikhonov regularization, Computational Mathematics, Lanczos resampling, Bidiagonalization, Gaussian quadrature, Mathematics
Many numerical methods for the solution of linear ill-posed problems apply Tikhonov regularization. This paper presents a modification of a numerical method proposed by Golub and von Matt for quadratically constrained least-squares problems and applies it to Tikhonov regularization of large-scale linear discrete ill-posed problems. The method is based on partial Lanczos bidiagonalization and Gauss quadrature. Computed examples illustrating its performance are presented.
- Published
- 2004
- Full Text
- View/download PDF
19. An Implicitly Restarted Refined Bidiagonalization Lanczos Method for Computing a Partial Singular Value Decomposition
- Author
- Zhongxiao Jia and Datian Niu
- Subjects
Mathematical optimization, Lanczos resampling, Singular value, Matrix (mathematics), Bidiagonalization, Convergence (routing), Linear algebra, Applied mathematics, Lanczos algorithm, Analysis, Ritz method, Mathematics
The bidiagonalization Lanczos method can be used for computing a few of the largest or smallest singular values and corresponding singular vectors of a large matrix, but the method may encounter some convergence problems. In this paper the convergence of the method is analyzed, showing why it may converge erratically and perhaps fail to converge. To correct this possible nonconvergence and improve the method, a refined bidiagonalization Lanczos method is proposed. The implicitly restarting technique due to Sorensen is applied to the method, and an implicitly restarted refined bidiagonalization Lanczos algorithm (IRRBL) is developed. A new selection of shifts is proposed for use within IRRBL, called refined shifts, and a reliable and efficient algorithm is developed for computing the refined shifts. Numerical experiments show that IRRBL can perform better than the implicitly restarted bidiagonalization Lanczos algorithm (IRBL) proposed by Larsen, in particular when the smallest singular triplets are desired.
- Published
- 2003
- Full Text
- View/download PDF
20. More Accurate Bidiagonal Reduction for Computing the Singular Value Decomposition
- Author
- Jesse L. Barlow
- Subjects
Combinatorics, Discrete mathematics, Singular value, Matrix (mathematics), Bidiagonalization, Singular value decomposition, Triangular matrix, Bidiagonal matrix, Orthogonal matrix, Permutation matrix, Analysis, Mathematics
Bidiagonal reduction is the preliminary stage for the fastest stable algorithms for computing the singular value decomposition (SVD) now available. However, the best-known error bounds on bidiagonal reduction methods on any matrix are of the form \[ A + \delta A = UBV^T , \] \[ \|\delta A\|_2 \leq \varepsilon_M f(m,n) \|A\|_2, \] where B is bidiagonal, U and V are orthogonal, $\varepsilon_M$ is machine precision, and f(m,n) is a modestly growing function of the dimensions of A. A preprocessing technique analyzed by Higham [Linear Algebra Appl., 309 (2000), pp. 153--174] uses orthogonal factorization with column pivoting to obtain the factorization \[ A=Q \left( \begin{array}{c} C^T \\ 0 \end{array} \right) P^T, \] where Q is orthogonal, C is lower triangular, and P is a permutation matrix. Bidiagonal reduction is applied to the resulting matrix C. To do that reduction, a new Givens-based bidiagonalization algorithm is proposed that produces a bidiagonal matrix B that satisfies $C + \delta C = U (B + \delta B ) V^T$ where $\delta B$ is bounded componentwise and $\delta C$ satisfies a columnwise bound (based upon the growth of the lower right corner of C) with U and V orthogonal to nearly working precision. Once we have that reduction, there is a good menu of algorithms that obtain the singular values of the bidiagonal matrix B to relative accuracy, thus obtaining an SVD of C that can be much more accurate than that obtained from standard bidiagonal reduction procedures. The additional operations required over the standard bidiagonal reduction algorithm of Golub and Kahan [J. Soc. Indust. Appl. Math. Ser. B Numer. Anal., 2 (1965), pp. 205--224] are those for using Givens rotations instead of Householder transformations to compute the matrix V, and $2n^3/3$ flops to compute column norms.
- Published
- 2002
- Full Text
- View/download PDF
21. Low-Rank Matrix Approximation Using the Lanczos Bidiagonalization Process with Applications
- Author
- Hongyuan Zha and Horst D. Simon
- Subjects
Mathematical optimization, Applied Mathematics, Low-rank approximation, Lanczos approximation, Computational Mathematics, Matrix (mathematics), Lanczos resampling, Bidiagonalization, Singular value decomposition, Matrix calculus, Sparse matrix, Mathematics
Low-rank approximation of large and/or sparse matrices is important in many applications, and the singular value decomposition (SVD) gives the best low-rank approximations with respect to unitarily-invariant norms. In this paper we show that good low-rank approximations can be directly obtained from the Lanczos bidiagonalization process applied to the given matrix without computing any SVD. We also demonstrate that a so-called one-sided reorthogonalization process can be used to maintain an adequate level of orthogonality among the Lanczos vectors and produce accurate low-rank approximations. This technique reduces the computational cost of the Lanczos bidiagonalization process. We illustrate the efficiency and applicability of our algorithm using numerical examples from several application areas.
- Published
- 2000
- Full Text
- View/download PDF
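A compact sketch of the idea, with full (rather than the paper's selective) reorthogonalization of the right Lanczos vectors for simplicity: k steps of Golub--Kahan--Lanczos bidiagonalization give A V_k = U_{k+1} B_k, and U_{k+1} B_k V_k^T is a rank-k approximation obtained with no SVD of A:

```python
import numpy as np

def gkl_lowrank(A, k, rng):
    """k steps of Golub--Kahan--Lanczos lower bidiagonalization with
    full ("one-sided") reorthogonalization of the right Lanczos vectors."""
    m, n = A.shape
    U = np.zeros((m, k + 1))
    V = np.zeros((n, k))
    B = np.zeros((k + 1, k))          # lower bidiagonal
    u = rng.standard_normal(m)
    U[:, 0] = u / np.linalg.norm(u)
    beta = 0.0
    for j in range(k):
        v = A.T @ U[:, j] - beta * (V[:, j - 1] if j > 0 else 0)
        v -= V[:, :j] @ (V[:, :j].T @ v)      # one-sided reorthogonalization
        alpha = np.linalg.norm(v)
        V[:, j] = v / alpha
        u = A @ V[:, j] - alpha * U[:, j]
        beta = np.linalg.norm(u)
        U[:, j + 1] = u / beta
        B[j, j], B[j + 1, j] = alpha, beta
    return U, B, V

rng = np.random.default_rng(10)
A = rng.standard_normal((300, 120))
k = 20
U, B, V = gkl_lowrank(A, k, rng)
A_k = U @ B @ V.T                      # rank-(at most k) approximation
s = np.linalg.svd(A, compute_uv=False)
print(np.linalg.norm(A - A_k, 2), s[k])   # compare with the optimal error sigma_{k+1}
```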
22. SVD Computations on the Connection Machine CM-5/CM-5E: Implementation and Accuracy
- Author
- Susanne M. Balle and Palle M. Pedersen
- Subjects
Inverse iteration, Tridiagonal matrix, Applied Mathematics, Parallel computing, Computational Mathematics, Singular value, Bidiagonalization, Singular value decomposition, Bidiagonal matrix, Algorithm, Eigenvalues and eigenvectors, Mathematics, Sparse matrix
The singular value decomposition (SVD) plays an essential role in many applications. One technique for obtaining the SVD of a real dense matrix is to first reduce the dense matrix to bidiagonal form and then compute the SVD of the bidiagonal matrix. Guidelines followed to implement this approach efficiently on the Connection Machine CM-5/CM-5E are described. Timing results show that use of the described techniques yields up to 44% of peak performance in the reduction from dense to bidiagonal form. The singular values are achieved by applying the parallel bisection algorithm to the symmetric tridiagonal matrix obtained by doing a perfect-shuffle permutation of the rows and columns of [[0 BT];[B 0]], where B is the bidiagonal matrix. The singular vectors are computed using inverse iteration on the symmetric tridiagonal matrix. SVD experiments done on bidiagonal matrices are presented and illustrate that this approach yields very good performance as well as orthogonal singular vectors even for ill-conditioned matrices as long as the singular values are computed to very high accuracy. The main conclusion is that extra accuracy in the eigenvalue computation very often enhances the orthogonality of the eigenvectors computed with inverse iteration to the point where reorthogonalization is not needed. The conclusions presented regarding the SVD's numerical properties are general to the algorithm and not specific to the Connection Machine implementation.
- Published
- 1997
- Full Text
- View/download PDF
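The bisection step above rests on a classical fact that is easy to check in NumPy: the symmetric matrix [[0, B^T], [B, 0]] has eigenvalues +/- sigma_i(B). A tiny sketch (no perfect shuffle or parallel bisection here, just the spectral fact):

```python
import numpy as np

rng = np.random.default_rng(11)
n = 6
# A small upper bidiagonal matrix B.
B = np.diag(rng.uniform(1, 2, n)) + np.diag(rng.uniform(0, 1, n - 1), k=1)

K = np.block([[np.zeros((n, n)), B.T],
              [B, np.zeros((n, n))]])

eig = np.sort(np.linalg.eigvalsh(K))
sig = np.linalg.svd(B, compute_uv=False)
print(np.sort(np.abs(eig)))                          # each sigma appears twice
print(np.sort(np.concatenate([sig, -sig])) - eig)    # ~0
```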
23. A Bidiagonalization-Regularization Procedure for Large Scale Discretizations of Ill-Posed Problems
- Author
- Dianne P. O'Leary and John A. Simmons
- Subjects
Well-posed problem, Mathematical optimization, Lanczos resampling, Discretization, Bidiagonalization, Lanczos algorithm, Applied mathematics, Regularization (mathematics), Subspace topology, Linear least squares, Mathematics
In this paper, we consider ill-posed problems which discretize to linear least squares problems with matrices K of high dimensions. The algorithm proposed uses K only as an operator and does not need to explicitly store or modify it. A method related to one of Lanczos is used to project the problem onto a subspace for which K is bidiagonal. It is then an easy matter to solve the projected problem by standard regularization techniques. These ideas are illustrated with some integral equations of the first kind with convolution kernels, and sample numerical results are given.
- Published
- 1981
- Full Text
- View/download PDF
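This project-then-regularize idea survives directly in modern libraries: SciPy's lsqr runs Golub--Kahan bidiagonalization using the matrix only as an operator, and its damp argument solves the standard-form regularized problem min ||Kx - b||^2 + damp^2 ||x||^2. A sketch on a made-up smoothing kernel of the convolution type mentioned above:

```python
import numpy as np
from scipy.sparse.linalg import lsqr

n = 100
# A Gaussian smoothing kernel: an ill-conditioned stand-in for a
# discretized first-kind integral equation.
t = np.linspace(0, 1, n)
A = np.exp(-(t[:, None] - t[None, :])**2 / 0.005) / n

rng = np.random.default_rng(12)
x_true = np.sin(3 * np.pi * t)
b = A @ x_true + 1e-4 * rng.standard_normal(n)

for damp in (0.0, 1e-6, 1e-3):
    x = lsqr(A, b, damp=damp)[0]   # min ||Ax-b||^2 + damp^2 ||x||^2 via GKB
    print(damp, np.linalg.norm(x - x_true))
```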
24. Bidiagonalization of Matrices and Solution of Linear Equations
- Author
- Christopher C. Paige
- Subjects
Numerical Analysis, Basis (linear algebra), Applied Mathematics, Algebra, Computational Mathematics, Singular value, Matrix (mathematics), Bidiagonalization, Matrix analysis, Linear least squares, Eigenvalues and eigenvectors, Sparse matrix, Mathematics
An algorithm given by Golub and Kahan [2] for reducing a general matrix to bidiagonal form is shown to be very important for large sparse matrices. The singular values of the matrix are those of the bidiagonal form, and these can be easily computed. The bidiagonalization algorithm is shown to be the basis of important methods for solving the linear least squares problem for large sparse matrices. Eigenvalues of certain 2-cyclic matrices can also be efficiently computed using this bidiagonalization.
- Published
- 1974
- Full Text
- View/download PDF
25. Calculating the Singular Values and Pseudo-Inverse of a Matrix
- Author
- Gene H. Golub and William Kahan
- Subjects
Matrix (mathematics), Singular value, Diagonal form, Bidiagonalization, Diagonal, Mathematical analysis, Applied mathematics, Bidiagonal matrix, Unitary matrix, Moore–Penrose pseudoinverse, Mathematics
A numerically stable and fairly fast scheme is described to compute the unitary matrices $U$ and $V$ which transform a given matrix $A$ into a diagonal form $\Sigma = U^* A V$, thus exhibiting $A$'s singular values on $\Sigma$'s diagonal. The scheme first transforms $A$ to a bidiagonal matrix $J$, then diagonalizes $J$. The scheme described here is complicated but does not suffer from the computational difficulties which occasionally afflict some previously known methods. Some applications are mentioned, in particular the use of the pseudo-inverse $A^I = V\Sigma^I U^*$ to solve least squares problems in a way which dampens spurious oscillation and cancellation.
- Published
- 1965
- Full Text
- View/download PDF
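The recipe in this foundational paper is essentially what np.linalg.pinv does today: compute the SVD, invert the singular values above a cutoff, and recombine, which is one way to damp the spurious oscillation the authors mention. A minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(13)
A = rng.standard_normal((8, 5)) @ np.diag([1, 1, 1, 1e-10, 1e-12])  # nearly rank 3

U, s, Vt = np.linalg.svd(A, full_matrices=False)

def pinv_svd(U, s, Vt, rcond=1e-8):
    """A^+ = V Sigma^+ U^*, zeroing singular values below rcond * s_max."""
    s_inv = np.where(s > rcond * s[0], 1.0 / s, 0.0)
    return (Vt.T * s_inv) @ U.T

print(np.linalg.norm(pinv_svd(U, s, Vt) - np.linalg.pinv(A, rcond=1e-8)))  # ~0
```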