20 results for "Bidiagonalization"
Search Results
2. On partial least‐squares estimation in scalar‐on‐function regression models.
- Author
- Saricam, Semanur, Beyaztas, Ufuk, Asikgil, Baris, and Shang, Han Lin
- Subjects
- PARTIAL least squares regression, REGRESSION analysis, MONTE Carlo method, LEAST squares
- Abstract
Scalar-on-function regression, where the response is scalar valued and the predictor consists of random functions, is one of the most important tools for exploring the functional relationship between a scalar response and functional predictor(s). The functional partial least-squares method estimates the regression coefficient function more accurately than other existing methods, such as least squares, maximum likelihood, and maximum penalized likelihood. The functional partial least-squares method is often based on the SIMPLS or NIPALS algorithm, but these algorithms can be computationally slow for analyzing a large dataset. In this study, we propose two modified functional partial least-squares methods to efficiently estimate the regression coefficient function under the scalar-on-function regression. In the proposed methods, the infinite-dimensional functional predictors are first projected onto a finite-dimensional space using a basis expansion method. Then, two partial least-squares algorithms, based on re-orthogonalization of the score and loading vectors, are used to estimate the linear relationship between the scalar response and the basis coefficients of the functional predictors. The finite-sample performance and computing speed are evaluated using a series of Monte Carlo simulation studies and a sugar process dataset. In this paper, two modified functional partial least-squares methods, based on re-orthogonalization of the score and loading vectors, are proposed to efficiently estimate the regression coefficient function. In the methods, the functional predictor is first decomposed by a basis expansion method. Then, partial least-squares algorithms are used to estimate the relationship between the scalar response and the basis coefficients of the functional predictor. Empirical performance of the proposed methods is evaluated using Monte Carlo simulations and a real data analysis. [ABSTRACT FROM AUTHOR]
- Published
- 2022
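As an illustration of the two-step scheme described in this abstract (basis expansion of the functional predictor, then PLS on the basis coefficients), here is a minimal sketch in Python. It is not the authors' algorithm or code: the Fourier-type basis, the simulated curves, and the use of scikit-learn's NIPALS-based PLSRegression (rather than the re-orthogonalized variants proposed in the paper) are all illustrative assumptions.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
n, m, K = 150, 101, 7                       # curves, grid points, basis functions
t = np.linspace(0.0, 1.0, m)                # common observation grid

# Fourier-type basis evaluated on the grid (m x K), an illustrative choice
Phi = np.column_stack([np.ones(m)]
                      + [np.sin(2 * np.pi * j * t) for j in range(1, 4)]
                      + [np.cos(2 * np.pi * j * t) for j in range(1, 4)])

# Simulated smooth functional predictors and a scalar response
coef_true = rng.standard_normal((n, K))
X_fun = coef_true @ Phi.T                                  # n discretised curves
beta_fun = np.sin(2 * np.pi * t)                           # "true" coefficient function
y = X_fun @ beta_fun / m + 0.05 * rng.standard_normal(n)

# Step 1: project each observed curve onto the finite-dimensional basis
C = np.linalg.lstsq(Phi, X_fun.T, rcond=None)[0].T         # n x K basis coefficients

# Step 2: ordinary multivariate PLS between the coefficients and the response
pls = PLSRegression(n_components=4)
pls.fit(C, y)
print("in-sample R^2:", pls.score(C, y))
```

The paper's contribution replaces step 2 with bidiagonalization-based PLS algorithms that re-orthogonalize the score and loading vectors; the sketch only shows where that step sits in the overall pipeline.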
3. Dual quaternion singular value decomposition based on bidiagonalization to a dual number matrix using dual quaternion householder transformations.
- Author
- Ding, Wenxv, Li, Ying, Wang, Tao, and Wei, Musheng
- Subjects
- QUATERNIONS, SINGULAR value decomposition, QUATERNION functions, MATRICES (Mathematics)
- Abstract
We propose a practical method for computing the singular value decomposition of dual quaternion matrices. The dual quaternion Householder matrix is first proposed, and by combining the properties of dual quaternions, we can transform a dual quaternion matrix into a bidiagonal dual number matrix. We have proven that the singular values of a dual quaternion matrix are the same as the singular values of its bidiagonal dual number matrix. Numerical experiments demonstrate the effectiveness of our proposed method for computing the singular value decomposition of dual quaternion matrices. [ABSTRACT FROM AUTHOR]
- Published
- 2024
4. SPMR: A FAMILY OF SADDLE-POINT MINIMUM RESIDUAL SOLVERS.
- Author
- ESTRIN, RON and GREIF, CHEN
- Subjects
- SADDLEPOINT approximations, ITERATIVE methods (Mathematics)
- Abstract
We introduce a new family of saddle-point minimum residual methods for iteratively solving saddle-point systems using a minimum or quasi-minimum residual approach. No symmetry assumptions are made. The basic mechanism underlying the method is a novel simultaneous bidiagonalization procedure that yields a simplified saddle-point matrix on a projected Krylov-like subspace and allows for a monotonic short-recurrence iterative scheme. We develop a few variants, demonstrate the advantages of our approach, derive optimality conditions, and discuss connections to existing methods. Numerical experiments illustrate the merits of this new family of methods. [ABSTRACT FROM AUTHOR]
- Published
- 2018
5. An SVD Processor Based on Golub–Reinsch Algorithm for MIMO Precoding With Adjustable Precision.
- Author
- Wu, Chun-Hun and Tsai, Pei-Yun
- Subjects
- RAYLEIGH quotient, AMPLITUDE modulation, JACOBIAN matrices, ALGORITHMS
- Abstract
A singular-value-decomposition (SVD) processor having adjustable precision for 8 × 8 multiple-input multiple-output precoding systems supporting up to 256-quadrature amplitude modulation (QAM) is designed and implemented. The memory-based architecture that consists of eight processing elements, each having two coordinate rotation digital computer modules, is employed. The Golub–Reinsch SVD (GR-SVD) algorithm with a Rayleigh quotient shift is used. Thus, two-phase operations are performed: bidiagonalization and implicit shifted QR for diagonalization. The split, deflation, and shift techniques of GR-SVD help to accelerate the diagonalization and reduce the computational complexity. To cover the precision requirements for compact 256-QAM constellation and spatially correlated channels, hybrid datapath representations are used. The threshold for split and deflation can be adjusted, and thus the precision of the SVD processor is variable according to the requirements. Although the high precision results in a large gate count, this SVD processor in 40-nm CMOS technology can complete the decomposition of an 8 × 8 matrix in 313 clock cycles on average and is able to provide a throughput rate of 591K matrices/s with good power efficiency. [ABSTRACT FROM AUTHOR]
- Published
- 2019
6. Global LSMR(Gl-LSMR) method for solving general linear systems with several right-hand sides.
- Author
- Mojarrab, M. and Toutounian, F.
- Subjects
- LINEAR systems, ALGORITHMS, FINITE volume method, MATHEMATICAL models, MATHEMATICAL analysis
- Abstract
Global methods are an attractive class of iterative solvers for linear systems with multiple right-hand sides. In this paper, first, a new global method for solving general linear systems with several right-hand sides is presented. This method is the global version of the LSMR algorithm presented by Fong and Saunders (2011). Then, some theoretical properties of the new method are discussed. Finally, numerical experiments from real applications are used to confirm the effectiveness of the proposed method. [ABSTRACT FROM AUTHOR]
- Published
- 2017
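The "global" framework underlying Gl-LSMR treats the whole block of right-hand sides as a single object under the Frobenius inner product. Below is a minimal sketch of the global Golub–Kahan bidiagonalization that such methods build on, assuming the usual recurrences with Frobenius-norm normalization; the LSMR-type solver on the projected problem is omitted, and the function name and problem sizes are illustrative.

```python
import numpy as np

def global_golub_kahan(A, B, steps):
    """Global (block) Golub-Kahan bidiagonalization: the blocks U_k, V_k are
    normalized in the Frobenius norm and are F-orthonormal in exact arithmetic."""
    U, V, alphas, betas = [], [], [], []
    beta = np.linalg.norm(B, 'fro')
    U.append(B / beta)
    V_prev = np.zeros((A.shape[1], B.shape[1]))
    beta_prev = 0.0
    for _ in range(steps):
        R = A.T @ U[-1] - beta_prev * V_prev
        alpha = np.linalg.norm(R, 'fro')
        V.append(R / alpha)
        P = A @ V[-1] - alpha * U[-1]
        beta_prev = np.linalg.norm(P, 'fro')
        U.append(P / beta_prev)
        V_prev = V[-1]
        alphas.append(alpha)
        betas.append(beta_prev)
    return U, V, alphas, betas

rng = np.random.default_rng(1)
A = rng.standard_normal((200, 120))
B = rng.standard_normal((200, 5))            # several right-hand sides
U, V, _, _ = global_golub_kahan(A, B, steps=6)

# F-orthonormality of the generated blocks: trace(V_i^T V_j) = delta_ij
G = np.array([[np.trace(Vi.T @ Vj) for Vj in V] for Vi in V])
print(np.allclose(G, np.eye(len(V)), atol=1e-8))
```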
7. Fast and stable partial least squares modelling: A benchmark study with theoretical comments.
- Author
- Björck, Åke and Indahl, Ulf G.
- Subjects
- CHEMOMETRICS, KERNEL functions, ALGORITHMS, ORTHOGONALIZATION
- Abstract
Algorithms for partial least squares (PLS) modelling are placed into a sound theoretical context focusing on numerical precision and computational efficiency. NIPALS and other PLS algorithms that perform deflation steps of the predictors (X) may be slow or even computationally infeasible for sparse and/or large-scale data sets. As alternatives, we develop new versions of the Bidiag1 and Bidiag2 algorithms. These include full reorthogonalization of both score and loading vectors, which we consider to be both necessary and sufficient for numerical precision. Using a collection of benchmark data sets, these 2 new algorithms are compared to the NIPALS PLS and 4 other PLS algorithms acknowledged in the chemometrics literature. The provably stable Householder algorithm for PLS regression is taken as the reference method for numerical precision. Our conclusion is that our new Bidiag1 and Bidiag2 algorithms are the methods of choice for problems where both efficiency and numerical precision are important. When efficiency is not urgent, the NIPALS PLS and the Householder PLS are also good choices. The benchmark study shows that SIMPLS gives poor numerical precision even for a small number of factors. Further, the nonorthogonal scores PLS, direct scores PLS, and the improved kernel PLS are demonstrated to be numerically less stable than the best algorithms. Prototype MATLAB codes are included for the 5 PLS algorithms concluded to be numerically stable on our benchmark data sets. Other aspects of PLS modelling, such as the evaluation of the regression coefficients, are also analyzed using techniques from numerical linear algebra. [ABSTRACT FROM AUTHOR]
- Published
- 2017
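The key ingredient named in this abstract, a Golub–Kahan (Bidiag2-type) bidiagonalization with full reorthogonalization of both sets of generated vectors, can be sketched as follows. This is a simplified illustration, not the authors' MATLAB code; the function name, test data, and tolerances are assumptions.

```python
import numpy as np

def bidiag2_reorth(X, y, k):
    """Bidiag2-type Golub-Kahan bidiagonalization of X driven by X^T y, with full
    reorthogonalization of both sets of generated vectors (a sketch of the idea,
    not the authors' implementation)."""
    n, p = X.shape
    V = np.zeros((p, k))        # loading-side (right) vectors
    U = np.zeros((n, k))        # score-side (left) vectors
    alpha = np.zeros(k)
    beta = np.zeros(k)
    v = X.T @ y
    for j in range(k):
        v -= V[:, :j] @ (V[:, :j].T @ v)        # full reorthogonalization
        alpha[j] = np.linalg.norm(v)
        V[:, j] = v / alpha[j]
        u = X @ V[:, j]
        u -= U[:, :j] @ (U[:, :j].T @ u)        # full reorthogonalization
        beta[j] = np.linalg.norm(u)
        U[:, j] = u / beta[j]
        v = X.T @ U[:, j] - beta[j] * V[:, j]
    return U, V, alpha, beta

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 80))
y = rng.standard_normal(500)
U, V, *_ = bidiag2_reorth(X, y, k=10)
print(np.allclose(U.T @ U, np.eye(10), atol=1e-12),
      np.allclose(V.T @ V, np.eye(10), atol=1e-12))
```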
8. ASSESSMENT OF DIFFERENT PARTIAL LEAST SQUARES VARIANTS FOR DETERMINATION OF BINARY-DRUG SYSTEM EXHIBITING INTENSE SPECTRAL OVERLAP.
- Author
- Fasfous, Ismail I., Al-Degs, Yahya S., and Mallah, Asmaa M.
- Subjects
- AMOXICILLIN, CLAVULANIC acid, SOLID dosage forms, EXCIPIENTS, CRYSTALLIZATION, BACTERIAL cell walls, THERAPEUTICS
- Abstract
Amoxicillin (AMO) and clavulanic acid (CLA) are popular active pharmaceutical ingredients that are widely used due to their efficient medical activity. However, this binary system suffers from intense spectral overlap (93.6%). In spite of the intense spectral overlap and serious nonlinearity in the current system, both drugs were accurately quantified by multivariate calibration. The performance of different partial least squares (PLS) variants (NIPALS, SIMPLS, Kernel and Bidiagonalization) for accurate quantification of AMO-CLA in a commercial formulation was outlined. Partial response and partial residual plots confirmed a serious nonlinearity in the binary system. Compared to other algorithms, PLS-Kernel exhibited a better performance for drug quantification, and seven latent variables were necessary for accurate quantification: 94.0 (9.6%) and 95.6 (5.2%) for AMO and CLA, respectively. The intense spectral overlap, nonlinearity, and non-modelled excipients are effectively handled by PLS-Kernel calibration. [ABSTRACT FROM AUTHOR]
- Published
- 2016
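For readers who want to reproduce the general setup (multicomponent calibration of two analytes with heavily overlapping spectra), here is a minimal PLS calibration sketch on synthetic data. The band positions, noise level, and the use of scikit-learn's NIPALS-based PLSRegression are assumptions for illustration; the paper's Kernel and Bidiagonalization PLS variants and the real AMO/CLA spectra are not reproduced here.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
wl = np.linspace(200, 400, 120)                 # wavelength grid (nm), illustrative

def band(center, width):
    return np.exp(-0.5 * ((wl - center) / width) ** 2)

# Two heavily overlapping "pure component" spectra (synthetic stand-ins)
eps_amo = band(230, 18) + 0.7 * band(275, 22)
eps_cla = band(235, 20) + 0.6 * band(280, 25)

conc = rng.uniform(0.1, 1.0, size=(80, 2))      # AMO and CLA concentrations
spectra = conc @ np.vstack([eps_amo, eps_cla]) + 0.005 * rng.standard_normal((80, 120))

X_tr, X_te, y_tr, y_te = train_test_split(spectra, conc, random_state=0)
pls = PLSRegression(n_components=7).fit(X_tr, y_tr)   # seven latent variables, as in the paper
print("held-out R^2:", pls.score(X_te, y_te))
```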
9. The geometry of PLS1 explained properly: 10 key notes on mathematical properties of and some alternative algorithmic approaches to PLS1 modelling.
- Author
- Indahl, Ulf G.
- Subjects
- LEAST squares, ALGORITHMS, LINEAR algebra, GRAM-Schmidt process
- Abstract
The insights from, and conclusions of, this paper motivate efficient and numerically robust 'new' variants of algorithms for solving the single response partial least squares regression (PLS1) problem. Prototype MATLAB code for these variants is included in the Appendix. The analysis of and conclusions regarding PLS1 modelling are based on a rich and nontrivial application of numerous key concepts from elementary linear algebra. The investigation starts with a simple analysis of the nonlinear iterative partial least squares (NIPALS) PLS1 algorithm variant computing orthonormal scores and weights. A rigorous interpretation of the squared P-loadings as the variable-wise explained sum of squares is presented. We show that the orthonormal row-subspace basis of W-weights can be found from a recurrence equation. Consequently, the NIPALS deflation steps of the centered predictor matrix can be replaced by a corresponding sequence of Gram-Schmidt steps that compute the orthonormal column-subspace basis of T-scores from the associated non-orthogonal scores. The transitions between the non-orthogonal and orthonormal scores and weights (illustrated by an easy-to-grasp commutative diagram), respectively, are both given by QR factorizations of the non-orthogonal matrices. The properties of singular value decomposition combined with the mappings between the alternative representations of the PLS1 'truncated' X data (including P^T W) are taken to justify an invariance principle to distinguish between the PLS1 truncation alternatives. The fundamental orthogonal truncation of PLS1 is illustrated by a Lanczos bidiagonalization type of algorithm where the predictor matrix deflation is required to be different from the standard NIPALS deflation. A mathematical argument concluding the PLS1 inconsistency debate (published in 2009 in this journal) is also presented. Copyright © 2014 John Wiley & Sons, Ltd. [ABSTRACT FROM AUTHOR]
- Published
- 2014
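A minimal sketch of the NIPALS PLS1 variant analysed in the paper, with orthonormal weights, orthonormal scores, and deflation of the centred predictor matrix, is given below. The coefficient formula b = W (P^T W)^{-1} q is the standard reconstruction in the original variables; the function name and test data are illustrative, and this is not the paper's Appendix code.

```python
import numpy as np

def nipals_pls1(X, y, k):
    """NIPALS-style PLS1 with orthonormal weights W and orthonormal scores T
    (the variant analysed in the paper); X is assumed centred. Illustrative sketch."""
    X = X.copy()
    n, p = X.shape
    W = np.zeros((p, k)); T = np.zeros((n, k)); P = np.zeros((p, k)); q = np.zeros(k)
    for a in range(k):
        w = X.T @ y
        w /= np.linalg.norm(w)              # orthonormal weight vector
        t = X @ w
        t /= np.linalg.norm(t)              # orthonormal score vector
        P[:, a] = X.T @ t                   # X-loading
        q[a] = y @ t                        # y-loading
        X -= np.outer(t, P[:, a])           # deflation of the predictors
        W[:, a] = w; T[:, a] = t
    b = W @ np.linalg.solve(P.T @ W, q)     # regression coefficients, b = W (P^T W)^{-1} q
    return W, T, P, q, b

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20)); X -= X.mean(axis=0)
y = X @ rng.standard_normal(20) + 0.1 * rng.standard_normal(100); y -= y.mean()
W, T, P, q, b = nipals_pls1(X, y, k=5)
print(np.allclose(T.T @ T, np.eye(5)),          # scores are orthonormal
      np.allclose(X @ W, T @ (P.T @ W)))        # X W = T (P^T W), the transition identity
```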
10. STABILITY OF TWO DIRECT METHODS FOR BIDIAGONALIZATION AND PARTIAL LEAST SQUARES.
- Author
- BJÖRCK, ÅKE
- Subjects
- STABILITY theory, LEAST squares, MATHEMATICAL sequences, APPROXIMATION theory, PROBLEM solving, PSEUDOINVERSES
- Abstract
The partial least squares (PLS) method computes a sequence of approximate solutions x_k ∈ K_k(A^T A, A^T b), k = 1, 2, . . ., to the least squares problem min_x ∥Ax − b∥_2. If carried out to completion, the method always terminates with the pseudoinverse solution x = A^+ b. Two direct PLS algorithms are analyzed. The first uses the Golub-Kahan Householder algorithm for reducing A to upper bidiagonal form. The second is the NIPALS PLS algorithm, due to Wold et al., which is based on rank-reducing orthogonal projections. The Householder algorithm is known to be mixed forward-backward stable. Numerical results are given that support the conjecture that the NIPALS PLS algorithm shares this stability property. We draw attention to a flaw in some descriptions and implementations of this algorithm, related to a similar problem in Gram-Schmidt orthogonalization, that spoils its otherwise excellent stability. For large-scale sparse or structured problems, the iterative algorithm LSQR is an attractive alternative, provided an implementation with reorthogonalization is used. [ABSTRACT FROM AUTHOR]
- Published
- 2014
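The first of the two analysed algorithms, the Golub-Kahan Householder reduction of A to upper bidiagonal form, can be written as a dense teaching sketch like the one below. It is not the implementation analysed in the paper and has none of LAPACK's blocking; matrix sizes and verification tolerances are assumptions.

```python
import numpy as np

def householder_vec(x):
    """Householder vector v such that (I - 2 v v^T / v^T v) x = -sign(x[0]) ||x|| e1."""
    v = x.astype(float)
    v[0] += np.copysign(np.linalg.norm(x), x[0])
    return v

def golub_kahan_householder(A):
    """Reduce A (m x n, m >= n) to upper bidiagonal form B = U^T A V using
    alternating left and right Householder reflections (dense teaching sketch)."""
    A = A.astype(float)
    m, n = A.shape
    U = np.eye(m); V = np.eye(n)
    for k in range(n):
        v = householder_vec(A[k:, k])           # zero below the diagonal in column k
        H = np.eye(m - k) - 2.0 * np.outer(v, v) / (v @ v)
        A[k:, :] = H @ A[k:, :]
        U[:, k:] = U[:, k:] @ H
        if k < n - 2:
            v = householder_vec(A[k, k + 1:])   # zero to the right of the superdiagonal
            H = np.eye(n - k - 1) - 2.0 * np.outer(v, v) / (v @ v)
            A[:, k + 1:] = A[:, k + 1:] @ H
            V[:, k + 1:] = V[:, k + 1:] @ H
    return U, A, V                               # A has been overwritten by B

rng = np.random.default_rng(0)
M = rng.standard_normal((9, 6))
U, B, V = golub_kahan_householder(M)
print(np.allclose(U.T @ M @ V, B, atol=1e-12),                  # B = U^T M V
      np.allclose(np.triu(np.tril(B, 1)), B, atol=1e-12))       # B is upper bidiagonal
```

The paper's point is that this Householder route is mixed forward-backward stable; the sketch only makes the two-sided reduction itself concrete.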
11. STABLE COMPUTATION OF THE CS DECOMPOSITION: SIMULTANEOUS BIDIAGONALIZATION.
- Author
- Sutton, Brian D.
- Subjects
- SINGULAR value decomposition, MATRICES (Mathematics), MATHEMATICAL analysis, EIGENVALUES, ALGORITHMS, MATHEMATICAL proofs
- Abstract
Since its discovery in 1977, the CS decomposition (CSD) has resisted computation, even though it is a sibling of the well-understood eigenvalue and singular value decompositions. Several algorithms have been developed for the reduced 2-by-1 form of the decomposition, but none have been extended to the complete 2-by-2 form of the decomposition in Stewart's original paper. In this article, we present an algorithm for simultaneously bidiagonalizing the four blocks of a unitary matrix partitioned into a 2-by-2 block structure. This serves as the first, direct phase of a two-stage algorithm for the CSD, much as Golub–Kahan–Reinsch bidiagonalization serves as the first stage in computing the singular value decomposition. Backward stability is proved. [ABSTRACT FROM AUTHOR]
- Published
- 2012
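For experimenting with the CSD numerically, a LAPACK-backed routine is exposed in recent SciPy releases (scipy.linalg.cossin, assuming SciPy 1.5 or later). The snippet below is only a usage sketch on a random unitary matrix partitioned into a 2-by-2 block structure, not an implementation of the simultaneous bidiagonalization algorithm proposed in the paper.

```python
import numpy as np
from scipy.linalg import cossin          # assumes SciPy >= 1.5
from scipy.stats import unitary_group

X = unitary_group.rvs(6, random_state=0)     # random 6 x 6 unitary matrix
u, cs, vdh = cossin(X, p=3, q=3)             # 2-by-2 partition with 3 x 3 blocks
print(np.allclose(u @ cs @ vdh, X))          # X = U * CS * V^H, with U and V block-diagonal
```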
12. Accelerating the reduction to upper Hessenberg, tridiagonal, and bidiagonal forms through hybrid GPU-based computing
- Author
- Tomov, Stanimire, Nath, Rajib, and Dongarra, Jack
- Subjects
- PARALLEL algorithms, DIMENSION reduction (Statistics), MATHEMATICAL forms, LINEAR accelerators, GRAPHICS processing units, HYBRID systems, COMPUTER systems
- Abstract
We present a Hessenberg reduction (HR) algorithm for hybrid systems of homogeneous multicore with GPU accelerators that can exceed 25× the performance of the corresponding LAPACK algorithm running on current homogeneous multicores. This enormous acceleration is due to proper matching of algorithmic requirements to architectural strengths of the system's hybrid components. The results described in this paper are significant because the HR has not been properly accelerated before on homogeneous multicore architectures, and it plays a significant role in solving non-symmetric eigenvalue problems. Moreover, the ideas from the hybrid HR are used to develop a hybrid tridiagonal reduction algorithm (for symmetric eigenvalue problems) and a bidiagonal reduction algorithm (for singular value decomposition problems). Our approach demonstrates a methodology that streamlines the development of a large and important class of algorithms on modern computer architectures of multicore and GPUs. The new algorithms can be directly used in the software stack that relies on LAPACK. [ABSTRACT FROM AUTHOR]
- Published
- 2010
13. A USEFUL FORM OF UNITARY MATRIX OBTAINED FROM ANY SEQUENCE OF UNIT 2-NORM n-VECTORS.
- Author
- Paige, Christopher C.
- Subjects
- ORTHOGONALIZATION, ALGORITHMS, FACTORIZATION, MATRICES (Mathematics), NUMERICAL analysis
- Abstract
Charles Sheffield pointed out that the modified Gram-Schmidt (MGS) orthogonalization algorithm for the QR factorization of B ∈ ℝ^{n×k} is mathematically equivalent to the QR factorization applied to the matrix B augmented with a k × k matrix of zero elements on top. This is true in theory for any method of QR factorization, but for Householder's method it is true in the presence of rounding errors as well. This knowledge has been the basis for several successful but difficult rounding error analyses of algorithms which in theory produce orthogonal vectors but significantly fail to do so because of rounding errors. Here we show that the same results can be found more directly and easily without recourse to the MGS connection. It is shown that for any sequence of k unit 2-norm n-vectors there is a special (n+k)-square unitary matrix which we call a unitary augmentation of these vectors and that this matrix can be used in the analyses without appealing to the MGS connection. We describe the connection of this unitary matrix to Householder matrices. The new approach is applied to an earlier analysis to illustrate both the improvement in simplicity and advantages for future analyses. Some properties of this unitary matrix are derived. The main theorem on orthogonalization is then extended to cover the case of biorthogonalization. [ABSTRACT FROM AUTHOR]
- Published
- 2009
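Sheffield's observation quoted in this abstract is easy to check numerically: the R factor produced by modified Gram-Schmidt applied to B agrees (up to row signs) with the R factor from a Householder QR of B augmented with a k × k zero block on top. The sketch below assumes a random well-conditioned B and uses numpy.linalg.qr as the Householder factorization.

```python
import numpy as np

def mgs_qr(B):
    """Modified Gram-Schmidt QR factorization B = Q R (B is n x k, full column rank)."""
    n, k = B.shape
    Q = B.astype(float)
    R = np.zeros((k, k))
    for j in range(k):
        R[j, j] = np.linalg.norm(Q[:, j])
        Q[:, j] /= R[j, j]
        R[j, j + 1:] = Q[:, j] @ Q[:, j + 1:]
        Q[:, j + 1:] -= np.outer(Q[:, j], R[j, j + 1:])
    return Q, R

rng = np.random.default_rng(0)
n, k = 50, 8
B = rng.standard_normal((n, k))

_, R_mgs = mgs_qr(B)
# Householder QR of B augmented with a k x k zero block on top (Sheffield's observation)
_, R_aug = np.linalg.qr(np.vstack([np.zeros((k, k)), B]))
print(np.allclose(np.abs(R_mgs), np.abs(R_aug)))   # same R factor up to row signs
```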
14. Cache Efficient Bidiagonalization Using BLAS 2.5 Operators.
- Author
- HOWELL, GARY W., DEMMEL, JAMES W., FULTON, CHARLES T., HAMMARLING, SVEN, and MARMOL, KAREN
- Subjects
- COMPUTER systems, COMPUTER input-output equipment, COMPUTER science, COMPUTER architecture, ALGORITHMS, MATRICES (Mathematics), COMPUTER programming
- Abstract
On cache-based computer architectures using current standard algorithms, Householder bidiagonalization requires a significant portion of the execution time for computing matrix singular values and vectors. In this paper we reorganize the sequence of operations for Householder bidiagonalization of a general m × n matrix, so that two (_GEMV) vector-matrix multiplications can be done with one pass of the unreduced trailing part of the matrix through cache. Two new BLAS operations approximately cut in half the transfer of data from main memory to cache, reducing execution times by up to 25 per cent. We give detailed algorithm descriptions and compare timings with the current LAPACK bidiagonalization algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2008
15. BLOCK AND PARALLEL VERSIONS OF ONE-SIDED BIDIAGONALIZATION.
- Author
- Bosner, Nela and Barlow, Jesse L.
- Subjects
- ALGORITHMS, SINGULAR value decomposition, NUMERICAL analysis, MATHEMATICAL statistics, CHEMICAL decomposition, ERROR analysis in mathematics
- Abstract
Two new algorithms for one-sided bidiagonalization are presented. The first is a block version which improves execution time by improving cache utilization from the use of BLAS 2.5 operations and more BLAS 3 operations. The second is adapted to parallel computation. When incorporated into singular value decomposition software, the second algorithm is faster than the corresponding ScaLAPACK routine in most cases. An error analysis is presented for the first algorithm. Numerical results and timings are presented for both algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2007
16. Global least squares method (Gl-LSQR) for solving general linear systems with several right-hand sides
- Author
- Toutounian, F. and Karimi, S.
- Subjects
- STATISTICAL correlation, ESTIMATION theory, LEAST squares, MATHEMATICAL statistics
- Abstract
In this paper, we propose a new method for solving general linear systems with several right-hand sides. This method is based on the global least squares method and reduces the original matrix to lower bidiagonal form. We derive a simple recurrence formula for generating the sequence of approximate solutions {X_k}. Some theoretical properties of the new method are discussed, and we also show how this method can be implemented for the Sylvester equation. Finally, some numerical experiments on test matrices are presented to show the efficiency of the new method. [Copyright © Elsevier]
- Published
- 2006
17. The block least squares method for solving nonsymmetric linear systems with multiple right-hand sides
- Author
- Karimi, S. and Toutounian, F.
- Subjects
- STATISTICAL correlation, CURVE fitting, ESTIMATION theory, MATHEMATICAL statistics
- Abstract
In this paper, we present the block least squares method for solving nonsymmetric linear systems with multiple right-hand sides. This method is based on block bidiagonalization. We first derive two algorithms by using two different convergence criteria: the first is based on independently minimizing the 2-norm of each column of the residual matrix, and the second on minimizing the Frobenius norm of the residual matrix. We then give some properties of these new algorithms. Finally, some numerical experiments on test matrices from the Harwell–Boeing collection are presented to show the efficiency of the new method. [Copyright © Elsevier]
- Published
- 2006
18. CORE PROBLEMS IN LINEAR ALGEBRAIC SYSTEMS.
- Author
- Paige, Christopher C. and Strakoš, Zdeněk
- Subjects
- LINEAR systems, SYSTEMS theory, MATRICES (Mathematics), LEAST squares, LINEAR algebra, MATHEMATICAL statistics
- Abstract
For any linear system Ax ≈ b we define a set of core problems and show that the orthogonal upper bidiagonalization of [b, A] gives such a core problem. In particular we show that these core problems have desirable properties such as minimal dimensions. When a total least squares problem is solved by first finding a core problem, we show the resulting theory is consistent with earlier generalizations, but much simpler and clearer. The approach is important for other related solutions and leads, for example, to an elegant solution to the data least squares problem. The ideas could be useful for solving ill-posed problems. [ABSTRACT FROM AUTHOR]
- Published
- 2005
19. SVD COMPUTATIONS ON THE CONNECTION MACHINE CM-5/CM-5E: IMPLEMENTATION AND ACCURACY.
- Author
- Balle, Susanne M. and Pedersen, Palle M.
- Subjects
- PARALLEL computers, ALGORITHMS, CONNECTION machines, EIGENVECTORS, MATRICES (Mathematics), PERMUTATIONS, MATHEMATICS
- Abstract
The singular value decomposition (SVD) plays an essential role in many applications. One technique for obtaining the SVD of a real dense matrix is to first reduce the dense matrix to bidiagonal form and then compute the SVD of the bidiagonal matrix. Guidelines followed to implement this approach efficiently on the Connection Machine CM-5/CM-5E are described. Timing results show that use of the described techniques yields up to 44% of peak performance in the reduction from dense to bidiagonal form. The singular values are computed by applying the parallel bisection algorithm to the symmetric tridiagonal matrix obtained by a perfect-shuffle permutation of the rows and columns of [[0, B^T]; [B, 0]], where B is the bidiagonal matrix. The singular vectors are computed using inverse iteration on the symmetric tridiagonal matrix. SVD experiments done on bidiagonal matrices are presented and illustrate that this approach yields very good performance as well as orthogonal singular vectors even for ill-conditioned matrices as long as the singular values are computed to very high accuracy. The main conclusion is that extra accuracy in the eigenvalue computation very often enhances the orthogonality of the eigenvectors computed with inverse iteration to the point where reorthogonalization is not needed. The conclusions presented regarding the SVD's numerical properties are general to the algorithm and not specific to the Connection Machine implementation. [ABSTRACT FROM AUTHOR]
- Published
- 1997
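The construction described here, where the eigenvalues of the perfect-shuffle permutation of [[0, B^T]; [B, 0]] give the singular values of the bidiagonal matrix B in ± pairs, can be checked with a few lines of NumPy. Matrix size and entries are arbitrary; the parallel bisection and inverse iteration steps of the paper are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
d = rng.uniform(0.5, 2.0, n)            # main diagonal of the bidiagonal matrix B
e = rng.uniform(0.5, 2.0, n - 1)        # superdiagonal of B
B = np.diag(d) + np.diag(e, 1)

# Augmented symmetric matrix [[0, B^T], [B, 0]] ...
C = np.block([[np.zeros((n, n)), B.T], [B, np.zeros((n, n))]])
# ... made tridiagonal by the perfect-shuffle permutation (0, n, 1, n+1, ...)
perm = np.ravel(np.column_stack([np.arange(n), np.arange(n) + n]))
T = C[np.ix_(perm, perm)]

print(np.allclose(T, np.tril(np.triu(T, -1), 1)))            # T is tridiagonal
sv = np.linalg.svd(B, compute_uv=False)
ev = np.sort(np.linalg.eigvalsh(T))
print(np.allclose(ev, np.sort(np.concatenate([-sv, sv]))))   # eigenvalues are +/- singular values
```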
20. Approximating minimum norm solutions of rank-deficient least squares problems.
- Author
- Dax, Achiya and Eldén, Lars
- Subjects
- LEAST squares, MATHEMATICS, MATHEMATICAL statistics, GEODESY, ESTIMATION theory, LINEAR algebra
- Abstract
In this paper we consider the solution of linear least squares problems min_x ∥Ax − b∥_2^2, where the matrix A ∈ R^{m×n} is rank deficient. Put p = min{m, n}, let σ_i, i = 1, 2, . . . , p, denote the singular values of A, and let u_i and v_i denote the corresponding left and right singular vectors. Then the minimum norm solution of the least squares problem has the form x* = Σ_{i=1}^{r} (u_i^T b/σ_i) v_i, where r ≤ p is the rank of A. The Riley–Golub iteration, x_{k+1} = arg min_x {∥Ax − b∥_2^2 + λ∥x − x_k∥_2^2}, converges to the minimum norm solution if x_0 is chosen equal to zero. The iteration is implemented so that it takes advantage of a bidiagonal decomposition of A. Thus modified, the iteration requires only O(p) flops (floating point operations). A further gain of using the bidiagonalization of A is that both the singular values σ_i and the scalar products u_i^T b can be computed at marginal extra cost. Moreover, we determine the regularization parameter, λ, and the number of iterations, k, in a way that minimizes the difference x* − x_k with respect to a certain norm. Explicit rules are derived for calculating these parameters. One advantage of our approach is that the numerical rank can be easily determined by using the singular values. Furthermore, by the iterative procedure, x* is approximated without computing the singular vectors of A. This gives a fast and reliable method for approximating minimum norm solutions of well-conditioned rank-deficient least squares problems. Numerical experiments illustrate the viability of our ideas, and demonstrate that the new method gives more accurate approximations than an approach based on a QR decomposition with column pivoting. © 1998 John Wiley & Sons, Ltd. [ABSTRACT FROM AUTHOR]
- Published
- 1998
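A dense, unoptimized sketch of the Riley–Golub iteration on a rank-deficient problem is given below: each step solves (A^T A + λI) x_{k+1} = A^T b + λ x_k, and with x_0 = 0 the iterates approach the minimum norm solution A^+ b. The bidiagonal decomposition that makes each step cost only O(p) flops in the paper is deliberately omitted, and the problem sizes, rank, and λ are illustrative.

```python
import numpy as np

def riley_golub(A, b, lam, iters):
    """Riley-Golub iteration x_{k+1} = argmin ||A x - b||^2 + lam ||x - x_k||^2,
    started from x_0 = 0; a dense sketch of the idea, solving the regularized
    normal equations directly instead of using a bidiagonal decomposition."""
    m, n = A.shape
    G = A.T @ A + lam * np.eye(n)
    rhs0 = A.T @ b
    x = np.zeros(n)
    for _ in range(iters):
        x = np.linalg.solve(G, rhs0 + lam * x)
    return x

rng = np.random.default_rng(0)
m, n, r = 60, 40, 15                      # rank-deficient A with rank r
A = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))
b = rng.standard_normal(m)

x_star = np.linalg.pinv(A) @ b            # minimum norm least squares solution
x_k = riley_golub(A, b, lam=1e-3, iters=50)
print(np.linalg.norm(x_k - x_star) / np.linalg.norm(x_star))   # small relative error
```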