3,219 results for "Low-rank approximation"
Search Results
2. A graphics processing unit accelerated sparse direct solver and preconditioner with block low rank compression
- Author
- Claus, Lisa, Ghysels, Pieter, Boukaram, Wajih Halim, and Li, Xiaoye Sherry
- Subjects
- Information and Computing Sciences, Applied Computing, Computer Vision and Multimedia Computation, Linear solvers, preconditioning, low-rank approximation, sparse direct solver, multifrontal method, Distributed Computing, Applied computing, Distributed computing and systems software
- Abstract
We present the GPU implementation efforts and challenges of the sparse solver package STRUMPACK. The code is made publicly available on github with a permissive BSD license. STRUMPACK implements an approximate multifrontal solver, a sparse LU factorization which makes use of compression methods to accelerate time to solution and reduce memory usage. Multiple compression schemes based on rank-structured and hierarchical matrix approximations are supported, including hierarchically semi-separable, hierarchically off-diagonal butterfly, and block low rank. In this paper, we present the GPU implementation of the block low rank (BLR) compression method within a multifrontal solver. Our GPU implementation relies on highly optimized vendor libraries such as cuBLAS and cuSOLVER for NVIDIA GPUs, rocBLAS and rocSOLVER for AMD GPUs and the Intel oneAPI Math Kernel Library (oneMKL) for Intel GPUs. Additionally, we rely on external open source libraries such as SLATE (Software for Linear Algebra Targeting Exascale), MAGMA (Matrix Algebra on GPU and Multi-core Architectures), and KBLAS (KAUST BLAS). SLATE is used as a GPU-capable ScaLAPACK replacement. From MAGMA we use variable sized batched dense linear algebra operations such as GEMM, TRSM and LU with partial pivoting. KBLAS provides efficient (batched) low rank matrix compression for NVIDIA GPUs using an adaptive randomized sampling scheme. The resulting sparse solver and preconditioner runs on NVIDIA, AMD and Intel GPUs. Interfaces are available from PETSc, Trilinos and MFEM, or the solver can be used directly in user code. We report results for a range of benchmark applications, using the Perlmutter system from NERSC, Frontier from ORNL, and Aurora from ALCF. For a high frequency wave equation on a regular mesh, using 32 Perlmutter compute nodes, the factorization phase of the exact GPU solver is about 6.5× faster compared to the CPU-only solver. The BLR-enabled GPU solver is about 13.8× faster than the CPU exact solver. For a collection of SuiteSparse matrices, the STRUMPACK exact factorization on a single GPU is on average 1.9× faster than NVIDIA’s cuDSS solver.
- Published
- 2024
3. Nonnegative low multi‐rank third‐order tensor approximation via transformation.
- Author
- Song, Guang‐Jing, Hu, Yexun, Xu, Cobi, and Ng, Michael K.
- Subjects
- UNITARY transformations, FOURIER transforms, ALGORITHMS, MATRICES (Mathematics)
- Abstract
The main aim of this paper is to develop a new algorithm for computing a nonnegative low multi‐rank tensor approximation for a nonnegative tensor. In the literature, there are several nonnegative tensor factorizations or decompositions, and their approaches are to enforce the nonnegativity constraints in the factors of tensor factorizations or decompositions. In this paper, we study nonnegativity constraints in tensor entries directly, and a low rank approximation for the transformed tensor by using discrete Fourier transformation matrix, discrete cosine transformation matrix, or unitary transformation matrix. This strategy is particularly useful in imaging science as nonnegative pixels appear in tensor entries and a low rank structure can be obtained in the transformation domain. We propose an alternating projections algorithm for computing such a nonnegative low multi‐rank tensor approximation. The convergence of the proposed projection method is established. Numerical examples for multidimensional images are presented to demonstrate that the performance of the proposed method is better than that of nonnegative low Tucker rank tensor approximation and the other nonnegative tensor factorizations and decompositions. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
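For readers wanting a concrete picture of the alternating-projections idea summarized in the entry above, the following is a minimal NumPy sketch, not the authors' algorithm: it alternates a low multi-rank projection (per-slice truncated SVDs after an FFT along the third mode) with a projection onto nonnegative tensors. The function name, fixed iteration count, and per-slice rank list are illustrative assumptions.

```python
import numpy as np

def nonneg_low_multirank(T, ranks, iters=50):
    """Illustrative sketch (not the paper's algorithm): alternate a low
    multi-rank projection in the FFT-transformed domain with a projection
    onto nonnegative tensors."""
    X = T.astype(float).copy()
    for _ in range(iters):
        F = np.fft.fft(X, axis=2)                      # transform along mode 3
        for k in range(F.shape[2]):                    # truncate each frontal slice
            U, s, Vt = np.linalg.svd(F[:, :, k], full_matrices=False)
            r = ranks[k]
            F[:, :, k] = (U[:, :r] * s[:r]) @ Vt[:r, :]
        X = np.real(np.fft.ifft(F, axis=2))            # back to the original domain
        X = np.maximum(X, 0.0)                         # nonnegativity projection
    return X
```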
4. Latent Archetypes of the Spatial Patterns of Cancer.
- Author
- Menezes, Thaís Pacheco, Prates, Marcos Oliveira, Assunção, Renato, and De Castro, Mônica Silva Monteiro
- Subjects
- SINGULAR value decomposition, MATRIX decomposition, NONNEGATIVE matrices, EPIDEMIOLOGY, DISEASE risk factors
- Abstract
The cancer atlas edited by several countries is the main resource for the analysis of the geographic variation of cancer risk. Correlating the observed spatial patterns with known or hypothesized risk factors is time‐consuming work for epidemiologists who need to deal with each cancer separately, breaking down the patterns according to sex and race. The recent literature has proposed to study more than one cancer simultaneously looking for common spatial risk factors. However, this previous work has two constraints: they consider only a very small (2–4) number of cancers previously known to share risk factors. In this article, we propose an exploratory method to search for latent spatial risk factors of a large number of supposedly unrelated cancers. The method is based on the singular value decomposition and nonnegative matrix factorization, it is computationally efficient, scaling easily with the number of regions and cancers. We carried out a simulation study to evaluate the method's performance and apply it to cancer atlas from the USA, England, France, Australia, Spain, and Brazil. We conclude that with very few latent maps, which can represent a reduction of up to 90% of atlas maps, most of the spatial variability is conserved. By concentrating on the epidemiological analysis of these few latent maps a substantial amount of work is saved and, at the same time, high‐level explanations affecting many cancers simultaneously can be reached. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
5. A Fourth-Order Tensorial Wiener Filter Using the Conjugate Gradient Method.
- Author
- Dogariu, Laura-Maria, Costea, Ruxandra-Liana, Paleologu, Constantin, and Benesty, Jacob
- Abstract
The recently developed iterative Wiener filter using a fourth-order tensorial (FOT) decomposition owns appealing performance in the identification of long length impulse responses. It relies on the nearest Kronecker product representation (with particular intrinsic symmetry features), together with low-rank approximations. Nevertheless, this new iterative filter requires matrix inversion operations when solving the Wiener–Hopf equations associated with the component filters. In this communication, we propose a computationally efficient version that relies on the conjugate gradient (CG) method for solving these sets of equations. The proposed solution involves a specific initialization of the component filters and sequential connections between the CG cycles. Different FOT-based decomposition setups are also analyzed from the point of view of the resulting parameter space. Experimental results obtained in the context of echo cancellation confirm the good behavior of the proposed approach and its superiority in comparison to the conventional Wiener filter and other decomposition-based versions. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
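The nearest Kronecker product representation mentioned in the entry above can be computed, in its simplest two-factor form, with the Van Loan–Pitsianis rearrangement plus a rank-1 SVD. The sketch below shows only that textbook building block (the function name and block sizes are assumptions), not the fourth-order tensorial filter itself.

```python
import numpy as np

def nearest_kronecker_product(A, m1, n1, m2, n2):
    """Best Frobenius-norm approximation A ~ B kron C with B (m1 x n1) and
    C (m2 x n2), via the Van Loan-Pitsianis rearrangement and a rank-1 SVD."""
    assert A.shape == (m1 * m2, n1 * n2)
    # Each row of R is the column-major vectorisation of one (m2 x n2) block of A.
    R = np.empty((m1 * n1, m2 * n2))
    for j in range(n1):
        for i in range(m1):
            block = A[i * m2:(i + 1) * m2, j * n2:(j + 1) * n2]
            R[j * m1 + i, :] = block.reshape(-1, order="F")
    u, s, vt = np.linalg.svd(R, full_matrices=False)
    B = np.sqrt(s[0]) * u[:, 0].reshape(m1, n1, order="F")
    C = np.sqrt(s[0]) * vt[0, :].reshape(m2, n2, order="F")
    return B, C
```

With B and C returned, np.kron(B, C) is the best Frobenius-norm Kronecker-product approximation of A.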
6. Improving Brain Metabolite Detection with a Combined Low-Rank Approximation and Denoising Diffusion Probabilistic Model Approach.
- Author
- Jeon, Yeong-Jae, Nam, Kyung Min, Park, Shin-Eui, and Baek, Hyeon-Man
- Abstract
In vivo proton magnetic resonance spectroscopy (MRS) is a noninvasive technique for monitoring brain metabolites. However, it is challenged by a low signal-to-noise ratio (SNR), often necessitating extended scan times to compensate. One of the conventional techniques for noise reduction is signal averaging, which is inherently time-consuming and can lead to participant discomfort, thus posing limitations in clinical settings. This study aimed to develop a hybrid denoising strategy that integrates low-rank approximation and denoising diffusion probabilistic model (DDPM) to enhance MRS data quality and shorten scan times. Using publicly available 1H MRS datasets from 15 subjects, we applied the Casorati SVD and DDPM to obtain baseline and functional data during a pain stimulation task. This method significantly improved SNR, resulting in outcomes comparable to or better than averaging over 32 signals. It also provided the most consistent metabolite measurements and adequately tracked temporal changes in glutamate levels, correlating with pain intensity ratings after heating. These findings demonstrate that our approach enhances MRS data quality, offering a more efficient alternative to conventional methods and expanding the potential for the real-time monitoring of neurochemical changes. This contribution has the potential to advance MRS techniques by integrating advanced denoising methods to increase the acquisition speed and enhance the precision of brain metabolite analyses. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
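As a rough illustration of the low-rank half of the pipeline described in the entry above (the diffusion-model stage is not shown), repeated acquisitions can be denoised by truncating the SVD of a Casorati matrix. The function name and fixed rank below are assumptions for illustration only.

```python
import numpy as np

def casorati_svd_denoise(signals, rank):
    """Stack repeated acquisitions as columns of a Casorati matrix, truncate
    its SVD, and reconstruct each acquisition from the low-rank matrix."""
    C = np.stack(signals, axis=1)                     # (num_points, num_acquisitions)
    U, s, Vt = np.linalg.svd(C, full_matrices=False)
    C_denoised = (U[:, :rank] * s[:rank]) @ Vt[:rank, :]
    return [C_denoised[:, k] for k in range(C_denoised.shape[1])]
```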
7. Low-Rank Approximation Reconstruction of Five-Dimensional Seismic Data.
- Author
- Chen, Gui, Liu, Yang, Zhang, Mi, Sun, Yuhang, and Zhang, Haoran
- Subjects
- MATRIX decomposition, COMPLETE graphs, SENSITIVITY analysis, ALGORITHMS, LOW-rank matrices
- Abstract
Low-rank approximation has emerged as a promising technique for recovering five-dimensional (5D) seismic data, yet the quest for higher accuracy and stronger rank robustness remains a critical pursuit. We introduce a low-rank approximation method by leveraging the complete graph tensor network (CGTN) decomposition and the learnable transform (LT), referred to as the LRA-LTCGTN method, to simultaneously denoise and reconstruct 5D seismic data. In the LRA-LTCGTN framework, the LT is employed to project the frequency tensor of the original 5D data onto a small-scale latent space. Subsequently, the CGTN decomposition is executed on this latent space. We adopt the proximal alternating minimization algorithm to optimize each variable. Both 5D synthetic data and field data examples indicate that the LRA-LTCGTN method exhibits notable advantages and superior efficiency compared to the damped rank-reduction (DRR), parallel matrix factorization (PMF), and LRA-CGTN methods. Moreover, a sensitivity analysis underscores the remarkably stronger robustness of the LRA-LTCGTN method in terms of rank without any optimization procedure with respect to rank, compared to the LRA-CGTN method. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
8. Geodesic Convexity of the Symmetric Eigenvalue Problem and Convergence of Steepest Descent.
- Author
- Alimisis, Foivos and Vandereycken, Bart
- Subjects
- RAYLEIGH quotient, GRASSMANN manifolds, SYMMETRIC matrices, GEODESICS, EIGENVALUES
- Abstract
We study the convergence of the Riemannian steepest descent algorithm on the Grassmann manifold for minimizing the block version of the Rayleigh quotient of a symmetric matrix. Even though this problem is non-convex in the Euclidean sense and only very locally convex in the Riemannian sense, we discover a structure for this problem that is similar to geodesic strong convexity, namely, weak-strong convexity. This allows us to apply similar arguments from convex optimization when studying the convergence of the steepest descent algorithm but with initialization conditions that do not depend on the eigengap δ. When δ > 0, we prove exponential convergence rates, while otherwise the convergence is algebraic. Additionally, we prove that this problem is geodesically convex in a neighbourhood of the global minimizer of radius O(δ). [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
9. Collocation methods for nonlinear differential equations on low-rank manifolds.
- Author
- Dektor, Alec
- Subjects
- INTERPOLATION spaces, NONLINEAR differential equations, PARTIAL differential equations, DIFFERENTIAL equations, TIME integration scheme
- Abstract
We introduce new methods for integrating nonlinear differential equations on low-rank manifolds. These methods rely on interpolatory projections onto the tangent space, enabling low-rank time integration of vector fields that can be evaluated entry-wise. A key advantage of our approach is that it does not require the vector field to exhibit low-rank structure, thereby overcoming significant limitations of traditional dynamical low-rank methods based on orthogonal projection. To construct the interpolatory projectors, we develop a sparse tensor sampling algorithm based on the discrete empirical interpolation method (DEIM) that parameterizes tensor train manifolds and their tangent spaces with cross interpolation. Using these projectors, we propose two time integration schemes on low-rank tensor train manifolds. The first scheme integrates the solution at selected interpolation indices and constructs the solution with cross interpolation. The second scheme generalizes the well-known orthogonal projector-splitting integrator to interpolatory projectors. We demonstrate the proposed methods with applications to several tensor differential equations arising from the discretization of partial differential equations. • New tensor train sampling algorithm. • New oblique projection onto tangent spaces of tensor train manifolds. • Novel dynamical low-rank methods for nonlinear tensor differential equations. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
10. Classification of Time–Frequency Maps of Guided Waves Using Foreground Extraction.
- Author
- Guerra-Bravo, Esteban, Baltazar, Arturo, Balvantin, Antonio, and Aranda-Sanchez, Jorge I.
- Subjects
- STRUCTURAL health monitoring, SINGULAR value decomposition, LAMB waves, CLASSIFICATION, PRINCIPAL components analysis, SOIL vibration
- Abstract
Guided waves propagating in mechanical structures have proved to be an essential technique for applications such as structural health monitoring. However, it is a well-known problem that when non-stationary guided wave signals are used, dispersion and high-order vibrational modes are excited, and it becomes cumbersome to detect and identify relevant information. A typical method for the characterization of these non-stationary signals is based on time–frequency (TF) mapping techniques. This method produces 2D images, allowing the study of specific vibration modes and their evolution over time. However, this approach has low resolution, increases the size of the data, and introduces redundant information, making it difficult to extract relevant features for their accurate identification and classification. This paper presents a method for identifying discontinuities by analyzing the data in the TF maps of Lamb wave signals. We propose applying Singular Value Decomposition (SVD) for low-rank optimization and then performing foreground feature extraction on the maps. These foreground features are then analyzed using Principal Component Analysis (PCA). Unlike traditional PCA, which operates on vectorized images, our approach focuses on the correlation between coordinates within the maps. This modification enhances feature detection and enables the classification of discontinuities within the maps. To evaluate unsupervised clustering of the dimensionally reduced data obtained from PCA, we experimentally tested our method using broadband Lamb waves with various vibrational modes interacting with different types of discontinuity patterns in a thin aluminum plate. A Support Vector Machine (SVM) classifier was then implemented for classification. The results of the experimental data yielded good classification effectiveness within reasonably low computational time despite the large matrices of the TF maps used. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
11. Towards efficient and accurate approximation: tensor decomposition based on randomized block Krylov iteration.
- Author
- Qiu, Yichun, Sun, Weijun, Zhou, Guoxu, and Zhao, Qibin
- Abstract
Tensor decomposition methods are inefficient when dealing with low-rank approximation of large-scale data. Randomized tensor decomposition has emerged to meet this need, but most existing methods exhibit high computational costs in handling large-scale tensors and poor approximation accuracy in noisy data scenarios. In this work, a Tucker decomposition method based on randomized block Krylov iteration (rBKI-TK) is proposed to reduce computational complexity and guarantee approximation accuracy by employing cumulative sketches rather than randomized initialization to construct a better projection space with fewer iterations. In addition, a hierarchical tensor ring decomposition based on rBKI-TK is proposed to enhance the scalability of the rBKI-TK method. Numerical results demonstrate the efficiency and accuracy of the proposed methods in large-scale and noisy data processing. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
12. LOW-RANK PLUS DIAGONAL APPROXIMATIONS FOR RICCATI-LIKE MATRIX DIFFERENTIAL EQUATIONS.
- Author
- BONNABEL, SILVÈRE, LAMBERT, MARC, and BACH, FRANCIS
- Subjects
- LOW-rank matrices, RICCATI equation, DIFFERENTIAL equations, KALMAN filtering, BAYESIAN field theory
- Abstract
We consider the problem of computing tractable approximations of time-dependent d × d large positive semidefinite (PSD) matrices defined as solutions of a matrix differential equation. We propose to use "low-rank plus diagonal" PSD matrices as approximations that can be stored with a memory cost being linear in the high dimension d. To constrain the solution of the differential equation to remain in that subset, we project the derivative at all times onto the tangent space to the subset, following the methodology of dynamical low-rank approximation. We derive a closed-form formula for the projection and show that after some manipulations, it can be computed with a numerical cost being linear in d, allowing for tractable implementation. Contrary to previous approaches based on pure low-rank approximations, the addition of the diagonal term allows for our approximations to be invertible matrices that can moreover be inverted with linear cost in d. We apply the technique to Riccati-like equations, then to two particular problems: first, a low-rank approximation to our recent Wasserstein gradient flow for Gaussian approximation of posterior distributions in approximate Bayesian inference and, second, a novel low-rank approximation of the Kalman filter for high-dimensional systems. Numerical simulations illustrate the results. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
13. LoRA-TV: read depth profile-based clustering of tumor cells in single-cell sequencing.
- Author
- Duan, Junbo, Zhao, Xinrui, and Wu, Xiaoming
- Subjects
- DEPTH profiling, ROBUST optimization
- Abstract
Single-cell sequencing has revolutionized our ability to dissect the heterogeneity within tumor populations. In this study, we present LoRA-TV (Low Rank Approximation with Total Variation), a novel method for clustering tumor cells based on the read depth profiles derived from single-cell sequencing data. Traditional analysis pipelines process read depth profiles of each cell individually. By aggregating shared genomic signatures distributed among individual cells using low-rank optimization and robust smoothing, the proposed method enhances clustering performance. Results from analyses of both simulated and real data demonstrate its effectiveness compared with state-of-the-art alternatives, as supported by improvements in the adjusted Rand index and computational efficiency. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
14. An inherently parallel ℋ²-ULV factorization for solving dense linear systems on GPUs.
- Author
- Ma, Qianxiang and Yokota, Rio
- Abstract
Hierarchical low-rank approximation of dense matrices can reduce the complexity of their factorization from O(N³) to O(N). However, the complex structure of such hierarchical matrices makes them difficult to parallelize. The block size and ranks can vary between the sub-blocks, which creates load imbalance. The dependency between the sub-blocks during factorization results in serialization. Since many sub-blocks are low-rank, their small computational load exposes the overhead of runtime systems. The combination of these factors makes it challenging to implement these methods on GPUs. In this work, we show that dense matrices can be factorized with linear complexity, while extracting the potential parallelism of GPUs. This is made possible through the ℋ²-ULV factorization, which removes the dependency on trailing sub-matrices. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
15. Review of Recent Advances in Gaussian Process Regression Methods
- Author
- Lyu, Chenyi, Liu, Xingchi, Mihaylova, Lyudmila, Kacprzyk, Janusz, Series Editor, Pal, Nikhil R., Advisory Editor, Bello Perez, Rafael, Advisory Editor, Corchado, Emilio S., Advisory Editor, Hagras, Hani, Advisory Editor, Kóczy, László T., Advisory Editor, Kreinovich, Vladik, Advisory Editor, Lin, Chin-Teng, Advisory Editor, Lu, Jie, Advisory Editor, Melin, Patricia, Advisory Editor, Nedjah, Nadia, Advisory Editor, Nguyen, Ngoc Thanh, Advisory Editor, Wang, Jun, Advisory Editor, Panoutsos, George, editor, Mahfouf, Mahdi, editor, and Mihaylova, Lyudmila S, editor
- Published
- 2024
- Full Text
- View/download PDF
16. Modal Analysis of a Flow Past a Cylinder
- Author
- Thirunavukkarasu, Arvind, Sundar, Rahul, Sarkar, Sunetra, Chaari, Fakher, Series Editor, Gherardini, Francesco, Series Editor, Ivanov, Vitalii, Series Editor, Haddar, Mohamed, Series Editor, Cavas-Martínez, Francisco, Editorial Board Member, di Mare, Francesca, Editorial Board Member, Kwon, Young W., Editorial Board Member, Trojanowska, Justyna, Editorial Board Member, Xu, Jinyang, Editorial Board Member, Singh, Krishna Mohan, editor, Dutta, Sushanta, editor, Subudhi, Sudhakar, editor, and Singh, Nikhil Kumar, editor
- Published
- 2024
- Full Text
- View/download PDF
17. Approximation in the extended functional tensor train format.
- Author
- Strössner, Christoph, Sun, Bonan, and Kressner, Daniel
- Abstract
This work proposes the extended functional tensor train (EFTT) format for compressing and working with multivariate functions on tensor product domains. Our compression algorithm combines tensorized Chebyshev interpolation with a low-rank approximation algorithm that is entirely based on function evaluations. Compared to existing methods based on the functional tensor train format, the adaptivity of our approach often results in reducing the required storage, sometimes considerably, while achieving the same accuracy. In particular, we reduce the number of function evaluations required to achieve a prescribed accuracy by up to over 96 % compared to the algorithm from Gorodetsky et al. (Comput. Methods Appl. Mech. Eng. 347, 59–84 2019). [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
18. Kalman Filter Using a Third-Order Tensorial Decomposition of the Impulse Response.
- Author
- Dogariu, Laura-Maria, Paleologu, Constantin, Benesty, Jacob, and Albu, Felix
- Subjects
- IMPULSE response, KRONECKER products, DECOMPOSITION method, SYSTEM identification, COMPUTATIONAL complexity
- Abstract
For system identification problems associated with long-length impulse responses, the recently developed decomposition-based technique that relies on a third-order tensor (TOT) framework represents a reliable choice. It is based on a combination of three shorter filters, which merge their estimates in tandem with the Kronecker product. In this way, the global impulse response is modeled in a more efficient manner, with a significantly reduced parameter space (i.e., fewer coefficients). In this paper, we further develop a Kalman filter based on the TOT decomposition method. As compared to the recently designed recursive least-squares (RLS) counterpart, the proposed Kalman filter achieves superior performance in terms of the main criteria (e.g., tracking and accuracy). In addition, it significantly outperforms the conventional Kalman filter, while also having a lower computational complexity. Simulation results obtained in the context of echo cancellation support the theoretical framework and the related advantages. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
19. Iterative Algorithm using Decoupling Method for third-order Tensor Deblurring.
- Author
- EL QATE, KARIMA, MOHAOUI, SOUAD, HAKIM, ABDELILAH, and RAGHAY, SAID
- Subjects
- MATHEMATICAL decoupling, ALGORITHMS
- Abstract
The present paper is concerned with exploiting an iterative decoupling algorithm to address the problem of third-order tensor deblurring. The regularized deblurring problem, which is mathematically given by the sum of a fidelity term and a regularization term, is decoupled into an observation-fidelity step and a denoiser-model step. One basic advantage of the iterative decoupling algorithm is that the deblurring problem is supervised by the efficiency of the denoiser model. Thus, we consider a patch-based weighted low-rank tensor with a sparsity prior. Numerical tests on image deblurring are given to demonstrate the efficiency of the proposed decoupling-based algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
20. Efficient Stochastic Generators with Spherical Harmonic Transformation for High-Resolution Global Climate Simulations from CESM2-LENS2.
- Author
- Song, Yan, Khalid, Zubair, and Genton, Marc G.
- Abstract
Earth system models (ESMs) are fundamental for understanding Earth’s complex climate system. However, the computational demands and storage requirements of ESM simulations limit their utility. For the newly published CESM2-LENS2 data, which suffer from this issue, we propose a novel stochastic generator (SG) as a practical complement to the CESM2, capable of rapidly producing emulations closely mirroring training simulations. Our SG leverages the spherical harmonic transformation (SHT) to shift from spatial to spectral domains, enabling efficient low-rank approximations that significantly reduce computational and storage costs. By accounting for axial symmetry and retaining distinct ranks for land and ocean regions, our SG captures intricate nonstationary spatial dependencies. Additionally, a modified Tukey g-and-h (TGH) transformation accommodates non-Gaussianity in high-temporal-resolution data. We apply the proposed SG to generate emulations for surface temperature simulations from the CESM2-LENS2 data across various scales, marking the first attempt of reproducing daily data. These emulations are then meticulously validated against training simulations. This work offers a promising complementary pathway for efficient climate modeling and analysis while overcoming computational and storage limitations. Supplementary materials for this article are available online, including a standardized description of the materials available for reproducing the work. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
21. On the distance to low-rank matrices in the maximum norm.
- Author
- Budzinskiy, Stanislav
- Subjects
- LOW-rank matrices, MATRIX norms, APPROXIMATION error
- Abstract
Every sufficiently big matrix with small spectral norm has a nearby low-rank matrix if the distance is measured in the maximum norm (Udell and Townsend, 2019 [7]). We use the Hanson–Wright inequality to improve the estimate of the distance for matrices with incoherent column and row spaces. In numerical experiments with several classes of matrices we study how well the theoretical upper bound describes the approximation errors achieved with the method of alternating projections. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
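The abstract above refers to approximation errors achieved with the method of alternating projections. A bare-bones version of that idea, under the assumption that one alternates between the set of rank-k matrices and the max-norm cube of radius delta around A (the paper's exact scheme and stopping rules are not reproduced), might look like this:

```python
import numpy as np

def alternating_projections(A, rank, delta, iters=100):
    """Hypothetical sketch: search for a rank-`rank` matrix within max-norm
    distance `delta` of A by alternating two projections."""
    X = A.copy()
    for _ in range(iters):
        # Project onto matrices of rank at most `rank` (truncated SVD).
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X = (U[:, :rank] * s[:rank]) @ Vt[:rank, :]
        # Project onto the max-norm cube centred at A (entrywise clipping).
        X = np.clip(X, A - delta, A + delta)
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    low_rank = (U[:, :rank] * s[:rank]) @ Vt[:rank, :]
    return low_rank, np.max(np.abs(low_rank - A))   # candidate and its max-norm error
```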
22. A Low-Rank Solver for Parameter Estimation and Uncertainty Quantification in Time-Dependent Systems of Partial Differential Equations.
- Author
- Riffaud, Sébastien, Fernández, Miguel A., and Lombardi, Damiano
- Abstract
In this work we propose a low-rank solver in view of performing parameter estimation and uncertainty quantification in systems of partial differential equations. The solution approximation is sought in a space-parameter separated form. The discretisation in the parameter direction is made evolve in time through a Markov Chain Monte Carlo method. The resulting method is a Bayesian sequential estimation of the parameters. The computational burden is mitigated by the introduction of an efficient interpolator, based on a reduced basis built by exploiting the low-rank solves. The method is tested on four different applications. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
23. Constructing low-rank Tucker tensor approximations using generalized completion.
- Author
- Petrov, Sergey
- Subjects
- RESTRICTED isometry property, APPROXIMATION algorithms
- Abstract
The projected gradient method for matrix completion is generalized towards the higher-dimensional case of low-rank Tucker tensors. It is shown that an operation order rearrangement in the common projected gradient approach provides a complexity improvement. An even better algorithm complexity can be obtained by replacing the completion operator by a general operator that satisfies restricted isometry property; however, such a replacement transforms the completion algorithm into an approximation algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
24. PRECONDITIONER DESIGN VIA BREGMAN DIVERGENCES.
- Author
- BOCK, ANDREAS A. and ANDERSEN, MARTIN S.
- Subjects
- DIVERGENCE theorem, SINGULAR value decomposition, LOW-rank matrices, POSITIVE systems, HERMITIAN forms, EIGENVALUES
- Abstract
We study a preconditioner for a Hermitian positive definite linear system, which is obtained as the solution of a matrix nearness problem based on the Bregman log determinant divergence. The preconditioner is of the form of a Hermitian positive definite matrix plus a low-rank matrix. For this choice of structure, the generalized eigenvalues of the preconditioned matrix are easily calculated, and we show under which conditions the preconditioner minimizes the l2 condition number of the preconditioned matrix. We develop practical numerical approximations of the preconditioner based on the randomized singular value decomposition (SVD) and the Nyström approximation and provide corresponding approximation results. Furthermore, we prove that the Nyström approximation is in fact also a matrix approximation in a range-restricted Bregman divergence and establish several connections between this divergence and matrix nearness problems in different measures. Numerical examples are provided to support the theoretical results. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
25. Quantifying Measurement-Induced Disturbance to Distinguish Correlations as Classical or Quantum.
- Author
- Shu, Yu-Chen, Lu, Bing-Ze, Chen, Kui-Yo, and Lin, Matthew M.
- Abstract
In contrast to the conventional entanglement-separability paradigm in quantum information theory, we embark on a different path by introducing a classical dichotomy. We aim to quantify a measurement’s disturbance by minimizing the difference between input and post-measurement states to distinguish either classical or quantum correlations. Theoretically, we apply a complex-valued gradient flow over Stiefel manifolds for minimization. Our focus extends beyond the practical application to encompass the well-known Łojasiewicz gradient inequality. This inequality is a fundamental tool that guarantees the global convergence of the flow from any initial starting point to the optimal solution. Numerically, we validate the effectiveness and robustness of our proposed method by performing a series of experiments in different scenarios. Experimental results suggest the capability of our approach to accurately and reliably characterize correlations as classical or quantum. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
26. Kernel embedding of measures and low-rank approximation of integral operators.
- Author
- Gauthier, Bertrand
- Abstract
We describe a natural coisometry from the Hilbert space of all Hilbert–Schmidt operators on a separable reproducing kernel Hilbert space (RKHS) H onto the RKHS G associated with the squared-modulus of the reproducing kernel of H. Through this coisometry, trace-class integral operators defined by general measures and the reproducing kernel of H are isometrically represented as potentials in G, and the quadrature approximation of these operators is equivalent to the approximation of integral functionals on G. We then discuss the extent to which the approximation of potentials in RKHSs with squared-modulus kernels can be regarded as a differentiable surrogate for the characterisation of low-rank approximation of integral operators. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
27. Data Dissemination Framework Using Low-Rank Approximation in Edge Networks
- Author
- Jungmin Kwon and Hyunggon Park
- Subjects
- Data dissemination, edge computing, edge network, network coding, low-rank approximation, decoding accuracy, Electrical engineering. Electronics. Nuclear engineering, TK1-9971
- Abstract
This paper proposes a reliable data dissemination framework for edge networks, leveraging network coding combined with low-rank approximation. We consider an edge network that consists of a server and power-limited mobile devices, where the data is broadcasted by the server. In such networks, broadcasted data may be lost due to poor channel conditions or the interference caused by the mobility of edge mobile devices, particularly without a retransmission mechanism. This can cause application errors in edge devices, lower the Quality of Service (QoS), and compromise network stability. To overcome these challenges, we propose a framework for reliable edge networks in broadcasting without retransmissions. The edge network reliability can be achieved by the approximate decoding of broadcasted data. In the proposed framework, the edge server employs matrix factorization to encode data with principal components, ensuring a lower decoding error rate even with potential packet losses. Furthermore, the proposed framework can shift the computational complexity from mobile edge devices to the edge server using the low-rank approximation at the decoding stage, effectively mitigating power limitations on mobile devices. Through theoretical analysis, we demonstrate that the proposed algorithm outperforms typical broadcasting in terms of decoding accuracy, and present an upper bound error rate for the proposed algorithm. The simulation results confirm that the proposed algorithm outperforms other state-of-the-art algorithms in terms of decoding accuracy, time delay, and complexity.
- Published
- 2024
- Full Text
- View/download PDF
28. Improving Brain Metabolite Detection with a Combined Low-Rank Approximation and Denoising Diffusion Probabilistic Model Approach
- Author
- Yeong-Jae Jeon, Kyung Min Nam, Shin-Eui Park, and Hyeon-Man Baek
- Subjects
- 1H MRS, denoising, functional MRS, low-rank approximation, CSVD, pain, Technology, Biology (General), QH301-705.5
- Abstract
In vivo proton magnetic resonance spectroscopy (MRS) is a noninvasive technique for monitoring brain metabolites. However, it is challenged by a low signal-to-noise ratio (SNR), often necessitating extended scan times to compensate. One of the conventional techniques for noise reduction is signal averaging, which is inherently time-consuming and can lead to participant discomfort, thus posing limitations in clinical settings. This study aimed to develop a hybrid denoising strategy that integrates low-rank approximation and denoising diffusion probabilistic model (DDPM) to enhance MRS data quality and shorten scan times. Using publicly available 1H MRS datasets from 15 subjects, we applied the Casorati SVD and DDPM to obtain baseline and functional data during a pain stimulation task. This method significantly improved SNR, resulting in outcomes comparable to or better than averaging over 32 signals. It also provided the most consistent metabolite measurements and adequately tracked temporal changes in glutamate levels, correlating with pain intensity ratings after heating. These findings demonstrate that our approach enhances MRS data quality, offering a more efficient alternative to conventional methods and expanding the potential for the real-time monitoring of neurochemical changes. This contribution has the potential to advance MRS techniques by integrating advanced denoising methods to increase the acquisition speed and enhance the precision of brain metabolite analyses.
- Published
- 2024
- Full Text
- View/download PDF
29. A Fourth-Order Tensorial Wiener Filter Using the Conjugate Gradient Method
- Author
- Laura-Maria Dogariu, Ruxandra-Liana Costea, Constantin Paleologu, and Jacob Benesty
- Subjects
- linear system identification, Wiener filter, conjugate gradient, tensor decomposition, nearest Kronecker product, low-rank approximation, Mathematics, QA1-939
- Abstract
The recently developed iterative Wiener filter using a fourth-order tensorial (FOT) decomposition owns appealing performance in the identification of long length impulse responses. It relies on the nearest Kronecker product representation (with particular intrinsic symmetry features), together with low-rank approximations. Nevertheless, this new iterative filter requires matrix inversion operations when solving the Wiener–Hopf equations associated with the component filters. In this communication, we propose a computationally efficient version that relies on the conjugate gradient (CG) method for solving these sets of equations. The proposed solution involves a specific initialization of the component filters and sequential connections between the CG cycles. Different FOT-based decomposition setups are also analyzed from the point of view of the resulting parameter space. Experimental results obtained in the context of echo cancellation confirm the good behavior of the proposed approach and its superiority in comparison to the conventional Wiener filter and other decomposition-based versions.
- Published
- 2024
- Full Text
- View/download PDF
30. Efficient quaternion CUR method for low-rank approximation to quaternion matrix
- Author
- Wu, Pengling, Kou, Kit Ian, Cai, Hongmin, and Yu, Zhaoyuan
- Published
- 2024
- Full Text
- View/download PDF
31. Wiener Filter Using the Conjugate Gradient Method and a Third-Order Tensor Decomposition.
- Author
- Benesty, Jacob, Paleologu, Constantin, Stanciu, Cristian-Lucian, Costea, Ruxandra-Liana, Dogariu, Laura-Maria, and Ciochină, Silviu
- Subjects
- CONJUGATE gradient methods, MATRIX inversion, SYSTEM identification, IMPULSE response, DATA structures, LINEAR systems
- Abstract
In linear system identification problems, the Wiener filter represents a popular tool and stands as an important benchmark. Nevertheless, it faces significant challenges when identifying long-length impulse responses. In order to address the related shortcomings, the solution presented in this paper is based on a third-order tensor decomposition technique, while the resulting sets of Wiener–Hopf equations are solved with the conjugate gradient (CG) method. Due to the decomposition-based approach, the number of coefficients (i.e., the parameter space of the filter) is greatly reduced, which results in operating with smaller data structures within the algorithm. As a result, improved robustness and accuracy can be achieved, especially in harsh scenarios (e.g., limited/incomplete sets of data and/or noisy conditions). Besides, the CG-based solution avoids matrix inversion operations, together with the related numerical and complexity issues. The simulation results are obtained in a network echo cancellation scenario and support the performance gain. In this context, the proposed iterative Wiener filter outperforms the conventional benchmark and also some previously developed counterparts that use matrix inversion or second-order tensor decompositions. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
32. Efficient Blind Hyperspectral Unmixing Framework Based on CUR Decomposition (CUR-HU).
- Author
- Abdelgawad, Muhammad A. A., Cheung, Ray C. C., and Yan, Hong
- Subjects
- BLIND source separation, MATRIX decomposition, NONNEGATIVE matrices, PIXELS, DECOMPOSITION method, REMOTE sensing
- Abstract
Hyperspectral imaging captures detailed spectral data for remote sensing. However, due to the limited spatial resolution of hyperspectral sensors, each pixel of a hyperspectral image (HSI) may contain information from multiple materials. Although the hyperspectral unmixing (HU) process involves estimating endmembers, identifying pure spectral components, and estimating pixel abundances, existing algorithms mostly focus on just one or two tasks. Blind source separation (BSS) based on nonnegative matrix factorization (NMF) algorithms identify endmembers and their abundances at each pixel of HSI simultaneously. Although they perform well, the factorization results are unstable, require high computational costs, and are difficult to interpret from the original HSI. CUR matrix decomposition selects specific columns and rows from a dataset to represent it as a product of three small submatrices, resulting in interpretable low-rank factorization. In this paper, we propose a new blind HU framework based on CUR factorization called CUR-HU that performs the entire HU process by exploiting the low-rank structure of given HSIs. CUR-HU incorporates several techniques to perform the HU process with a performance comparable to state-of-the-art methods but with higher computational efficiency. We adopt a deterministic sampling method to select the most informative pixels and spectrum components in HSIs. We use an incremental QR decomposition method to reduce computation complexity and estimate the number of endmembers. Various experiments on synthetic and real HSIs are conducted to evaluate the performance of CUR-HU. CUR-HU performs comparably to state-of-the-art methods for estimating the number of endmembers and abundance maps, but it outperforms other methods for estimating the endmembers and the computational efficiency. It has a 9.4 to 249.5 times speedup over different methods for different real HSIs. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
33. MAKING THE NYSTRÖM METHOD HIGHLY ACCURATE FOR LOW-RANK APPROXIMATIONS.
- Author
- JIANLIN XIA
- Subjects
- SEMIDEFINITE programming, SCHUR complement, SET functions, KERNEL functions, SINGULAR value decomposition
- Abstract
The Nyström method is a convenient strategy to obtain low-rank approximations to kernel matrices in nearly linear complexity. Existing studies typically use the method to approximate positive semidefinite matrices with low or modest accuracies. In this work, we propose a series of heuristic strategies to make the Nyström method reach high accuracies for nonsymmetric and/or rectangular matrices. The resulting methods (called high-accuracy Nyström methods) treat the Nyström method and a skinny rank-revealing factorization as a fast pivoting strategy in a progressive alternating direction refinement process. Two refinement mechanisms are used: alternating the row and column pivoting starting from a small set of randomly chosen columns, and adaptively increasing the number of samples until a desired rank or accuracy is reached. A fast subset update strategy based on the progressive sampling of Schur complements is further proposed to accelerate the refinement process. Efficient randomized accuracy control is also provided. Relevant accuracy and singular value analysis is given to support some of the heuristics. Extensive tests with various kernel functions and data sets show how the methods can quickly reach prespecified high accuracies in practice, sometimes with quality close to SVDs, using only small numbers of progressive sampling steps. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
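For context, the classical single-pass Nyström approximation that the entry above starts from and then refines is sketched below; the column indices idx are assumed to be given (for example, sampled uniformly at random), and the high-accuracy refinement loop from the paper is not included.

```python
import numpy as np

def nystrom_psd(K, idx):
    """Textbook Nystrom approximation of a PSD kernel matrix from a column subset:
    K is approximated by C @ pinv(W) @ C.T, where C holds the sampled columns and
    W is the intersection block."""
    C = K[:, idx]                       # sampled columns
    W = K[np.ix_(idx, idx)]             # intersection block
    return C @ np.linalg.pinv(W) @ C.T
```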
34. A RESTRICTED SVD TYPE CUR DECOMPOSITION FOR MATRIX TRIPLETS.
- Author
- GIDISU, PERFECT Y. and HOCHSTENBACH, MICHIEL E.
- Subjects
- MATRIX decomposition, APPROXIMATION error, DATA reduction, FACTORIZATION, SINGULAR value decomposition, SUBSET selection, MATRICES (Mathematics)
- Abstract
We present a new restricted SVD-based CUR (RSVD-CUR) factorization for matrix triplets (A, B, G) that aims to extract meaningful information by providing a low-rank approximation of the three matrices using a subset of their rows and columns. The proposed method utilizes the discrete empirical interpolation method (DEIM) to select the subset of rows and columns from the orthogonal and nonsingular matrices obtained through an RSVD of the matrix triplet. We explore the relationships between a DEIM type RSVD-CUR factorization, a DEIM type CUR factorization, and a DEIM type generalized CUR decomposition and provide an error analysis that establishes the accuracy of the RSVD-CUR decomposition within a factor of the approximation error of the RSVD of the given matrices. The RSVD-CUR factorization can be used in applications that require approximating one data matrix relative to two other given matrices. We discuss two such applications, namely multiview dimension reduction and data perturbation problems where a correlated noise matrix is added to the input data matrix. Our numerical experiments demonstrate the advantages of the proposed method over the standard CUR approximation in these scenarios. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
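The DEIM index selection used by the entry above is a standard routine; a plain (non-RSVD) DEIM-type CUR built from it is sketched below as a reference point. The function names are illustrative, and this is not the matrix-triplet RSVD-CUR factorization itself.

```python
import numpy as np

def deim_indices(V):
    """DEIM index selection from the columns of V (e.g., leading singular vectors)."""
    n, k = V.shape
    idx = [int(np.argmax(np.abs(V[:, 0])))]
    for j in range(1, k):
        # Interpolate the j-th column at the already selected indices,
        # then pick the index of the largest residual entry.
        c = np.linalg.solve(V[np.ix_(idx, range(j))], V[idx, j])
        r = V[:, j] - V[:, :j] @ c
        idx.append(int(np.argmax(np.abs(r))))
    return np.array(idx)

def deim_cur(A, k):
    """Plain DEIM-type CUR, A ~ C @ U @ R, using DEIM on the leading singular vectors."""
    U_, _, Vt = np.linalg.svd(A, full_matrices=False)
    rows = deim_indices(U_[:, :k])
    cols = deim_indices(Vt[:k, :].T)
    C, R = A[:, cols], A[rows, :]
    U = np.linalg.pinv(C) @ A @ np.linalg.pinv(R)
    return C, U, R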
35. PNSP: Overcoming catastrophic forgetting using Primary Null Space Projection in continual learning.
- Author
- Zhou, DaiLiang and Song, YongHong
- Subjects
- LOW-rank matrices, APPROXIMATION algorithms, COVARIANCE matrices, NETWORK analysis (Planning), COGNITIVE computing
- Abstract
Continual Learning (CL) plays a crucial role in enhancing learning performance for both new and previous tasks in continuous data streams, thus contributing to the advancement of cognitive computing. However, CL faces a fundamental challenge known as the stability-plasticity quandary. In this research, we present an innovative and effective CL algorithm called Primary Null Space Projection (PNSP) to strike a balance between network plasticity and stability. PNSP consists of three main components. Firstly, it leverages the NSP-LRA algorithm to project the gradient of network parameters from previous tasks into a meticulously designed null space. NSP-LRA harnesses high-dimensional geometric information extracted from the feature covariance matrix through low-rank approximation algorithm to obtain the basis of null space dynamically. This process constructs an innovation null space and ensures the continuous updating of orthonormal bases to accommodate changes in the input data. Secondly, we propose a Consistency-guided Task-specific Feature Learning (CTFL) mechanism to tackle the issue of catastrophic forgetting and facilitate continual learning. CTFL achieves this by aligning feature vectors and maintaining consistent feature learning directions, thereby preventing the loss of previously acquired knowledge. Lastly, we introduce Label Guided Self-Distillation (LGSD), a technique that utilizes true labels to guide the distillation process and incorporates a dynamic temperature mechanism to enhance performance. To evaluate the effectiveness of our proposed method, we conduct experiments on the CIFAR100 and TinyImageNet datasets. The results demonstrate significant improvements over state-of-the-art methods. We have made the implementation code of our approach available for reference. • Proposal of PNSP method: Addresses stability-plasticity dilemma in continual learning. • NSP-LRA: Dynamically constructs null space for minimizing interference. • CTFL mechanism: Ensures consistent feature learning directions. • LGSD technique: Incorporates true labels and dynamic temperature for enhanced distillation. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
36. Low-rank approximation-based bidirectional linear discriminant analysis for image data.
- Author
- Chen, Xiuhong and Chen, Tong
- Subjects
- FISHER discriminant analysis, IMAGE analysis, LOW-rank matrices, OPTIMIZATION algorithms, DATA analysis
- Abstract
Dimensionality reduction methods for images directly without matrix-to-vector conversion have been widely concerned and achieved good classification results, especially for face recognition problem. However, the existing methods work in the column or/and row direction of images respectively, then project the given image onto the obtained left and right projective matrices to obtain the corresponding feature matrix. On the other hand, the low-rank approximation of matrix (LRAM) is a low-rank representation, which can approximate the original matrix as much as possible through a low-rank matrix and preserve the features in the original data. In this paper, we propose a low-rank approximation-based two-directional linear discriminant analysis method for image data, in which the left and right projection matrices are extracted simultaneously from the image data directly and the low-rank transformed feature matrices of training images are obtained by low-rank approximation of matrix. By minimizing the total reconstruction error and the within-class scatter of the transformed feature matrices and maximizing the between-class scatter of the transformed feature matrices, the proposed method can avoid huge feature matrix problem in some existing methods, and make full use of the discriminant and descriptive information of the images. An efficient iterative optimization algorithm is also devised to solve the proposed model. Experiment results on several datasets demonstrate the effectiveness and superiority of the proposed method. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
37. Pass-efficient truncated UTV for low-rank approximations.
- Author
- Ji, Ying, Feng, Yuehua, and Dong, Yongxin
- Abstract
We propose the pass-efficient truncated UTV algorithm, a faster variant of the TUXV algorithm for low-rank approximations. Compared with the TUXV algorithm, data transfer and the complexity of the proposed algorithm are reduced. Therefore, our algorithm is suitable for large matrices stored out of memory or generated by streaming data. We also develop residual error upper bounds and singular value approximation error bounds for the pass-efficient truncated UTV algorithm. Numerical experiments are reported to demonstrate the efficiency and effectiveness of our algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
38. UTV decomposition of dual matrices and its applications.
- Author
- Xu, Renjie, Wei, Tong, Wei, Yimin, and Yan, Hong
- Subjects
- COGNITIVE neuroscience, MATRIX decomposition, FUNCTIONAL magnetic resonance imaging, SYLVESTER matrix equations, FACTORIZATION
- Abstract
Matrix factorization in the context of dual numbers has found applications, in recent years, in fields such as kinematics and computer graphics. In this paper, we develop an efficient approach for handling large-scale data low-rank approximation problems using the UTV decomposition of dual matrices (DUTV). Theoretically, we propose an explicit expression for the DUTV and provide necessary and sufficient conditions for its existence. During this process, we also discovered that the general low-rank model for dual matrices can be solved by the Sylvester equation. In numerical experiments, the DUTV algorithm outperforms the dual matrix SVD algorithm in terms of speed and maintains effective performance in wave recognition. Subsequently, we utilize the DUTV algorithm to validate brain functional circuits in large-scale task-state functional magnetic resonance imaging data. Successfully identifying three types of wave signals, the DUTV method provides substantial empirical evidence for cognitive neuroscience theories. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
39. Hyperspectral Image Denoising Based on Principal-Third-Order-Moment Analysis.
- Author
- Li, Shouzhi, Geng, Xiurui, Zhu, Liangliang, Ji, Luyan, and Zhao, Yongchao
- Subjects
- IMAGE denoising, IMAGE analysis, NOISE
- Abstract
Denoising serves as a critical preprocessing step for the subsequent analysis of the hyperspectral image (HSI). Due to their high computational efficiency, low-rank-based denoising methods that project the noisy HSI into a low-dimensional subspace identified by certain criteria have gained widespread use. However, methods employing second-order statistics as criteria often struggle to retain the signal of the small targets in the denoising results. Other methods utilizing high-order statistics encounter difficulties in effectively suppressing noise. To tackle these challenges, we delve into a novel criterion to determine the projection subspace, and propose an innovative low-rank-based method that successfully preserves the spectral characteristic of small targets while significantly reducing noise. The experimental results on the synthetic and real datasets demonstrate the effectiveness of the proposed method, in terms of both small-target preservation and noise reduction. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
40. Optimizing network robustness via Krylov subspaces.
- Author
- Massei, Stefano and Tudisco, Francesco
- Subjects
- KRYLOV subspace, INTERIOR-point methods, MATRIX functions
- Abstract
We consider the problem of attaining either the maximal increase or reduction of the robustness of a complex network by means of a bounded modification of a subset of the edge weights. We propose two novel strategies combining Krylov subspace approximations with a greedy scheme and an interior point method employing either the Hessian or its approximation computed via the limited-memory Broyden-Fletcher-Goldfarb-Shanno algorithm (L-BFGS). The paper discusses the computational and modeling aspects of our methodology and illustrates the various optimization problems on networks that can be addressed within the proposed framework. Finally, in the numerical experiments we compare the performances of our algorithms with state-of-the-art techniques on synthetic and real-world networks. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
41. EFFICIENT ERROR AND VARIANCE ESTIMATION FOR RANDOMIZED MATRIX COMPUTATIONS.
- Author
- EPPERLY, ETHAN N. and TROPP, JOEL A.
- Subjects
- APPROXIMATION algorithms, SCIENTIFIC computing, RAPID diagnostic tests, MATRICES (Mathematics), POCKETKNIVES, MACHINE learning
- Abstract
Randomized matrix algorithms have become workhorse tools in scientific computing and machine learning. To use these algorithms safely in applications, they should be coupled with posterior error estimates to assess the quality of the output. To meet this need, this paper proposes two diagnostics: a leave-one-out error estimator for randomized low-rank approximations and a jackknife resampling method to estimate the variance of the output of a randomized matrix computation. Both of these diagnostics are rapid to compute for randomized low-rank approximation algorithms such as the randomized SVD and randomized Nyström approximation, and they provide useful information that can be used to assess the quality of the computed output and guide algorithmic parameter choices. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
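The diagnostics described in the entry above target the output of randomized low-rank approximation algorithms; for reference, a textbook randomized SVD of the kind they would be attached to is sketched below (the leave-one-out and jackknife estimators themselves are not reproduced here).

```python
import numpy as np

def randomized_svd(A, rank, oversample=10, rng=None):
    """Basic randomized SVD: sketch the range of A with a Gaussian test matrix,
    orthonormalize, and compute an exact SVD of the small projected matrix."""
    rng = np.random.default_rng(rng)
    m, n = A.shape
    Omega = rng.standard_normal((n, rank + oversample))
    Q, _ = np.linalg.qr(A @ Omega)             # orthonormal basis for the range sketch
    B = Q.T @ A                                # small projected matrix
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    U = Q @ Ub
    return U[:, :rank], s[:rank], Vt[:rank, :]
```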
42. SINGULAR VALUE DECOMPOSITION OF DUAL MATRICES AND ITS APPLICATION TO TRAVELING WAVE IDENTIFICATION IN THE BRAIN.
- Author
- TONG WEI, WEIYANG DING, and YIMIN WEI
- Subjects
- SINGULAR value decomposition, COMPLEX matrices, MATRIX decomposition, BRAIN waves, FUNCTIONAL magnetic resonance imaging, NUMBER systems, VIDEO monitors
- Abstract
Matrix factorizations in dual number algebra, a hypercomplex number system, have been applied to kinematics, spatial mechanisms, and other fields recently. We develop an approach to identify spatiotemporal patterns in the brain such as traveling waves using the singular value decomposition (SVD) of dual matrices in this paper. Theoretically, we propose the compact dual singular value decomposition (CDSVD) of dual complex matrices with explicit expressions as well as a necessary and sufficient condition for its existence. Furthermore, based on the CDSVD, we report on the optimal solution to the best rank-k approximation under a newly defined quasi-metric in the dual complex number system. The CDSVD is also related to the dual Moore–Penrose generalized inverse. Numerically, comparisons with other available algorithms are conducted, which indicate lower computational costs of our proposed CDSVD. In addition, the infinitesimal part of the CDSVD can identify the true rank of the original matrix from the noise-added matrix, but the classical SVD cannot. Next, we employ experiments on simulated time-series data and a road monitoring video to demonstrate the beneficial effect of the infinitesimal parts of dual matrices in spatiotemporal pattern identification. Finally, we apply this approach to the large-scale brain functional magnetic resonance imaging data, identify three kinds of traveling waves, and further validate the consistency between our analytical results and the current knowledge of cerebral cortex function. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
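To make the dual-matrix setting of the entry above concrete, here is a hedged sketch, not the CDSVD itself: a dual matrix is written as A = A_std + eps*A_inf with eps^2 = 0, and for distinct nonzero singular values of A_std, first-order perturbation theory gives infinitesimal singular values sigma_inf_i = u_i^T A_inf v_i. The function name and test data are assumptions.

```python
# Hedged sketch: first-order ("infinitesimal") singular values of a dual matrix
# A_std + eps*A_inf, valid for distinct nonzero singular values of A_std.
# An illustration of the idea only, not the paper's CDSVD.
import numpy as np

def dual_singular_values(A_std, A_inf):
    U, s, Vt = np.linalg.svd(A_std, full_matrices=False)
    s_inf = np.diag(U.T @ A_inf @ Vt.T)       # sigma_inf_i = u_i^T A_inf v_i
    return s, s_inf

rng = np.random.default_rng(0)
A_std = rng.standard_normal((50, 8)) @ rng.standard_normal((8, 40))   # rank-8 standard part
A_inf = rng.standard_normal((50, 40))                                 # infinitesimal part
s, s_inf = dual_singular_values(A_std, A_inf)
print(s[:10], s_inf[:10])
```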
43. XTRACE: MAKING THE MOST OF EVERY SAMPLE IN STOCHASTIC TRACE ESTIMATION.
- Author
-
EPPERLY, ETHAN N., TROPP, JOEL A., and WEBBER, ROBERT J.
- Subjects
- *
BUDGET , *ALGORITHMS - Abstract
The implicit trace estimation problem asks for an approximation of the trace of a square matrix, accessed via matrix-vector products (matvecs). This paper designs new randomized algorithms, XTrace and XNysTrace, for the trace estimation problem by exploiting both variance reduction and the exchangeability principle. For a fixed budget of matvecs, numerical experiments show that the new methods can achieve errors that are orders of magnitude smaller than existing algorithms, such as the Girard–Hutchinson estimator or the Hutch++ estimator. A theoretical analysis confirms the benefits by offering a precise description of the performance of these algorithms as a function of the spectrum of the input matrix. The paper also develops an exchangeable estimator, XDiag, for approximating the diagonal of a square matrix using matvecs. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
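For context on the baselines named in the entry above, the sketch below implements the Girard–Hutchinson estimator and a basic Hutch++ variant from matrix-vector products only; XTrace and XNysTrace themselves are more involved and are not reproduced here. The budget split and the test matrix are assumptions.

```python
# Hedged sketch: two baseline matvec-only trace estimators (Girard-Hutchinson
# and a basic Hutch++). XTrace/XNysTrace are not reproduced here.
import numpy as np

def girard_hutchinson(matvec, n, m, rng=None):
    rng = np.random.default_rng(rng)
    G = rng.standard_normal((n, m))
    return np.mean(np.einsum("ij,ij->j", G, matvec(G)))   # mean of g^T A g

def hutch_pp(matvec, n, m, rng=None):
    rng = np.random.default_rng(rng)
    k = m // 3                                             # split the matvec budget
    Q, _ = np.linalg.qr(matvec(rng.standard_normal((n, k))))
    t_lowrank = np.trace(Q.T @ matvec(Q))                  # exact trace on the captured range
    G = rng.standard_normal((n, m - 2 * k))
    G = G - Q @ (Q.T @ G)                                  # deflate the captured range
    return t_lowrank + np.mean(np.einsum("ij,ij->j", G, matvec(G)))

rng = np.random.default_rng(0)
B = rng.standard_normal((300, 300))
A = B @ B.T                                                # SPD test matrix
mv = lambda X: A @ X
print(np.trace(A), girard_hutchinson(mv, 300, 60, 1), hutch_pp(mv, 300, 60, 2))
```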
44. Low-Rank Approximation of Matrices for PMI-Based Word Embeddings
- Author
-
Sorokina, Alena, Karipbayeva, Aidana, Assylbekov, Zhenisbek, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, and Gelbukh, Alexander, editor
- Published
- 2023
- Full Text
- View/download PDF
45. Data‐driven linear complexity low‐rank approximation of general kernel matrices: A geometric approach.
- Author
-
Cai, Difeng, Chow, Edmond, and Xi, Yuanzhe
- Subjects
- *
GEOMETRIC approach , *KRIGING , *MATRICES (Mathematics) , *POINT set theory - Abstract
Summary: A general, rectangular kernel matrix may be defined as $K_{ij}=\kappa(x_i,y_j)$, where $\kappa(x,y)$ is a kernel function and where $X=\{x_i\}_{i=1}^m$ and $Y=\{y_j\}_{j=1}^n$ are two sets of points. In this paper, we seek a low-rank approximation to a kernel matrix where the point sets $X$ and $Y$ are large and arbitrarily distributed, for example far away from each other, "intermingled", or identical. Such rectangular kernel matrices arise, for example, in Gaussian process regression, where $X$ corresponds to the training data and $Y$ to the test data; in this case the points are often high-dimensional. Since the point sets are large, we must exploit the fact that the matrix arises from a kernel function and avoid forming it explicitly, which rules out most algebraic techniques. In particular, we seek methods that scale linearly or nearly linearly with the size of the data for a fixed approximation rank. The main idea of this paper is to geometrically select appropriate subsets of points to construct the low-rank approximation, and an analysis in the paper guides how this selection should be performed. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
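A hedged sketch of the general idea in the entry above, not the paper's method: select landmark points geometrically (here, farthest-point sampling, an assumption) and form a Nystrom-style cross approximation K ~ K(X,Z) K(Z,Z)^+ K(Z,Y), touching only thin kernel blocks. The kernel, point sets, and rank are made up.

```python
# Hedged sketch: farthest-point sampling (an assumed selection rule) plus a
# Nystrom-style cross approximation K ~ K(X,Z) K(Z,Z)^+ K(Z,Y). Only thin
# kernel blocks are formed, so the cost is linear in the number of points
# for a fixed rank. Not the paper's selection scheme or analysis.
import numpy as np

def gaussian_kernel(A, B, ell=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * ell**2))

def farthest_point_sample(P, r, seed=0):
    rng = np.random.default_rng(seed)
    idx = [int(rng.integers(len(P)))]
    d = np.linalg.norm(P - P[idx[0]], axis=1)
    for _ in range(r - 1):
        idx.append(int(np.argmax(d)))
        d = np.minimum(d, np.linalg.norm(P - P[idx[-1]], axis=1))
    return np.array(idx)

rng = np.random.default_rng(0)
X = rng.random((2000, 3))
Y = rng.random((1500, 3)) + 2.0                        # well-separated point sets
r = 40
Z = np.vstack([X[farthest_point_sample(X, r // 2, 1)],
               Y[farthest_point_sample(Y, r // 2, 2)]])   # landmark points
KXZ, KZZ, KZY = gaussian_kernel(X, Z), gaussian_kernel(Z, Z), gaussian_kernel(Z, Y)
K_approx = KXZ @ np.linalg.pinv(KZZ) @ KZY             # rank <= r approximation of K(X, Y)
```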
46. Low-rank flat-field correction for artifact reduction in spectral computed tomography.
- Author
-
Bangsgaard, Katrine Ottesen, Burca, Genoveva, Ametova, Evelina, Andersen, Martin Skovgaard, and Jørgensen, Jakob Sauer
- Subjects
- *
COMPUTED tomography , *LOW-rank matrices , *SPECTRAL imaging , *REAR-screen projection , *SIGNAL-to-noise ratio , *DUAL energy CT (Tomography) - Abstract
Spectral computed tomography has received considerable interest in recent years since spectral measurements contain much richer information about the object of interest. In spectral computed tomography, we are interested in the energy channel-wise reconstructions of the object. However, such reconstructions suffer from a low signal-to-noise ratio and share the challenges of conventional low-dose computed tomography such as ring artifacts. Ring artifacts arise from errors in the flat fields and can significantly degrade the quality of the reconstruction. We propose an extended flat-field model that exploits high correlation in the spectral flat fields to reduce ring artifacts in channel-wise reconstructions. The extended model relies on the assumption that the spectral flat fields can be well-approximated by a low-rank matrix. Our proposed model works directly on the spectral flat fields and can be combined with any existing reconstruction model, e.g. filtered back projection and iterative methods. The proposed model is validated on a neutron data set. The results show that our method successfully diminishes ring artifacts and improves the quality of the reconstructions. Moreover, the results indicate that our method is robust; it only needs a single spectral flat-field image, whereas existing methods need multiple spectral flat-field images to reach a similar level of ring reduction. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
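A minimal hedged illustration of the low-rank flat-field assumption in the entry above, not the paper's reconstruction model: the spectral flat fields are arranged as a pixels-by-channels matrix, denoised by a truncated SVD, and used for channel-wise flat-field correction. All data shapes, count levels, and the rank are assumptions.

```python
# Hedged sketch of the low-rank flat-field assumption: spectral flat fields as
# a (pixels x channels) matrix, denoised by truncated SVD, then used for
# channel-wise flat-field correction. Shapes, count levels, and rank are assumed.
import numpy as np

def lowrank_flatfield(F, rank=1):
    U, s, Vt = np.linalg.svd(F, full_matrices=False)   # F: (n_pixels, n_channels)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank]

rng = np.random.default_rng(0)
n_pix, n_chan = 4096, 64
true_F = np.outer(rng.random(n_pix) + 0.5, rng.random(n_chan) + 0.5)  # highly correlated channels
F_noisy = rng.poisson(200 * true_F) / 200.0                           # noisy measured flat fields
F_lr = lowrank_flatfield(F_noisy, rank=1)

projections = rng.poisson(100 * 0.7 * true_F) / 100.0                 # toy sample measurement
corrected = -np.log(np.clip(projections / F_lr, 1e-6, None))          # channel-wise correction
```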
47. QR decomposition based low rank approximation for Gaussian process regression.
- Author
-
Thomas, Emil and Sarin, Vivek
- Subjects
KRIGING, LOW-rank matrices, APPROXIMATION algorithms, MATRIX decomposition - Abstract
This paper presents a QR decomposition based low-rank approximation algorithm for training and prediction in Gaussian process regression. Conventional low-rank methods like FITC, Nyström, etc., rely on low-rank approximations to the full kernel matrix that are derived from a set of representative data points. Training with FITC involves the selection of these representative points and is computationally intensive when the dimension of the dataset is large. Prediction accuracy of Nyström suffers when the number of representative points is small or when the length scale is small. The algorithm proposed here uses the thin QR decomposition of the low-rank matrix used in the Nyström approximation. We show that the marginal likelihood and its derivatives needed for training can be split into the subspace belonging to that approximation and its orthogonal complement. During prediction, we further restrict the target vectors into this subspace and thus eliminate the numerical error caused by very low noise variance. Use of the QR factorization improves the training and prediction performance. Experiments on real and synthetic data show that our approach is more accurate and faster than conventional methods. Our algorithm performs well at low length scales and achieves the prediction accuracy comparable to the full kernel matrix approach even for low rank approximations, without the need to optimize the representative points. This results in a simpler and faster approximation that could be used to scale Gaussian process regression to large datasets. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
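A hedged sketch related to the entry above, under simplifying assumptions: it builds the standard Nystrom low-rank factor for a GP, uses a thin QR factorization of the stacked factor to solve the regularized system stably via the Woodbury identity, and computes a predictive mean. It does not reproduce the paper's likelihood splitting or subspace-restricted prediction; the kernel, inducing points, and noise level are made up.

```python
# Hedged sketch: Nystrom low-rank factor for GP regression, thin QR of the
# stacked factor for a stable Woodbury solve, and a predictive mean. Not the
# paper's exact training/prediction scheme.
import numpy as np
from scipy.linalg import cholesky, qr, solve_triangular

def rbf(A, B, ell=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * ell**2))

rng = np.random.default_rng(0)
X = rng.random((1000, 2))
y = np.sin(4 * X[:, 0]) + 0.05 * rng.standard_normal(1000)
Xs = rng.random((50, 2))                                  # test inputs
Z = X[rng.choice(1000, 30, replace=False)]                # representative (inducing) points
sigma2 = 0.05 ** 2

Kmm = rbf(Z, Z) + 1e-10 * np.eye(len(Z))
Knm = rbf(X, Z)
Rmm = cholesky(Kmm)                                       # Kmm = Rmm^T Rmm (upper factor)
L = solve_triangular(Rmm, Knm.T, trans="T").T             # Nystrom factor: K ~ L L^T

_, R = qr(np.vstack([L, np.sqrt(sigma2) * np.eye(L.shape[1])]), mode="economic")
# R^T R = L^T L + sigma2 I; Woodbury: (L L^T + sigma2 I)^{-1} y = (y - L z) / sigma2
z = solve_triangular(R, solve_triangular(R, L.T @ y, trans="T"), trans="N")
alpha = (y - L @ z) / sigma2
mean = rbf(Xs, Z) @ solve_triangular(Rmm, solve_triangular(Rmm, Knm.T @ alpha, trans="T"), trans="N")
print(mean[:5])
```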
48. Lower Bounds for Column Matrix Approximations.
- Author
-
Osinsky, A.
- Subjects
- *
PSEUDOINVERSES , *SUBSET selection - Abstract
We show a connection between lower and upper bounds on column matrix approximation accuracy and bounds on the norms of the pseudoinverses of submatrices of orthogonal matrices. This connection is exploited to derive lower bounds on column approximation accuracy in the spectral and Frobenius norms. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
49. Numerical methods for solving large-scale systems of differential equations.
- Author
-
Sadek, Lakhlifa and Talibi Alaoui, Hamad
- Abstract
In this paper, we propose two new methods for solving large-scale systems of differential equations, both based on Krylov subspace techniques. In the first, the exact solution is approximated using a projection technique for the matrix exponential. In the second, we obtain a problem of small size by projecting the initial problem onto a Krylov subspace, and then solve it with schemes such as Rosenbrock and BDF methods. Some theoretical results are presented, including an exact expression of the residual used as a stopping criterion. We give an error expression and report numerical results comparing the two methods in terms of runtime, and we also compare the two approaches. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
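The entry above builds on Krylov projection for large ODE systems. Below is a hedged sketch of the basic building block, an Arnoldi approximation of the matrix-exponential action exp(t*A)v, compared against SciPy's expm_multiply; it is not the paper's two methods, and the test matrix and Krylov dimension are assumptions.

```python
# Hedged sketch: Arnoldi (Krylov) approximation of the action exp(t*A) v for a
# large sparse A, compared against scipy.sparse.linalg.expm_multiply. The test
# matrix and the Krylov dimension are assumptions.
import numpy as np
from scipy import sparse
from scipy.linalg import expm
from scipy.sparse.linalg import expm_multiply

def arnoldi_expm_action(A, v, t, m=30):
    n = len(v)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    beta = np.linalg.norm(v)
    V[:, 0] = v / beta
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):                         # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:                        # happy breakdown
            m = j + 1
            break
        V[:, j + 1] = w / H[j + 1, j]
    e1 = np.zeros(m)
    e1[0] = 1.0
    return beta * V[:, :m] @ (expm(t * H[:m, :m]) @ e1)

n = 2000
A = sparse.random(n, n, density=0.002, format="csr", random_state=0) - 2.0 * sparse.eye(n)
v = np.ones(n)
y_krylov = arnoldi_expm_action(A, v, t=0.1, m=40)
y_ref = expm_multiply(0.1 * A, v)
print("difference:", np.linalg.norm(y_krylov - y_ref))
```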
50. Resonance Calculation Method Based on Energy Spectrum Using Reduced Order Model
- Author
-
YU Jialei, ZHANG Qian, ZHANG Jinchao, and ZHAO Qiang
- Subjects
resonance calculation, effective cross section, low-rank approximation, reduced-order model, Nuclear engineering. Atomic power, TK9001-9401, Nuclear and particle physics. Atomic energy. Radioactivity, QC770-798 - Abstract
Resonance calculation is one of the most important and difficult parts of core analysis and can dominate the accuracy of the core analysis. There are three main methods for resonance calculation: the ultra-fine group (UFG) method, the equivalence method, and the subgroup method, each with advantages and disadvantages. The UFG method is suitable for rigorous resonance calculation of various reactors, but its efficiency is very low for complex geometries and large-scale cores. Equivalence theory has high computational efficiency and is widely used in core calculations, but it relies on many assumptions and approximations, which limit its accuracy and applicability. Subgroup theory can efficiently calculate resonance self-shielding effects for arbitrary geometries, but its potential drawbacks are the limited treatment of the resonance interference effect and of nonuniform temperature distributions. In short, conventional methods have difficulty balancing accuracy and efficiency. In this paper, generalized group condensation theory and a reduced-order model are used to extract the features of the energy spectra arising in resonance calculation. Based on ultra-fine group energy spectra computed under 40 different background cross sections, orthogonal basis functions related to the energy spectrum characteristics are generated using singular value decomposition and low-rank approximation. Based on generalized group condensation theory, the neutron transport equation is solved in this orthogonal basis. By solving for the broad-group angular flux expansion coefficients weighted by the orthogonal basis functions, the ultra-fine group energy spectrum is reconstructed, and the effective cross section is finally obtained by ultra-fine group energy condensation. Following the JAERI benchmark, UO2 fuel cells and MOX fuel cells with different enrichments are designed as test problems. Numerical results show that the maximum relative error of the reconstructed effective cross sections is below 2% for each problem. The impact of the orthogonal-basis order on the accuracy of the effective cross sections and keff is analyzed: for fuel cells with different enrichments, the minimum order required to achieve sufficient accuracy differs, and the computation time increases linearly with the order. It is also found that when the cumulative contribution of the singular values reaches 99.9%, the fine energy spectrum is described accurately. These results show that the resonance calculation method based on the energy spectrum reduction model (RESR) can effectively predict resonance self-shielding cross sections, and its computational efficiency has certain advantages.
- Published
- 2023
- Full Text
- View/download PDF
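A hedged numerical sketch of the spectrum-reduction idea in the entry above, not the RESR implementation: spectra computed at different background cross sections are stacked into a matrix, an orthogonal basis is extracted with the SVD and truncated at a 99.9% cumulative singular-value contribution, and a new spectrum is reconstructed from its expansion coefficients. All spectra here are synthetic.

```python
# Hedged sketch, not the RESR implementation: SVD-based orthogonal basis of
# synthetic "energy spectra", truncation at 99.9% cumulative singular-value
# contribution, and reconstruction of a new spectrum in that basis.
import numpy as np

rng = np.random.default_rng(0)
n_groups, n_cases = 5000, 40
energy = np.linspace(0.0, 1.0, n_groups)
# Synthetic spectra: smooth baseline plus a few shared resonance dips.
dips = np.exp(-((energy[:, None] - np.array([0.2, 0.5, 0.8])) / 0.01) ** 2)
weights = rng.random((3, n_cases))
spectra = 1.0 + energy[:, None] - dips @ weights              # (n_groups, n_cases)

U, s, Vt = np.linalg.svd(spectra, full_matrices=False)
r = int(np.searchsorted(np.cumsum(s) / s.sum(), 0.999)) + 1   # 99.9% contribution
basis = U[:, :r]                                              # orthogonal basis functions

phi_new = 1.0 + energy - dips @ rng.random(3)                 # spectrum not in the training set
coeffs = basis.T @ phi_new                                    # expansion coefficients
phi_rec = basis @ coeffs                                      # reconstructed spectrum
print("order:", r, "max relative error:", np.max(np.abs(phi_rec - phi_new) / np.abs(phi_new)))
```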