
Joint direct and transposed sparse matrix‐vector multiplication for multithreaded CPUs.

Authors :
Kozický, Claudio
Šimeček, Ivan
Source :
Concurrency & Computation: Practice & Experience; 7/10/2021, Vol. 33 Issue 13, p1-26, 26p
Publication Year :
2021

Abstract

Repeatedly performing sparse matrix‐vector multiplication (SpMV) followed by transposed sparse matrix‐vector multiplication (SpMTV) with the same matrix is part of several algorithms, for example, the Lanczos biorthogonalization algorithm and the biconjugate gradient method. Such algorithms can benefit from combining parallel SpMV and SpMTV into a single operation we call joint direct and transposed sparse matrix‐vector multiplication (SpMMTV). In this article, we present a parallel SpMMTV algorithm for shared‐memory CPUs. The algorithm uses a sparse matrix format that divides the stored matrix into sparse matrix blocks and compresses the row and column indices of the matrix. This sparse matrix format can also be used for SpMV, SpMTV, and similar sparse matrix‐vector operations. We expand upon existing research by suggesting new variants of the parallel SpMMTV algorithm and by extending the algorithm to efficiently support symmetric matrices. We compare the performance of the presented parallel SpMMTV algorithm with alternative approaches, which use state‐of‐the‐art sparse matrix formats and libraries, using sparse matrices from real‐world applications. The performance results indicate that the median performance of our proposed parallel SpMMTV algorithm is up to 45% higher than that of the alternative approaches.
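The core idea behind a joint SpMMTV can be illustrated with a minimal single-threaded sketch: one pass over a matrix stored in the common CSR format computes both y = Ax and z = Aᵀw, reading each nonzero only once. This is only a conceptual illustration of the fused operation; it is not the authors' parallel algorithm and does not use their blocked, index-compressed format. The function name and signature are hypothetical.

```python
def spmmtv_csr(n_rows, n_cols, row_ptr, col_idx, vals, x, w):
    """Hypothetical sketch: compute y = A*x and z = A^T*w in one
    traversal of a CSR matrix, touching each nonzero exactly once."""
    y = [0.0] * n_rows  # result of the direct product A*x
    z = [0.0] * n_cols  # result of the transposed product A^T*w
    for i in range(n_rows):
        wi = w[i]
        acc = 0.0
        for k in range(row_ptr[i], row_ptr[i + 1]):
            j = col_idx[k]
            v = vals[k]
            acc += v * x[j]   # contribution to the direct product
            z[j] += v * wi    # contribution to the transposed product
        y[i] = acc
    return y, z

# Example with A = [[1, 0, 2], [0, 3, 0]] in CSR form:
y, z = spmmtv_csr(2, 3,
                  row_ptr=[0, 2, 3], col_idx=[0, 2, 1],
                  vals=[1.0, 2.0, 3.0],
                  x=[1.0, 1.0, 1.0], w=[1.0, 2.0])
# y = A x   = [3.0, 3.0]
# z = A^T w = [1.0, 6.0, 2.0]
```

Fusing the two products in this way halves the number of times the matrix's nonzeros are streamed from memory compared with separate SpMV and SpMTV calls, which is the main motivation the abstract describes; a parallel version additionally has to coordinate the scattered updates to z across threads.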

Details

Language :
English
ISSN :
1532-0626
Volume :
33
Issue :
13
Database :
Complementary Index
Journal :
Concurrency & Computation: Practice & Experience
Publication Type :
Academic Journal
Accession number :
150910182
Full Text :
https://doi.org/10.1002/cpe.6236