
Leveraging index compression techniques to optimize the use of co-processors

Authors :
Manuel Freire
Raul Marichal
Agustin Martinez
Daniel Padron
Ernesto Dufrechou
Pablo Ezzatti
Source :
Journal of Computer Science and Technology, Vol 24, Iss 1 (2024)
Publication Year :
2024
Publisher :
Postgraduate Office, School of Computer Science, Universidad Nacional de La Plata, 2024.

Abstract

The significant presence that many-core devices like GPUs have these days, and their enormous computational power, motivates the study of sparse matrix operations on this hardware. The essential sparse kernels in scientific computing, such as the sparse matrix-vector multiplication (SpMV), usually have many different high-performance GPU implementations. Sparse matrix problems typically involve memory-bound operations, and this characteristic is particularly limiting on massively parallel processors. This work revisits the main ideas about reducing the volume of data required by sparse storage formats and advances the understanding of some compression techniques. In particular, we study the use of index compression combined with sparse matrix reordering techniques in CSR and explore other approaches using a blocked format. A systematic experimental evaluation on a large set of real-world matrices confirms that this approach achieves meaningful reductions in data storage. Additionally, we find promising results regarding the impact of the storage reduction on execution time when accelerators are used to perform the mathematical kernels.
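To illustrate the kind of index compression the abstract refers to, the following is a minimal sketch (not the authors' implementation) of one common idea for CSR: storing deltas between consecutive column indices of a row instead of absolute 32-bit indices, so that after reordering most entries fit in a single byte. All names and the escape-byte encoding are illustrative assumptions.

/* Delta-encode the column indices of one CSR row.
   Deltas below 255 use one byte; larger deltas use an escape byte (0xFF)
   followed by the full 32-bit delta. Returns the number of bytes written. */
#include <stdint.h>
#include <stdio.h>

static size_t encode_row(const int32_t *cols, int nnz, uint8_t *out) {
    size_t pos = 0;
    int32_t prev = 0;
    for (int i = 0; i < nnz; ++i) {
        int32_t delta = cols[i] - prev;   /* first delta is the absolute index */
        if (delta >= 0 && delta < 0xFF) {
            out[pos++] = (uint8_t)delta;
        } else {
            out[pos++] = 0xFF;            /* escape: full 32-bit delta follows */
            for (int b = 0; b < 4; ++b)
                out[pos++] = (uint8_t)(((uint32_t)delta >> (8 * b)) & 0xFF);
        }
        prev = cols[i];
    }
    return pos;
}

int main(void) {
    /* One CSR row whose column indices are mostly close together,
       the situation that reordering techniques try to create. */
    int32_t cols[] = {3, 5, 6, 10, 1000};
    uint8_t buf[64];
    size_t bytes = encode_row(cols, 5, buf);
    printf("original: %zu bytes, compressed: %zu bytes\n",
           5 * sizeof(int32_t), bytes);
    return 0;
}

In this toy example the five 32-bit indices (20 bytes) shrink to 9 bytes; the actual paper evaluates such storage reductions, together with blocked formats, on a large set of real-world matrices.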

Details

Language :
English
ISSN :
1666-6038 and 1666-6046
Volume :
24
Issue :
1
Database :
Directory of Open Access Journals
Journal :
Journal of Computer Science and Technology
Publication Type :
Academic Journal
Accession number :
edsdoj.347217610fd54d38b8e9ff415451e42c
Document Type :
Article
Full Text :
https://doi.org/10.24215/16666038.24.e01