1. Accel-GCN: High-Performance GPU Accelerator Design for Graph Convolution Networks
- Authors
Xie, Xi; Peng, Hongwu; Hasan, Amit; Huang, Shaoyi; Zhao, Jiahui; Fang, Haowen; Zhang, Wei; Geng, Tong; Khan, Omer; and Ding, Caiwen
- Subjects
Computer Science - Hardware Architecture; Computer Science - Machine Learning; I.2; B.6; C.3
- Abstract
Graph Convolutional Networks (GCNs) are pivotal in extracting latent information from graph data across various domains, yet their acceleration on mainstream GPUs is challenged by workload imbalance and memory access irregularity. To address these challenges, we present Accel-GCN, a GPU accelerator architecture for GCNs. The design of Accel-GCN encompasses: (i) a lightweight degree sorting stage to group nodes with similar degrees; (ii) a block-level partition strategy that dynamically adjusts warp workload sizes, enhancing shared memory locality and workload balance while reducing metadata overhead compared to designs such as GNNAdvisor; (iii) a combined warp strategy that improves memory coalescing and computational parallelism in the column dimension of dense matrices. Utilizing these principles, we formulated a kernel for sparse matrix multiplication (SpMM) in GCNs that employs block-level partitioning and the combined warp strategy. This approach augments performance and multi-level memory efficiency, and optimizes memory bandwidth by exploiting memory coalescing and alignment. Evaluation of Accel-GCN across 18 benchmark graphs reveals that it outperforms cuSPARSE, GNNAdvisor, and GraphBLAST by factors of 1.17x, 1.86x, and 2.94x, respectively. The results underscore Accel-GCN as an effective solution for enhancing GCN computational efficiency.
- Comment
ICCAD 2023 accepted publication
- Published
2023
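The abstract's pipeline — sort nodes by degree so warps handling adjacent rows get similar workloads, then run SpMM over the sparse adjacency in CSR form — can be illustrated with a minimal NumPy sketch. This is not the paper's GPU kernel; the graph, function names, and traversal order are illustrative assumptions, and the real design partitions work across warps and shared memory rather than looping on the host:

```python
import numpy as np

def degree_sort(indptr):
    """Return row IDs ordered by degree (nonzeros per CSR row).

    Mirrors the idea of the 'lightweight degree sorting stage':
    rows with similar degrees become adjacent, so work assigned to
    neighboring rows is similar in size (illustrative only).
    """
    degrees = np.diff(indptr)              # nonzero count of each row
    return np.argsort(degrees, kind="stable")

def spmm_csr(indptr, indices, data, dense, row_order):
    """Row-wise SpMM (sparse A @ dense B), visiting rows in row_order."""
    out = np.zeros((len(indptr) - 1, dense.shape[1]))
    for row in row_order:
        start, end = indptr[row], indptr[row + 1]
        # accumulate A[row, :] @ B using only the stored nonzeros
        for k in range(start, end):
            out[row] += data[k] * dense[indices[k]]
    return out

# Tiny 3-node example graph in CSR; row degrees are 2, 1, 0.
indptr = np.array([0, 2, 3, 3])
indices = np.array([1, 2, 0])
data = np.array([1.0, 1.0, 1.0])
features = np.eye(3)                       # hypothetical node features

order = degree_sort(indptr)                # rows grouped by ascending degree
result = spmm_csr(indptr, indices, data, features, order)
```

On a GPU, visiting similar-degree rows together is what lets fixed-size warp groups stay balanced; the host-side loop above only demonstrates the data layout and the reordering, not the parallel execution.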