
Partitioning Trillion Edge Graphs on Edge Devices

Authors:
Chhabra, Adil
Kurpicz, Florian
Schulz, Christian
Schweisgut, Dominik
Seemaier, Daniel
Publication Year: 2024

Abstract

Processing large-scale graphs, containing billions of entities, is critical across fields such as bioinformatics, high-performance computing, and navigation and route planning. Efficient graph partitioning, which divides a graph into sub-graphs while minimizing inter-block edges, is essential to graph processing, as it optimizes parallel computing and enhances data locality. Traditional in-memory partitioners, such as METIS and KaHIP, offer high-quality partitions but are often infeasible for enormous graphs due to their substantial memory overhead. Streaming partitioners reduce memory usage to O(n), where n is the number of nodes in the graph, by loading nodes sequentially and assigning them to blocks on the fly. This paper introduces StreamCPI, a novel framework that further reduces the memory overhead of streaming partitioners through run-length compression of block assignments. Notably, StreamCPI enables the partitioning of trillion-edge graphs on edge devices. Additionally, within this framework, we propose a modification of the LA-vector bit vector to support appends, which can be used for online run-length compression in other streaming applications. Empirical results show that StreamCPI reduces memory usage while maintaining or improving partition quality. For instance, using StreamCPI, the Fennel partitioner effectively partitions a graph with 17 billion nodes and 1.03 trillion edges on a Raspberry Pi, achieving significantly better solution quality than Hashing, the only other algorithm feasible on edge devices. StreamCPI thus advances graph processing by enabling high-quality partitioning on low-cost machines.
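To make the central idea concrete, the following is a minimal C++ sketch of storing streaming block assignments in run-length-compressed form: nodes arrive in stream order, each receives a block, and consecutive nodes with the same block collapse into one run. This is an illustrative stand-in, not the paper's implementation; StreamCPI builds on an append-enabled LA-vector, whereas this sketch uses a plain vector of (block, length) runs with a linear-scan lookup, and all names are hypothetical.

```cpp
#include <cstdint>
#include <initializer_list>
#include <iostream>
#include <vector>

// Run-length-encoded store for streaming block assignments.
// Instead of one entry per node, consecutive nodes assigned to the
// same block are collapsed into (block, length) runs. Simplified
// stand-in for a compressed assignment structure such as an
// append-enabled LA-vector.
class RunLengthAssignment {
  struct Run { uint32_t block; uint64_t length; };
  std::vector<Run> runs_;
  uint64_t size_ = 0;

 public:
  // Append the block assignment of the next node in stream order.
  void append(uint32_t block) {
    if (!runs_.empty() && runs_.back().block == block) {
      ++runs_.back().length;          // extend the current run
    } else {
      runs_.push_back({block, 1});    // start a new run
    }
    ++size_;
  }

  // Look up the block of node v by scanning runs (a real
  // implementation would index the runs for faster access).
  uint32_t block_of(uint64_t v) const {
    uint64_t offset = 0;
    for (const Run& r : runs_) {
      if (v < offset + r.length) return r.block;
      offset += r.length;
    }
    return UINT32_MAX;  // v not yet assigned
  }

  uint64_t num_nodes() const { return size_; }
  uint64_t num_runs() const { return runs_.size(); }
};

int main() {
  RunLengthAssignment assignment;
  // Hypothetical stream of block decisions for nodes 0..7.
  for (uint32_t b : {0u, 0u, 0u, 1u, 1u, 0u, 2u, 2u}) assignment.append(b);
  std::cout << assignment.num_nodes() << " nodes stored in "
            << assignment.num_runs() << " runs; node 5 -> block "
            << assignment.block_of(5) << "\n";
}
```

When the stream places long consecutive ranges of nodes into the same block, the number of runs stays far below the number of nodes, which is where the saving over a plain per-node assignment array comes from.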

Details

Database: arXiv
Publication Type: Report
Accession number: edsarx.2410.07732
Document Type: Working Paper