1. FairGP: A Scalable and Fair Graph Transformer Using Graph Partitioning
- Author
Luo, Renqiang; Huang, Huafei; Lee, Ivan; Xu, Chengpei; Qi, Jianzhong; Xia, Feng
- Subjects
Computer Science - Machine Learning; Statistics - Machine Learning
- Abstract
Recent studies have highlighted significant fairness issues in Graph Transformer (GT) models, particularly against subgroups defined by sensitive features. Additionally, GTs are computationally intensive and memory-demanding, limiting their application to large-scale graphs. Our experiments demonstrate that graph partitioning can enhance the fairness of GT models while reducing computational complexity. To understand this improvement, we conduct a theoretical investigation into the root causes of fairness issues in GT models. We find that the sensitive features of higher-order nodes disproportionately influence lower-order nodes, resulting in sensitive feature bias. We propose a Fairness-aware scalable GT based on Graph Partitioning (FairGP), which partitions the graph to minimize the negative impact of higher-order nodes. By optimizing attention mechanisms, FairGP mitigates the bias introduced by global attention, thereby enhancing fairness. Extensive empirical evaluations on six real-world datasets validate the superior performance of FairGP in achieving fairness compared to state-of-the-art methods. The code is available at https://github.com/LuoRenqiang/FairGP.
- Comment
11 pages, 2 figures, Accepted at AAAI 2025
- Published
- 2024
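
The abstract's central mechanism, restricting Transformer attention to nodes within the same graph partition so that high-order nodes cannot bias every other node through global attention, can be illustrated with a minimal sketch. The code below is not the authors' FairGP implementation (see the linked repository for that); the partition labels here are random placeholders and `IntraPartitionAttention` is a hypothetical module name, shown only to make the partition-then-attend idea concrete. Because each partition attends only to its own nodes, the attention cost drops from O(N^2) to the sum of per-partition costs, which is the scalability gain the abstract describes.

```python
# Illustrative sketch only -- not the authors' FairGP method. It restricts
# self-attention to nodes that share a partition, so no node attends globally.
import torch
import torch.nn as nn


class IntraPartitionAttention(nn.Module):
    """Self-attention applied independently inside each graph partition."""

    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, x: torch.Tensor, part_id: torch.Tensor) -> torch.Tensor:
        # x: [N, dim] node features; part_id: [N] partition index per node.
        out = torch.zeros_like(x)
        for p in part_id.unique():
            idx = (part_id == p).nonzero(as_tuple=True)[0]
            h = x[idx].unsqueeze(0)        # [1, n_p, dim] batch of one partition
            h, _ = self.attn(h, h, h)      # attention only within partition p
            out[idx] = h.squeeze(0)
        return out


if __name__ == "__main__":
    N, dim, num_parts = 1000, 64, 8
    x = torch.randn(N, dim)
    part_id = torch.randint(0, num_parts, (N,))   # placeholder partition labels
    print(IntraPartitionAttention(dim)(x, part_id).shape)  # torch.Size([1000, 64])
```

How the partitions are actually chosen, and how FairGP further adjusts the attention to mitigate the influence of higher-order nodes' sensitive features, is specified in the paper and repository rather than in this sketch.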