Learning on Graphs with Large Language Models (LLMs): A Deep Dive into Model Robustness

Authors:
Guo, Kai
Liu, Zewen
Chen, Zhikai
Wen, Hongzhi
Jin, Wei
Tang, Jiliang
Chang, Yi
Publication Year:
2024

Abstract

Large Language Models (LLMs) have demonstrated remarkable performance across various natural language processing tasks. Recently, several LLM-based pipelines have been developed to enhance learning on graphs with text attributes, showcasing promising performance. However, graphs are well known to be susceptible to adversarial attacks, and it remains unclear whether LLMs exhibit robustness in learning on graphs. To address this gap, our work explores the potential of LLMs in the context of adversarial attacks on graphs. Specifically, we investigate robustness against graph structural and textual perturbations along two dimensions: LLMs-as-Enhancers and LLMs-as-Predictors. Through extensive experiments, we find that, compared to shallow models, both LLMs-as-Enhancers and LLMs-as-Predictors offer superior robustness against structural and textual attacks. Based on these findings, we conduct additional analyses to investigate the underlying causes. Furthermore, we make our benchmark library openly available to facilitate quick and fair evaluations and to encourage ongoing innovative research in this field.
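To make the LLMs-as-Enhancers dimension concrete, the sketch below illustrates one such pipeline under assumptions not stated in the abstract: the sentence-transformers encoder "all-MiniLM-L6-v2" serves as the LLM-derived feature extractor, a two-layer GCN is the downstream predictor, and random edge flips stand in crudely for a structural attack. This is an illustrative setup, not the authors' implementation or benchmark.

# Minimal LLMs-as-Enhancers sketch: LLM text embeddings feed a simple GCN,
# then accuracy is re-measured on a structurally perturbed graph.
# Encoder choice, toy graph, and random-flip "attack" are all assumptions.
import torch
import torch.nn.functional as F
from sentence_transformers import SentenceTransformer

def gcn_norm(adj: torch.Tensor) -> torch.Tensor:
    """Symmetrically normalize A + I (standard GCN propagation)."""
    a_hat = adj + torch.eye(adj.size(0))
    deg_inv_sqrt = a_hat.sum(dim=1).pow(-0.5)
    return deg_inv_sqrt.unsqueeze(1) * a_hat * deg_inv_sqrt.unsqueeze(0)

class GCN(torch.nn.Module):
    def __init__(self, in_dim: int, hid_dim: int, n_classes: int):
        super().__init__()
        self.w1 = torch.nn.Linear(in_dim, hid_dim)
        self.w2 = torch.nn.Linear(hid_dim, n_classes)

    def forward(self, x, a_norm):
        x = F.relu(a_norm @ self.w1(x))
        return a_norm @ self.w2(x)

def random_structural_attack(adj: torch.Tensor, n_flips: int) -> torch.Tensor:
    """Crude stand-in for an adversarial attack: flip random edges."""
    adj = adj.clone()
    n = adj.size(0)
    for i, j in torch.randint(0, n, (n_flips, 2)):
        if i != j:  # skip self-loops
            adj[i, j] = adj[j, i] = 1 - adj[i, j]
    return adj

# Toy text-attributed graph: four nodes with text attributes.
texts = ["paper on graph neural networks", "study of adversarial attacks",
         "survey of language models", "notes on graph robustness"]
adj = torch.tensor([[0., 1., 0., 1.], [1., 0., 1., 0.],
                    [0., 1., 0., 1.], [1., 0., 1., 0.]])
labels = torch.tensor([0, 1, 1, 0])

# Enhancer step: encode node texts into dense features with the LLM.
encoder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed encoder choice
x = torch.tensor(encoder.encode(texts))

model = GCN(x.size(1), 16, n_classes=2)
opt = torch.optim.Adam(model.parameters(), lr=0.01)
a_norm = gcn_norm(adj)
for _ in range(100):
    opt.zero_grad()
    loss = F.cross_entropy(model(x, a_norm), labels)
    loss.backward()
    opt.step()

# Robustness check: accuracy on a randomly perturbed structure.
a_attacked = gcn_norm(random_structural_attack(adj, n_flips=2))
acc = (model(x, a_attacked).argmax(dim=1) == labels).float().mean()
print(f"accuracy after structural perturbation: {acc.item():.2f}")

The LLMs-as-Predictors dimension would instead query the LLM directly with a node's text (and possibly its neighbors' texts) to produce the label, with no GNN in the loop.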

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2407.12068
Document Type:
Working Paper