
Evaluating and Improving Graph to Text Generation with Large Language Models

Authors :
He, Jie
Yang, Yijun
Long, Wanqiu
Xiong, Deyi
Basulto, Victor Gutierrez
Pan, Jeff Z.
Publication Year :
2025

Abstract

Large language models (LLMs) have demonstrated immense potential across various tasks. However, research exploring and improving the capabilities of LLMs in interpreting graph structures remains limited. To address this gap, we conduct a comprehensive evaluation of prompting current open-source LLMs on graph-to-text generation tasks. Although we explored optimal prompting strategies and proposed a novel and effective diversity-difficulty-based few-shot sample selection method, we found that the improvements from tuning-free approaches were incremental, as LLMs struggle with planning on complex graphs, particularly those with a larger number of triplets. To further improve LLMs in planning over graph sequences and grounding in truth, we introduce a new graph-to-text dataset, PlanGTG, annotated with two sub-tasks: reordering and attribution. Through extensive automatic and human evaluations, we demonstrate significant improvements in the quality of generated text, from both few-shot learning and fine-tuning perspectives, using the PlanGTG dataset. Our study paves the way for new research directions in graph-to-text generation. The PlanGTG dataset can be found at https://github.com/probe2/kg_text.

Comment: NAACL 2025
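As a rough illustration of the kind of prompting setup the abstract describes, the sketch below linearizes knowledge-graph triplets into a few-shot graph-to-text prompt. The function names, prompt template, and example triplets are hypothetical and not taken from the paper.

```python
def linearize_triplets(triplets):
    """Render (subject, relation, object) triplets, one per line."""
    return "\n".join(f"({s} | {r} | {o})" for s, r, o in triplets)

def build_prompt(triplets, few_shot_examples=()):
    """Assemble a prompt: optional few-shot demonstrations, then the query graph."""
    parts = []
    for demo_triplets, demo_text in few_shot_examples:
        parts.append("Graph:\n" + linearize_triplets(demo_triplets))
        parts.append("Text: " + demo_text)
    parts.append("Graph:\n" + linearize_triplets(triplets))
    parts.append("Text:")
    return "\n".join(parts)

# One demonstration pair, then a query graph to verbalize.
demo = ([("Alan_Turing", "birthPlace", "London")],
        "Alan Turing was born in London.")
prompt = build_prompt([("Ada_Lovelace", "field", "Mathematics")],
                      few_shot_examples=[demo])
print(prompt)
```

A real pipeline would send `prompt` to an LLM; the paper's few-shot selection method would choose which demonstration pairs to include based on diversity and difficulty.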

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2501.14497
Document Type :
Working Paper