
Mitigating Large Language Model Hallucinations via Autonomous Knowledge Graph-based Retrofitting

Authors:
Guan, Xinyan
Liu, Yanjiang
Lin, Hongyu
Lu, Yaojie
He, Ben
Han, Xianpei
Sun, Le
Publication Year:
2023

Abstract

Incorporating factual knowledge from knowledge graphs is regarded as a promising approach to mitigating hallucination in large language models (LLMs). Existing methods usually use only the user's input to query the knowledge graph, and thus fail to address factual hallucinations that LLMs generate during their reasoning process. To address this problem, this paper proposes Knowledge Graph-based Retrofitting (KGR), a new framework that combines LLMs with KGs to mitigate factual hallucination during reasoning by retrofitting the LLMs' initial draft responses against the factual knowledge stored in KGs. Specifically, KGR leverages LLMs to extract, select, validate, and retrofit factual statements within model-generated responses, enabling an autonomous knowledge verification and refinement procedure without any additional manual effort. Experiments show that KGR significantly improves the performance of LLMs on factual QA benchmarks, especially those involving complex reasoning, demonstrating the necessity and effectiveness of KGR in mitigating hallucination and enhancing the reliability of LLMs.
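The retrofitting loop described above (extract factual statements from a draft response, validate each against the KG, and rewrite the draft where the KG disagrees) can be sketched as follows. This is a minimal illustrative assumption of the workflow, not the authors' implementation: the toy triple-store `KG`, the pre-parsed claims, and the function names are all hypothetical, and in the real framework the extraction and retrofitting steps are themselves performed by an LLM.

```python
# Hypothetical sketch of the KGR loop: extract claims from a draft answer,
# validate each against a knowledge graph, retrofit the draft where the KG
# disagrees. The toy KG and pre-parsed claims are illustrative assumptions.

# Toy knowledge graph: (subject, relation) -> object
KG = {
    ("Paris", "capital_of"): "France",
    ("Berlin", "capital_of"): "Germany",
}

def extract_claims(draft):
    """Stand-in for the LLM-based extraction/selection steps: here the
    claims arrive as pre-parsed (subject, relation, object) triples."""
    return draft["claims"]

def validate(claim):
    """Check one triple against the KG; return the corrected triple if
    the KG disagrees with the claimed object, else None."""
    subj, rel, obj = claim
    fact = KG.get((subj, rel))
    if fact is not None and fact != obj:
        return (subj, rel, fact)
    return None

def retrofit(draft):
    """Replace each hallucinated object in the draft text with the KG fact."""
    text = draft["text"]
    for claim in extract_claims(draft):
        fix = validate(claim)
        if fix is not None:
            text = text.replace(claim[2], fix[2])
    return text

draft = {
    "text": "Paris is the capital of Italy.",
    "claims": [("Paris", "capital_of", "Italy")],
}
print(retrofit(draft))  # -> Paris is the capital of France.
```

The key design point this sketch mirrors is that validation runs on claims extracted from the model's own draft, not on the user's original query, so hallucinations introduced mid-reasoning are still caught.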

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2311.13314
Document Type:
Working Paper