
ChatKBQA: A Generate-then-Retrieve Framework for Knowledge Base Question Answering with Fine-tuned Large Language Models

Authors:
Luo, Haoran
E, Haihong
Tang, Zichen
Peng, Shiyao
Guo, Yikai
Zhang, Wentai
Ma, Chenghao
Dong, Guanting
Song, Meina
Lin, Wei
Zhu, Yifan
Tuan, Luu Anh
Source:
ACL 2024
Publication Year:
2023

Abstract

Knowledge Base Question Answering (KBQA) aims to answer natural language questions over large-scale knowledge bases (KBs), and can be summarized into two crucial steps: knowledge retrieval and semantic parsing. However, three core challenges remain: inefficient knowledge retrieval, retrieval mistakes that adversely affect semantic parsing, and the complexity of previous KBQA methods. To tackle these challenges, we introduce ChatKBQA, a novel and simple generate-then-retrieve KBQA framework that first generates the logical form with fine-tuned LLMs, then retrieves and replaces entities and relations with an unsupervised retrieval method, improving both generation and retrieval more directly. Experimental results show that ChatKBQA achieves new state-of-the-art performance on the standard KBQA datasets WebQSP and CWQ. This work can also be regarded as a new paradigm for combining LLMs with knowledge graphs (KGs) for interpretable and knowledge-required question answering. Our code is publicly available.

Comment: Accepted by Findings of ACL 2024
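The generate-then-retrieve idea described in the abstract can be illustrated with a minimal sketch: a fine-tuned LLM first drafts a logical form whose entities and relations are still natural-language labels, and an unsupervised retrieval step then grounds those labels to KB identifiers. The function names, toy KB vocabularies, and the use of simple string similarity as the unsupervised matcher below are hypothetical stand-ins for illustration, not the authors' implementation.

import re
from difflib import SequenceMatcher

# Toy stand-ins for Freebase-style entity IDs and relation names (hypothetical).
KB_ENTITIES = {"m.0d05w3": "China", "m.09c7w0": "United States"}
KB_RELATIONS = ["location.country.capital", "location.country.currency_used"]

def generate_logical_form(question: str) -> str:
    """Stand-in for the fine-tuned LLM: drafts an S-expression whose entity
    and relation slots are still surface labels (here, with a typo)."""
    return "(JOIN (R location.country.capitol) [ china ])"

def best_match(label: str, candidates) -> str:
    """Unsupervised retrieval: pick the candidate most similar to the label."""
    return max(candidates,
               key=lambda c: SequenceMatcher(None, label.lower(), c.lower()).ratio())

def retrieve_and_replace(draft: str) -> str:
    # Ground each dotted relation label to the closest KB relation name.
    draft = re.sub(r"[A-Za-z_]+(?:\.[A-Za-z_]+)+",
                   lambda m: best_match(m.group(0), KB_RELATIONS), draft)
    # Ground the bracketed entity mention to the closest KB entity's ID.
    def swap_entity(m):
        name = best_match(m.group(1).strip(), KB_ENTITIES.values())
        return next(mid for mid, n in KB_ENTITIES.items() if n == name)
    return re.sub(r"\[([^\]]+)\]", swap_entity, draft)

if __name__ == "__main__":
    draft = generate_logical_form("What is the capital of China?")
    print(retrieve_and_replace(draft))
    # -> (JOIN (R location.country.capital) m.0d05w3)

In the full framework, the grounded logical form is then executed against the knowledge base to produce the final answer.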

Details

Database:
arXiv
Journal:
ACL 2024
Publication Type:
Report
Accession number:
edsarx.2310.08975
Document Type:
Working Paper
Full Text:
https://doi.org/10.18653/v1/2024.findings-acl.122