51. Calculating Question Similarity is Enough: A New Method for KBQA Tasks
- Authors
Zhao, Hanyu; Yuan, Sha; Leng, Jiahong; Pan, Xiang; Wang, Guoqiang; Wu, Ledell; Tang, Jie
- Subjects
Computer Science - Computation and Language; Computer Science - Artificial Intelligence
- Abstract
Knowledge Base Question Answering (KBQA) aims to answer natural language questions with the help of an external knowledge base. The core idea is to find the link between the internal knowledge behind questions and the known triples of the knowledge base. Traditional KBQA pipelines contain several steps, including entity recognition, entity linking, answer selection, etc. In such pipeline methods, errors in any step inevitably propagate to the final prediction. To address this challenge, this paper proposes a Corpus Generation - Retrieve Method (CGRM) with a Pre-trained Language Model (PLM) for the KBQA task. The major novelty lies in the design of the new method: the knowledge-enhanced T5 (kT5) model generates natural language QA pairs from knowledge graph triples, and the QA task is then solved directly by retrieving over the synthetic dataset. The new method can extract more information about the entities from the PLM to improve accuracy and simplify the pipeline. We test our method on the NLPCC-ICCPOL 2016 KBQA dataset, and the results show that it improves KBQA performance and that our straightforward approach is competitive with the state of the art.
- Comment
We want to withdraw this submission and add some experiments to make it more valuable.
- Published
2021
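
The abstract describes answering a question by retrieving the most similar question from a synthetic QA corpus generated from knowledge graph triples. Below is a minimal sketch of that retrieval idea only, using TF-IDF cosine similarity as an illustrative stand-in; the paper's kT5 generator and its actual similarity model are not reproduced, and the QA pairs shown are hypothetical examples.

```python
# Sketch: answer a question by retrieving the nearest synthetic question
# and returning its paired answer (question similarity only, per the abstract).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical QA pairs, e.g. generated from triples such as
# (Beijing, capital_of, China) -> "Which country is Beijing the capital of?"
synthetic_qa = [
    ("Which country is Beijing the capital of?", "China"),
    ("Who wrote Pride and Prejudice?", "Jane Austen"),
]

def answer(question: str) -> str:
    questions = [q for q, _ in synthetic_qa]
    vec = TfidfVectorizer().fit(questions + [question])
    # Similarity between the input question and every synthetic question.
    sims = cosine_similarity(vec.transform([question]), vec.transform(questions))[0]
    best = sims.argmax()          # index of the most similar synthetic question
    return synthetic_qa[best][1]  # return the answer paired with it

print(answer("Beijing is the capital of which country?"))  # -> China
```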