Enhancing Abstractive Summarization with Extracted Knowledge Graphs and Multi-Source Transformers
- Authors
- Chen, Tong; Wang, Xuewei; Yue, Tianwei; Bai, Xiaoyu; Le, Cindy X.; and Wang, Wenping
- Subjects
- TEXT summarization; KNOWLEDGE graphs; LANGUAGE models; CHATGPT
- Abstract
As the popularity of large language models (LLMs) has risen over the past year, led by GPT-3/4 and especially their productization as ChatGPT, LLMs have been applied extensively to text summarization. However, LLMs cannot intrinsically verify the correctness of the information they generate. This research introduces a novel approach to abstractive summarization that addresses this limitation: it leverages extracted knowledge-graph information and structured semantics as a guide for summarization. Building upon BART, a state-of-the-art sequence-to-sequence pre-trained language model, multi-source transformer modules are developed as an encoder capable of processing both textual and graph inputs. Decoding is performed on this enriched encoding to enhance summary quality. The Wiki-Sum dataset, derived from Wikipedia text dumps, is introduced for evaluation. Comparative experiments with baseline models demonstrate the strengths of the proposed approach in generating informative and relevant summaries. We conclude by presenting our insights into augmenting LLMs with external graph information, which can become a powerful aid toward factually correct and verified LLMs. [ABSTRACT FROM AUTHOR]
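The abstract describes a multi-source encoder that lets a BART decoder attend to both the source text and an extracted knowledge graph, but the paper's exact architecture is not given in this record. The following is therefore a minimal, hypothetical PyTorch sketch of one way such a design could be wired: linearized graph triples are encoded by a small Transformer, the text and graph encodings are concatenated into a single memory, and the BART decoder cross-attends to the fused sequence. The class names (GraphEncoder, MultiSourceSummarizer), the fusion-by-concatenation choice, and the triple linearization format are all assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of a multi-source encoder for graph-guided summarization.
# Not the paper's implementation: fusion by concatenation and all names here
# are illustrative assumptions.
import torch
import torch.nn as nn
from transformers import BartTokenizer, BartModel

class GraphEncoder(nn.Module):
    """Encodes linearized knowledge-graph triples (head | relation | tail)
    into the same hidden size as the BART text encoder."""
    def __init__(self, vocab_size: int, hidden: int, layers: int = 2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        layer = nn.TransformerEncoderLayer(d_model=hidden, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=layers)

    def forward(self, triple_ids: torch.Tensor) -> torch.Tensor:
        return self.encoder(self.embed(triple_ids))

class MultiSourceSummarizer(nn.Module):
    """Fuses text and graph encodings by concatenating them along the
    sequence axis, so the BART decoder can cross-attend to both sources."""
    def __init__(self, bart_name: str = "facebook/bart-base"):
        super().__init__()
        self.bart = BartModel.from_pretrained(bart_name)
        hidden = self.bart.config.d_model
        self.graph_encoder = GraphEncoder(self.bart.config.vocab_size, hidden)

    def forward(self, input_ids, triple_ids, decoder_input_ids):
        text_enc = self.bart.encoder(input_ids=input_ids).last_hidden_state
        graph_enc = self.graph_encoder(triple_ids)
        fused = torch.cat([text_enc, graph_enc], dim=1)  # multi-source memory
        # Returns decoder hidden states; a real system would add an LM head
        # on top to predict summary tokens.
        return self.bart.decoder(
            input_ids=decoder_input_ids,
            encoder_hidden_states=fused,
        ).last_hidden_state

# Example usage with a hypothetical linearized triple.
tok = BartTokenizer.from_pretrained("facebook/bart-base")
model = MultiSourceSummarizer()
text = tok("Barack Obama was the 44th president of the United States.",
           return_tensors="pt")
triples = tok("Barack Obama | position held | president", return_tensors="pt")
dec_in = tok("<s>", return_tensors="pt", add_special_tokens=False)
out = model(text.input_ids, triples.input_ids, dec_in.input_ids)
print(out.shape)  # (batch, decoder_len, hidden)
```

Concatenation is the simplest possible fusion; the "multi-source transformer modules" named in the abstract may instead use separate cross-attention streams per source, which this sketch does not attempt to reproduce.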
- Published
- 2023