1. Enhancing Semantic Understanding with Self-supervised Methods for Abstractive Dialogue Summarization
- Author
- Jaewoong Yun, Hyun-Jin Choi, Hyun-Jae Lee, Youngjune Gwon, and Seongho Joe
- Subjects
- Computation and Language (cs.CL); Artificial Intelligence (cs.AI); automatic summarization; natural language processing
- Abstract
Contextualized word embeddings can lead to state-of-the-art performance in natural language understanding. Recently, pre-trained deep contextualized text encoders such as BERT have shown their potential for improving natural language tasks, including abstractive summarization. Existing approaches to dialogue summarization focus on incorporating a large language model into the summarization task, trained on large-scale corpora consisting of news articles rather than dialogues among multiple speakers. In this paper, we introduce self-supervised methods that compensate for these shortcomings when training a dialogue summarization model. Our principle is to detect incoherent information flows in pretext dialogue text to enhance BERT's ability to contextualize dialogue representations. We build and fine-tune an abstractive dialogue summarization model on a shared encoder-decoder architecture using the enhanced BERT. We empirically evaluate our abstractive dialogue summarizer on the SAMSum corpus, a recently introduced dataset with abstractive dialogue summaries. All of our methods contribute improvements to abstractive summaries as measured by ROUGE scores. Through an extensive ablation study, we also present a sensitivity analysis of critical model hyperparameters: the probabilities of switching utterances and masking interlocutors.
- Comments
- 5 pages, 3 figures, INTERSPEECH 2021
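The abstract describes two pretext corruptions, switching utterances and masking interlocutors, each controlled by a probability hyperparameter. A minimal sketch of how such pretext examples could be constructed is below; the function name, data layout, and `[MASK]` token are illustrative assumptions, not the paper's actual implementation.

```python
import random

def corrupt_dialogue(turns, p_switch=0.15, p_mask=0.15, rng=None):
    """Build a hypothetical self-supervised pretext example from a dialogue.

    turns: list of (speaker, utterance) pairs.
    With probability p_switch, the utterances of two adjacent turns are
    swapped (creating an incoherent information flow for the encoder to
    detect); with probability p_mask, a speaker name is replaced by
    "[MASK]". Returns the corrupted turns plus binary labels marking
    which positions were switched or masked.
    """
    rng = rng or random.Random()
    turns = list(turns)
    switched = [0] * len(turns)
    i = 0
    while i < len(turns) - 1:
        if rng.random() < p_switch:
            # Swap the utterances of two adjacent turns.
            (s1, u1), (s2, u2) = turns[i], turns[i + 1]
            turns[i], turns[i + 1] = (s1, u2), (s2, u1)
            switched[i] = switched[i + 1] = 1
            i += 2  # skip ahead so a swapped turn is not swapped again
        else:
            i += 1
    masked = [0] * len(turns)
    for j, (speaker, utt) in enumerate(turns):
        if rng.random() < p_mask:
            turns[j] = ("[MASK]", utt)
            masked[j] = 1
    return turns, switched, masked
```

Setting `p_switch` and `p_mask` to 0 leaves the dialogue unchanged, so the same routine covers both the clean and corrupted cases; these two probabilities are exactly the hyperparameters the paper's sensitivity analysis varies.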
- Published
- 2022