
Self-Supervised Dialogue Learning for Spoken Conversational Question Answering

Authors :
Nuo Chen
Chenyu You
Yuexian Zou
Source :
Interspeech 2021.
Publication Year :
2021
Publisher :
ISCA, 2021.

Abstract

In spoken conversational question answering (SCQA), the answer to a given question is generated by retrieving and then analyzing a fixed spoken document, including multi-part conversations. Most existing SCQA systems consider only retrieving information from ordered utterances. However, the sequential order of dialogue is important for building a robust spoken conversational question answering system, and changes in utterance order can result in low-quality, incoherent corpora. To this end, we introduce a self-supervised learning approach, comprising incoherence discrimination, insertion detection, and question prediction, to explicitly capture coreference resolution and dialogue coherence across spoken documents. Specifically, we design a joint learning framework in which the auxiliary self-supervised tasks guide the pre-trained SCQA system toward more coherent and meaningful spoken dialogue learning. We also utilize the proposed self-supervised tasks to capture intra-sentence coherence. Experimental results demonstrate that our proposed method provides more coherent, meaningful, and appropriate responses, yielding superior performance gains compared to the original pre-trained language models. Our method achieves state-of-the-art results on the Spoken-CoQA dataset.

Comment: To appear at Interspeech 2021.
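
As a rough illustration only: the joint learning framework described in the abstract can be read as a weighted sum of a main extractive-QA loss and three auxiliary self-supervised classification losses (incoherence discrimination, insertion detection, and question prediction) computed on a shared encoder's representations. The PyTorch sketch below follows that reading; the module names, the binary formulation of the auxiliary tasks, the pooling strategy, and the loss weighting are assumptions made for illustration and are not details taken from the paper.

import torch
import torch.nn as nn

class JointSCQALoss(nn.Module):
    """Hypothetical joint objective: extractive QA loss + weighted auxiliary self-supervised losses."""

    def __init__(self, hidden_size=768, aux_weight=0.1):
        super().__init__()
        self.qa_head = nn.Linear(hidden_size, 2)             # start/end logits for answer spans
        # One classifier per auxiliary task named in the abstract (binary formulation assumed).
        self.incoherence_head = nn.Linear(hidden_size, 2)    # incoherence discrimination
        self.insertion_head = nn.Linear(hidden_size, 2)      # insertion detection
        self.question_head = nn.Linear(hidden_size, 2)       # question prediction
        self.aux_weight = aux_weight                         # assumed down-weighting of auxiliary losses
        self.ce = nn.CrossEntropyLoss()

    def forward(self, hidden_states, span_labels, aux_labels):
        # hidden_states: (batch, seq_len, hidden) from a shared pre-trained encoder
        # span_labels:   (batch, 2) gold start/end token positions of the answer
        # aux_labels:    (batch, 3) labels for the three auxiliary tasks
        pooled = hidden_states[:, 0]                          # [CLS]-style pooled representation
        start_logits, end_logits = self.qa_head(hidden_states).split(1, dim=-1)
        qa_loss = 0.5 * (self.ce(start_logits.squeeze(-1), span_labels[:, 0])
                         + self.ce(end_logits.squeeze(-1), span_labels[:, 1]))
        aux_loss = (self.ce(self.incoherence_head(pooled), aux_labels[:, 0])
                    + self.ce(self.insertion_head(pooled), aux_labels[:, 1])
                    + self.ce(self.question_head(pooled), aux_labels[:, 2]))
        # Joint objective: main QA loss plus down-weighted auxiliary self-supervised losses.
        return qa_loss + self.aux_weight * aux_loss

In such a setup the auxiliary labels would presumably be generated for free from perturbed dialogue corpora (e.g. shuffling or inserting utterances), which is what makes the tasks self-supervised rather than relying on additional annotation.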

Details

Database :
OpenAIRE
Journal :
Interspeech 2021
Accession number :
edsair.doi.dedup.....0e7b6b9829ed6667f0e26822480c43b5