
Commonsense-Focused Dialogues for Response Generation: An Empirical Study

Authors:
Zhou, Pei
Gopalakrishnan, Karthik
Hedayatnia, Behnam
Kim, Seokhwan
Pujara, Jay
Ren, Xiang
Liu, Yang
Hakkani-Tur, Dilek
Publication Year:
2021

Abstract

Smooth and effective communication requires the ability to perform latent or explicit commonsense inference. Prior commonsense reasoning benchmarks (such as SocialIQA and CommonsenseQA) mainly focus on the discriminative task of choosing the right answer from a set of candidates, and do not involve interactive language generation as in dialogue. Moreover, existing dialogue datasets do not explicitly focus on exhibiting commonsense as a facet. In this paper, we present an empirical study of commonsense in dialogue response generation. We first auto-extract commonsensical dialogues from existing dialogue datasets by leveraging ConceptNet, a commonsense knowledge graph. Furthermore, building on social contexts/situations in SocialIQA, we collect a new dialogue dataset with 25K dialogues aimed at exhibiting social commonsense in an interactive setting. We evaluate response generation models trained using these datasets and find that models trained on both extracted and our collected data produce responses that consistently exhibit more commonsense than baselines. Finally, we propose an approach for automatic evaluation of commonsense that relies on features derived from ConceptNet and pre-trained language and dialogue models, and show reasonable correlation with human evaluation of responses' commonsense quality. We are releasing a subset of our collected data, Commonsense-Dialogues, containing about 11K dialogues.

Comment: Accepted at SIGDIAL 2021. 12 pages, 5 tables
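The abstract only sketches the ConceptNet-based extraction at a high level. As a rough illustration of the idea (not the authors' released code), the snippet below flags a dialogue as commonsensical when content words in adjacent turns are linked by a ConceptNet triple. The TRIPLES mini-graph, content_words, and has_commonsense_link are hypothetical stand-ins for illustration; a real pipeline would lemmatize turns and load the full ConceptNet graph (e.g., from a dump or via api.conceptnet.io).

```python
# Illustrative sketch only: flag a dialogue as "commonsensical" if a content
# word in one turn is connected by a ConceptNet-style triple to a content word
# in the next turn. The tiny triple set stands in for the real graph.

from typing import List, Set, Tuple

# Hypothetical mini knowledge graph: (head, relation, tail), lowercased lemmas.
TRIPLES: Set[Tuple[str, str, str]] = {
    ("coffee", "CapableOf", "wake_you_up"),
    ("exam", "Causes", "stress"),
    ("birthday", "HasSubevent", "party"),
}

# Index head/tail pairs for O(1) lookup, ignoring relation direction.
LINKED = {(h, t) for h, _, t in TRIPLES} | {(t, h) for h, _, t in TRIPLES}

STOPWORDS = {"the", "a", "an", "i", "you", "is", "are", "to", "and", "so"}

def content_words(turn: str) -> Set[str]:
    """Very rough content-word extraction; a real pipeline would lemmatize."""
    return {w.strip(".,!?").lower() for w in turn.split()} - STOPWORDS

def has_commonsense_link(dialogue: List[str]) -> bool:
    """True if any adjacent pair of turns shares an edge in the mini-graph."""
    for prev, nxt in zip(dialogue, dialogue[1:]):
        for a in content_words(prev):
            for b in content_words(nxt):
                if (a, b) in LINKED:
                    return True
    return False

# Example: "exam" and "stress" are linked, so this dialogue would be kept.
print(has_commonsense_link(["I have an exam tomorrow.", "No wonder you feel stress!"]))
```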

Details

Database:
arXiv
Publication Type:
Report
Accession Number:
edsarx.2109.06427
Document Type:
Working Paper