
Generating Informative and Diverse Conversational Responses via Adversarial Information Maximization

Authors:
Zhang, Yizhe
Galley, Michel
Gao, Jianfeng
Gan, Zhe
Li, Xiujun
Brockett, Chris
Dolan, Bill
Publication Year:
2018

Abstract

Responses generated by neural conversational models tend to lack informativeness and diversity. We present Adversarial Information Maximization (AIM), an adversarial learning strategy that addresses these two related but distinct problems. To foster response diversity, we leverage adversarial training that allows distributional matching of synthetic and real responses. To improve informativeness, our framework explicitly optimizes a variational lower bound on pairwise mutual information between query and response. Empirical results from automatic and human evaluations demonstrate that our methods significantly boost informativeness and diversity.

Comment: NIPS 2018
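The abstract describes two coupled objectives: an adversarial loss that matches the distribution of generated responses to real ones, and a variational lower bound on query-response mutual information. The sketch below illustrates how such a combined objective can be wired up in PyTorch, using a backward "response to query" reconstruction model as the variational MI term. It is a minimal illustration under assumed toy data, module sizes, and loss weights, not the authors' implementation.

```python
# Illustrative AIM-style objective: adversarial distribution matching plus a
# variational lower bound on query-response mutual information.
# All names, sizes, and the toy data are assumptions for illustration.
import torch
import torch.nn as nn

DIM = 32  # assumed embedding size for encoded queries/responses

generator = nn.Sequential(nn.Linear(DIM, 64), nn.ReLU(), nn.Linear(64, DIM))
discriminator = nn.Sequential(nn.Linear(2 * DIM, 64), nn.ReLU(), nn.Linear(64, 1))
backward_model = nn.Sequential(nn.Linear(DIM, 64), nn.ReLU(), nn.Linear(64, DIM))

g_opt = torch.optim.Adam(
    list(generator.parameters()) + list(backward_model.parameters()), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(100):
    # Toy continuous "query" and "real response" vectors standing in for
    # encoded utterances (a real setup would encode corpus text instead).
    query = torch.randn(16, DIM)
    real_response = query + 0.1 * torch.randn(16, DIM)

    # Discriminator step: distinguish real (query, response) pairs from
    # generated ones, i.e. distributional matching of synthetic and real responses.
    fake_response = generator(query).detach()
    d_real = discriminator(torch.cat([query, real_response], dim=1))
    d_fake = discriminator(torch.cat([query, fake_response], dim=1))
    d_loss = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator step: fool the discriminator (diversity) and maximize a
    # variational lower bound on I(query; response), approximated here by how
    # well a backward model reconstructs the query from the generated response
    # (a Gaussian log-likelihood up to constants), encouraging informativeness.
    fake_response = generator(query)
    d_fake = discriminator(torch.cat([query, fake_response], dim=1))
    adv_loss = bce(d_fake, torch.ones_like(d_fake))
    mi_lower_bound = -((backward_model(fake_response) - query) ** 2).mean()
    g_loss = adv_loss - 1.0 * mi_lower_bound  # MI weight of 1.0 is assumed
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```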

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.1809.05972
Document Type:
Working Paper