
Prompt Conditioned VAE: Enhancing Generative Replay for Lifelong Learning in Task-Oriented Dialogue

Authors:
Zhao, Yingxiu
Zheng, Yinhe
Tian, Zhiliang
Gao, Chang
Yu, Bowen
Yu, Haiyang
Li, Yongbin
Sun, Jian
Zhang, Nevin L.
Publication Year:
2022

Abstract

Lifelong learning (LL) is vital for advanced task-oriented dialogue (ToD) systems. To address the catastrophic forgetting issue of LL, generative replay methods are widely employed to consolidate past knowledge with generated pseudo samples. However, most existing generative replay methods use only a single task-specific token to control their models, which is usually too weak a constraint on the generative model because a single token carries little task information. In this paper, we propose a novel method, prompt conditioned VAE for lifelong learning (PCLL), to enhance generative replay by incorporating tasks' statistics. PCLL captures task-specific distributions with a conditional variational autoencoder, conditioned on natural language prompts to guide pseudo-sample generation. Moreover, it leverages a distillation process to further consolidate past knowledge by alleviating noise in the pseudo samples. Experiments on natural language understanding tasks of ToD systems demonstrate that PCLL significantly outperforms competitive baselines in building LL models.

Comment: EMNLP 2022 Long Paper (Main Track)
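
The abstract names the core mechanism: a conditional VAE whose latent prior and posterior are both conditioned on an encoded natural-language prompt, so pseudo samples for a past task can be generated from the prompt alone. Below is a minimal PyTorch sketch of that idea; all module and variable names are hypothetical illustrations of a prompt-conditioned CVAE, not the paper's released implementation.

```python
import torch
import torch.nn as nn

class PromptConditionedCVAE(nn.Module):
    """Minimal sketch of a prompt-conditioned CVAE (hypothetical names, not
    the authors' code). Both the prior p(z | prompt) and the posterior
    q(z | utterance, prompt) are conditioned on the encoded prompt."""

    def __init__(self, vocab_size, emb_dim=256, hid_dim=512, z_dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.prompt_enc = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.utt_enc = nn.GRU(emb_dim, hid_dim, batch_first=True)
        # Posterior q(z | utterance, prompt)
        self.post_mu = nn.Linear(2 * hid_dim, z_dim)
        self.post_logvar = nn.Linear(2 * hid_dim, z_dim)
        # Prior p(z | prompt): generation needs only the prompt
        self.prior_mu = nn.Linear(hid_dim, z_dim)
        self.prior_logvar = nn.Linear(hid_dim, z_dim)
        self.dec_init = nn.Linear(z_dim + hid_dim, hid_dim)
        self.decoder = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim, vocab_size)

    def forward(self, prompt_ids, utt_ids):
        _, p_h = self.prompt_enc(self.embed(prompt_ids))  # (1, B, H)
        _, x_h = self.utt_enc(self.embed(utt_ids))
        p_h, x_h = p_h.squeeze(0), x_h.squeeze(0)
        # Posterior conditioned on both utterance and prompt
        joint = torch.cat([x_h, p_h], dim=-1)
        mu, logvar = self.post_mu(joint), self.post_logvar(joint)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        # KL(q || p) against the prompt-conditioned prior
        pm, plv = self.prior_mu(p_h), self.prior_logvar(p_h)
        kl = 0.5 * (plv - logvar
                    + (logvar.exp() + (mu - pm) ** 2) / plv.exp() - 1).sum(-1)
        # Teacher-forced reconstruction of the utterance
        h0 = torch.tanh(self.dec_init(torch.cat([z, p_h], dim=-1))).unsqueeze(0)
        dec_out, _ = self.decoder(self.embed(utt_ids[:, :-1]), h0)
        logits = self.out(dec_out)
        rec = nn.functional.cross_entropy(
            logits.reshape(-1, logits.size(-1)),
            utt_ids[:, 1:].reshape(-1),
            reduction="none",
        ).view(utt_ids.size(0), -1).sum(-1)
        return (rec + kl).mean()  # ELBO-style training loss
```

At replay time, one would sample z from the prompt-conditioned prior and decode to produce pseudo utterances for rehearsal; the distillation step mentioned in the abstract would then operate on those generated samples to reduce their noise.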

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2210.07783
Document Type:
Working Paper