
Dynamic Dialogue Policy for Continual Reinforcement Learning

Authors:
Geishauser, Christian
van Niekerk, Carel
Lubis, Nurul
Heck, Michael
Lin, Hsien-Chin
Feng, Shutong
Gašić, Milica

Publication Year: 2022

Abstract

Continual learning is a key component of human learning and a necessary requirement for artificial intelligence. Because dialogue can potentially span infinitely many topics and tasks, a task-oriented dialogue system must be able to learn continually, dynamically adapting to new challenges while preserving the knowledge it has already acquired. Despite its importance, continual reinforcement learning of the dialogue policy has remained largely unaddressed. The lack of a framework with training protocols, baseline models and suitable metrics has so far hindered research in this direction. In this work we fill precisely this gap, enabling research in dialogue policy optimisation to move from static to dynamic learning. We provide a continual learning algorithm, baseline architectures and metrics for assessing continual learning models. Moreover, we propose the dynamic dialogue policy transformer (DDPT), a novel dynamic architecture that integrates new knowledge seamlessly, handles large state spaces and obtains significant zero-shot performance when exposed to unseen domains, all without any growth in network parameter size.
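The abstract mentions metrics for assessing continual learning models. As an illustration only, the sketch below computes two metrics that are standard in the continual-learning literature (the paper's exact definitions may differ): average final performance across tasks, and average forgetting, i.e. how far each task's final score falls below its best score during the training sequence. The matrix `R` is a hypothetical performance table where `R[i][j]` is the score on task `j` after training on task `i`.

```python
# Hedged sketch of common continual-learning metrics; R is a hypothetical
# performance matrix, not data from the paper.

def average_performance(R):
    """Mean score over all tasks after training on the final task."""
    last = R[-1]
    return sum(last) / len(last)

def forgetting(R):
    """Mean drop from each task's best-ever score to its final score.

    The most recently learned task is excluded, since it cannot yet
    have been forgotten.
    """
    n_tasks = len(R[0])
    drops = []
    for j in range(n_tasks - 1):
        best = max(R[i][j] for i in range(len(R)))
        drops.append(best - R[-1][j])
    return sum(drops) / len(drops)

# Example: three tasks trained in sequence; row i holds scores after task i.
R = [
    [0.8, 0.1, 0.0],
    [0.6, 0.9, 0.2],
    [0.5, 0.7, 0.9],
]
```

With this example matrix, `average_performance(R)` is 0.7 and `forgetting(R)` is 0.25: task 0 dropped from 0.8 to 0.5 and task 1 from 0.9 to 0.7.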

Details

Database: arXiv
Publication Type: Report
Accession number: edsarx.2204.05928
Document Type: Working Paper