A Benchmarking Environment for Reinforcement Learning Based Task Oriented Dialogue Management

Authors :
Casanueva, Iñigo
Budzianowski, Paweł
Su, Pei-Hao
Mrkšić, Nikola
Wen, Tsung-Hsien
Ultes, Stefan
Rojas-Barahona, Lina
Young, Steve
Gašić, Milica
Publication Year :
2017

Abstract

Dialogue assistants are rapidly becoming an indispensable daily aid. To avoid the significant effort needed to hand-craft the required dialogue flow, the Dialogue Management (DM) module can be cast as a continuous Markov Decision Process (MDP) and trained through Reinforcement Learning (RL). Several RL models have been investigated over recent years. However, the lack of a common benchmarking framework makes it difficult to perform a fair comparison between different models and their capability to generalise to different environments. Therefore, this paper proposes a set of challenging simulated environments for dialogue model development and evaluation. To provide some baselines, we investigate a number of representative parametric algorithms, namely the deep reinforcement learning algorithms DQN, A2C and Natural Actor-Critic, and compare them to a non-parametric model, GP-SARSA. Both the environments and policy models are implemented using the publicly available PyDial toolkit and released on-line, in order to establish a testbed framework for further experiments and to facilitate experimental reproducibility.

Comment: Accepted at the Deep Reinforcement Learning Symposium, 31st Conference on Neural Information Processing Systems (NIPS 2017). Paper updated with minor changes.
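To make the MDP framing concrete, the sketch below shows a toy dialogue environment with a continuous belief state and a linear Q-learning policy. It is a minimal illustration only, not PyDial's API or the paper's benchmark: the slot count, the action names (request_slot, confirm_slot, inform), the transition dynamics, and the reward values (a per-turn penalty plus a success bonus) are all hypothetical stand-ins for the belief-state MDP described in the abstract.

```python
# Illustrative sketch only: a toy dialogue MDP with a linear Q-learning agent.
# All environment details below are hypothetical; they are NOT PyDial's API
# nor the benchmark environments released with the paper.
import numpy as np

rng = np.random.default_rng(0)

N_SLOTS = 3                                            # hypothetical informable slots
ACTIONS = ["request_slot", "confirm_slot", "inform"]   # toy system dialogue acts


def reset():
    """Start a dialogue: belief that each slot is filled starts at zero."""
    return np.zeros(N_SLOTS)


def step(belief, action):
    """Toy transition: requesting/confirming raises one slot's belief;
    'inform' ends the dialogue, succeeding if all beliefs are high enough."""
    belief = belief.copy()
    if action in ("request_slot", "confirm_slot"):
        i = rng.integers(N_SLOTS)
        belief[i] = min(1.0, belief[i] + rng.uniform(0.3, 0.6))
        return belief, -1.0, False                     # per-turn penalty
    success = belief.min() > 0.5
    return belief, (20.0 if success else 0.0) - 1.0, True


# Linear Q-function over the continuous belief state, one weight vector per action.
W = np.zeros((len(ACTIONS), N_SLOTS + 1))              # +1 for a bias feature
gamma, alpha, eps = 0.99, 0.01, 0.1


def q_values(belief):
    phi = np.append(belief, 1.0)
    return W @ phi, phi


for episode in range(2000):
    belief = reset()
    for _ in range(30):                                # cap dialogue length
        q, phi = q_values(belief)
        a = rng.integers(len(ACTIONS)) if rng.random() < eps else int(np.argmax(q))
        next_belief, reward, done = step(belief, ACTIONS[a])
        target = reward if done else reward + gamma * np.max(q_values(next_belief)[0])
        W[a] += alpha * (target - q[a]) * phi          # TD(0) update
        belief = next_belief
        if done:
            break
```

The baselines in the paper (DQN, A2C, Natural Actor-Critic, GP-SARSA) replace this linear Q-function with their respective parametric or non-parametric policy models, while the surrounding interaction loop over belief states, system acts and turn-level rewards stays the same.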

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.1711.11023
Document Type :
Working Paper