Enhancing Large Language Model Induced Task-Oriented Dialogue Systems Through Look-Forward Motivated Goals

Authors :
Hu, Zhiyuan
Feng, Yue
Deng, Yang
Li, Zekun
Ng, See-Kiong
Luu, Anh Tuan
Hooi, Bryan
Publication Year :
2023

Abstract

Recently, the development of large language models (LLMs) has significantly enhanced question answering and dialogue generation, making them increasingly popular in practical scenarios. However, unlike general dialogue systems, which emphasize semantic performance, task-oriented dialogue (ToD) systems aim to achieve the dialogue goal efficiently and successfully over multiple turns. Unfortunately, existing LLM-induced ToD systems lack a direct reward toward the final goal and do not take into account the dialogue proactivity that can strengthen dialogue efficiency. To fill these gaps, we introduce ProToD (Proactively Goal-Driven LLM-Induced ToD), an approach that anticipates future dialogue actions and incorporates a goal-oriented reward signal to enhance ToD systems. Additionally, we present a novel evaluation method that assesses ToD systems based on goal-driven dialogue simulations. This method allows us to gauge user satisfaction, system efficiency, and success rate while overcoming the limitations of the current Inform and Success metrics. Empirical experiments conducted on the MultiWoZ 2.1 dataset demonstrate that our model can achieve superior performance using only 10% of the data compared to previous end-to-end fully supervised models. This improvement is accompanied by enhanced user satisfaction and efficiency.
Comment: 7 Pages
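
For concreteness, the sketch below illustrates one way a goal-oriented reward over simulated dialogues, of the kind the abstract describes, could be computed from task success and turn efficiency. This is a minimal illustration only; the class, function names, weights, and reward form are assumptions and are not taken from the paper.

# Minimal sketch (not the authors' code): a goal-driven reward combining
# goal-completion rate and turn efficiency for one simulated dialogue.
from dataclasses import dataclass

@dataclass
class SimulatedDialogue:
    fulfilled_goals: int   # user goals the system actually satisfied
    total_goals: int       # goals specified for the simulation
    num_turns: int         # turns taken to finish the dialogue
    max_turns: int = 20    # simulation turn budget

def goal_driven_reward(d: SimulatedDialogue,
                       w_success: float = 1.0,
                       w_efficiency: float = 0.5) -> float:
    """Reward = weighted sum of goal-completion rate and turn efficiency."""
    success = d.fulfilled_goals / max(d.total_goals, 1)
    efficiency = 1.0 - min(d.num_turns, d.max_turns) / d.max_turns
    return w_success * success + w_efficiency * efficiency

# Example: 3 of 4 goals met in 8 turns out of a 20-turn budget.
print(goal_driven_reward(SimulatedDialogue(3, 4, 8)))  # 1.0*0.75 + 0.5*0.6 = 1.05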

Details

Database :
OAIster
Publication Type :
Electronic Resource
Accession number :
edsoai.on1438479819
Document Type :
Electronic Resource