1. Causal Document-Grounded Dialogue Pre-training
- Author
- Zhao, Yingxiu; Yu, Bowen; Yu, Haiyang; Li, Bowen; Li, Jinyang; Wang, Chao; Huang, Fei; Li, Yongbin; and Zhang, Nevin Lianwen
- Abstract
The goal of document-grounded dialogue (DocGD) is to generate a response by anchoring the evidence in a supporting document in accordance with the dialogue context. This process involves four causally interconnected variables. While task-specific pre-training has significantly enhanced performance on numerous downstream tasks, existing DocGD methods still rely on general pre-trained language models without a specifically tailored pre-training approach that explicitly captures the causal relationships. To address this, we present the first causally-complete dataset construction strategy for developing million-scale DocGD pre-training corpora. Additionally, we propose a causally-perturbed pre-training strategy to better capture causality by introducing perturbations to the variables and optimizing the overall causal effect. Experiments conducted on three benchmark datasets demonstrate that our causal pre-training yields substantial and consistent improvements in fully-supervised, low-resource, few-shot, and zero-shot settings.
- Published
- 2023
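
The abstract describes the causally-perturbed pre-training objective only at a high level. As a minimal, hypothetical sketch (not the authors' code), one way to realize "perturbing a variable and optimizing the overall causal effect" with a T5-style seq2seq backbone is to contrast the generation loss on the factual input (document, evidence, context) against the loss when the evidence variable is corrupted, and to reward a large gap between the two. The input format, the use of T5's `<extra_id_0>` sentinel as the perturbation, the margin term, and the weight `lam` are all assumptions made for illustration.

```python
# Hypothetical sketch of a causally-perturbed pre-training loss for DocGD.
# NOT the paper's actual objective: the perturbation scheme, margin, and
# input serialization below are illustrative assumptions.
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

model = T5ForConditionalGeneration.from_pretrained("t5-small")
tokenizer = T5Tokenizer.from_pretrained("t5-small")

def seq2seq_loss(source: str, target: str) -> torch.Tensor:
    """Standard conditional-generation loss for a single example."""
    enc = tokenizer(source, return_tensors="pt", truncation=True)
    labels = tokenizer(target, return_tensors="pt", truncation=True).input_ids
    return model(**enc, labels=labels).loss

def causally_perturbed_loss(document, evidence, context, response, lam=0.5):
    # Factual input: the full causal chain (document -> evidence -> response,
    # conditioned on the dialogue context).
    factual = f"document: {document} evidence: {evidence} context: {context}"
    l_factual = seq2seq_loss(factual, response)

    # Perturbed input: corrupt the evidence variable (here, replace it with
    # T5's sentinel token as a stand-in "intervention").
    perturbed = f"document: {document} evidence: <extra_id_0> context: {context}"
    l_perturbed = seq2seq_loss(perturbed, response)

    # Approximate the causal effect of the evidence as the loss gap between
    # perturbed and factual inputs; a margin term keeps the model from
    # ignoring the evidence while the factual loss is minimized.
    effect = l_perturbed - l_factual
    return l_factual + lam * torch.relu(1.0 - effect)

# Usage on a toy example (values invented for illustration).
loss = causally_perturbed_loss(
    document="The Eiffel Tower is 330 metres tall.",
    evidence="330 metres tall",
    context="User: How tall is the Eiffel Tower?",
    response="It is 330 metres tall.",
)
loss.backward()
```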