
Learning to Decompose: Hypothetical Question Decomposition Based on Comparable Texts

Authors:
Zhou, Ben
Richardson, Kyle
Yu, Xiaodong
Roth, Dan
Publication Year:
2022

Abstract

Explicit decomposition modeling, which involves breaking down complex tasks into simpler and often more interpretable sub-tasks, has long been a central theme in developing robust and interpretable NLU systems. However, despite the many datasets and resources built as part of this effort, most have small-scale annotations and limited scope, making them insufficient for solving general decomposition tasks. In this paper, we look at large-scale intermediate pre-training of decomposition-based transformers using distant supervision from comparable texts, particularly large-scale parallel news. We show that with such intermediate pre-training, developing robust decomposition-based models for a diverse range of tasks becomes more feasible. For example, on semantic parsing, our model, DecompT5, improves 20% to 30% on two datasets, Overnight and TORQUE, over the baseline language model. We further use DecompT5 to build a novel decomposition-based QA system named DecompEntail, improving over state-of-the-art models, including GPT-3, on both HotpotQA and StrategyQA by 8% and 4%, respectively.

Comment: Accepted at EMNLP 2022

Details

Database:
arXiv
Publication Type:
Report
Accession Number:
edsarx.2210.16865
Document Type:
Working Paper