
Context Generation Improves Open Domain Question Answering

Authors :
Su, Dan
Patwary, Mostofa
Prabhumoye, Shrimai
Xu, Peng
Prenger, Ryan
Shoeybi, Mohammad
Fung, Pascale Ngan
Anandkumar, Anima
Catanzaro, Bryan
Publication Year :
2023

Abstract

Closed-book question answering (QA) requires a model to directly answer an open-domain question without access to any external knowledge. Prior work on closed-book QA either directly finetunes or prompts a pretrained language model (LM) to leverage its stored knowledge. However, these approaches do not fully exploit the knowledge encoded in the model's parameters. To address this inefficiency, we propose a two-stage, closed-book QA framework that employs a coarse-to-fine approach to extract the relevant knowledge and answer a question. We first generate a related context for a given question by prompting a pretrained LM. We then prompt the same LM to generate an answer using the generated context and the question. Additionally, we marginalize over multiple generated contexts to improve accuracy and reduce context uncertainty. Experimental results on three QA benchmarks show that our method significantly outperforms previous closed-book QA methods. For example, on TriviaQA our method improves exact-match accuracy from 55.3% to 68.6% and is on par with open-book QA methods (68.6% vs. 68.0%). Our results show that the new methodology better exploits the knowledge stored in pretrained LMs without adding extra learnable parameters or requiring finetuning, and paves the way for hybrid models that integrate pretrained LMs with external knowledge. © 2023 Association for Computational Linguistics.
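The abstract outlines a two-stage pipeline: sample contexts from the LM, answer conditioned on each context, then aggregate. The following Python is a minimal sketch of that idea, not the authors' released code; the prompt templates, the number of sampled contexts k, and the use of majority voting over answer strings as a stand-in for the paper's marginalization are all illustrative assumptions, and lm_generate is a hypothetical hook for any prompt-to-completion model.

```python
from collections import Counter
from typing import Callable, List

# Hypothetical hook: any function mapping a prompt string to a sampled
# completion string (e.g., a thin wrapper around an LM API or a local model).
LMGenerate = Callable[[str], str]


def generate_contexts(lm_generate: LMGenerate, question: str, k: int) -> List[str]:
    """Stage 1: prompt the LM to produce k candidate background contexts."""
    prompt = (
        "Generate a background passage that helps answer the question.\n"
        f"Question: {question}\nPassage:"
    )
    return [lm_generate(prompt).strip() for _ in range(k)]


def answer_with_context(lm_generate: LMGenerate, question: str, context: str) -> str:
    """Stage 2: prompt the same LM to answer given one generated context."""
    prompt = f"Context: {context}\nQuestion: {question}\nAnswer:"
    return lm_generate(prompt).strip()


def two_stage_qa(lm_generate: LMGenerate, question: str, k: int = 8) -> str:
    """Answer a question by sampling k contexts and aggregating the answers.

    Majority voting over answer strings is used here as a simple proxy for
    marginalizing over the generated contexts.
    """
    contexts = generate_contexts(lm_generate, question, k)
    answers = [answer_with_context(lm_generate, question, c) for c in contexts]
    best_answer, _ = Counter(answers).most_common(1)[0]
    return best_answer
```

Note that, as in the paper's setup, a single pretrained LM serves both stages and no parameters are updated; only the prompts differ between context generation and answering.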

Details

Database :
OAIster
Notes :
English
Publication Type :
Electronic Resource
Accession number :
edsoai.on1394208535
Document Type :
Electronic Resource