
Finetuned Language Models Are Zero-Shot Learners

Authors:
Wei, Jason
Bosma, Maarten
Zhao, Vincent Y.
Guu, Kelvin
Yu, Adams Wei
Lester, Brian
Du, Nan
Dai, Andrew M.
Le, Quoc V.
Publication Year: 2021

Abstract

This paper explores a simple method for improving the zero-shot learning abilities of language models. We show that instruction tuning -- finetuning language models on a collection of tasks described via instructions -- substantially improves zero-shot performance on unseen tasks. We take a 137B parameter pretrained language model and instruction-tune it on over 60 NLP tasks verbalized via natural language instruction templates. We evaluate this instruction-tuned model, which we call FLAN, on unseen task types. FLAN substantially improves the performance of its unmodified counterpart and surpasses zero-shot 175B GPT-3 on 20 of 25 tasks that we evaluate. FLAN even outperforms few-shot GPT-3 by a large margin on ANLI, RTE, BoolQ, AI2-ARC, OpenbookQA, and StoryCloze. Ablation studies reveal that the number of finetuning datasets, model scale, and natural language instructions are key to the success of instruction tuning.

Comment: Version 5. Find list of changes in Appendix F (page 35)
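
The abstract's central step, verbalizing existing NLP tasks through natural-language instruction templates before finetuning, can be illustrated with a minimal sketch. The template wording, dataset field names, and label mapping below are illustrative assumptions, not the paper's actual FLAN templates or training pipeline.

```python
# Minimal sketch of instruction-template verbalization, assuming a toy NLI
# example with "premise", "hypothesis", and integer "label" fields.
# The templates and label mapping are hypothetical, not FLAN's own.

TEMPLATES = [
    "Premise: {premise}\nHypothesis: {hypothesis}\n"
    "Does the premise entail the hypothesis? OPTIONS: yes, no",
    "Read the premise and decide whether the hypothesis follows.\n"
    "Premise: {premise}\nHypothesis: {hypothesis}",
]

def verbalize(example: dict, template: str) -> dict:
    """Turn a raw task example into an (instruction input, target) pair."""
    return {
        "input": template.format(**example),
        "target": "yes" if example["label"] == 0 else "no",
    }

raw = {
    "premise": "A man is playing a guitar.",
    "hypothesis": "A person is making music.",
    "label": 0,
}

for template in TEMPLATES:
    pair = verbalize(raw, template)
    print(pair["input"], "->", pair["target"])
```

Pairs produced this way across many tasks would then feed an ordinary supervised finetuning loop; the paper reports that mixing many such instruction-verbalized datasets is what drives the zero-shot gains.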

Details

Database: arXiv
Publication Type: Report
Accession number: edsarx.2109.01652
Document Type: Working Paper