
PolyLM: An Open Source Polyglot Large Language Model

Authors :
Wei, Xiangpeng
Wei, Haoran
Lin, Huan
Li, Tianhao
Zhang, Pei
Ren, Xingzhang
Li, Mei
Wan, Yu
Cao, Zhiwei
Xie, Binbin
Hu, Tianxiang
Li, Shangjie
Hui, Binyuan
Yu, Bowen
Liu, Dayiheng
Yang, Baosong
Huang, Fei
Xie, Jun
Publication Year :
2023

Abstract

Large language models (LLMs) demonstrate remarkable ability to comprehend, reason, and generate text following natural language instructions. However, the development of LLMs has been primarily focused on high-resource languages, such as English, thereby limiting their applicability and research in other languages. To address this, we present PolyLM, a multilingual LLM trained on 640 billion (B) tokens, available in two model sizes: 1.7B and 13B. To enhance its multilingual capabilities, we 1) integrate bilingual data into the training data; and 2) adopt a curriculum learning strategy that increases the proportion of non-English data from 30% in the first stage to 60% in the final stage of pre-training. Further, we propose a multilingual self-instruct method which automatically generates 132.7K diverse multilingual instructions for model fine-tuning. To assess the model's performance, we collect several existing multilingual tasks, including multilingual understanding, question answering, generation, and translation. Extensive experiments show that PolyLM surpasses other open-source models such as LLaMA and BLOOM on multilingual tasks while maintaining comparable performance in English. Our models, along with the instruction data and multilingual benchmark, are available at: \url{https://modelscope.cn/models/damo/nlp_polylm_13b_text_generation}.
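The curriculum strategy mentioned in the abstract amounts to shifting the sampling ratio of non-English data from 30% to 60% over the course of pre-training. Below is a minimal Python sketch of that idea; the linear schedule shape, function names, and language buckets are illustrative assumptions, not the authors' actual implementation.

```python
import random

def non_english_ratio(step: int, total_steps: int,
                      start: float = 0.30, end: float = 0.60) -> float:
    """Interpolate the non-English sampling ratio over training.

    Assumption: a simple linear ramp from 30% to 60%; the paper only
    states the start and end proportions, not the schedule shape.
    """
    progress = min(max(step / total_steps, 0.0), 1.0)
    return start + (end - start) * progress

def sample_batch_languages(step: int, total_steps: int, batch_size: int):
    """Assign each example in a batch to an English or non-English bucket
    according to the current curriculum ratio (hypothetical helper)."""
    ratio = non_english_ratio(step, total_steps)
    return ["non_english" if random.random() < ratio else "english"
            for _ in range(batch_size)]

if __name__ == "__main__":
    total = 100_000
    for step in (0, 50_000, 100_000):
        print(f"step {step}: non-English ratio = {non_english_ratio(step, total):.2f}")
```

In practice the ratio would be applied when mixing shards from per-language corpora rather than per example, but the sketch conveys how the non-English share grows as training progresses.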

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2307.06018
Document Type :
Working Paper