
Qwen2.5 Technical Report

Authors :
Qwen
Yang, An
Yang, Baosong
Zhang, Beichen
Hui, Binyuan
Zheng, Bo
Yu, Bowen
Li, Chengyuan
Liu, Dayiheng
Huang, Fei
Wei, Haoran
Lin, Huan
Yang, Jian
Tu, Jianhong
Zhang, Jianwei
Yang, Jianxin
Yang, Jiaxi
Zhou, Jingren
Lin, Junyang
Dang, Kai
Lu, Keming
Bao, Keqin
Yang, Kexin
Yu, Le
Li, Mei
Xue, Mingfeng
Zhang, Pei
Zhu, Qin
Men, Rui
Lin, Runji
Li, Tianhao
Tang, Tianyi
Xia, Tingyu
Ren, Xingzhang
Ren, Xuancheng
Fan, Yang
Su, Yang
Zhang, Yichang
Wan, Yu
Liu, Yuqiong
Cui, Zeyu
Zhang, Zhenru
Qiu, Zihan
Publication Year :
2024

Abstract

In this report, we introduce Qwen2.5, a comprehensive series of large language models (LLMs) designed to meet diverse needs. Compared to previous iterations, Qwen2.5 has been significantly improved during both the pre-training and post-training stages. In terms of pre-training, we have scaled the high-quality pre-training datasets from the previous 7 trillion tokens to 18 trillion tokens. This provides a strong foundation for common sense, expert knowledge, and reasoning capabilities. In terms of post-training, we implement intricate supervised fine-tuning with over 1 million samples, as well as multi-stage reinforcement learning. These post-training techniques enhance alignment with human preferences and notably improve long-text generation, structured data analysis, and instruction following. To handle diverse and varied use cases effectively, we present the Qwen2.5 LLM series in a rich range of sizes. Open-weight offerings include base and instruction-tuned models, with quantized versions available. In addition, for hosted solutions, the proprietary models currently include two mixture-of-experts (MoE) variants: Qwen2.5-Turbo and Qwen2.5-Plus, both available from Alibaba Cloud Model Studio. Qwen2.5 has demonstrated top-tier performance on a wide range of benchmarks evaluating language understanding, reasoning, mathematics, coding, human preference alignment, etc. Specifically, the open-weight flagship Qwen2.5-72B-Instruct outperforms a number of open and proprietary models and demonstrates performance competitive with the state-of-the-art open-weight model, Llama-3-405B-Instruct, which is around 5 times larger. Qwen2.5-Turbo and Qwen2.5-Plus offer superior cost-effectiveness while performing competitively against GPT-4o-mini and GPT-4o, respectively. Additionally, serving as the foundation, the Qwen2.5 models have been instrumental in training specialized models such as Qwen2.5-Math, Qwen2.5-Coder, QwQ, and multimodal models.

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2412.15115
Document Type :
Working Paper