InternLM-Math: Open Math Large Language Models Toward Verifiable Reasoning

Authors:
Ying, Huaiyuan
Zhang, Shuo
Li, Linyang
Zhou, Zhejian
Shao, Yunfan
Fei, Zhaoye
Ma, Yichuan
Hong, Jiawei
Liu, Kuikun
Wang, Ziyi
Wang, Yudong
Wu, Zijian
Li, Shuaibin
Zhou, Fengzhe
Liu, Hongwei
Zhang, Songyang
Zhang, Wenwei
Yan, Hang
Qiu, Xipeng
Wang, Jiayu
Chen, Kai
Lin, Dahua
Publication Year:
2024

Abstract

The math abilities of large language models can represent their abstract reasoning ability. In this paper, we introduce and open-source our math reasoning LLMs InternLM-Math, which are continually pre-trained from InternLM2. We unify chain-of-thought reasoning, reward modeling, formal reasoning, data augmentation, and a code interpreter in a unified seq2seq format, and supervise our model to be a versatile math reasoner, verifier, prover, and augmenter. These abilities can be used to develop the next generation of math LLMs or to support self-iteration. InternLM-Math achieves state-of-the-art performance among open-source models under in-context learning, supervised fine-tuning, and code-assisted reasoning settings on various informal and formal benchmarks, including GSM8K, MATH, the Hungarian math exam, MathBench-ZH, and MiniF2F. Our pre-trained model achieves 30.3 on the MiniF2F test set without fine-tuning. We further explore how to use LEAN to solve math problems and study its performance in a multi-task learning setting, which shows the possibility of using LEAN as a unified platform for both solving and proving in math. Our models, code, and data are released at https://github.com/InternLM/InternLM-Math.
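The abstract's closing idea, LEAN serving as one platform for both solving and proving, can be illustrated with a minimal Lean 4 sketch. This is a hypothetical example, not taken from the paper or its released code: a word problem's numeric answer is computed by evaluation (solving), and the same definition is certified by a kernel-checked theorem (proving). The problem statement and identifier names are invented for illustration.

-- Hypothetical word problem: "Tom has 11 marbles and gives away 4.
-- How many marbles remain?"
def marblesLeft : Nat := 11 - 4

#eval marblesLeft  -- solving: evaluates to 7

theorem marblesLeft_correct : marblesLeft = 7 := by
  rfl  -- proving: both sides reduce to the literal 7

Because the answer and its proof live in a single formal artifact, a model's output can be checked mechanically by the Lean kernel, which is what makes a proof assistant attractive as a unified target for a math LLM.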

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2402.06332
Document Type:
Working Paper