
MathBench: Evaluating the Theory and Application Proficiency of LLMs with a Hierarchical Mathematics Benchmark

Authors:
Liu, Hongwei
Zheng, Zilong
Qiao, Yuxuan
Duan, Haodong
Fei, Zhiwei
Zhou, Fengzhe
Zhang, Wenwei
Zhang, Songyang
Lin, Dahua
Chen, Kai
Publication Year: 2024

Abstract

Recent advancements in large language models (LLMs) have brought significant improvements in mathematical reasoning. However, traditional math benchmarks like GSM8k offer a one-dimensional perspective and fall short of providing a holistic assessment of LLMs' mathematical capabilities. To address this gap, we introduce MathBench, a new benchmark that rigorously assesses the mathematical capabilities of LLMs. MathBench spans a wide range of mathematical disciplines, offering a detailed evaluation of both theoretical understanding and practical problem-solving skills. The benchmark progresses through five distinct stages, from basic arithmetic to college mathematics, and is structured to evaluate models at various depths of knowledge. Each stage includes theoretical questions and application problems, allowing us to measure a model's mathematical proficiency and its ability to apply concepts in practical scenarios. MathBench aims to enhance the evaluation of LLMs' mathematical abilities, providing a nuanced view of their knowledge understanding and problem-solving skills in a bilingual context. The project is released at https://github.com/open-compass/MathBench.
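The hierarchical design the abstract describes (five stages, each split into theory and application questions, in two languages) suggests a simple item schema. Below is a minimal, hypothetical Python sketch of such a schema and a stage-wise scoring helper; the field names, stage labels, and exact-match metric are assumptions for illustration, not the official MathBench format or evaluation code (see the project repository for the actual data layout).

```python
# Hypothetical representation of a MathBench-style item; field names and
# stage labels are assumptions, not the official schema.
from collections import defaultdict
from dataclasses import dataclass
from enum import Enum


class Stage(Enum):
    # Five stages from basic arithmetic to college mathematics,
    # as described in the abstract.
    ARITHMETIC = 1
    PRIMARY = 2
    MIDDLE = 3
    HIGH = 4
    COLLEGE = 5


class QuestionType(Enum):
    THEORY = "theory"            # conceptual/theoretical understanding
    APPLICATION = "application"  # practical problem solving


@dataclass
class MathBenchItem:
    question: str
    answer: str
    stage: Stage
    qtype: QuestionType
    language: str  # "en" or "zh" (the benchmark is bilingual)


def accuracy_by_stage(items, predictions):
    """Exact-match accuracy grouped by (stage, question type).

    A simplistic metric for illustration; real evaluation of free-form
    model output typically needs answer normalization or extraction.
    """
    correct, total = defaultdict(int), defaultdict(int)
    for item, pred in zip(items, predictions):
        key = (item.stage, item.qtype)
        total[key] += 1
        correct[key] += int(pred.strip() == item.answer.strip())
    return {key: correct[key] / total[key] for key in total}
```

Reporting accuracy per (stage, question type) cell, rather than a single aggregate score, is what lets a hierarchical benchmark like this distinguish a model's theoretical understanding from its applied problem-solving at each depth of knowledge.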

Details

Database: arXiv
Publication Type: Report
Accession number: edsarx.2405.12209
Document Type: Working Paper