
Exploring the Compositional Deficiency of Large Language Models in Mathematical Reasoning

Authors :
Zhao, Jun
Tong, Jingqi
Mou, Yurong
Zhang, Ming
Zhang, Qi
Huang, Xuanjing
Publication Year :
2024

Abstract

Human cognition exhibits systematic compositionality, the algebraic ability to generate infinite novel combinations from finite learned components, which is the key to understanding and reasoning about complex logic. In this work, we investigate the compositionality of large language models (LLMs) in mathematical reasoning. Specifically, we construct a new dataset, MathTrap, by introducing carefully designed logical traps into the problem descriptions of MATH and GSM8K. Since problems with logical flaws are quite rare in the real world, these represent "unseen" cases to LLMs. Solving them requires the models to systematically compose (1) the mathematical knowledge involved in the original problems with (2) knowledge related to the introduced traps. Our experiments show that while LLMs possess both components of requisite knowledge, they do not spontaneously combine them to handle these novel cases. We explore several methods to mitigate this deficiency, such as natural language prompts, few-shot demonstrations, and fine-tuning. Additionally, we test the recently released OpenAI o1 model and find that human-like "slow thinking" helps improve the compositionality of LLMs. Overall, systematic compositionality remains an open challenge for large language models.

Comment: Accepted by EMNLP 2024
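As a hypothetical illustration of the kind of "logical trap" the abstract describes (this example is not drawn from the MathTrap dataset itself), a routine problem can be made unanswerable by a flawed premise, e.g. asking for the perimeter of a "triangle" whose side lengths violate the triangle inequality:

```python
# Hypothetical sketch of a MathTrap-style "logical trap" (not from the
# paper's dataset): a perimeter question whose premise is self-contradictory.

def triangle_exists(a, b, c):
    """A valid triangle requires each side to be shorter than the sum of the other two."""
    return a + b > c and a + c > b and b + c > a

def answer_perimeter(a, b, c):
    """Answer the perimeter question only when the triangle is well-defined."""
    if not triangle_exists(a, b, c):
        return None  # the trap: the premise is flawed, so no perimeter exists
    return a + b + c

print(answer_perimeter(3, 4, 5))   # well-posed original problem -> 12
print(answer_perimeter(3, 4, 10))  # trapped variant -> None (no such triangle)
```

A model that has learned both the perimeter formula and the triangle inequality must compose the two pieces of knowledge to notice that the trapped variant has no answer, which is the compositional step the paper reports LLMs failing to take spontaneously.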

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2405.06680
Document Type :
Working Paper