
Reasoning Paths Optimization: Learning to Reason and Explore From Diverse Paths

Authors:
Chia, Yew Ken
Chen, Guizhen
Xu, Weiwen
Tuan, Luu Anh
Poria, Soujanya
Bing, Lidong
Publication Year: 2024

Abstract

Advanced models such as OpenAI o1 exhibit impressive problem-solving capabilities through step-by-step reasoning. However, they may still falter on more complex problems, making errors that disrupt their reasoning paths. We attribute this to the expansive solution space, where each step risks diverging into mistakes. To enhance language model reasoning, we introduce a specialized training framework called Reasoning Paths Optimization (RPO), which enables learning to reason and explore from diverse paths. Our approach encourages favorable branches at each reasoning step while penalizing unfavorable ones, enhancing the model's overall problem-solving performance. Reasoning Paths Optimization does not rely on large-scale human-annotated rationales or outputs from closed-source models, making it scalable and data-efficient. We focus on multi-step reasoning tasks, such as math word problems and science-based exam questions. The experiments demonstrate that our framework significantly enhances the reasoning performance of large language models, with improvements of up to 3.1% on GSM8K and 4.3% on MMLU (STEM). Our data and code can be found at https://reasoning-paths.github.io.

Comment: EMNLP 2024 camera ready version
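To make the branch-level preference idea concrete, the following is a minimal sketch, assuming a DPO-style pairwise objective that contrasts a favorable and an unfavorable branch sampled at each reasoning step. The function names, the reference-model baseline, and the exact loss form are illustrative assumptions for exposition, not details confirmed by the paper.

import torch
import torch.nn.functional as F

def rpo_step_loss(logp_fav, logp_unfav, ref_logp_fav, ref_logp_unfav, beta=0.1):
    # Implicit rewards: policy vs. frozen-reference log-probability ratios
    # (a DPO-style construction; an assumption, not the paper's exact loss).
    reward_fav = beta * (logp_fav - ref_logp_fav)
    reward_unfav = beta * (logp_unfav - ref_logp_unfav)
    # Logistic loss on the reward margin pushes the favorable branch
    # above the unfavorable one at this reasoning step.
    return -F.logsigmoid(reward_fav - reward_unfav)

def rpo_loss(branch_pairs, beta=0.1):
    # branch_pairs: list of (logp_fav, logp_unfav, ref_logp_fav, ref_logp_unfav)
    # tuples, one per reasoning step where a favorable and an unfavorable
    # branch were sampled; each entry is the summed token log-probability
    # of that branch continuation under the respective model.
    losses = [rpo_step_loss(*pair, beta=beta) for pair in branch_pairs]
    return torch.stack(losses).mean()

# Toy usage with scalar log-probabilities for a single step:
pairs = [(torch.tensor(-4.0), torch.tensor(-6.0),
          torch.tensor(-5.0), torch.tensor(-5.5))]
print(rpo_loss(pairs))

Averaging per-step pairwise losses along a path matches the abstract's description of encouraging favorable branches and penalizing unfavorable ones at every step, rather than scoring only the final answer.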

Details

Database: arXiv
Publication Type: Report
Accession number: edsarx.2410.10858
Document Type: Working Paper