
Boosting Deductive Reasoning with Step Signals In RLHF

Authors :
Li, Jialian
Zhang, Yipin
Shen, Wei
Yan, Yuzi
Xie, Jian
Yan, Dong
Publication Year :
2024

Abstract

Logical reasoning is a crucial task for Large Language Models (LLMs), enabling them to tackle complex problems. Among reasoning tasks, multi-step reasoning poses a particular challenge. Grounded in the theory of formal logic, we have developed an automated method, Multi-step Deduction (MuseD), for generating deductive reasoning data. MuseD has allowed us to create training and testing datasets for multi-step reasoning. Our generation method enables control over the complexity of the generated instructions, facilitating training and evaluation of models across different difficulty levels. Through RLHF training, our training data has demonstrated significant improvements in logical capabilities for both in-domain and out-of-domain reasoning tasks. Additionally, we have conducted tests to assess the multi-step reasoning abilities of various models.
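The abstract describes automated generation of multi-step deduction data with controllable complexity. The sketch below is not the paper's MuseD pipeline; it is a minimal illustration of the general idea, assuming a toy setup where difficulty is controlled by the number of chained modus ponens steps and the propositions P0, P1, ... are placeholder symbols invented for this example.

```python
# Illustrative sketch only (not the paper's MuseD method): build a toy
# multi-step deduction item whose difficulty is set by the chain depth.
import random


def generate_deduction_chain(depth, seed=None):
    """Return a fact plus `depth` implication rules whose conclusion
    requires chaining every rule in order (repeated modus ponens)."""
    rng = random.Random(seed)
    # Placeholder proposition symbols; a real generator would use richer logic.
    props = [f"P{i}" for i in range(depth + 1)]
    premises = [f"{props[0]} is true."]
    for i in range(depth):
        premises.append(f"If {props[i]} is true, then {props[i + 1]} is true.")
    rng.shuffle(premises)  # hide the derivation order from the reader
    question = f"Is {props[depth]} true? Give the full chain of reasoning."
    answer = " -> ".join(props)  # gold derivation: P0 -> P1 -> ... -> P<depth>
    return {"premises": premises, "question": question, "answer": answer}


if __name__ == "__main__":
    item = generate_deduction_chain(depth=4, seed=0)
    for line in item["premises"]:
        print(line)
    print(item["question"])
    print("Gold chain:", item["answer"])
```

Varying `depth` gives items of different difficulty levels, which mirrors (in a simplified way) the controllable-complexity property the abstract attributes to the generation method.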

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2410.09528
Document Type :
Working Paper