1. PSPO*: An Effective Process-supervised Policy Optimization for Reasoning Alignment
- Author
Jiawei Li, Xinyue Liang, Yizhe Yang, Chong Feng, and Yang Gao
- Subjects
Computer Science - Artificial Intelligence; Computer Science - Machine Learning
- Abstract
Process supervision enhances the performance of large language models in reasoning tasks by providing feedback at each step of chain-of-thought reasoning. However, due to the lack of effective process supervision methods, even advanced large language models are prone to logical errors and redundant reasoning. We claim that the effectiveness of process supervision depends significantly on both the accuracy and the length of reasoning chains, and we identify that these factors exhibit a nonlinear relationship with the overall reward score of the reasoning process. Motivated by these insights, we propose a novel process supervision paradigm, PSPO*, which systematically outlines the workflow from reward model training to policy optimization and highlights the importance of nonlinear rewards in process supervision. Based on PSPO*, we develop PSPO-WRS, which considers the number of reasoning steps when determining reward scores and uses an adjusted Weibull distribution for nonlinear reward shaping. Experimental results on six mathematical reasoning datasets demonstrate that PSPO-WRS consistently outperforms current mainstream models.
- Comment
Our code can be found at https://github.com/DIRECT-BIT/PSPO
- Published
2024
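The abstract's central idea, shaping a nonlinear process reward from step-level accuracy and chain length via an adjusted Weibull distribution, can be sketched as below. This is a minimal illustration under assumptions, not the paper's exact formulation: the function `weibull_shaped_reward` and the parameters `k`, `lam`, and `target_len` are hypothetical choices for demonstration.

```python
import math

def weibull_shaped_reward(step_scores, k=2.0, lam=0.75, target_len=6):
    """Sketch of nonlinear reward shaping with a Weibull CDF.

    step_scores: per-step correctness scores in [0, 1] from a
    process reward model. All parameter values are illustrative,
    not taken from PSPO-WRS.
    """
    n = len(step_scores)
    if n == 0:
        return 0.0
    accuracy = sum(step_scores) / n  # mean step-level accuracy
    # Weibull CDF: nonlinear (S-shaped for k > 1) response to accuracy,
    # so the reward is not simply proportional to chain accuracy.
    shaped = 1.0 - math.exp(-((accuracy / lam) ** k))
    # Length-aware factor: chains far from a target number of steps are
    # down-weighted, a stand-in for the paper's idea that redundant
    # reasoning steps should reduce the overall reward.
    length_factor = math.exp(-abs(n - target_len) / target_len)
    return shaped * length_factor

# Usage: a 4-step chain with mostly correct steps.
print(weibull_shaped_reward([1.0, 1.0, 0.8, 1.0]))
```

The multiplicative combination of an accuracy-shaping term and a length penalty is one plausible design; the actual PSPO-WRS adjustment is specified in the paper and repository linked above.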