1. RATIONALYST: Pre-training Process-Supervision for Improving Reasoning
- Authors
Jiang, Dongwei; Wang, Guoxuan; Lu, Yining; Wang, Andrew; Zhang, Jingyu; Liu, Chuyu; Van Durme, Benjamin; and Khashabi, Daniel
- Subjects
Computer Science - Artificial Intelligence; Computer Science - Computation and Language
- Abstract
The reasoning steps generated by LLMs might be incomplete, as they mimic the logical leaps common in everyday communication found in their pre-training data: underlying rationales are frequently left implicit (unstated). To address this challenge, we introduce RATIONALYST, a model for process supervision of reasoning, pre-trained on a vast collection of rationale annotations extracted from unlabeled data. We extract 79k rationales from a web-scale unlabeled dataset (the Pile) and a combination of reasoning datasets with minimal human intervention. This web-scale pre-training for reasoning allows RATIONALYST to generalize consistently across diverse reasoning tasks, including mathematical, commonsense, scientific, and logical reasoning. Fine-tuned from LLaMa-3-8B, RATIONALYST improves the accuracy of reasoning by an average of 3.9% on 7 representative reasoning benchmarks. It also demonstrates superior performance compared to significantly larger verifiers like GPT-4 and to similarly sized models fine-tuned on matching training sets.
- Comment
Our code, data, and model can be found at this repository: https://github.com/JHU-CLSP/Rationalyst
- Published
2024
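
The abstract above describes an inference-time idea: a separately trained rationale model provides process supervision over each step of a reasoning chain. The snippet below is a minimal sketch of such a supervision loop under stated assumptions; `generate_candidate_steps` and `rationale_log_prob` are hypothetical stand-ins for an agent LLM and for a rationale-model scorer, not the authors' actual interface or code.

```python
# Minimal sketch (assumed, not from the paper's repository) of process
# supervision with a rationale model: an agent LLM proposes candidate next
# steps, and a rationale model scores how well each candidate is supported
# by the implicit rationale it predicts from the partial reasoning trace.

from typing import Callable, List


def supervised_reasoning(
    question: str,
    generate_candidate_steps: Callable[[str], List[str]],  # hypothetical agent LLM wrapper
    rationale_log_prob: Callable[[str, str], float],       # hypothetical rationale-model scorer
    max_steps: int = 10,
) -> List[str]:
    """Greedy chain-of-thought decoding where each next step is chosen by the
    rationale model's score rather than by the agent LLM alone."""
    trace: List[str] = []
    context = question
    for _ in range(max_steps):
        candidates = generate_candidate_steps(context)
        if not candidates:
            break
        # Keep the candidate step the rationale model finds best supported
        # given the current partial trace.
        best = max(candidates, key=lambda step: rationale_log_prob(context, step))
        trace.append(best)
        context = context + "\n" + best
        if best.strip().lower().startswith("answer:"):
            break
    return trace
```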