1. Beyond Instruction Following: Evaluating Inferential Rule Following of Large Language Models
- Author
- Sun, Wangtao, Zhang, Chenxiang, Zhang, XueYou, Yu, Xuanqing, Huang, Ziyang, Chen, Pei, Xu, Haotian, He, Shizhu, Zhao, Jun, and Liu, Kang
- Subjects
- Computer Science - Computation and Language, Computer Science - Artificial Intelligence
- Abstract
- Although Large Language Models (LLMs) have demonstrated strong instruction-following ability, they are further expected to be controlled and guided by rules in real-world scenarios so that they remain safe, accurate, and intelligent. This demands that LLMs possess inferential rule-following capability. However, no prior work has clearly evaluated the inferential rule-following capability of LLMs; previous studies that attempt to evaluate it fail to distinguish inferential rule-following scenarios from instruction-following scenarios. Therefore, this paper first clarifies the concept of inferential rule-following and proposes a comprehensive benchmark, RuleBench, to evaluate a diversified range of inferential rule-following abilities. Our experimental results on a variety of LLMs show that they are still limited in following rules. Our analysis of the evaluation results provides insights into how LLMs can be improved toward better inferential rule-following intelligent agents. We further propose Inferential Rule-Following Tuning (IRFT). The experimental results show that through IRFT, LLMs can learn abstract rule-following abilities from purely synthetic data and then generalize to RuleBench. The data and code can be found at: https://anonymous.4open.science/r/llm-rule-following-B3E3/
- Published
- 2024