1. SED: Self-Evaluation Decoding Enhances Large Language Models for Better Generation
- Author
Luo, Ziqin, Han, Haixia, Zhao, Haokun, Jiang, Guochao, Du, Chengyu, Li, Tingyun, Liang, Jiaqing, Yang, Deqing, and Xiao, Yanghua
- Subjects
Computer Science - Computation and Language, Computer Science - Artificial Intelligence
- Abstract
Existing Large Language Models (LLMs) respond to user queries by generating text with unidirectional autoregressive decoding. These methods select tokens in a simple sequential manner, so they easily fall into suboptimal choices at uncertain tokens, which we refer to as chaotic points. Chaotic points are common in LLM-generated text, and they often significantly degrade the quality of subsequently generated tokens, interfering with the LLMs' generation. This paper proposes Self-Evaluation Decoding (SED), a decoding method for enhancing model generation. Analogous to the human decision-making process, SED integrates speculation and evaluation steps into the decoding process, allowing LLMs to make more careful decisions and thus optimize token selection at chaotic points. Experimental results across various tasks using different LLMs demonstrate SED's effectiveness.
- Comment
The relevant code will be released in subsequent versions.
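
For illustration, below is a minimal sketch of a speculate-and-evaluate decoding loop in the spirit the abstract describes. Since the paper's code is not yet released, every concrete choice here is an assumption: the entropy threshold for flagging chaotic points, the top-k candidate speculation, the greedy-lookahead scorer `score`, and the Hugging Face-style `model(ids).logits` interface are illustrative stand-ins, not the authors' actual method.

```python
# Hypothetical sketch of speculation + evaluation at "chaotic" decoding steps.
# Assumes a Hugging Face-style causal LM whose forward pass returns .logits.
import torch
import torch.nn.functional as F

def sed_style_decode(model, input_ids, max_new_tokens=50,
                     entropy_threshold=2.0, num_candidates=4, lookahead=8):
    """Greedy decoding, except at high-entropy ("chaotic") steps: there,
    several candidate tokens are speculated and each is scored by a short
    greedy lookahead; the best-scoring candidate is kept."""
    ids = input_ids
    for _ in range(max_new_tokens):
        logits = model(ids).logits[:, -1, :]          # next-token logits
        probs = F.softmax(logits, dim=-1)
        entropy = -(probs * probs.clamp_min(1e-9).log()).sum(-1)

        if entropy.item() < entropy_threshold:
            # Confident step: plain greedy selection.
            next_id = probs.argmax(-1, keepdim=True)
        else:
            # Chaotic point: speculate top-k candidates, then evaluate each
            # by the mean log-probability of a short greedy continuation.
            cand_ids = probs.topk(num_candidates, dim=-1).indices[0]
            best_id, best_score = None, float("-inf")
            for cand in cand_ids:
                branch = torch.cat([ids, cand.view(1, 1)], dim=-1)
                score = 0.0
                for _ in range(lookahead):
                    lp = F.log_softmax(model(branch).logits[:, -1, :], dim=-1)
                    tok = lp.argmax(-1, keepdim=True)
                    score += lp.gather(-1, tok).item()
                    branch = torch.cat([branch, tok], dim=-1)
                if score / lookahead > best_score:
                    best_score, best_id = score / lookahead, cand.view(1, 1)
            next_id = best_id
        ids = torch.cat([ids, next_id], dim=-1)
    return ids
```

This sketch trades extra forward passes at uncertain steps for better token choices there, which matches the abstract's motivation; the real SED evaluator and chaotic-point detector may differ substantially.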
- Published
2024