
Survey on Large Language Model-Enhanced Reinforcement Learning: Concept, Taxonomy, and Methods

Authors:
Cao, Yuji
Zhao, Huan
Cheng, Yuheng
Shu, Ting
Chen, Yue
Liu, Guolong
Liang, Gaoqi
Zhao, Junhua
Yan, Jinyue
Li, Yun
Publication Year:
2024

Abstract

With extensive pre-trained knowledge and high-level general capabilities, large language models (LLMs) emerge as a promising avenue to augment reinforcement learning (RL) in aspects such as multi-task learning, sample efficiency, and high-level task planning. In this survey, we provide a comprehensive review of the existing literature in LLM-enhanced RL and summarize its characteristics compared to conventional RL methods, aiming to clarify the research scope and directions for future studies. Utilizing the classical agent-environment interaction paradigm, we propose a structured taxonomy to systematically categorize LLMs' functionalities in RL, comprising four roles: information processor, reward designer, decision-maker, and generator. For each role, we summarize the methodologies, analyze the specific RL challenges that are mitigated, and provide insights into future directions. Lastly, we present a comparative analysis of the roles and discuss potential applications, prospective opportunities, and challenges of LLM-enhanced RL. By proposing this taxonomy, we aim to provide a framework for researchers to effectively leverage LLMs in the RL field, potentially accelerating RL deployment in complex domains such as robotics, autonomous driving, and energy systems.

Comment: 22 pages (including bibliography), 6 figures
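To make the taxonomy concrete, the sketch below (not taken from the paper) shows how the four roles could slot into a standard agent-environment loop. The function `query_llm` and all prompts are hypothetical placeholders for whatever LLM interface a practitioner actually uses.

```python
# Illustrative sketch of the survey's four LLM roles in an RL loop.
# `query_llm` is a hypothetical stand-in for any LLM call (API or local model).

def query_llm(prompt: str) -> str:
    """Hypothetical LLM call; replace with a real model invocation."""
    return ""  # stub

def llm_information_processor(raw_observation: str) -> str:
    # Role 1: extract or compress task-relevant features from raw observations.
    return query_llm(f"Summarize the task-relevant state: {raw_observation}")

def llm_reward_designer(observation: str, action: str, goal: str) -> float:
    # Role 2: score a transition against a natural-language goal description.
    reply = query_llm(f"Goal: {goal}\nState: {observation}\nAction: {action}\n"
                      "Rate progress from 0 to 1:")
    try:
        return float(reply)
    except ValueError:
        return 0.0

def llm_decision_maker(observation: str, candidate_actions: list[str]) -> str:
    # Role 3: choose (or guide) actions directly, e.g. high-level task planning.
    reply = query_llm(f"State: {observation}\nActions: {candidate_actions}\n"
                      "Choose one:")
    return reply if reply in candidate_actions else candidate_actions[0]

def llm_generator(trajectory: list[tuple[str, str]]) -> list[tuple[str, str]]:
    # Role 4: generate synthetic experience (e.g. act as a world model)
    # to improve sample efficiency.
    _ = query_llm(f"Imagine a plausible continuation of: {trajectory}")
    return trajectory  # stub: return the imagined rollout in practice
```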

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2404.00282
Document Type:
Working Paper