
Natural Language Reinforcement Learning

Authors:
Feng, Xidong
Wan, Ziyu
Yang, Mengyue
Wang, Ziyan
Koushik, Girish A.
Du, Yali
Wen, Ying
Wang, Jun
Publication Year:
2024

Abstract

Reinforcement Learning (RL) has shown remarkable abilities in learning policies for decision-making tasks. However, RL is often hindered by low sample efficiency, limited interpretability, and sparse supervision signals. To tackle these limitations, we take inspiration from the human learning process and introduce Natural Language Reinforcement Learning (NLRL), which combines RL principles with natural language representations. Specifically, NLRL redefines RL concepts such as task objectives, policy, value function, Bellman equation, and policy iteration in natural language space. We show how NLRL can be practically implemented with recent large language models (LLMs) such as GPT-4. Initial experiments on tabular MDPs demonstrate the effectiveness, efficiency, and interpretability of the NLRL framework.

Comment: Work in Progress
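To make the idea of a value function "in natural language space" concrete, the sketch below shows one possible shape of a language-based policy evaluation: instead of returning a scalar V(s), an evaluator returns a textual judgment of a state under a described policy. This is an illustrative assumption, not the paper's implementation; the LLM call is stubbed with a canned lookup where GPT-4 or a similar model would normally respond.

```python
# Hedged sketch of NLRL-style "language value function" evaluation.
# All names here (query_llm, language_value, the canned answers) are
# illustrative assumptions, not the paper's actual API.

def query_llm(prompt: str) -> str:
    """Stub standing in for an LLM call; returns a canned language evaluation.

    In a real implementation this would query a model such as GPT-4.
    """
    canned = {
        "near_goal": "This state is promising: the goal is one step away.",
        "dead_end": "This state is poor: no action leads toward the goal.",
    }
    for key, answer in canned.items():
        if key in prompt:
            return answer
    return "Uncertain: not enough information about this state."


def language_value(state: str, policy_desc: str) -> str:
    """Analogue of a value function V(s), but returning a textual evaluation
    of the state under the given natural-language policy description."""
    prompt = (
        f"Policy: {policy_desc}\n"
        f"State: {state}\n"
        "Evaluate in one sentence how good this state is under this policy."
    )
    return query_llm(prompt)


if __name__ == "__main__":
    policy = "move greedily toward the goal cell"
    print(language_value("near_goal", policy))
    print(language_value("dead_end", policy))
```

A language-space policy-iteration step could then compare such textual evaluations across candidate actions and ask the model to pick or rewrite the policy description accordingly, which is where the claimed interpretability benefit would come from.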

Details

Database: arXiv
Publication Type: Report
Accession number: edsarx.2402.07157
Document Type: Working Paper