
EPO: Hierarchical LLM Agents with Environment Preference Optimization

Authors:
Zhao, Qi
Fu, Haotian
Sun, Chen
Konidaris, George
Publication Year:
2024

Abstract

Long-horizon decision-making tasks present significant challenges for LLM-based agents due to the need for extensive planning over multiple steps. In this paper, we propose a hierarchical framework that decomposes complex tasks into manageable subgoals, utilizing separate LLMs for subgoal prediction and low-level action generation. To address the challenge of creating training signals for unannotated datasets, we develop a reward model that leverages multimodal environment feedback to automatically generate reward signals. We introduce Environment Preference Optimization (EPO), a novel method that generates preference signals from the environment's feedback and uses them to train LLM-based agents. Extensive experiments on ALFRED demonstrate the state-of-the-art performance of our framework, achieving first place on the ALFRED public leaderboard and showcasing its potential to improve long-horizon decision-making in diverse environments.

Comment: EMNLP 2024
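The abstract describes training on preference signals derived from environment feedback. As a rough illustration of what such an objective could look like, here is a minimal DPO-style sketch: the environment's feedback (e.g. subgoal success vs. failure) ranks two candidate sequences, and the loss pushes the policy toward the preferred one. The function and argument names are hypothetical, and the Bradley-Terry/DPO formulation is an assumption, not a detail confirmed by the abstract.

import torch
import torch.nn.functional as F

def epo_preference_loss(policy_logp_chosen: torch.Tensor,
                        policy_logp_rejected: torch.Tensor,
                        ref_logp_chosen: torch.Tensor,
                        ref_logp_rejected: torch.Tensor,
                        beta: float = 0.1) -> torch.Tensor:
    """DPO-style preference loss over environment-ranked sequences (sketch).

    Each tensor holds the summed token log-probabilities of a candidate
    subgoal/action sequence under the trainable policy or a frozen
    reference model. The "chosen" sequence is the one the environment
    feedback preferred.
    """
    # Implicit reward of each sequence: log-ratio of policy to reference.
    chosen_margin = policy_logp_chosen - ref_logp_chosen
    rejected_margin = policy_logp_rejected - ref_logp_rejected
    # Bradley-Terry objective: widen the gap between chosen and rejected.
    return -F.logsigmoid(beta * (chosen_margin - rejected_margin)).mean()

# Example: the environment marks candidate A successful and B failed,
# so A is "chosen" (the log-probability values below are placeholders).
loss = epo_preference_loss(torch.tensor([-12.3]), torch.tensor([-15.1]),
                           torch.tensor([-13.0]), torch.tensor([-14.8]))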

Details

Database:
arXiv
Publication Type:
Report
Accession Number:
edsarx.2408.16090
Document Type:
Working Paper