
PCGRLLM: Large Language Model-Driven Reward Design for Procedural Content Generation Reinforcement Learning

Authors:
Baek, In-Chang
Kim, Sung-Hyun
Earle, Sam
Jiang, Zehua
Noh, Jin-Ha
Togelius, Julian
Kim, Kyung-Joong
Publication Year:
2025

Abstract

Reward design plays a pivotal role in the training of game AIs, requiring substantial domain-specific knowledge and human effort. In recent years, several studies have explored reward generation for training game agents and controlling robots using large language models (LLMs). In the content generation literature, there has been early work on generating reward functions for reinforcement learning agent generators. This work introduces PCGRLLM, an extended architecture based on earlier work, which employs a feedback mechanism and several reasoning-based prompt engineering techniques. We evaluate the proposed method on a story-to-reward generation task in a two-dimensional environment using two state-of-the-art LLMs, demonstrating the generalizability of our approach. Our experiments offer insightful evaluations of the LLM capabilities essential for content generation tasks. The results show performance improvements of 415% and 40% for the two models, respectively, depending on each language model's zero-shot capabilities. Our work demonstrates the potential to reduce human dependency in game AI development while supporting and enhancing creative processes.

Comment: 14 pages, 9 figures
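The abstract describes an iterative architecture in which an LLM writes a reward function for a story-to-reward task, a PCGRL generator agent is trained against it, and evaluation feedback is fed back into the next prompt. The sketch below illustrates that feedback loop in Python under stated assumptions; every name here (query_llm, train_pcgrl_agent, evaluate_content, and the expectation that generated code defines a function called reward) is a hypothetical placeholder, not the paper's actual interface.

```python
# Minimal sketch of an LLM-driven reward-design loop with feedback,
# in the spirit of PCGRLLM. All functions below are hypothetical stubs.

def query_llm(prompt: str) -> str:
    """Placeholder for an LLM API call that returns reward-function code."""
    raise NotImplementedError("Plug in an actual LLM client here.")

def train_pcgrl_agent(reward_fn) -> object:
    """Placeholder: train a PCGRL level-generator agent with this reward."""
    raise NotImplementedError

def evaluate_content(agent) -> tuple[float, str]:
    """Placeholder: score generated levels, return (score, text feedback)."""
    raise NotImplementedError

def reward_design_loop(story: str, iterations: int = 5):
    feedback = ""
    best_score, best_code = float("-inf"), None
    for _ in range(iterations):
        # Ask the LLM for a reward function, conditioning on the
        # evaluation feedback from the previous iteration.
        prompt = (
            "Write a Python function `reward(level)` for a 2D level "
            f"generator.\nTarget story: {story}\n"
            f"Feedback on the previous attempt: {feedback or 'none'}"
        )
        code = query_llm(prompt)

        # Compile the proposed code into a callable reward function.
        # (Assumes a trusted sandbox; exec'ing LLM output is unsafe.)
        namespace: dict = {}
        exec(code, namespace)
        reward_fn = namespace["reward"]

        # Train a generator against the proposed reward, then evaluate
        # the generated content to produce the next round's feedback.
        agent = train_pcgrl_agent(reward_fn)
        score, feedback = evaluate_content(agent)
        if score > best_score:
            best_score, best_code = score, code
    return best_code, best_score
```

One design point worth noting: keeping the best-scoring reward function across iterations means a noisy late iteration cannot discard earlier progress, which matters when LLM outputs vary between calls.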

Details

Database:
arXiv
Publication Type:
Report
Accession Number:
edsarx.2502.10906
Document Type:
Working Paper