
Multi-agent KTO: Reinforcing Strategic Interactions of Large Language Model in Language Game

Authors:
Ye, Rong
Zhang, Yongxin
Zhang, Yikai
Kuang, Haoyu
Wei, Zhongyu
Sun, Peng
Publication Year:
2025

Abstract

Achieving Artificial General Intelligence (AGI) requires AI agents that can not only make strategic decisions but also engage in flexible and meaningful communication. Inspired by Wittgenstein's language game theory in Philosophical Investigations, we propose that language agents can learn through in-context interaction rather than traditional multi-stage frameworks that separate decision-making from language expression. Using Werewolf, a social deduction game that tests language understanding, strategic interaction, and adaptability, we develop Multi-agent Kahneman & Tversky's Optimization (MaKTO). MaKTO engages diverse models in extensive gameplay to generate unpaired desirable and unacceptable responses, then employs KTO to refine the model's decision-making process. In 9-player Werewolf games, MaKTO achieves a 61% average win rate across various models, outperforming GPT-4o and two-stage RL agents by relative improvements of 23.0% and 10.9%, respectively. Notably, MaKTO also demonstrates human-like performance, winning 60% against expert players and showing only 49% detectability in Turing-style blind tests. These results showcase MaKTO's superior decision-making, strategic adaptation, and natural language generation in complex social deduction games.

Comment: Preprint. Code and data will be available at https://reneeye.github.io/MaKTO.html
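The abstract notes that MaKTO applies KTO to unpaired desirable and unacceptable responses. As a rough illustration of how such a loss treats each example in isolation (no preference pairs needed), here is a minimal sketch of a per-example KTO-style objective. It follows the general shape of the published KTO loss, not the paper's actual implementation; the parameter names (`beta`, `kl_ref`, `lam_d`, `lam_u`) and default values are illustrative assumptions.

```python
import math


def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))


def kto_loss(logp_policy: float, logp_ref: float, desirable: bool,
             beta: float = 0.1, kl_ref: float = 0.0,
             lam_d: float = 1.0, lam_u: float = 1.0) -> float:
    """Per-example KTO-style loss on a single unpaired response (sketch).

    logp_policy / logp_ref: log-probability of the response under the
    policy and reference models. `kl_ref` stands in for the KL-based
    reference point; weights lam_d / lam_u scale desirable vs.
    undesirable examples. All hyperparameters here are illustrative.
    """
    # Implicit reward: scaled log-ratio of policy to reference.
    reward = beta * (logp_policy - logp_ref)
    if desirable:
        # Desirable responses: push reward above the reference point.
        return lam_d * (1.0 - sigmoid(reward - kl_ref))
    # Undesirable responses: push reward below the reference point.
    return lam_u * (1.0 - sigmoid(kl_ref - reward))
```

A desirable response the policy already prefers (high `logp_policy` relative to `logp_ref`) incurs near-zero loss, while an undesirable response the policy prefers is penalized heavily, so each game transcript can contribute training signal without being paired against an alternative.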

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2501.14225
Document Type:
Working Paper