
Emergence of human-like polarization among large language model agents

Authors:
Piao, Jinghua
Lu, Zhihong
Gao, Chen
Xu, Fengli
Santos, Fernando P.
Li, Yong
Evans, James
Publication Year: 2025

Abstract

Rapid advances in large language models (LLMs) have empowered autonomous agents to establish social relationships, communicate, and form shared and diverging opinions on political issues. Our understanding of their collective behaviours and underlying mechanisms remains incomplete, however, posing unexpected risks to human society. In this paper, we simulate a networked system of thousands of LLM agents, discovering that their social interactions, guided through LLM conversation, result in human-like polarization. We find that these agents not only spontaneously develop their own social network with human-like properties, including homophilic clustering, but also shape their collective opinions through mechanisms observed in the real world, including the echo chamber effect. Similarities between humans and LLM agents, encompassing behaviours, mechanisms, and emergent phenomena, raise concerns about their capacity to amplify societal polarization, but also hold the potential to serve as a valuable testbed for identifying plausible strategies to mitigate polarization and its consequences.
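The paper itself drives agent interactions through LLM conversation; its pipeline is not reproduced here. As a rough intuition for the two mechanisms the abstract reports, homophilic clustering and echo-chamber reinforcement, the following Python sketch replaces the LLM conversation step with a simple bounded-confidence opinion update plus opinion-based rewiring. All names and parameters (N_AGENTS, CONFIDENCE, RATE, and so on) are illustrative assumptions, not values from the paper.

import random

random.seed(0)

N_AGENTS = 200            # illustrative; the paper simulates thousands
DEGREE = 5                # ties per agent (assumption)
N_STEPS = 20000
CONFIDENCE = 0.4          # opinions closer than this can influence each other
RATE = 0.1                # step size of opinion convergence

opinions = [random.uniform(-1.0, 1.0) for _ in range(N_AGENTS)]
neighbors = [random.sample([j for j in range(N_AGENTS) if j != i], DEGREE)
             for i in range(N_AGENTS)]

for _ in range(N_STEPS):
    i = random.randrange(N_AGENTS)
    j = random.choice(neighbors[i])
    if abs(opinions[i] - opinions[j]) < CONFIDENCE:
        # Stand-in for a persuasive conversation: like-minded agents
        # pull each other's opinions closer (echo-chamber reinforcement).
        shift = RATE * (opinions[j] - opinions[i])
        opinions[i] += shift
        opinions[j] -= shift
    else:
        # Homophilic rewiring: drop the dissimilar tie and follow the
        # non-neighbor whose opinion is closest instead.
        neighbors[i].remove(j)
        candidates = [k for k in range(N_AGENTS)
                      if k != i and k not in neighbors[i]]
        neighbors[i].append(min(candidates,
                                key=lambda k: abs(opinions[k] - opinions[i])))

# Homophilic clustering: neighbors end up far closer in opinion than
# random pairs of agents.
neigh_gap = sum(abs(opinions[i] - opinions[j])
                for i in range(N_AGENTS)
                for j in neighbors[i]) / (DEGREE * N_AGENTS)
rand_gap = sum(abs(opinions[random.randrange(N_AGENTS)] -
                   opinions[random.randrange(N_AGENTS)])
               for _ in range(2000)) / 2000
print(f"mean opinion gap: neighbors {neigh_gap:.3f}, random pairs {rand_gap:.3f}")

Under this toy dynamic, ties between like-minded agents are reinforced while dissimilar ties are cut, so the final network splits into opinion-aligned clusters, which is the qualitative signature the abstract attributes to the LLM agents.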

Details

Database: arXiv
Publication Type: Report
Accession number: edsarx.2501.05171
Document Type: Working Paper