1. Lies, Damned Lies, and Distributional Language Statistics: Persuasion and Deception with Large Language Models
- Author
- Jones, Cameron R. and Bergen, Benjamin K.
- Subjects
- Computer Science - Computation and Language, Computer Science - Computers and Society, Computer Science - Human-Computer Interaction, 68T50, K.4.0, I.2.7, H.5.2
- Abstract
- Large Language Models (LLMs) can generate content that is as persuasive as human-written text and appear capable of selectively producing deceptive outputs. These capabilities raise concerns about potential misuse and unintended consequences as these systems become more widely deployed. This review synthesizes recent empirical work examining LLMs' capacity and proclivity for persuasion and deception, analyzes theoretical risks that could arise from these capabilities, and evaluates proposed mitigations. While current persuasive effects are relatively small, various mechanisms could increase their impact, including fine-tuning, multimodality, and social factors. We outline key open questions for future research, including how persuasive AI systems might become, whether truth enjoys an inherent advantage over falsehoods, and how effective different mitigation strategies may be in practice.
- Comment
- 37 pages, 1 figure
- Published
- 2024