1. AI chatbot responds to emotional cuing
- Author
- Zhao Yukun, Liying Xu, Zhen Huang, Kaiping Peng, Martin Seligman, Evelyn Li, and Feng Yu
- Abstract
Emotion has long been considered to distinguish humans from Artificial Intelligence (AI). Previously, AI's ability to interpret and express emotions was seen as mere text interpretation. In humans, emotions coordinate a suite of behavioral actions: for example, risk aversion under negative emotion or generosity under positive emotion. We therefore investigated whether AI chatbots show such coordination in response to emotional cues. We treated AI chatbots like human participants, prompting them with scenarios that prime positive emotions, negative emotions, or no emotions. Multiple OpenAI ChatGPT Plus accounts then answered questions about investment decisions and prosocial tendencies. We found that ChatGPT-4 bots primed with positive, negative, or no emotions exhibited different risk-taking and prosocial behaviors. These effects were weaker among ChatGPT-3.5 bots. The ability to coordinate responses with emotional cues may have grown stronger as large language models evolved. This highlights the potential of influencing AI using emotion, and it suggests that complex AI possesses a capacity necessary for “having” emotion.
- Published
- 2023