
Assessing and Understanding Creativity in Large Language Models

Authors:
Zhao, Yunpu
Zhang, Rui
Li, Wenyi
Huang, Di
Guo, Jiaming
Peng, Shaohui
Hao, Yifan
Wen, Yuanbo
Hu, Xing
Du, Zidong
Guo, Qi
Li, Ling
Chen, Yunji
Publication Year:
2024

Abstract

In the field of natural language processing, the rapid development of large language models (LLMs) has attracted increasing attention. LLMs have shown a high level of creativity in various tasks, but methods for assessing that creativity remain inadequate. Assessing LLM creativity must account for differences from humans, requiring multi-dimensional measurement while balancing accuracy and efficiency. This paper aims to establish an efficient framework for assessing the level of creativity in LLMs. By adapting the modified Torrance Tests of Creative Thinking, the research evaluates the creative performance of various LLMs across 7 tasks on 4 criteria: Fluency, Flexibility, Originality, and Elaboration. To this end, we develop a comprehensive dataset of 700 test questions and an LLM-based evaluation method. The study also presents a novel analysis of LLMs' responses under diverse prompts and role-play settings. We find that the creativity of LLMs falls short primarily in originality while excelling in elaboration, and that both prompt design and the model's role-play settings significantly influence creativity. The experimental results further indicate that collaboration among multiple LLMs can enhance originality. Notably, our findings reveal a consensus between human evaluators and LLMs regarding the personality traits that influence creativity. These findings underscore the significant impact of LLM design on creativity, bridge artificial intelligence and human creativity, and offer insights into LLMs' creativity and its potential applications.
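The abstract does not detail the LLM-based evaluation method, but a minimal sketch of an LLM-as-judge scorer for the four criteria could look as follows. Everything concrete here is an assumption rather than the paper's setup: the OpenAI Python client, the "gpt-4o" judge model, the prompt wording, and the 1-5 scale are all illustrative.

    # Minimal sketch of an LLM-as-judge scorer for the four TTCT criteria.
    # Assumptions (not from the paper): the OpenAI chat API as the judge,
    # the prompt wording, and the 1-5 scale are illustrative choices.
    import json
    from openai import OpenAI

    CRITERIA = ["Fluency", "Flexibility", "Originality", "Elaboration"]

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def judge_response(task: str, answer: str, model: str = "gpt-4o") -> dict:
        """Ask a judge LLM to rate one answer on each TTCT criterion (1-5)."""
        prompt = (
            "You are grading the creativity of an answer to a task.\n"
            f"Task: {task}\n"
            f"Answer: {answer}\n"
            "Rate the answer from 1 (worst) to 5 (best) on each of: "
            + ", ".join(CRITERIA)
            + '. Reply with JSON only, e.g. {"Fluency": 3, ...}.'
        )
        completion = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
            temperature=0,  # deterministic grading for reproducibility
        )
        return json.loads(completion.choices[0].message.content)

Averaging such per-criterion scores over a question set like the paper's 700-item dataset would yield the kind of per-criterion comparison (e.g., low Originality, high Elaboration) that the abstract reports.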

Details

Database:
arXiv
Publication Type:
Report
Accession Number:
edsarx.2401.12491
Document Type:
Working Paper