1. Personalized Multimodal Large Language Models: A Survey
- Authors
Junda Wu, Hanjia Lyu, Yu Xia, Zhehao Zhang, Joe Barrow, Ishita Kumar, Mehrnoosh Mirtaheri, Hongjie Chen, Ryan A. Rossi, Franck Dernoncourt, Tong Yu, Ruiyi Zhang, Jiuxiang Gu, Nesreen K. Ahmed, Yu Wang, Xiang Chen, Hanieh Deilamsalehy, Namyong Park, Sungchul Kim, Huanrui Yang, Subrata Mitra, Zhengmian Hu, Nedim Lipka, Dang Nguyen, Yue Zhao, Jiebo Luo, and Julian McAuley
- Subjects
Computer Science - Computer Vision and Pattern Recognition; Computer Science - Artificial Intelligence; Computer Science - Computation and Language; Computer Science - Information Retrieval
- Abstract
Multimodal Large Language Models (MLLMs) have become increasingly important due to their state-of-the-art performance and their ability to integrate multiple data modalities, such as text, images, and audio, to perform complex tasks with high accuracy. This paper presents a comprehensive survey of personalized multimodal large language models, focusing on their architectures, training methods, and applications. We propose an intuitive taxonomy for categorizing the techniques used to personalize MLLMs for individual users, and we discuss the techniques within each category. Furthermore, we discuss how such techniques can be combined or adapted when appropriate, highlighting their advantages and underlying rationale. We also provide a succinct summary of the personalization tasks investigated in existing research, along with the evaluation metrics commonly used. Additionally, we summarize the datasets that are useful for benchmarking personalized MLLMs. Finally, we outline critical open challenges. This survey aims to serve as a valuable resource for researchers and practitioners seeking to understand and advance the development of personalized multimodal large language models.
- Published
2024