ConvBench: A Multi-Turn Conversation Evaluation Benchmark with Hierarchical Capability for Large Vision-Language Models

Authors:
Liu, Shuo
Ying, Kaining
Zhang, Hao
Yang, Yue
Lin, Yuqi
Zhang, Tianle
Li, Chuanhao
Qiao, Yu
Luo, Ping
Shao, Wenqi
Zhang, Kaipeng
Publication Year: 2024

Abstract

This paper presents ConvBench, a novel multi-turn conversation evaluation benchmark tailored for Large Vision-Language Models (LVLMs). Unlike existing benchmarks that assess individual capabilities in single-turn dialogues, ConvBench adopts a three-level multimodal capability hierarchy that mimics human cognitive progression by stacking perception, reasoning, and creativity: each level focuses on a distinct capability, advancing from basic perception through logical reasoning to advanced creativity. ConvBench comprises 577 meticulously curated multi-turn conversations spanning 215 tasks reflective of real-world demands. Automatic evaluations quantify response quality at each turn and at the overall conversation level. Leveraging the capability hierarchy, ConvBench enables precise attribution of conversation mistakes to specific levels. Experimental results reveal a performance gap between multimodal models, including GPT-4V, and humans in multi-turn conversations; moreover, weak fine-grained perception in multimodal models contributes to failures in reasoning and creation. ConvBench thus serves as a catalyst for further research aimed at enhancing visual dialogues.
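
To make the error-attribution idea concrete, here is a minimal sketch of how per-turn scores over a three-level hierarchy might be aggregated and a mistake attributed to its earliest failing level. Only the three capability levels come from the abstract; the class names, the [0, 1] scoring, the mean aggregation, and the pass threshold are illustrative assumptions, not the paper's actual evaluation protocol.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Optional

# Capability levels of the ConvBench hierarchy (from the abstract).
class Level(Enum):
    PERCEPTION = 1
    REASONING = 2
    CREATIVITY = 3

@dataclass
class TurnResult:
    level: Level
    score: float  # hypothetical per-turn score in [0, 1]

@dataclass
class ConversationResult:
    turns: list[TurnResult] = field(default_factory=list)

    def overall_score(self) -> float:
        # Assumed aggregation: mean of per-turn scores.
        return sum(t.score for t in self.turns) / len(self.turns)

    def first_failure(self, threshold: float = 0.5) -> Optional[Level]:
        # Attribute the conversation's mistake to the earliest level
        # whose turn falls below the (hypothetical) pass threshold,
        # since later levels build on earlier ones.
        for turn in sorted(self.turns, key=lambda t: t.level.value):
            if turn.score < threshold:
                return turn.level
        return None

# Example: a model that perceives well but fails at reasoning,
# so the creativity failure is attributed to the reasoning level.
conv = ConversationResult([
    TurnResult(Level.PERCEPTION, 0.9),
    TurnResult(Level.REASONING, 0.3),
    TurnResult(Level.CREATIVITY, 0.2),
])
print(conv.overall_score())   # ~0.47
print(conv.first_failure())   # Level.REASONING
```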

Subjects

Computer Science - Multimedia

Details

Database: arXiv
Publication Type: Report
Accession number: edsarx.2403.20194
Document Type: Working Paper