
VELOCITI: Can Video-Language Models Bind Semantic Concepts through Time?

Authors:
Saravanan, Darshana
Singh, Darshan
Gupta, Varun
Khan, Zeeshan
Gandhi, Vineet
Tapaswi, Makarand
Publication Year:
2024

Abstract

Compositionality is a fundamental aspect of vision-language understanding and is especially important for videos, since they contain multiple entities (e.g., persons, actions, and scenes) interacting dynamically over time. Existing benchmarks focus primarily on perception capabilities; however, they do not study binding, the ability of a model to associate entities through appropriate relationships. To this end, we propose VELOCITI, a new benchmark built on complex movie clips and dense semantic role label annotations to test perception and binding in video-language models (contrastive and Video-LLMs). Our perception-based tests require discriminating video-caption pairs that share similar entities, while the binding tests require models to associate the correct entity with a given situation while ignoring different yet plausible entities that also appear in the same video. While current state-of-the-art models perform moderately well on perception tests, accuracy is near random when both entities are present in the same video, indicating that they fail at binding. Even the powerful Gemini 1.5 Flash shows a substantial gap (16-28%) with respect to human accuracy on these binding tests.

Comment: 26 pages, 17 figures, 3 tables

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2406.10889
Document Type:
Working Paper