1. EA-VTR: Event-Aware Video-Text Retrieval
- Authors
Zongyang Ma, Ziqi Zhang, Yuxin Chen, Zhongang Qi, Chunfeng Yuan, Bing Li, Yingmin Luo, Xu Li, Xiaojuan Qi, Ying Shan, and Weiming Hu
- Subjects
Computer Science - Computer Vision and Pattern Recognition
- Abstract
Understanding the content of events occurring in a video and their inherent temporal logic is crucial for video-text retrieval. However, web-crawled pre-training datasets often lack sufficient event information, and the widely adopted video-level cross-modal contrastive learning also struggles to capture detailed and complex video-text event alignment. To address these challenges, we make improvements from both the data and model perspectives. In terms of pre-training data, we focus on supplementing the missing specific event content and event temporal transitions with the proposed event augmentation strategies. Based on the event-augmented data, we construct a novel Event-Aware Video-Text Retrieval model, i.e., EA-VTR, which achieves powerful video-text retrieval through superior video event awareness. EA-VTR can efficiently encode frame-level and video-level visual representations simultaneously, enabling cross-modal alignment of detailed event content and complex event temporal transitions, and ultimately enhancing the comprehensive understanding of video events. Our method not only significantly outperforms existing approaches on multiple datasets for Text-to-Video Retrieval and Video Action Recognition, but also demonstrates superior event content perception on Multi-event Video-Text Retrieval and Video Moment Retrieval, as well as outstanding understanding of event temporal logic on the Test of Time task.
- Comment
Accepted by ECCV 2024
- Published
2024
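
For orientation, the abstract contrasts EA-VTR with "video-level cross-modal contrastive learning." Below is a minimal, generic PyTorch sketch of that baseline objective (symmetric InfoNCE over paired video and text embeddings), not the authors' EA-VTR code; the function name, mean-pooling of frame features, and the temperature value are all illustrative assumptions.

```python
# Generic sketch of video-level cross-modal contrastive learning
# (symmetric InfoNCE). NOT the EA-VTR implementation; the mean-pooling
# of frame features into a video embedding is an assumption.
import torch
import torch.nn.functional as F


def video_text_contrastive_loss(frame_feats, text_feats, temperature=0.07):
    """Symmetric InfoNCE over a batch of paired videos and captions.

    frame_feats: (B, T, D) per-frame visual features
    text_feats:  (B, D) sentence-level text features
    """
    # Pool frame-level features into one video-level embedding
    # (assumption: simple mean pooling).
    video_feats = frame_feats.mean(dim=1)

    # L2-normalize so dot products are cosine similarities.
    video_feats = F.normalize(video_feats, dim=-1)
    text_feats = F.normalize(text_feats, dim=-1)

    # (B, B) similarity matrix; the diagonal holds the matched pairs.
    logits = video_feats @ text_feats.t() / temperature
    targets = torch.arange(logits.size(0), device=logits.device)

    # Contrast in both directions: video->text and text->video.
    loss_v2t = F.cross_entropy(logits, targets)
    loss_t2v = F.cross_entropy(logits.t(), targets)
    return (loss_v2t + loss_t2v) / 2


if __name__ == "__main__":
    B, T, D = 8, 16, 512  # batch, frames, feature dim (illustrative)
    loss = video_text_contrastive_loss(torch.randn(B, T, D), torch.randn(B, D))
    print(loss.item())
```

Because this objective aligns only a single pooled video embedding with each caption, it cannot separate individual events within a video; EA-VTR's motivation is precisely to add frame-level and event-aware alignment on top of this kind of video-level matching.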