
Robust video question answering via contrastive cross-modality representation learning.

Authors :
Yang, Xun
Zeng, Jianming
Guo, Dan
Wang, Shanshan
Dong, Jianfeng
Wang, Meng
Source :
SCIENCE CHINA Information Sciences; Oct 2024, Vol. 67 Issue 10, p1-16, 16p
Publication Year :
2024

Abstract

Video question answering (VideoQA) is a challenging yet important task that requires a joint understanding of low-level video content and high-level textual semantics. Despite the promising progress of existing efforts, recent studies have revealed that current VideoQA models tend to over-rely on superficial correlations rooted in dataset bias while overlooking key video content, leading to unreliable results. Effectively understanding and modeling the temporal and semantic characteristics of a given video for robust VideoQA is crucial but, to our knowledge, has not been well investigated. To fill this research gap, we propose a robust VideoQA framework that effectively models cross-modality fusion and compels the model to focus on the temporal and global content of videos when making a QA decision, instead of exploiting shortcuts in datasets. Specifically, we design a self-supervised contrastive learning objective over positive and negative pairs of multimodal input, in which the fused representation of the original multimodal input is pulled closer to that of an intervened input produced by video perturbation. We expect the fused representation to attend more to the global context of a video than to a few static keyframes. Moreover, we introduce an effective temporal order regularization that enforces the inherent sequential structure of videos in the video representation. We also design a Kullback-Leibler divergence-based perturbation invariance regularization on the predicted answer distribution to improve the model's robustness against temporal content perturbation of videos. Our method is model-agnostic and easily compatible with various VideoQA backbones. Extensive experimental results and analyses on several public datasets demonstrate the advantage of our method over state-of-the-art methods in both accuracy and robustness.
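For readers skimming the record, the following is a minimal, hypothetical PyTorch-style sketch of two of the training signals the abstract describes: an InfoNCE-style contrastive objective that pulls the fused representation of the original multimodal input toward that of the perturbation-intervened input, and a symmetric KL divergence between the answer distributions predicted from the original and perturbed videos. This is not the authors' code; all function names, tensor shapes, the choice of InfoNCE, and the symmetric form of the KL term are assumptions made for illustration.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(z_orig, z_pert, z_neg, temperature=0.1):
    """InfoNCE-style objective (assumed form): pull the fused representation
    of the original input (z_orig) toward that of the video-perturbed input
    (z_pert); push it away from negatives (z_neg).
    Shapes: z_orig, z_pert: (B, D); z_neg: (B, K, D)."""
    z_orig = F.normalize(z_orig, dim=-1)
    z_pert = F.normalize(z_pert, dim=-1)
    z_neg = F.normalize(z_neg, dim=-1)
    pos = (z_orig * z_pert).sum(-1, keepdim=True) / temperature    # (B, 1)
    neg = torch.einsum('bd,bkd->bk', z_orig, z_neg) / temperature  # (B, K)
    logits = torch.cat([pos, neg], dim=1)                          # (B, 1+K)
    # The positive pair sits at index 0 of every row.
    labels = torch.zeros(logits.size(0), dtype=torch.long, device=logits.device)
    return F.cross_entropy(logits, labels)

def kl_invariance_loss(logits_orig, logits_pert):
    """Perturbation-invariance regularizer: KL divergence between answer
    distributions from the original and temporally perturbed videos.
    The symmetric (averaged) form is an assumption."""
    p = F.log_softmax(logits_orig, dim=-1)
    q = F.log_softmax(logits_pert, dim=-1)
    return 0.5 * (F.kl_div(q, p, log_target=True, reduction='batchmean')
                  + F.kl_div(p, q, log_target=True, reduction='batchmean'))

# Hypothetical usage: both terms would be weighted and added to the QA loss,
# e.g. total = qa_loss + w1 * contrastive_loss(...) + w2 * kl_invariance_loss(...)
```

Because both terms are computed purely from representations and output logits, a recipe of this shape is model-agnostic in the sense the abstract claims: it can be attached to any VideoQA backbone that exposes a fused representation and an answer distribution.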

Details

Language :
English
ISSN :
1674-733X
Volume :
67
Issue :
10
Database :
Complementary Index
Journal :
SCIENCE CHINA Information Sciences
Publication Type :
Academic Journal
Accession number :
179956339
Full Text :
https://doi.org/10.1007/s11432-023-4084-6