
Top-down Activity Representation Learning for Video Question Answering

Authors :
Wang, Yanan
Haruta, Shuichiro
Zeng, Donghuo
Vizcarra, Julio
Kurokawa, Mori
Publication Year :
2024

Abstract

Capturing complex hierarchical human activities, from atomic actions (e.g., picking up a present, moving to the sofa, unwrapping the present) to contextual events (e.g., celebrating Christmas), is crucial for achieving high-performance video question answering (VideoQA). Recent works have expanded multimodal models (e.g., CLIP, LLaVA) to process continuous video sequences, enhancing the models' temporal reasoning capabilities. However, these approaches often fail to capture contextual events that decompose into multiple atomic actions distributed non-continuously over relatively long-term sequences. In this paper, to leverage the spatial visual context representation capability of the CLIP model and obtain non-continuous visual representations of contextual events in videos, we convert long-term video sequences into the spatial image domain and fine-tune the multimodal model LLaVA for the VideoQA task. Our approach achieves competitive performance on the STAR task and, in particular, a 78.4% accuracy score on the NExTQA task, exceeding the current state-of-the-art score by 2.8 points.

Comment: Presented at MIRU2024
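The core idea described in the abstract, rendering a long video as a single composite image so that an image-based multimodal model (CLIP/LLaVA) can attend to temporally distant frames in the spatial domain, can be sketched as follows. This is a minimal illustration under assumed parameters (uniform sampling, a 3x3 grid, 224-pixel tiles, and the hypothetical file names used below), not the authors' implementation.

```python
# Minimal sketch: convert a long video into one composite "grid" image so an
# image-based multimodal model (e.g., CLIP/LLaVA) can see non-continuous
# events side by side in the spatial domain.
# Frame count, grid shape, and tile size are illustrative assumptions.
import cv2
import numpy as np


def video_to_grid_image(video_path: str,
                        rows: int = 3,
                        cols: int = 3,
                        tile_size: int = 224) -> np.ndarray:
    """Uniformly sample rows*cols frames and tile them into a single image."""
    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    n_tiles = rows * cols
    # Uniform temporal sampling over the whole clip.
    indices = np.linspace(0, max(total - 1, 0), n_tiles).astype(int)

    tiles = []
    for idx in indices:
        cap.set(cv2.CAP_PROP_POS_FRAMES, int(idx))
        ok, frame = cap.read()
        if not ok:
            # Fall back to a blank tile if a frame cannot be decoded.
            frame = np.zeros((tile_size, tile_size, 3), dtype=np.uint8)
        tiles.append(cv2.resize(frame, (tile_size, tile_size)))
    cap.release()

    # Stack tiles row by row into one (rows*tile, cols*tile, 3) image.
    grid_rows = [np.hstack(tiles[r * cols:(r + 1) * cols]) for r in range(rows)]
    return np.vstack(grid_rows)


if __name__ == "__main__":
    grid = video_to_grid_image("example_video.mp4")  # hypothetical input path
    cv2.imwrite("video_grid.jpg", grid)  # this image would be passed to the VQA model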

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2409.07748
Document Type :
Working Paper