
InternVideo2: Scaling Foundation Models for Multimodal Video Understanding

Authors :
Wang, Yi
Li, Kunchang
Li, Xinhao
Yu, Jiashuo
He, Yinan
Wang, Chenting
Chen, Guo
Pei, Baoqi
Yan, Ziang
Zheng, Rongkun
Xu, Jilan
Wang, Zun
Shi, Yansong
Jiang, Tianxiang
Li, Songze
Zhang, Hongjie
Huang, Yifei
Qiao, Yu
Wang, Yali
Wang, Limin
Publication Year :
2024

Abstract

We introduce InternVideo2, a new family of video foundation models (ViFMs) that achieves state-of-the-art results in video recognition, video-text tasks, and video-centric dialogue. Our core design is a progressive training approach that unifies masked video modeling, cross-modal contrastive learning, and next-token prediction, scaling the video encoder up to 6B parameters. At the data level, we prioritize spatiotemporal consistency by semantically segmenting videos and generating video-audio-speech captions, which improves the alignment between video and text. Through extensive experiments, we validate our designs and demonstrate superior performance on over 60 video and audio tasks. Notably, our model outperforms others on various video-related dialogue and long video understanding benchmarks, highlighting its ability to reason over and comprehend longer contexts. Code and models are available at https://github.com/OpenGVLab/InternVideo/tree/main/InternVideo2/.

Comment: a technical report on video understanding (accepted to ECCV 2024)
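
For illustration only, the sketch below shows one way the three objectives named in the abstract (masked video modeling, cross-modal contrastive learning, and next-token prediction) could be expressed as PyTorch losses. It is not taken from the paper or its repository; all function and tensor names are hypothetical placeholders, and the paper applies these objectives in progressive training stages rather than as a fixed joint sum.

    # Hypothetical sketch of the three objectives; not the authors' implementation.
    import torch
    import torch.nn.functional as F

    def masked_video_loss(student_feats, teacher_feats, mask):
        # Masked video modeling: match features only at masked positions.
        return F.mse_loss(student_feats[mask], teacher_feats[mask])

    def contrastive_loss(video_emb, text_emb, temperature=0.07):
        # Symmetric InfoNCE over a batch of paired video/text embeddings.
        video_emb = F.normalize(video_emb, dim=-1)
        text_emb = F.normalize(text_emb, dim=-1)
        logits = video_emb @ text_emb.t() / temperature
        targets = torch.arange(logits.size(0), device=logits.device)
        return (F.cross_entropy(logits, targets) +
                F.cross_entropy(logits.t(), targets)) / 2

    def next_token_loss(lm_logits, token_ids):
        # Standard causal language-modeling objective on caption/dialogue tokens.
        return F.cross_entropy(lm_logits[:, :-1].flatten(0, 1),
                               token_ids[:, 1:].flatten())

    def total_loss(batch, weights=(1.0, 1.0, 1.0)):
        # Weighted sum for illustration; the actual recipe is staged, per the paper.
        w_mvm, w_con, w_ntp = weights
        return (w_mvm * masked_video_loss(batch["student"], batch["teacher"], batch["mask"])
                + w_con * contrastive_loss(batch["video_emb"], batch["text_emb"])
                + w_ntp * next_token_loss(batch["lm_logits"], batch["token_ids"]))
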

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2403.15377
Document Type :
Working Paper