TWLV-I: Analysis and Insights from Holistic Evaluation on Video Foundation Models
- Authors
Hyeongmin Lee, Jin-Young Kim, Kyungjune Baek, Jihwan Kim, Hyojun Go, Seongsu Ha, Seokjin Han, Jiho Jang, Raehyuk Jung, Daewoo Kim, GeunOh Kim, JongMok Kim, Jongseok Kim, Junwan Kim, Soonwoo Kwon, Jangwon Lee, Seungjoon Park, Minjoon Seo, Jay Suh, Jaehyuk Yi, and Aiden Lee
- Subjects
Computer Science - Computer Vision and Pattern Recognition
- Abstract
In this work, we discuss how to evaluate video foundation models in a fair and robust manner. Unlike language or image foundation models, many video foundation models are evaluated under differing settings (such as sampling rate, number of frames, pretraining steps, etc.), making fair and robust comparisons challenging. We therefore present a carefully designed evaluation framework for measuring two core capabilities of video comprehension: appearance and motion understanding. Our findings reveal that existing video foundation models, whether text-supervised like UMT or InternVideo2, or self-supervised like V-JEPA, exhibit limitations in at least one of these capabilities. As an alternative, we introduce TWLV-I, a new video foundation model that constructs robust visual representations for both motion- and appearance-based videos. Pretrained only on publicly accessible datasets, our model improves the average top-1 accuracy of linear probing on five action recognition benchmarks by 4.6%p over V-JEPA (ViT-L) and by 7.7%p over UMT (ViT-L). Even against much larger models, it shows a 7.2%p improvement over DFN (ViT-H), a 2.7%p improvement over V-JEPA (ViT-H), and a 2.8%p improvement over InternVideo2 (ViT-g). We provide embedding vectors obtained by TWLV-I from videos of several commonly used video benchmarks, along with evaluation source code that can directly utilize these embeddings. The code is available at https://github.com/twelvelabs-io/video-embeddings-evaluation-framework.
- Comment
17 pages; Twelve Labs Technical Report
- Published
2024
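The headline numbers above come from linear probing: a linear classifier is trained on frozen embeddings and scored by top-1 accuracy. As a rough illustration of that protocol, the sketch below fits a softmax probe with plain gradient descent on synthetic stand-in embeddings; the function name, hyperparameters, and toy data are illustrative assumptions, not the paper's actual framework (which ships precomputed TWLV-I embeddings at the linked repository).

```python
import numpy as np

def linear_probe(train_x, train_y, test_x, test_y, num_classes, lr=0.1, steps=500):
    """Fit a linear softmax classifier on frozen embeddings; return top-1 accuracy.

    Illustrative sketch only: the real evaluation framework uses its own
    probe implementation and benchmark embeddings.
    """
    rng = np.random.default_rng(0)
    dim = train_x.shape[1]
    W = rng.normal(0.0, 0.01, (dim, num_classes))
    b = np.zeros(num_classes)
    onehot = np.eye(num_classes)[train_y]
    for _ in range(steps):
        logits = train_x @ W + b
        logits -= logits.max(axis=1, keepdims=True)  # numerical stability
        probs = np.exp(logits)
        probs /= probs.sum(axis=1, keepdims=True)
        grad = (probs - onehot) / len(train_x)       # cross-entropy gradient
        W -= lr * (train_x.T @ grad)
        b -= lr * grad.sum(axis=0)
    preds = (test_x @ W + b).argmax(axis=1)          # top-1 prediction
    return float((preds == test_y).mean())

# Synthetic stand-in for frozen video embeddings: two separable classes.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-2, 1, (100, 16)), rng.normal(2, 1, (100, 16))])
y = np.array([0] * 100 + [1] * 100)
acc = linear_probe(X, y, X, y, num_classes=2)
print(f"top-1 accuracy: {acc:.2f}")
```

Because the backbone stays frozen, this kind of probe measures only the linear separability of the representation, which is why the paper can compare models of different sizes on equal footing.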