1. Text-driven Online Action Detection
- Authors
Benavent-Lledo, Manuel; Mulero-Pérez, David; Ortiz-Perez, David; Garcia-Rodriguez, Jose
- Subjects
Computer Science - Computer Vision and Pattern Recognition
- Abstract
Detecting actions as they occur is essential for applications like video surveillance, autonomous driving, and human-robot interaction. Known as online action detection, this task requires classifying actions in streaming videos, handling background noise, and coping with incomplete actions. Transformer architectures are the current state-of-the-art, yet the potential of recent advancements in computer vision, particularly vision-language models (VLMs), remains largely untapped for this problem, partly due to high computational costs. In this paper, we introduce TOAD: a Text-driven Online Action Detection architecture that supports zero-shot and few-shot learning. TOAD leverages CLIP (Contrastive Language-Image Pretraining) textual embeddings, enabling efficient use of VLMs without significant computational overhead. Our model achieves 82.46% mAP on the THUMOS14 dataset, outperforming existing methods, and sets new baselines for zero-shot and few-shot performance on the THUMOS14 and TVSeries datasets.
- Comment
Published in Integrated Computer-Aided Engineering
- Published
2025
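As a rough illustration of the zero-shot idea described in the abstract (using CLIP textual embeddings of class names to classify incoming frames), the sketch below is a minimal, hypothetical setup, not TOAD's actual architecture: the action vocabulary, prompt template, and per-frame classification are illustrative assumptions, and it uses the openai/CLIP reference package.

```python
# Minimal sketch (assumed setup, not the TOAD model): zero-shot frame
# classification with CLIP text embeddings. ACTION_CLASSES and the prompt
# template below are illustrative placeholders.
import torch
import clip  # https://github.com/openai/CLIP
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# Text embeddings for the action vocabulary are computed once and cached;
# only the image encoder needs to run on each streaming frame.
ACTION_CLASSES = ["long jump", "pole vault", "diving", "background"]
prompts = clip.tokenize([f"a video frame of {c}" for c in ACTION_CLASSES]).to(device)
with torch.no_grad():
    text_emb = model.encode_text(prompts)
    text_emb = text_emb / text_emb.norm(dim=-1, keepdim=True)

def classify_frame(frame: Image.Image) -> str:
    """Assign the current frame to the most similar action prompt (zero-shot)."""
    image = preprocess(frame).unsqueeze(0).to(device)
    with torch.no_grad():
        img_emb = model.encode_image(image)
        img_emb = img_emb / img_emb.norm(dim=-1, keepdim=True)
        sims = (img_emb @ text_emb.T).softmax(dim=-1)
    return ACTION_CLASSES[sims.argmax().item()]
```

Caching the text embeddings once and comparing each frame against them keeps the per-frame cost close to a single image-encoder pass, which is the kind of low-overhead use of a VLM the abstract refers to.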