
Few-Shot Action Localization without Knowing Boundaries

Authors :
Christos Tzelepis
Tingting Xie
Ioannis Patras
Fan Fu
Source :
ICMR
Publication Year :
2021
Publisher :
ACM, 2021.

Abstract

Learning to localize actions in long, cluttered, and untrimmed videos is a hard task that has typically been addressed in the literature by assuming the availability of large amounts of annotated training samples for each class -- either in a fully-supervised setting, where action boundaries are known, or in a weakly-supervised setting, where only class labels are known for each video. In this paper, we go a step further and show that it is possible to learn to localize actions in untrimmed videos when a) only one or a few trimmed examples of the target action are available at test time, and b) a large collection of videos with only class-label annotation (some trimmed and some weakly annotated untrimmed ones) is available for training, with no overlap between the classes used during training and testing. To do so, we propose a network that learns to estimate Temporal Similarity Matrices (TSMs) that model a fine-grained similarity pattern between pairs of videos (trimmed or untrimmed), and uses them to generate Temporal Class Activation Maps (TCAMs) for seen or unseen classes. The TCAMs serve as temporal attention mechanisms to extract video-level representations of untrimmed videos and to temporally localize actions at test time. To the best of our knowledge, we are the first to propose a weakly-supervised, one/few-shot action localization network that can be trained in an end-to-end fashion. Experimental results on the THUMOS14 and ActivityNet1.2 datasets show that our method achieves performance comparable to or better than state-of-the-art fully-supervised, few-shot learning methods.

Comment: ICMR21 camera ready; code: https://github.com/June01/WFSAL-icmr21
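
To make the TSM/TCAM idea concrete, the following is a minimal, illustrative sketch rather than the authors' implementation: plain cosine similarity between pre-extracted snippet features stands in for the learned similarity network, and a max-plus-softmax reduction stands in for the learned TCAM head. The function names (temporal_similarity_matrix, tcam_from_tsm), feature dimensions, and temperature are assumptions made for illustration only.

import numpy as np

def temporal_similarity_matrix(query_feats, untrimmed_feats, eps=1e-8):
    # Cosine-similarity TSM between a trimmed query (Tq x D) and an
    # untrimmed video (Tu x D). Shapes and similarity choice are illustrative;
    # the paper learns this similarity end-to-end.
    q = query_feats / (np.linalg.norm(query_feats, axis=1, keepdims=True) + eps)
    u = untrimmed_feats / (np.linalg.norm(untrimmed_feats, axis=1, keepdims=True) + eps)
    return q @ u.T  # shape (Tq, Tu)

def tcam_from_tsm(tsm, temperature=0.1):
    # Collapse the TSM over the query axis and apply a softmax over the
    # untrimmed timeline to obtain temporal attention weights (a TCAM-like map).
    scores = tsm.max(axis=0) / temperature        # best match per untrimmed snippet
    weights = np.exp(scores - scores.max())
    return weights / weights.sum()                # shape (Tu,)

# Toy usage: attention-pooled, video-level representation of the untrimmed video.
rng = np.random.default_rng(0)
query = rng.standard_normal((8, 256))             # 8 trimmed query snippets, 256-d features
untrimmed = rng.standard_normal((120, 256))       # 120 untrimmed-video snippets
tcam = tcam_from_tsm(temporal_similarity_matrix(query, untrimmed))
video_repr = tcam @ untrimmed                     # weighted temporal pooling

At test time, the same attention weights can be thresholded along the untrimmed timeline to produce candidate action segments; in the paper this is driven by the learned TCAMs rather than the hand-crafted reduction used above.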

Details

Database :
OpenAIRE
Journal :
Proceedings of the 2021 International Conference on Multimedia Retrieval
Accession number :
edsair.doi.dedup.....b324789d9e6d49d6c4a89a9053372df8
Full Text :
https://doi.org/10.1145/3460426.3463643