Rethinking the Video Sampling and Reasoning Strategies for Temporal Sentence Grounding
- Author
Zhu, Jiahao; Liu, Daizong; Zhou, Pan; Di, Xing; Cheng, Yu; Yang, Song; Xu, Wenzheng; Xu, Zichuan; Wan, Yao; Sun, Lichao; and Xiong, Zeyu
- Subjects
Computer Science - Computer Vision and Pattern Recognition
- Abstract
Temporal sentence grounding (TSG) aims to identify the temporal boundary of a specific segment in an untrimmed video given a sentence query. Existing works first apply a sparse sampling strategy to extract a fixed number of video frames and then conduct multi-modal interactions with the query sentence for reasoning. However, we argue that these methods overlook two indispensable issues: 1) Boundary-bias: the annotated target segment generally refers to two specific frames as its start and end timestamps. The video downsampling process may lose these two frames and take adjacent, irrelevant frames as the new boundaries. 2) Reasoning-bias: such incorrect new boundary frames also introduce reasoning bias during frame-query interaction, reducing the generalization ability of the model. To alleviate the above limitations, in this paper we propose a novel Siamese Sampling and Reasoning Network (SSRN) for TSG, which introduces a siamese sampling mechanism to generate additional contextual frames that enrich and refine the new boundaries. Specifically, a reasoning strategy is developed to learn the inter-relationships among these frames and generate soft labels on the boundaries for more accurate frame-query reasoning. This mechanism also supplements the absent consecutive visual semantics of the sparsely sampled frames for fine-grained activity understanding. Extensive experiments demonstrate the effectiveness of SSRN on three challenging datasets.
- Comment
Accepted by EMNLP Findings, 2022
- Published
2023
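
To make the boundary-bias concrete, below is a minimal Python sketch of how uniform sparse sampling can drop the annotated start/end frames, paired with a hypothetical Gaussian soft-label scheme in the spirit of the soft boundary labels the abstract describes. The function names, the Gaussian form, and all numbers are illustrative assumptions, not the paper's actual SSRN implementation.

```python
import numpy as np

def uniform_sample_indices(num_frames: int, num_sampled: int) -> np.ndarray:
    """Sparse sampling: pick a fixed number of evenly spaced frame indices."""
    return np.linspace(0, num_frames - 1, num_sampled).round().astype(int)

def soft_boundary_labels(sampled: np.ndarray, boundary: int,
                         sigma: float = 2.0) -> np.ndarray:
    """Hypothetical soft labels: a Gaussian over the sampled frames centred
    on the true annotated boundary frame, instead of a one-hot label on the
    nearest (possibly wrong) sampled frame."""
    d = (sampled - boundary).astype(float)
    w = np.exp(-(d ** 2) / (2 * sigma ** 2))
    return w / w.sum()

# A 300-frame video sparsely sampled to 32 frames.
sampled = uniform_sample_indices(300, 32)
start, end = 47, 181  # annotated segment boundaries (frame indices)

# Boundary-bias: the exact annotated frames rarely survive downsampling,
# so adjacent irrelevant frames become the de facto boundaries.
print(start in sampled, end in sampled)        # typically: False False

# Soft labels keep supervision centred on the true boundary rather than
# snapping it to a single sampled frame.
labels = soft_boundary_labels(sampled, start)
print(sampled[labels.argmax()], round(labels.max(), 3))
```

Under these assumptions, the hard one-hot target would land on whichever sampled frame happens to be nearest the lost boundary, whereas the soft distribution spreads supervision across its neighbours, which is one plausible way to read the "soft labels on boundaries" idea in the abstract.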