
End-to-End Modeling via Information Tree for One-Shot Natural Language Spatial Video Grounding

Authors :
Li, Mengze
Wang, Tianbao
Zhang, Haoyu
Zhang, Shengyu
Zhao, Zhou
Miao, Jiaxu
Zhang, Wenqiao
Tan, Wenming
Wang, Jin
Wang, Peng
Pu, Shiliang
Wu, Fei
Publication Year :
2022

Abstract

Natural language spatial video grounding aims to detect the relevant objects in video frames given a descriptive sentence as the query. Despite great advances, most existing methods rely on dense video frame annotations, which require a tremendous amount of human effort. To achieve effective grounding under a limited annotation budget, we investigate one-shot video grounding and learn to ground natural language in all video frames, in an end-to-end manner, with only one frame labeled. One major challenge of end-to-end one-shot video grounding is the presence of video frames that are irrelevant to either the language query or the labeled frame. Another challenge is the limited supervision, which may result in ineffective representation learning. To address these challenges, we design an end-to-end model via Information Tree for One-Shot video grounding (IT-OS). Its key module, the information tree, eliminates the interference of irrelevant frames through branch search and branch cropping techniques. In addition, several self-supervised tasks are proposed on top of the information tree to improve representation learning under insufficient labeling. Experiments on the benchmark dataset demonstrate the effectiveness of our model.
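
The abstract only sketches the information-tree idea at a high level. Below is a minimal, illustrative Python sketch of one plausible reading of that idea: build a tree over per-frame features, score each branch against the query ("branch search"), and prune low-scoring subtrees ("branch cropping") so only query-relevant frames remain for grounding. The binary split, cosine scoring, threshold, and all function names are assumptions made for illustration, not the authors' actual IT-OS implementation.

import numpy as np

def cosine(a, b):
    # Cosine similarity between two feature vectors.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

class Node:
    def __init__(self, lo, hi, feat, children=()):
        self.lo, self.hi = lo, hi      # frame-index span [lo, hi)
        self.feat = feat               # pooled feature of the span
        self.children = list(children)

def build_tree(frame_feats, lo=0, hi=None):
    # Recursively split the frame sequence into a binary "information tree";
    # each node keeps the mean feature of the frames it covers.
    hi = len(frame_feats) if hi is None else hi
    feat = frame_feats[lo:hi].mean(axis=0)
    if hi - lo == 1:
        return Node(lo, hi, feat)
    mid = (lo + hi) // 2
    return Node(lo, hi, feat,
                (build_tree(frame_feats, lo, mid),
                 build_tree(frame_feats, mid, hi)))

def crop_branches(node, query_feat, threshold=0.2):
    # Branch search + branch cropping: discard any subtree whose pooled
    # feature is insufficiently similar to the query; return the frame
    # indices that survive.
    if cosine(node.feat, query_feat) < threshold:
        return []                      # crop the whole branch
    if not node.children:
        return list(range(node.lo, node.hi))
    kept = []
    for child in node.children:
        kept += crop_branches(child, query_feat, threshold)
    return kept

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frames = rng.normal(size=(16, 64))              # toy per-frame features
    query = frames[3] + 0.1 * rng.normal(size=64)   # query resembling frame 3
    tree = build_tree(frames)
    print("frames kept for grounding:", crop_branches(tree, query))

In this toy setup, frames whose subtree features do not resemble the query embedding are dropped before any grounding is attempted, which mirrors the abstract's claim that the information tree "eliminates the interference of irrelevant frames."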

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2203.08013
Document Type :
Working Paper