
EVOQUER: Enhancing Temporal Grounding with Video-Pivoted BackQuery Generation

Authors:
Gao, Yanjun
Liu, Lulu
Wang, Jason
Chen, Xin
Wang, Huayan
Zhang, Rui
Publication Year:
2021

Abstract

Temporal grounding aims to predict the time interval of a video clip that corresponds to a natural language query. In this work, we present EVOQUER, a temporal grounding framework that combines an existing text-to-video grounding model with a video-assisted query generation network. Given a query and an untrimmed video, the temporal grounding model predicts the target interval, and the predicted video clip is then fed into a video translation task that generates a simplified version of the input query. EVOQUER forms a closed loop by combining the loss functions from both temporal grounding and query generation, with the latter serving as feedback. Our experiments on two widely used datasets, Charades-STA and ActivityNet, show that EVOQUER achieves promising improvements of 1.05 and 1.31 at R@0.7. We also discuss how the query generation task can facilitate error analysis by explaining the behavior of the temporal grounding model.

Comment: Accepted by the Visually Grounded Interaction and Language (ViGIL) Workshop at NAACL 2021
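To make the closed-loop objective concrete, here is a minimal PyTorch sketch of the training step the abstract describes: a grounding loss and a query-generation (back-translation) loss are summed, so gradients from query generation flow back to the grounding model as feedback. All module and function names (GroundingModel, QueryGenerator, evoquer_step), the soft interval mask, and the loss choices are hypothetical stand-ins for this sketch, not the authors' published implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GroundingModel(nn.Module):
    """Hypothetical stand-in for a text-to-video temporal grounding
    backbone: maps (video features, query embedding) to a predicted
    (start, end) interval, normalized to [0, 1]."""
    def __init__(self, feat_dim=256):
        super().__init__()
        self.head = nn.Linear(feat_dim * 2, 2)

    def forward(self, video_feats, query_emb):
        pooled = video_feats.mean(dim=1)                # (B, D)
        fused = torch.cat([pooled, query_emb], dim=-1)  # (B, 2D)
        return torch.sigmoid(self.head(fused))          # (B, 2)

class QueryGenerator(nn.Module):
    """Hypothetical decoder that re-generates a simplified query
    from the frames inside the predicted interval."""
    def __init__(self, feat_dim=256, vocab_size=10000):
        super().__init__()
        self.rnn = nn.GRU(feat_dim, feat_dim, batch_first=True)
        self.out = nn.Linear(feat_dim, vocab_size)

    def forward(self, clip_feats):
        hidden, _ = self.rnn(clip_feats)
        return self.out(hidden)                         # (B, T, V)

def evoquer_step(grounding, generator, video_feats, query_emb,
                 gt_interval, query_tokens, alpha=1.0, sharpness=50.0):
    """One closed-loop step: the query-generation loss is added to the
    grounding loss so its gradient acts as feedback on grounding."""
    pred = grounding(video_feats, query_emb)            # (B, 2)
    loss_ground = F.l1_loss(pred, gt_interval)

    # Soft, differentiable frame mask over the predicted interval
    # (an assumption for this sketch; the paper's exact clip-selection
    # mechanism may differ).
    T = video_feats.size(1)
    t = torch.linspace(0, 1, T, device=video_feats.device)   # (T,)
    mask = (torch.sigmoid(sharpness * (t - pred[:, :1]))
            * torch.sigmoid(sharpness * (pred[:, 1:] - t)))  # (B, T)
    clip_feats = video_feats * mask.unsqueeze(-1)

    # For brevity, align decoder steps with the first L frames; a real
    # decoder would be autoregressive with teacher forcing.
    L = query_tokens.size(1)
    logits = generator(clip_feats)[:, :L]
    loss_gen = F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                               query_tokens.reshape(-1))

    return loss_ground + alpha * loss_gen
```

Summing the two losses is the simplest way to realize the "feedback" the abstract mentions; a weighting term (alpha here) lets the query-generation signal be scaled against the grounding objective.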

Details

Database:
arXiv
Publication Type:
Report
Accession Number:
edsarx.2109.04600
Document Type:
Working Paper