
Position-aware Location Regression Network for Temporal Video Grounding

Authors :
Kim, Sunoh
Yun, Kimin
Choi, Jin Young
Publication Year :
2022

Abstract

The key to successful grounding for video surveillance is to understand a semantic phrase corresponding to important actors and objects. Conventional methods either ignore comprehensive contexts for the phrase or require heavy computation for multiple phrases. To understand comprehensive contexts with only one semantic phrase, we propose the Position-aware Location Regression Network (PLRN), which exploits position-aware features of a query and a video. Specifically, PLRN first encodes both the video and the query using positional information of words and video segments. Then, a semantic phrase feature is extracted from the encoded query with attention. The semantic phrase feature and the encoded video are merged into a context-aware feature that reflects local and global contexts. Finally, PLRN predicts the start, end, center, and width values of a grounding boundary. Our experiments show that PLRN achieves competitive performance compared with existing methods while using less computation time and memory.

Comment: Accepted in AVSS 2021
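
The sketch below is not the authors' code; it only illustrates, under assumed tensor shapes and layer choices (names such as PLRNSketch, the learned segment position embedding, the BiGRU context layer, and the hidden size dim are hypothetical), the kind of computation the abstract describes: attention-pooling query words into one semantic phrase feature, fusing it with position-aware video segment features, and regressing the four boundary values.

```python
# Minimal, assumption-based sketch of the pipeline described in the abstract.
# All layer choices and hyperparameters here are illustrative, not the paper's.
import torch
import torch.nn as nn


class PLRNSketch(nn.Module):
    def __init__(self, dim=256, max_len=128):
        super().__init__()
        # Learned positional embeddings for video segments (assumed design).
        self.seg_pos = nn.Embedding(max_len, dim)
        # Attention scores over query words -> one semantic phrase feature.
        self.phrase_attn = nn.Linear(dim, 1)
        # Fuse the phrase feature with each video segment feature.
        self.fuse = nn.Linear(2 * dim, dim)
        # Local/global context modeling; a BiGRU stands in here as an assumption.
        self.context = nn.GRU(dim, dim, batch_first=True, bidirectional=True)
        # Regression head: start, end, center, width (normalized to [0, 1]).
        self.head = nn.Linear(2 * dim, 4)

    def forward(self, video_feats, query_feats):
        # video_feats: (B, T, dim) segment features; query_feats: (B, L, dim) word features.
        B, T, _ = video_feats.shape
        pos = self.seg_pos(torch.arange(T, device=video_feats.device))
        video_feats = video_feats + pos  # position-aware video encoding

        # Semantic phrase feature via attention over query words.
        attn = torch.softmax(self.phrase_attn(query_feats), dim=1)   # (B, L, 1)
        phrase = (attn * query_feats).sum(dim=1)                     # (B, dim)

        # Merge the phrase feature with every segment, then add temporal context.
        fused = self.fuse(torch.cat(
            [video_feats, phrase.unsqueeze(1).expand(-1, T, -1)], dim=-1))
        ctx, _ = self.context(fused)                                 # (B, T, 2*dim)

        # Pool over time and regress the four boundary values.
        return torch.sigmoid(self.head(ctx.mean(dim=1)))             # (B, 4)


if __name__ == "__main__":
    model = PLRNSketch()
    out = model(torch.randn(2, 64, 256), torch.randn(2, 12, 256))
    print(out.shape)  # torch.Size([2, 4])
```

In this reading, predicting start/end alongside center/width gives two complementary parameterizations of the same boundary; how the paper combines or supervises them is not specified in the abstract.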

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2204.05499
Document Type :
Working Paper
Full Text :
https://doi.org/10.1109/AVSS52988.2021.9663815