Multimodal graph neural network for video procedural captioning.
- Author
- Ji, Lei; Tu, Rongcheng; Lin, Kevin; Wang, Lijuan; Duan, Nan
- Subjects
- INSTRUCTIONAL films; VIDEOS
- Abstract
Video procedural captioning aims to generate detailed descriptive captions for all steps in a long instructional video. What distinguishes this problem is the procedural dependency between events: captions must remain consistent across the whole video. However, existing (dense) video captioning methods consider only intra-event or sequential inter-event context and struggle to model non-sequential context dependencies between events. In this paper, inspired by the recent success of graph neural networks in capturing relations in structured data, we propose a novel Multimodal Graph Neural Network (MGNN) for dense video procedural captioning that captures the procedural structure between events. Specifically, we construct a temporal sequential graph and a semantic non-sequential graph to form a multimodal heterogeneous graph. We then apply a graph neural network to enhance the visual and text features, and fuse both for caption generation. Extensive experiments demonstrate that the proposed MGNN generates coherent captions on both the YouCook2 and ActivityNet Captions benchmarks. [ABSTRACT FROM AUTHOR]
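The abstract's pipeline (build temporal and semantic edges between events, run a GNN pass to enhance per-event visual and text features, then fuse both modalities) can be illustrated with a toy sketch. This is not the authors' code: the edge lists, feature dimensions, and the simple mean-aggregation message-passing rule are all assumptions made for illustration.

```python
# Hypothetical sketch of the MGNN idea: one round of neighbour averaging over
# a small event graph (temporal + semantic edges), then per-event fusion of
# visual and text features. All numbers and rules here are illustrative.

def aggregate(features, edges):
    """One message-passing step: average each node with its neighbours."""
    n, d = len(features), len(features[0])
    out = [list(f) for f in features]
    for node in range(n):
        # Treat edges as undirected when collecting neighbours.
        nbrs = [v for (u, v) in edges if u == node] + \
               [u for (u, v) in edges if v == node]
        if not nbrs:
            continue
        for k in range(d):
            out[node][k] = (features[node][k]
                            + sum(features[v][k] for v in nbrs)) / (len(nbrs) + 1)
    return out

def fuse(visual, text):
    """Concatenate enhanced visual and text features per event."""
    return [v + t for v, t in zip(visual, text)]

# Three events: temporal edges chain consecutive events;
# a semantic edge links the non-adjacent events 0 and 2.
temporal_edges = [(0, 1), (1, 2)]
semantic_edges = [(0, 2)]
visual = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]  # toy 2-d visual features
text = [[0.5], [0.25], [0.75]]                 # toy 1-d text features

edges = temporal_edges + semantic_edges
visual_enh = aggregate(visual, edges)
text_enh = aggregate(text, edges)
fused = fuse(visual_enh, text_enh)
print(len(fused), len(fused[0]))  # 3 events, 2+1 fused dims per event
```

In the paper's setting, the fused per-event features would then condition a caption decoder; the sketch stops at the fusion step.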
- Published
- 2022