Global and Compact Video Context Embedding for Video Semantic Segmentation
- Authors
Lei Sun, Yun Liu, Guolei Sun, Min Wu, Zhijie Xu, Kaiwei Wang, and Luc van Gool
- Subjects
Video semantic segmentation, global video context, compact video context, video context embedding, Electrical engineering. Electronics. Nuclear engineering, TK1-9971
- Abstract
Intuitively, global video context could benefit video semantic segmentation (VSS) if it is designed to simultaneously model global temporal and spatial dependencies for a holistic understanding of the semantic scenes in a video clip. However, we find that existing VSS approaches focus only on modeling local video context. This paper attempts to bridge that gap by learning global video context for VSS. Beyond being global, the video context should also be compact, given the large number of video feature tokens and the redundancy among nearby video frames. We then embed the learned global and compact video context into the features of the target video frame to improve their distinguishability. The proposed VSS method is dubbed Global and Compact Video Context Embedding (GCVCE). Owing to its compact nature, the number of global context tokens is very limited, so GCVCE is flexible and efficient for VSS. Since it may be too challenging to directly abstract a large number of video feature tokens into a small number of global context tokens, we further design a Cascaded Convolutional Downsampling (CCD) module before GCVCE to help it work better. A 1.6% mIoU improvement on the popular VSPW dataset over previous state-of-the-art methods demonstrates the effectiveness and efficiency of GCVCE and CCD for VSS. Code and models will be made publicly available.
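The abstract gives no implementation details, but its core idea — abstracting many video feature tokens into a few compact global context tokens, then injecting those tokens back into the target frame's features — can be sketched with plain cross-attention. Everything below (token counts, channel sizes, the use of learnable query tokens, the residual injection) is an illustrative assumption, not the paper's actual GCVCE architecture:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def compress_context(video_tokens, queries):
    # K learnable query tokens attend over all N video tokens,
    # abstracting them into K compact global context tokens.
    c = video_tokens.shape[1]
    attn = softmax(queries @ video_tokens.T / np.sqrt(c))  # (K, N)
    return attn @ video_tokens                             # (K, C)

def embed_context(frame_tokens, context_tokens):
    # Each target-frame token gathers information from the compact
    # context and adds it back residually.
    c = frame_tokens.shape[1]
    attn = softmax(frame_tokens @ context_tokens.T / np.sqrt(c))  # (M, K)
    return frame_tokens + attn @ context_tokens                   # (M, C)

# Toy sizes: 4 frames of 8x8 features with 16 channels -> 256 video tokens,
# compressed into just 8 global context tokens.
rng = np.random.default_rng(0)
video_tokens = rng.standard_normal((4 * 8 * 8, 16))
queries = rng.standard_normal((8, 16))      # 8 compact context tokens (assumed)
context = compress_context(video_tokens, queries)
frame = rng.standard_normal((8 * 8, 16))    # target-frame tokens
out = embed_context(frame, context)
print(context.shape, out.shape)             # (8, 16) (64, 16)
```

The point of the sketch is the cost structure: the target frame attends to only K context tokens rather than to all N video tokens, which is why keeping the context compact makes the method flexible and efficient.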
- Published
- 2024