
Eye Fixation Assisted Video Saliency Detection via Total Variation-Based Pairwise Interaction.

Authors :
Qiu, Wenliang
Gao, Xinbo
Han, Bing
Source :
IEEE Transactions on Image Processing; Oct 2018, Vol. 27, Issue 10, p4724-4739, 16p
Publication Year :
2018

Abstract

As human visual attention is naturally biased toward foreground objects in a scene, it can be used to extract salient objects in video clips. In this paper, we propose a weakly supervised video saliency detection algorithm that utilizes eye fixation information from multiple subjects. Our main idea is to extend eye fixations to salient regions step by step. First, visual seeds are collected by multiple-color-space geodesic distance-based seed region mapping applied to filtered and extended eye fixations. This operation spreads the raw fixation points to the regions most likely to be salient, namely, the visual seed regions. Second, to capture the essential scene structure of video sequences, we introduce a total variation-based pairwise interaction model that learns the potential pairwise relationships between foreground and background within a frame and across video frames. In this way, the visual seed regions eventually grow into salient regions. Compared with previous approaches, the generated saliency maps have two outstanding properties, *integrity* and *purity*, which are conducive to segmenting the foreground and significant for follow-up tasks. Extensive quantitative and qualitative experiments on various video sequences demonstrate that the proposed method outperforms state-of-the-art image and video saliency detection algorithms. [ABSTRACT FROM AUTHOR]
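The abstract's first step maps sparse eye fixations to seed regions via geodesic distances in color space. A minimal sketch of that idea, assuming a 4-connected pixel grid with Euclidean color differences as edge weights (the paper's multi-color-space mapping and fixation filtering are not detailed in the abstract, so this is an illustration, not the authors' exact method):

```python
import heapq

import numpy as np


def geodesic_distance(image, seeds):
    """Geodesic (shortest-path) distance from seed pixels on a 4-connected
    grid, where each edge costs the Euclidean color difference between the
    two neighboring pixels. Low-cost regions around fixations would serve
    as candidate visual seed regions."""
    h, w = image.shape[:2]
    dist = np.full((h, w), np.inf)
    heap = []
    for (y, x) in seeds:                     # seeds: list of (row, col) fixations
        dist[y, x] = 0.0
        heapq.heappush(heap, (0.0, y, x))
    while heap:                              # Dijkstra over the pixel grid
        d, y, x = heapq.heappop(heap)
        if d > dist[y, x]:
            continue                         # stale heap entry
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                cost = float(np.linalg.norm(image[ny, nx] - image[y, x]))
                if d + cost < dist[ny, nx]:
                    dist[ny, nx] = d + cost
                    heapq.heappush(heap, (d + cost, ny, nx))
    return dist
```

Thresholding the returned distance map (e.g. `dist < tau`) yields a seed region that hugs color-homogeneous areas around each fixation, since crossing a strong color boundary accumulates a large geodesic cost.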
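The second step uses a total variation-based pairwise term to grow seed regions while respecting scene structure. As a rough illustration of why a TV penalty helps, the sketch below smooths a saliency map by gradient descent on a data-fidelity term plus a Charbonnier-smoothed TV penalty over neighboring pixel pairs; the function name, parameters, and single-frame setting are assumptions for illustration, not the paper's cross-frame pairwise interaction model:

```python
import numpy as np


def tv_smooth(noisy, lam=0.5, eps=0.1, step=0.05, iters=500):
    """Minimize 0.5*||u - noisy||^2 + lam * sum sqrt(diff^2 + eps)
    by gradient descent, where diff runs over horizontal and vertical
    neighbor differences. The pairwise term suppresses isolated noise
    but keeps genuine boundaries, which is the intuition behind TV-based
    pairwise interactions in saliency refinement."""
    u = noisy.astype(float).copy()
    for _ in range(iters):
        dx = np.diff(u, axis=1)                 # horizontal pairwise differences
        dy = np.diff(u, axis=0)                 # vertical pairwise differences
        gx = dx / np.sqrt(dx * dx + eps)        # smoothed sign of each difference
        gy = dy / np.sqrt(dy * dy + eps)
        grad = u - noisy                        # data-fidelity gradient
        grad[:, :-1] -= lam * gx                # d/du[i,j]   of |u[i,j+1]-u[i,j]|
        grad[:, 1:] += lam * gx                 # d/du[i,j+1] of the same term
        grad[:-1, :] -= lam * gy
        grad[1:, :] += lam * gy
        u -= step * grad
    return u
```

Applied to a raw saliency map, the pairwise term flattens spurious one-pixel responses (improving what the abstract calls *purity*) while coherent regions survive largely intact (*integrity*).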

Details

Language :
English
ISSN :
10577149
Volume :
27
Issue :
10
Database :
Complementary Index
Journal :
IEEE Transactions on Image Processing
Publication Type :
Academic Journal
Accession number :
130410650
Full Text :
https://doi.org/10.1109/TIP.2018.2843680