
Learning a Combined Model of Visual Saliency for Fixation Prediction.

Authors :
Wang, Jingwei
Borji, Ali
Jay Kuo, C.-C.
Itti, Laurent
Source :
IEEE Transactions on Image Processing. Apr 2016, Vol. 25 Issue 4, p1566-1579. 14p.
Publication Year :
2016

Abstract

A large number of saliency models, each based on a different hypothesis, have been proposed over the past 20 years. In practice, while subscribing to a single hypothesis or computational principle can make a model perform well on some types of images, it hinders the model's general performance on arbitrary images and large-scale data sets. One natural approach to improving overall saliency detection accuracy would then be fusing different types of models. In this paper, inspired by the success of late-fusion strategies in semantic analysis and multi-modal biometrics, we propose to fuse state-of-the-art saliency models at the score level in a para-boosting learning fashion. First, saliency maps generated by several models are used as confidence scores. Then, these scores are fed into our para-boosting learner (i.e., support vector machine, adaptive boosting, or probability density estimator) to generate the final saliency map. To explore the strength of para-boosting learners, traditional transformation-based fusion strategies, such as Sum, Min, and Max, are also explored and compared in this paper. To further reduce the computational cost of fusing many models, only a few of them are considered in the next step. Experimental results show that score-level fusion outperforms each individual model and can further reduce the performance gap between the current models and the human inter-observer model. [ABSTRACT FROM PUBLISHER]
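To make the transformation-based baseline concrete, below is a minimal sketch of score-level fusion with Sum, Min, and Max rules, where each model's saliency map is treated as a per-pixel confidence score. The function name, the min-max normalization, and the random stand-in maps are illustrative assumptions, not the authors' implementation; the paper's learned (para-boosting) variant would instead feed the stacked per-pixel score vectors to a classifier such as an SVM or AdaBoost.

```python
import numpy as np

def fuse_saliency_maps(maps, strategy="sum"):
    """Score-level fusion of saliency maps (illustrative sketch).

    maps: list of 2-D arrays of identical shape, one per saliency model.
    strategy: "sum", "min", or "max", following the transformation-based
    fusion rules named in the abstract.
    """
    # Normalize each map to [0, 1] so scores from different models are comparable
    # (the exact normalization used in the paper may differ).
    stack = np.stack([(m - m.min()) / (m.max() - m.min() + 1e-12) for m in maps])
    if strategy == "sum":
        fused = stack.sum(axis=0)
    elif strategy == "min":
        fused = stack.min(axis=0)
    elif strategy == "max":
        fused = stack.max(axis=0)
    else:
        raise ValueError(f"unknown fusion strategy: {strategy}")
    # Rescale the fused map back to [0, 1].
    return (fused - fused.min()) / (fused.max() - fused.min() + 1e-12)

# Usage: fuse three hypothetical model outputs for one image.
h, w = 48, 64
maps = [np.random.rand(h, w) for _ in range(3)]  # stand-ins for model saliency maps
fused_sum = fuse_saliency_maps(maps, "sum")
fused_max = fuse_saliency_maps(maps, "max")
```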

Details

Language :
English
ISSN :
1057-7149
Volume :
25
Issue :
4
Database :
Academic Search Index
Journal :
IEEE Transactions on Image Processing
Publication Type :
Academic Journal
Accession number :
113293488
Full Text :
https://doi.org/10.1109/TIP.2016.2522380