
A Survey of Object Co-Segmentation

Authors :
Zhoumin Lu
Genggeng Liu
Haiping Xu
Source :
IEEE Access, Vol 7, Pp 62875-62893 (2019)
Publication Year :
2019
Publisher :
Institute of Electrical and Electronics Engineers (IEEE), 2019.

Abstract

It is widely acknowledged that object segmentation is a significant research area in computer vision and a key step in many other visual tasks. Traditional unsupervised single-image segmentation often produces poor results, while current supervised single-image segmentation relies on large amounts of annotated data and lengthy model training. Researchers therefore attempted to segment the common regions of multiple images simultaneously. On the one hand, this does not require pre-training on a large amount of labeled data; on the other hand, it exploits consistency constraints between images to better capture object information. This idea yields better performance than traditional single-image approaches and has given rise to many object co-segmentation methods. This paper reviews classic and effective object co-segmentation methods, including saliency-based, joint-processing-based, graph-based, and other approaches. For each category, we select two or three representative models to elaborate on, such as a model based on random walks. Moreover, to exhibit and evaluate these methods objectively and comprehensively, we not only summarize them as flowcharts and algorithm summaries, but also compare their performance using visualizations and evaluation metrics such as intersection-over-union, consistency error, and precision-recall rate. From the experiments, we also attempt to clarify and analyze the remaining open problems. Finally, we point out challenges and future directions, opening new avenues for researchers in the field.
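The abstract names intersection-over-union and precision-recall as evaluation metrics. As a minimal sketch (not taken from the paper; mask shapes and function names are illustrative assumptions), these metrics can be computed for binary segmentation masks as follows:

```python
# Illustrative sketch: standard segmentation metrics on binary masks.
# Not the paper's implementation; names and layout are assumptions.
import numpy as np

def iou(pred, gt):
    """Intersection-over-union (Jaccard index) of two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    union = np.logical_or(pred, gt).sum()
    return np.logical_and(pred, gt).sum() / union if union else 1.0

def precision_recall(pred, gt):
    """Precision and recall of a predicted mask against ground truth."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()  # true-positive pixel count
    precision = tp / pred.sum() if pred.sum() else 1.0
    recall = tp / gt.sum() if gt.sum() else 1.0
    return precision, recall

# Example: 4x4 masks whose object regions overlap by 2 of 6 pixels
pred = np.array([[0, 1, 1, 0], [0, 1, 1, 0], [0, 0, 0, 0], [0, 0, 0, 0]])
gt   = np.array([[0, 0, 1, 1], [0, 0, 1, 1], [0, 0, 0, 0], [0, 0, 0, 0]])
print(iou(pred, gt))               # 2 / 6 ≈ 0.333
print(precision_recall(pred, gt))  # (0.5, 0.5)
```

Higher IoU means the predicted and ground-truth object regions agree more closely; precision and recall separate over-segmentation from under-segmentation errors.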

Details

ISSN :
2169-3536
Volume :
7
Database :
OpenAIRE
Journal :
IEEE Access
Accession number :
edsair.doi.dedup.....8530949305b76df2b6803640077c92e6