RESMatch: Referring Expression Segmentation in a Semi-Supervised Manner

Authors:
Zang, Ying
Fu, Chenglong
Cao, Runlong
Zhu, Didi
Zhang, Min
Hu, Wenjun
Zhu, Lanyun
Chen, Tianrun
Publication Year: 2024

Abstract

Referring expression segmentation (RES), the task of localizing specific instance-level objects from free-form linguistic descriptions, has emerged as a crucial frontier in human-AI interaction. It demands an intricate understanding of both visual and textual contexts and typically requires extensive training data. This paper introduces RESMatch, the first semi-supervised learning (SSL) approach for RES, aimed at reducing reliance on exhaustive data annotation. Although existing SSL techniques are effective in image segmentation, we find that they fall short in RES. To address challenges including the comprehension of free-form linguistic descriptions and the variability in object attributes, RESMatch introduces a trifecta of adaptations: revised strong perturbation, text augmentation, and adjustments for pseudo-label quality and strong-weak supervision. Extensive validation on multiple RES datasets demonstrates that RESMatch significantly outperforms baseline approaches, establishing a new state-of-the-art. This pioneering work lays the groundwork for future research in semi-supervised learning for referring expression segmentation.
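
The trifecta described in the abstract maps naturally onto a FixMatch-style strong-weak consistency loop. The Python sketch below is a hypothetical illustration of that general recipe under assumed details, not the authors' released code: the model signature, weak_aug and strong_aug functions, the synonym table, and the 0.9 confidence threshold are all assumptions, and the synonym swap is a toy stand-in for the paper's text augmentation.

import random
import torch
import torch.nn.functional as F

CONF_THRESH = 0.9  # assumed pseudo-label confidence threshold, not from the paper

SYNONYMS = {"picture": "photo", "man": "person"}  # toy synonym table

def augment_text(expression):
    # Toy text augmentation: randomly replace words with synonyms so the
    # student sees a perturbed referring expression.
    words = [SYNONYMS.get(w, w) if random.random() < 0.3 else w
             for w in expression.split()]
    return " ".join(words)

def ssl_step(model, labeled_batch, unlabeled_batch, weak_aug, strong_aug):
    # Supervised loss on the labeled batch (weakly augmented image).
    img_l, expr_l, mask_l = labeled_batch
    sup_logits = model(weak_aug(img_l), expr_l)
    sup_loss = F.binary_cross_entropy_with_logits(sup_logits, mask_l)

    # Teacher pass: weakly augmented image, original expression.
    img_u, expr_u = unlabeled_batch
    with torch.no_grad():
        probs = torch.sigmoid(model(weak_aug(img_u), expr_u))
    pseudo_mask = (probs > 0.5).float()
    confidence = torch.maximum(probs, 1.0 - probs)
    keep = (confidence > CONF_THRESH).float()  # gate out low-quality pixels

    # Student pass: strongly augmented image, augmented expression.
    strong_logits = model(strong_aug(img_u), [augment_text(e) for e in expr_u])
    per_pixel = F.binary_cross_entropy_with_logits(
        strong_logits, pseudo_mask, reduction="none")
    unsup_loss = (per_pixel * keep).sum() / keep.sum().clamp(min=1.0)

    return sup_loss + unsup_loss

In this sketch the per-pixel confidence gate plays the role of the pseudo-label quality adjustment the abstract mentions; in practice the threshold and the relative weighting of the two loss terms would be tuned per dataset.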

Details

Database: arXiv
Publication Type: Report
Accession number: edsarx.2402.05589
Document Type: Working Paper