
SCAN: Learning to Classify Images without Labels

Authors :
Wouter Van Gansbeke
Simon Vandenhende
Stamatios Georgoulis
Marc Proesmans
Luc Van Gool
Source :
Computer Vision – ECCV 2020 ISBN: 9783030586065, ECCV (10)
Publication Year :
2020
Publisher :
arXiv, 2020.

Abstract

Can we automatically group images into semantically meaningful clusters when ground-truth annotations are absent? The task of unsupervised image classification remains an important and open challenge in computer vision. Several recent approaches have tried to tackle this problem in an end-to-end fashion. In this paper, we deviate from recent works and advocate a two-step approach in which feature learning and clustering are decoupled. First, a self-supervised task from representation learning is employed to obtain semantically meaningful features. Second, we use the obtained features as a prior in a learnable clustering approach. In doing so, we remove the ability for cluster learning to depend on low-level features, which is present in current end-to-end learning approaches. Experimental evaluation shows that we outperform state-of-the-art methods by large margins, in particular +26.6% on CIFAR10, +25.0% on CIFAR100-20 and +21.3% on STL10 in terms of classification accuracy. Furthermore, our method is the first to perform well on a large-scale dataset for image classification. In particular, we obtain promising results on ImageNet, and outperform several semi-supervised learning methods in the low-data regime without the use of any ground-truth annotations. The code is made publicly available at https://github.com/wvangansbeke/Unsupervised-Classification.

Comment: Accepted at ECCV 2020. Includes supplementary. Code and pretrained models at https://github.com/wvangansbeke/Unsupervised-Classification
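To make the two-step approach concrete, below is a minimal sketch in PyTorch of how self-supervised features can act as a clustering prior: nearest neighbors are mined in the learned embedding space, and a small clustering head is trained so that each sample and its mined neighbors receive the same soft cluster assignment, with an entropy term discouraging collapse onto a single cluster. All names (mine_neighbors, ClusterHead, scan_style_loss) and the entropy weight are illustrative assumptions, not the authors' released implementation; see the repository linked above for the actual code.

```
# Illustrative sketch of the decoupled "features first, clustering second" idea.
# Assumes step 1 (self-supervised pretraining) has already produced a tensor of
# per-image features; only the clustering step is shown here.
import torch
import torch.nn as nn
import torch.nn.functional as F

def mine_neighbors(features: torch.Tensor, k: int = 20) -> torch.Tensor:
    """For each sample, return the indices of its k nearest neighbors in the
    L2-normalized self-supervised embedding space (the clustering prior)."""
    features = F.normalize(features, dim=1)
    sim = features @ features.t()          # cosine similarities, shape (N, N)
    sim.fill_diagonal_(-1.0)               # exclude each sample itself
    return sim.topk(k, dim=1).indices      # shape (N, k)

class ClusterHead(nn.Module):
    """A small learnable head mapping (frozen) features to soft cluster assignments."""
    def __init__(self, feat_dim: int, num_clusters: int):
        super().__init__()
        self.linear = nn.Linear(feat_dim, num_clusters)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return F.softmax(self.linear(x), dim=1)

def scan_style_loss(p_anchor: torch.Tensor,
                    p_neighbor: torch.Tensor,
                    entropy_weight: float = 5.0) -> torch.Tensor:
    """Pull an anchor and its mined neighbor toward the same cluster (dot-product
    consistency) while keeping the average assignment spread over clusters
    (entropy regularization). entropy_weight is an assumed hyperparameter."""
    consistency = -torch.log((p_anchor * p_neighbor).sum(dim=1) + 1e-8).mean()
    mean_p = p_anchor.mean(dim=0)
    neg_entropy = (mean_p * torch.log(mean_p + 1e-8)).sum()
    return consistency + entropy_weight * neg_entropy
```

As a usage sketch, one would mine neighbors once from the pretrained features, then iterate over (anchor, neighbor) pairs, compute p_anchor and p_neighbor with the ClusterHead, and minimize scan_style_loss with a standard optimizer.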

Details

ISBN :
978-3-030-58606-5
ISBNs :
9783030586065
Database :
OpenAIRE
Journal :
Computer Vision – ECCV 2020 ISBN: 9783030586065, ECCV (10)
Accession number :
edsair.doi.dedup.....f2e6bbd37f0c4ca4a9a57064f6875512
Full Text :
https://doi.org/10.48550/arxiv.2005.12320