
What Visual Attributes Characterize an Object Class?

Authors :
Hanqing Lu
Yong Rui
Xin-Jing Wang
Jianlong Fu
Jinqiao Wang
Source :
Computer Vision – ACCV 2014, Part I (ISBN: 9783319168647)
Publication Year :
2014

Abstract

Visual attribute-based learning has had a significant impact on many computer vision problems in recent years. Despite its usefulness, most existing work focuses only on predicting either the presence or the strength of pre-defined attributes. In this paper, we discuss how to automatically learn visual attributes that characterize an object class. Starting from images of an object class collected from the Web, we first mine visual prototypes of attributes (i.e., a clean intermediate representation for learning attributes) by clustering multi-scale salient areas of noisy Web images with Gaussian mixtures. Second, a joint optimization model is proposed to perform attribute learning together with feature selection. Because sparse approximation is adopted for feature selection during the joint optimization, the learned attributes tend to capture a representative visual property, e.g., a stripe pattern (when texture features are selected) or a yellow color (when color features are selected). Finally, to quantify the confidence of attributes and suppress noisy attributes learned from the Web, a ranking-based method is proposed to refine them. Our approach ensures that the learned visual attributes are visually recognizable and representative, in contrast to manually constructed attributes [1], which include properties that are difficult to visualize, e.g., “smelly” or “smart.” We evaluated our approach on two benchmark datasets and compared it with state-of-the-art approaches in two aspects: the quality of the learned visual attributes and their effectiveness in object categorization.
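To make the pipeline concrete, below is a minimal Python sketch of the first two steps described in the abstract, assuming scikit-learn: Gaussian-mixture clustering of multi-scale region features to obtain candidate attribute prototypes, followed by an L1-regularized fit as a simple stand-in for the paper's joint sparse feature selection. The feature dimensions, component counts, and the Lasso-based selector are illustrative assumptions, not the authors' implementation.

# Illustrative sketch only (not the authors' code): cluster multi-scale
# salient-region features with a Gaussian mixture to mine candidate attribute
# prototypes, then use an L1-regularized fit as a simple stand-in for the
# paper's joint sparse feature selection.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

# Placeholder for descriptors of salient areas cropped at multiple scales;
# in practice these would come from a saliency detector plus a feature
# extractor applied to noisy Web images of one object class.
region_features = rng.standard_normal((500, 64))

# Step 1: cluster region features; each mixture component acts as a
# candidate visual prototype of an attribute.
gmm = GaussianMixture(n_components=10, covariance_type="diag", random_state=0)
prototype_ids = gmm.fit_predict(region_features)

# Step 2: for one candidate prototype, fit a sparse model to its cluster
# membership; the nonzero coefficients mark the feature dimensions (e.g.,
# texture vs. color blocks of the descriptor) that characterize the attribute.
membership = (prototype_ids == 0).astype(float)
selector = Lasso(alpha=0.05)
selector.fit(region_features, membership)
selected_dims = np.flatnonzero(selector.coef_)
print(f"prototype 0 is characterized by {selected_dims.size} feature dimensions")

The third step, ranking-based refinement that scores attribute confidence and discards noisy attributes, is omitted here, as its scoring function depends on details given in the full paper.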

Details

Language :
English
ISBN :
978-3-319-16864-7
Database :
OpenAIRE
Journal :
Computer Vision – ACCV 2014, Part I (ISBN: 9783319168647)
Accession number :
edsair.doi.dedup.....23363906fed128268d234a570f53bb68
Full Text :
https://doi.org/10.13140/2.1.3017.2167