
Disambiguation-Free Partial Label Learning.

Authors :
Zhang, Min-Ling
Yu, Fei
Tang, Cai-Zhi
Source :
IEEE Transactions on Knowledge & Data Engineering. Oct 2017, Vol. 29, Issue 10, p. 2155-2167. 13 pages.
Publication Year :
2017

Abstract

In partial label learning, each training example is associated with a set of candidate labels, among which only one is the ground-truth label. The common strategy for inducing a predictive model is to disambiguate the candidate label set, i.e., to differentiate the modeling outputs of the individual candidate labels. Specifically, disambiguation by differentiation can be conducted either by identifying the ground-truth label iteratively or by treating each candidate label equally. Nonetheless, the disambiguation strategy is prone to being misled by false positive labels co-occurring with the ground-truth label. In this paper, a new partial label learning strategy is studied which refrains from conducting disambiguation. Specifically, by adapting error-correcting output codes (ECOC), a simple yet effective approach named PL-ECOC is proposed which utilizes the candidate label set as an entirety. During the training phase, to build the binary classifier w.r.t. each column coding, a partially labeled example is regarded as a positive or negative training example only if its candidate label set entirely falls into the corresponding side of the coding dichotomy. During the testing phase, the class label for an unseen instance is determined via loss-based decoding, which considers the binary classifiers' empirical performance and predictive margin. Extensive experiments show that PL-ECOC performs favorably against state-of-the-art partial label learning approaches. [ABSTRACT FROM PUBLISHER]
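The coding/decoding idea described in the abstract can be illustrated with a minimal sketch. The snippet below is an assumption-laden illustration, not the authors' reference implementation: the random coding-matrix construction, the choice of LinearSVC as the binary learner, and the plain exponential-loss decoding are placeholders, whereas PL-ECOC's actual decoding additionally weights each binary classifier by its empirical performance.

```python
# Illustrative sketch of an ECOC-style partial label learner, assuming a random
# coding matrix, LinearSVC base classifiers, and simple exponential-loss decoding.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

def fit_pl_ecoc(X, candidate_sets, n_classes, n_columns=20):
    """X: (n, d) feature matrix; candidate_sets: list of sets of candidate labels."""
    coding = rng.choice([-1, 1], size=(n_classes, n_columns))  # class codewords
    classifiers = []
    for j in range(n_columns):
        pos = set(map(int, np.flatnonzero(coding[:, j] == 1)))
        neg = set(map(int, np.flatnonzero(coding[:, j] == -1)))
        Xj, yj = [], []
        for xi, S in zip(X, candidate_sets):
            # use an example only if its whole candidate set falls on one side
            if S <= pos:
                Xj.append(xi); yj.append(1)
            elif S <= neg:
                Xj.append(xi); yj.append(-1)
        # train the column classifier only if both dichotomy sides are populated
        clf = LinearSVC().fit(np.array(Xj), np.array(yj)) if len(set(yj)) == 2 else None
        classifiers.append(clf)
    return coding, classifiers

def predict_pl_ecoc(X, coding, classifiers):
    n_classes, n_columns = coding.shape
    margins = np.zeros((len(X), n_columns))
    for j, clf in enumerate(classifiers):
        if clf is not None:
            margins[:, j] = clf.decision_function(X)
    # loss-based decoding: pick the class whose codeword incurs the least
    # exponential loss w.r.t. the predictive margins
    losses = np.exp(-margins[:, None, :] * coding[None, :, :]).sum(axis=2)
    return losses.argmin(axis=1)
```

Columns whose dichotomy swallows too few candidate sets are simply skipped here; the paper's transfer-condition for building binary training sets (the candidate set falling entirely inside one side) is what allows learning without any per-label disambiguation.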

Details

Language :
English
ISSN :
1041-4347
Volume :
29
Issue :
10
Database :
Academic Search Index
Journal :
IEEE Transactions on Knowledge & Data Engineering
Publication Type :
Academic Journal
Accession number :
125187478
Full Text :
https://doi.org/10.1109/TKDE.2017.2721942