Positive-Unlabeled Learning in the Face of Labeling Bias
- Author
- Dennis Shasha, Noah Youngs, and Richard Bonneau
- Subjects
- Semi-supervised learning, Machine learning, Pattern recognition, Supervised learning, Unsupervised learning, Active learning (machine learning), Online machine learning, Multi-task learning, Competitive learning, Instance-based learning, Learning classifier system, Stability (learning theory), Generalization error, Computational learning theory, Support vector machine, Ensemble learning, Artificial intelligence, Computer science
- Abstract
Positive-Unlabeled (PU) learning scenarios are a class of semi-supervised learning problems in which only a fraction of the data is labeled, and all available labels are positive. The goal is to assign correct (positive and negative) labels to as much data as possible. Several important learning problems fall into the PU-learning domain, as in many cases the cost and feasibility of obtaining negative examples are prohibitive. In addition to the positive-negative disparity, the overall cost of labeling these datasets typically leads to situations where unlabeled examples greatly outnumber labeled ones. Accordingly, we perform several experiments, on both synthetic and real-world datasets, examining the performance of state-of-the-art PU-learning algorithms when there is significant bias in the labeling process. We propose novel PU algorithms and demonstrate that they outperform the current state of the art on a variety of benchmarks. Lastly, we present a methodology for removing the costly parameter-tuning step in a popular PU algorithm.
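To make the PU setting concrete, the following is a minimal, hypothetical sketch of one classic PU-learning recipe (the Elkan & Noto rescaling trick), not the algorithms proposed in this paper: fit an ordinary classifier to distinguish labeled positives (s=1) from unlabeled points (s=0), estimate the labeling propensity c = P(s=1 | y=1) as the mean score over labeled positives, and divide scores by c to recover P(y=1 | x). The synthetic 1-D data, learning rate, and 30% labeling rate are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 1-D data: true positives centered at +2, negatives at -2.
n = 2000
y = rng.integers(0, 2, n)                                  # hidden true labels
x = rng.normal(loc=np.where(y == 1, 2.0, -2.0), scale=1.0, size=n)

# PU setting: only ~30% of true positives carry a label (s=1);
# everything else, positive or negative, is unlabeled (s=0).
s = np.where((y == 1) & (rng.random(n) < 0.3), 1, 0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Tiny logistic regression trained on s (labeled vs. unlabeled), not on y.
w, b = 0.0, 0.0
for _ in range(2000):
    p = sigmoid(w * x + b)
    w -= 0.1 * np.mean((p - s) * x)
    b -= 0.1 * np.mean(p - s)

# Elkan-Noto step: estimate c = P(s=1 | y=1) from labeled positives,
# then rescale the "traditional" scores into P(y=1 | x) = P(s=1 | x) / c.
c = sigmoid(w * x[s == 1] + b).mean()
p_y = np.clip(sigmoid(w * x + b) / c, 0.0, 1.0)

pred = (p_y >= 0.5).astype(int)
accuracy = (pred == y).mean()
print(f"estimated c = {c:.2f}, accuracy vs. hidden labels = {accuracy:.2f}")
```

Note that this recipe assumes labels are "selected completely at random" among positives; the labeling bias studied in the abstract is precisely the regime where that assumption fails.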
- Published
- 2015