
Cost-Sensitive Self-Training for Optimizing Non-Decomposable Metrics

Authors :
Rangwani, Harsh
Ramasubramanian, Shrinivas
Takemori, Sho
Kato, Takashi
Umeda, Yuhei
Radhakrishnan, Venkatesh Babu
Publication Year :
2023

Abstract

Self-training-based semi-supervised learning algorithms have enabled the learning of highly accurate deep neural networks using only a fraction of labeled data. However, the majority of work on self-training has focused on the objective of improving accuracy, whereas practical machine learning systems can have complex goals (e.g., maximizing the minimum recall across classes) that are non-decomposable in nature. In this work, we introduce the Cost-Sensitive Self-Training (CSST) framework, which generalizes self-training-based methods for optimizing non-decomposable metrics. We prove that our framework can better optimize the desired non-decomposable metric utilizing unlabeled data, under data distribution assumptions similar to those made for the analysis of self-training. Using the proposed CSST framework, we obtain practical self-training methods (for both vision and NLP tasks) for optimizing different non-decomposable metrics using deep neural networks. Our results demonstrate that CSST achieves an improvement over the state-of-the-art in the majority of cases across datasets and objectives.
Comment: NeurIPS 2022. Code: https://github.com/val-iisc/CostSensitiveSelfTraining
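To illustrate the general idea of cost-sensitive self-training described in the abstract, the minimal sketch below (not the authors' exact implementation) shows how pseudo-labels for unlabeled data can be assigned with class-dependent gains instead of a plain argmax. The function name, the diagonal gain matrix, and the confidence threshold are illustrative assumptions; in the paper the gains would be derived from the target non-decomposable metric (e.g., minimum recall across classes).

```python
import torch
import torch.nn.functional as F

def cost_sensitive_pseudo_labels(logits, gain_matrix, threshold=0.95):
    """Hypothetical sketch of cost-sensitive pseudo-labeling.

    Standard self-training pseudo-labels with argmax over class
    probabilities; here each class score is re-weighted by a gain
    matrix G before the argmax, so classes that matter more for the
    non-decomposable objective are favored.
    """
    probs = F.softmax(logits, dim=-1)        # (N, C) class probabilities
    weighted = probs @ gain_matrix.T         # re-score classes by their gains
    pseudo = weighted.argmax(dim=-1)         # cost-sensitive pseudo-labels
    confidence = probs.max(dim=-1).values    # confidence of raw predictions
    mask = confidence >= threshold           # keep only confident examples
    return pseudo, mask

# Example: up-weight a rare class (class 2) so that the weighted objective
# pushes its recall up, as one might when maximizing minimum recall.
logits = torch.randn(8, 3)
gain = torch.diag(torch.tensor([1.0, 1.0, 3.0]))
labels, mask = cost_sensitive_pseudo_labels(logits, gain)
```

The retained, re-labeled unlabeled examples would then be mixed into training with a correspondingly weighted (cost-sensitive) loss; see the linked repository for the authors' actual method.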

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2304.14738
Document Type :
Working Paper