
Quantifying and mitigating the impact of label errors on model disparity metrics

Authors :
Adebayo, Julius
Hall, Melissa
Yu, Bowen
Chern, Bobbie
Publication Year :
2023

Abstract

Errors in labels obtained via human annotation adversely affect a model's performance. Existing approaches propose ways to mitigate the effect of label error on a model's downstream accuracy, yet little is known about its impact on a model's disparity metrics. Here we study the effect of label error on a model's disparity metrics. We empirically characterize how varying levels of label error, in both training and test data, affect these disparity metrics. We find that group calibration and other metrics are sensitive to train-time and test-time label error -- particularly for minority groups. This disparate effect persists even for models trained with noise-aware algorithms. To mitigate the impact of training-time label error, we present an approach to estimate the influence of a training input's label on a model's group disparity metric. We empirically assess the proposed approach on a variety of datasets and find significant improvement, compared to alternative approaches, in identifying training inputs that improve a model's disparity metric. We complement the approach with an automatic relabel-and-finetune scheme that produces updated models with provably improved group calibration error.

Comment: Conference paper at ICLR 2023
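As a rough illustration of the idea of measuring how a single training label affects a group disparity metric, the sketch below flips one label and refits a small model, comparing a simple group calibration gap before and after. This is a brute-force stand-in under assumed definitions (synthetic data, logistic regression, a max-minus-min gap in per-group calibration error), not the influence estimator or relabel-and-finetune scheme proposed in the paper.

```python
# Minimal sketch (illustrative only, not the paper's estimator): approximate the
# effect of flipping one training label on a group calibration gap by refitting
# a logistic-regression model. Data, group definition, and metric are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

def group_calibration_gap(probs, y, groups):
    """Max-minus-min gap in per-group calibration error |mean prob - mean label|."""
    errs = [abs(probs[groups == g].mean() - y[groups == g].mean())
            for g in np.unique(groups)]
    return max(errs) - min(errs)

def label_influence_on_disparity(X, y, groups, idx):
    """Change in the group calibration gap when training label `idx` is flipped."""
    base = LogisticRegression(max_iter=1000).fit(X, y)
    base_gap = group_calibration_gap(base.predict_proba(X)[:, 1], y, groups)

    y_flip = y.copy()
    y_flip[idx] = 1 - y_flip[idx]          # flip a single training label
    refit = LogisticRegression(max_iter=1000).fit(X, y_flip)
    new_gap = group_calibration_gap(refit.predict_proba(X)[:, 1], y, groups)
    return new_gap - base_gap              # > 0: flipping this label widens the gap

# Example usage on synthetic data
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X[:, 0] + 0.5 * rng.normal(size=200) > 0).astype(int)
groups = rng.integers(0, 2, size=200)
print(label_influence_on_disparity(X, y, groups, idx=3))
```

The paper's contribution is an efficient influence estimate that avoids this kind of retraining per candidate label; the sketch only conveys the quantity being approximated.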

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2310.02533
Document Type :
Working Paper