
iFlipper: Label Flipping for Individual Fairness

Authors:
Zhang, Hantian
Tae, Ki Hyun
Park, Jaeyoung
Chu, Xu
Whang, Steven Euijong
Publication Year:
2022

Abstract

As machine learning becomes prevalent, mitigating any unfairness present in the training data becomes critical. Among the various notions of fairness, this paper focuses on the well-known individual fairness, which states that similar individuals should be treated similarly. While individual fairness can be improved when training a model (in-processing), we contend that fixing the data before model training (pre-processing) is a more fundamental solution. In particular, we show that label flipping is an effective pre-processing technique for improving individual fairness. Our system iFlipper solves the optimization problem of minimally flipping labels given a limit on the individual fairness violations, where a violation occurs when two similar examples in the training data have different labels. We first prove that the problem is NP-hard. We then propose an approximate linear programming algorithm and provide theoretical guarantees on how close its result is to the optimal solution in terms of the number of label flips. We also propose techniques for making the linear programming solution more optimal without exceeding the violations limit. Experiments on real datasets show that iFlipper significantly outperforms other pre-processing baselines in terms of individual fairness and accuracy on unseen test sets. In addition, iFlipper can be combined with in-processing techniques for even better results.

Comment: 20 pages, 19 figures, 8 tables
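To make the abstract's optimization problem concrete, below is a minimal sketch (not the authors' implementation) of an LP relaxation of label flipping: binary labels are relaxed to [0, 1], the objective counts how far the new labels move from the originals, and the total disagreement across similar pairs is capped by a violation budget m. The function name flip_labels_lp, the toy edge list, the budget m, and the 0.5 rounding threshold are illustrative assumptions; the paper proposes more careful techniques for turning the LP solution into integer labels.

```python
# Sketch of the LP relaxation described in the abstract, using scipy's linprog.
import numpy as np
from scipy.optimize import linprog

def flip_labels_lp(y, edges, m):
    """y: binary labels (n,); edges: list of (i, j) similar pairs; m: violation budget."""
    n, E = len(y), len(edges)
    # Variable layout: [x_0..x_{n-1}, d_0..d_{n-1}, e_0..e_{E-1}]
    # x_i: relaxed label in [0, 1]; d_i >= |x_i - y_i| (flip amount);
    # e_k >= |x_i - x_j| (violation on similar pair k).
    c = np.concatenate([np.zeros(n), np.ones(n), np.zeros(E)])  # minimize total flip amount

    rows, b = [], []
    def row():
        return np.zeros(n + n + E)

    for i in range(n):
        r = row(); r[i] = 1;  r[n + i] = -1; rows.append(r); b.append(y[i])    #  x_i - d_i <=  y_i
        r = row(); r[i] = -1; r[n + i] = -1; rows.append(r); b.append(-y[i])   # -x_i - d_i <= -y_i
    for k, (i, j) in enumerate(edges):
        r = row(); r[i] = 1;  r[j] = -1; r[2 * n + k] = -1; rows.append(r); b.append(0)
        r = row(); r[i] = -1; r[j] = 1;  r[2 * n + k] = -1; rows.append(r); b.append(0)
    r = row(); r[2 * n:] = 1; rows.append(r); b.append(m)  # sum of violations <= m

    bounds = [(0, 1)] * n + [(0, None)] * (n + E)
    res = linprog(c, A_ub=np.vstack(rows), b_ub=np.array(b), bounds=bounds, method="highs")
    x = res.x[:n]
    return (x > 0.5).astype(int)  # naive rounding; a stand-in for the paper's rounding scheme

# Toy usage: a chain of three similar pairs, allowing at most one violation.
y = np.array([0, 1, 0, 1])
edges = [(0, 1), (1, 2), (2, 3)]
print(flip_labels_lp(y, edges, m=1))
```

In this toy example the original labels violate all three similar pairs; flipping the single label at index 1 brings the count down to one violation, which is the kind of minimal-flip solution the LP is searching for.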

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2209.07047
Document Type:
Working Paper