Bayes-Optimal Classifiers under Group Fairness
- Publication Year :
- 2022
Abstract
- Machine learning algorithms are becoming integrated into more and more high-stakes decision-making processes, such as in social welfare issues. Due to the need to mitigate the potentially disparate impacts of algorithmic predictions, many approaches have been proposed in the emerging area of fair machine learning. However, the fundamental problem of characterizing Bayes-optimal classifiers under various group fairness constraints has only been investigated in some special cases. Based on the classical Neyman-Pearson argument (Neyman and Pearson, 1933; Shao, 2003) for optimal hypothesis testing, this paper provides a unified framework for deriving Bayes-optimal classifiers under group fairness. This enables us to propose a group-based thresholding method, called FairBayes, that can directly control disparity and achieve an essentially optimal fairness-accuracy tradeoff. These advantages are supported by thorough experiments.
- Comment: This technical report has been largely superseded by our later paper, "Bayes-Optimal Fair Classification with Linear Disparity Constraints via Pre-, In-, and Post-processing" (arXiv:2402.02817). Please cite that paper instead of this technical report.
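- FairBayes itself derives group-specific thresholds from the Bayes-optimal analysis in the paper; the sketch below is only a rough illustration of the general group-based thresholding idea, not the paper's actual rule. It picks a per-group quantile threshold so that every group is predicted positive at (roughly) the same rate, which directly controls disparity in the demographic-parity sense. All function names and the target-rate parameter are hypothetical.

```python
import numpy as np

def fair_thresholds(scores, groups, target_rate):
    """Illustrative group-based thresholding (not the FairBayes rule).

    For each group, set the threshold at the (1 - target_rate) quantile
    of that group's scores, so each group's positive-prediction rate is
    approximately target_rate -- a demographic-parity-style control.
    """
    return {g: np.quantile(scores[groups == g], 1.0 - target_rate)
            for g in np.unique(groups)}

def predict(scores, groups, thresholds):
    """Apply each example's group-specific threshold."""
    return np.array([scores[i] >= thresholds[groups[i]]
                     for i in range(len(scores))], dtype=int)

# Synthetic example: two groups whose score distributions differ,
# so a single shared threshold would yield unequal positive rates.
rng = np.random.default_rng(0)
scores = np.concatenate([rng.beta(2, 5, 1000), rng.beta(5, 2, 1000)])
groups = np.array([0] * 1000 + [1] * 1000)
th = fair_thresholds(scores, groups, target_rate=0.3)
yhat = predict(scores, groups, th)
rate0 = yhat[groups == 0].mean()
rate1 = yhat[groups == 1].mean()
```

- Because each threshold is a within-group quantile, the two groups' positive rates land near the common target even though their score distributions differ; the accuracy cost of such a constraint is what the paper's fairness-accuracy tradeoff analysis characterizes.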
- Subjects :
- Statistics - Machine Learning
- Computer Science - Machine Learning
Details
- Database :
- arXiv
- Publication Type :
- Report
- Accession number :
- edsarx.2202.09724
- Document Type :
- Working Paper