
Bayes-Optimal Classifiers under Group Fairness

Authors :
Zeng, Xianli
Dobriban, Edgar
Cheng, Guang
Publication Year :
2022
Publisher :
arXiv, 2022.

Abstract

Machine learning algorithms are becoming integrated into more and more high-stakes decision-making processes, such as in social welfare issues. Due to the need to mitigate potentially disparate impacts from algorithmic predictions, many approaches have been proposed in the emerging area of fair machine learning. However, the fundamental problem of characterizing Bayes-optimal classifiers under various group fairness constraints has only been investigated in some special cases. Based on the classical Neyman-Pearson argument (Neyman and Pearson, 1933; Shao, 2003) for optimal hypothesis testing, this paper provides a unified framework for deriving Bayes-optimal classifiers under group fairness. This enables us to propose a group-based thresholding method, called FairBayes, which can directly control disparity and achieve an essentially optimal fairness-accuracy tradeoff. These advantages are supported by thorough experiments.
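
To make the group-based thresholding idea concrete, the following is a minimal sketch, not the authors' FairBayes implementation: it assumes estimated scores P(Y=1|X), a binary group attribute, and a demographic-parity-style constraint enforced by choosing a per-group threshold at a common target acceptance rate (the function names, the target rate, and the quantile-based threshold choice are illustrative assumptions, not details from the paper).

```python
import numpy as np

def group_thresholds(scores, groups, target_rate):
    """For each group, pick the score threshold whose acceptance rate
    matches `target_rate`, so predicted-positive rates are (approximately)
    equalized across groups."""
    thresholds = {}
    for g in np.unique(groups):
        s = scores[groups == g]
        # The fraction of scores above the (1 - target_rate) quantile
        # is approximately target_rate.
        thresholds[g] = np.quantile(s, 1.0 - target_rate)
    return thresholds

def predict(scores, groups, thresholds):
    """Apply the group-specific thresholds to the scores."""
    preds = np.zeros_like(scores, dtype=int)
    for g, t in thresholds.items():
        mask = groups == g
        preds[mask] = (scores[mask] > t).astype(int)
    return preds

# Toy usage with synthetic scores for two groups.
rng = np.random.default_rng(0)
scores = np.concatenate([rng.beta(2, 5, 500), rng.beta(5, 2, 500)])
groups = np.concatenate([np.zeros(500, dtype=int), np.ones(500, dtype=int)])
thr = group_thresholds(scores, groups, target_rate=0.3)
preds = predict(scores, groups, thr)
for g in (0, 1):
    print(f"group {g}: acceptance rate = {preds[groups == g].mean():.3f}")
```

In this sketch the disparity in acceptance rates is controlled directly by construction; the paper's contribution is the principled, Neyman-Pearson-based derivation of which thresholds yield the Bayes-optimal fair classifier rather than this heuristic quantile choice.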

Details

Database :
OpenAIRE
Accession number :
edsair.doi.dedup.....83fed6235871fc00c604d12fc6addbb3
Full Text :
https://doi.org/10.48550/arxiv.2202.09724