
Fairness And Performance In Harmony: Data Debiasing Is All You Need

Authors:
Liu, Junhua
Hui, Wendy Wan Yee
Lee, Roy Ka-Wei
Lim, Kwan Hui
Publication Year:
2024

Abstract

Fairness in both machine learning (ML) predictions and human decisions is critical, with ML models prone to algorithmic and data bias, and human decisions affected by subjectivity and cognitive bias. This study investigates fairness using a real-world university admission dataset with 870 profiles, leveraging three ML models, namely XGB, Bi-LSTM, and KNN. Textual features are encoded with BERT embeddings. For individual fairness, we assess decision consistency among experts with varied backgrounds and ML models, using a consistency score. Results show ML models outperform humans in fairness by 14.08% to 18.79%. For group fairness, we propose a gender-debiasing pipeline and demonstrate its efficacy in removing gender-specific language without compromising prediction performance. Post-debiasing, all models maintain or improve their classification accuracy, validating the hypothesis that fairness and performance can coexist. Our findings highlight ML's potential to enhance fairness in admissions while maintaining high accuracy, advocating a hybrid approach combining human judgement and ML models.
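The abstract does not specify how the gender-debiasing pipeline removes gender-specific language before the profiles are encoded with BERT embeddings. A minimal sketch of one common approach, word-list-based neutralization of gendered terms, is shown below; the term list and the `debias_text` function are illustrative assumptions, not the authors' actual pipeline.

```python
import re

# Hypothetical term list; the paper's actual debiasing vocabulary
# is not given in this abstract.
GENDERED_TERMS = {
    "he": "they", "she": "they",
    "him": "them",
    "his": "their", "her": "their",
    "himself": "themself", "herself": "themself",
}

def debias_text(text: str) -> str:
    """Replace gender-specific tokens with neutral equivalents,
    leaving the rest of the profile text unchanged."""
    def swap(match: re.Match) -> str:
        word = match.group(0)
        repl = GENDERED_TERMS[word.lower()]
        # Preserve capitalization of the original token.
        return repl.capitalize() if word[0].isupper() else repl

    pattern = re.compile(
        r"\b(" + "|".join(GENDERED_TERMS) + r")\b", re.IGNORECASE
    )
    return pattern.sub(swap, text)
```

Applied before embedding, such a step removes explicit gender markers from each profile, e.g. `debias_text("She completed her thesis")` yields `"They completed their thesis"`.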

Details

Database:
arXiv
Publication Type:
Report
Accession Number:
edsarx.2411.17374
Document Type:
Working Paper