
GroupMixNorm Layer for Learning Fair Models

Authors :
Pandey, Anubha
Rai, Aditi
Singh, Maneet
Bhatt, Deepak
Bhowmik, Tanmoy
Publication Year :
2023

Abstract

Recent research has identified discriminatory behavior of automated prediction algorithms towards groups defined by specific protected attributes (e.g., gender, ethnicity, or age group). When deployed in real-world scenarios, such techniques may produce biased predictions and, consequently, unfair outcomes. Recent literature has addressed such biased behavior mostly by adding convex surrogates of fairness metrics, such as demographic parity or equalized odds, to the loss function; these surrogates are often difficult to estimate. This research proposes a novel in-processing GroupMixNorm layer for mitigating bias in deep learning models. The GroupMixNorm layer probabilistically mixes group-level feature statistics of samples across the groups defined by the protected attribute. The proposed method improves several fairness metrics with minimal impact on overall accuracy. Analysis on benchmark tabular and image datasets demonstrates the efficacy of the proposed method in achieving state-of-the-art performance. Further, the experiments also suggest that the GroupMixNorm layer is robust to protected attributes unseen during training and can eliminate bias from a pre-trained network.

Comment: 12 pages, 6 figures, Pacific-Asia Conference on Knowledge Discovery and Data Mining (PAKDD) 2023
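The sketch below illustrates one plausible reading of the mechanism described in the abstract: compute per-group feature statistics and normalize each sample with a probabilistic mix of statistics from different protected groups. The class name GroupMixNormSketch, the Beta-distributed mixing coefficient, and the interface are assumptions for illustration, not the authors' reference implementation.

```python
# Minimal sketch, assuming a mixup-style convex combination of per-group
# normalization statistics. All design choices here (Beta sampling, random
# target group per sample, inference-time fallback) are assumptions.
import torch
import torch.nn as nn


class GroupMixNormSketch(nn.Module):
    def __init__(self, num_features: int, alpha: float = 0.2, eps: float = 1e-5):
        super().__init__()
        self.alpha = alpha  # Beta(alpha, alpha) controls mixing strength (assumed)
        self.eps = eps
        # Learnable affine parameters, as in standard normalization layers
        self.gamma = nn.Parameter(torch.ones(num_features))
        self.beta = nn.Parameter(torch.zeros(num_features))

    def forward(self, x: torch.Tensor, group: torch.Tensor) -> torch.Tensor:
        """x: (batch, num_features); group: (batch,) protected-attribute labels."""
        if not self.training:
            # At inference, fall back to plain batch statistics (assumption)
            mu, var = x.mean(0), x.var(0, unbiased=False)
            return self.gamma * (x - mu) / torch.sqrt(var + self.eps) + self.beta

        groups = group.unique()
        # Per-group mean and variance of the features
        stats = {}
        for g in groups:
            xg = x[group == g]
            stats[int(g)] = (xg.mean(0), xg.var(0, unbiased=False))

        # Sample a mixing coefficient and a random target group per sample, then
        # normalize each sample with a convex mix of its own group's statistics
        # and the target group's statistics.
        lam = torch.distributions.Beta(self.alpha, self.alpha).sample().to(x.device)
        target = groups[torch.randint(len(groups), (x.size(0),), device=x.device)]

        out = torch.empty_like(x)
        for i in range(x.size(0)):
            mu_a, var_a = stats[int(group[i])]
            mu_b, var_b = stats[int(target[i])]
            mu = lam * mu_a + (1 - lam) * mu_b
            var = lam * var_a + (1 - lam) * var_b
            out[i] = (x[i] - mu) / torch.sqrt(var + self.eps)
        return self.gamma * out + self.beta
```

The layer would be dropped into a network in place of a standard normalization layer, with the protected-attribute labels passed alongside the features during training only; this in-processing placement is what the abstract describes, while the exact mixing distribution is left unspecified there.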

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2312.11969
Document Type :
Working Paper
Full Text :
https://doi.org/10.1007/978-3-031-33374-3_41