1. BMFT: Achieving Fairness via Bias-based Weight Masking Fine-tuning
- Author
- Xue, Yuyang; Yan, Junyu; Dutt, Raman; Haider, Fasih; Liu, Jingshuai; McDonagh, Steven; and Tsaftaris, Sotirios A.
- Subjects
- Computer Science - Machine Learning; Computer Science - Artificial Intelligence
- Abstract
- Developing models with robust group fairness properties is paramount, particularly in ethically sensitive domains such as medical diagnosis. Recent approaches to achieving fairness in machine learning require a substantial amount of training data and depend on model retraining, which may not be practical in real-world scenarios. To mitigate these challenges, we propose Bias-based Weight Masking Fine-Tuning (BMFT), a novel post-processing method that enhances the fairness of a trained model in significantly fewer epochs without requiring access to the original training data. BMFT produces a mask over model parameters, which efficiently identifies the weights contributing the most towards biased predictions. Furthermore, we propose a two-step debiasing strategy, wherein the feature extractor is first fine-tuned on the identified bias-influenced weights, followed by a fine-tuning phase on a reinitialised classification layer to uphold discriminative performance. Extensive experiments across four dermatological datasets and two sensitive attributes demonstrate that BMFT outperforms existing state-of-the-art (SOTA) techniques in both diagnostic accuracy and fairness metrics. Our findings underscore the efficacy and robustness of BMFT in advancing fairness across various out-of-distribution (OOD) settings. Our code is available at https://github.com/vios-s/BMFT
- Comment
- Accepted as an oral presentation at the MICCAI 2024 FAIMI Workshop
- Published
- 2024