Rethinking Bias Mitigation: Fairer Architectures Make for Fairer Face Recognition

Authors:
Dooley, Samuel
Sukthanker, Rhea Sanjay
Dickerson, John P.
White, Colin
Hutter, Frank
Goldblum, Micah
Publication Year:
2022

Abstract

Face recognition systems are widely deployed in safety-critical applications, including law enforcement, yet they exhibit bias across a range of socio-demographic dimensions, such as gender and race. Conventional wisdom dictates that model biases arise from biased training data. As a consequence, previous works on bias mitigation largely focused on pre-processing the training data, adding penalties to prevent bias from affecting the model during training, or post-processing predictions to debias them, yet these approaches have shown limited success on hard problems such as face recognition. In our work, we discover that biases are actually inherent to neural network architectures themselves. Following this reframing, we conduct the first neural architecture search for fairness, jointly with a search for hyperparameters. Our search outputs a suite of models which Pareto-dominate all other high-performance architectures and existing bias mitigation methods in terms of accuracy and fairness, often by large margins, on the two most widely used datasets for face identification, CelebA and VGGFace2. Furthermore, these models generalize to other datasets and sensitive attributes. We release our code, models, and raw data files at https://github.com/dooleys/FR-NAS.
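To make the Pareto-dominance claim concrete: a searched model dominates another if it is at least as good on both accuracy and fairness and strictly better on one. The following minimal Python sketch (not the authors' released code; the candidate names and scores are purely illustrative) shows how a search procedure could filter candidates down to such a Pareto front, assuming lower values are better for both error rate and a fairness disparity metric.

# Minimal sketch, not the authors' method: select the Pareto front of
# candidate models scored on (error rate, fairness disparity), where
# lower is better for both objectives.

def pareto_front(candidates):
    """Return the candidates not dominated on (error, disparity).

    A candidate dominates another if it is no worse on both objectives
    and strictly better on at least one.
    """
    front = []
    for name, err, disp in candidates:
        dominated = any(
            e <= err and d <= disp and (e < err or d < disp)
            for _, e, d in candidates
        )
        if not dominated:
            front.append((name, err, disp))
    return front

# Hypothetical (architecture, error, disparity) triples from a search run.
models = [
    ("arch_a", 0.04, 0.12),
    ("arch_b", 0.05, 0.06),  # trades some accuracy for better fairness
    ("arch_c", 0.06, 0.10),  # dominated by arch_b on both objectives
    ("arch_d", 0.03, 0.15),
]
print(pareto_front(models))  # arch_a, arch_b, arch_d survive

In a joint architecture and hyperparameter search of the kind the abstract describes, each candidate would be a trained (architecture, hyperparameter) configuration evaluated on held-out accuracy and a group disparity measure, and the returned front is the suite of models offered to the user.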

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2210.09943
Document Type:
Working Paper