1. The Fragility of Fairness: Causal Sensitivity Analysis for Fair Machine Learning
- Author
Fawkes, Jake, Fishman, Nic, Andrews, Mel, and Lipton, Zachary C.
- Subjects
Computer Science - Machine Learning, Computer Science - Computers and Society
- Abstract
Fairness metrics are a core tool in the fair machine learning literature (FairML), used to determine that ML models are, in some sense, "fair". Real-world data, however, are typically plagued by various measurement biases and other violated assumptions, which can render fairness assessments meaningless. We adapt tools from causal sensitivity analysis to the FairML context, providing a general framework which (1) accommodates effectively any combination of fairness metric and bias that can be posed in the "oblivious setting"; (2) allows researchers to investigate combinations of biases, resulting in non-linear sensitivity; and (3) enables flexible encoding of domain-specific constraints and assumptions. Employing this framework, we analyze the sensitivity of the most common parity metrics under 3 varieties of classifier across 14 canonical fairness datasets. Our analysis reveals the striking fragility of fairness assessments to even minor dataset biases. We show that causal sensitivity analysis provides a powerful and necessary toolkit for gauging the informativeness of parity metric evaluations. Our repository is available here: https://github.com/Jakefawkes/fragile_fair.
- Comment
Published at NeurIPS 2024 in the Datasets and Benchmarks Track.
- Published
2024
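
To make the abstract's central point concrete, here is a minimal sketch of why parity metrics can be fragile to dataset bias. It is not the authors' framework or repository code: the paper derives sensitivity bounds via causal sensitivity analysis, whereas this sketch simply simulates an assumed random-flip model of group-label measurement error on synthetic data. The 5% flip rate, the synthetic prediction rates, and all names below are hypothetical choices for illustration only.

```python
# Illustrative sketch (not the paper's method): how a small, assumed rate of
# group-label measurement error can move a demographic parity estimate.
import numpy as np

def parity_gap(y_pred, group):
    """Demographic parity difference: P(y_pred=1 | group=1) - P(y_pred=1 | group=0)."""
    return y_pred[group == 1].mean() - y_pred[group == 0].mean()

rng = np.random.default_rng(0)
n = 5_000
group = rng.integers(0, 2, n)                                # observed (possibly misrecorded) group labels
y_pred = rng.binomial(1, np.where(group == 1, 0.45, 0.40))   # synthetic classifier decisions

observed_gap = parity_gap(y_pred, group)

# Hypothetical bias model: up to 5% of group labels are misrecorded at random.
# Resample flips repeatedly to see how far the reported metric can drift.
flip_rate = 0.05
gaps = []
for _ in range(2_000):
    flips = rng.random(n) < flip_rate
    corrected = np.where(flips, 1 - group, group)
    gaps.append(parity_gap(y_pred, corrected))

print(f"observed parity gap: {observed_gap:+.3f}")
print(f"range under 5% group-label error: [{min(gaps):+.3f}, {max(gaps):+.3f}]")
```

Even this crude random-flip simulation shows the reported gap spanning a noticeable interval; the paper's contribution is to replace such ad hoc perturbations with principled, worst-case bounds posed in the oblivious setting, covering combinations of biases and domain-specific constraints.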