
Causal Fair Machine Learning via Rank-Preserving Interventional Distributions

Authors:
Bothmann, Ludwig
Dandl, Susanne
Schomaker, Michael
Source:
Proceedings of the 1st Workshop on Fairness and Bias in AI co-located with 26th European Conference on Artificial Intelligence (ECAI 2023), CEUR Workshop Proceedings, https://ceur-ws.org/Vol-3523/
Publication Year:
2023

Abstract

A decision can be defined as fair if equal individuals are treated equally and unequal individuals unequally. Adopting this definition, the task of designing machine learning (ML) models that mitigate unfairness in automated decision-making systems must include causal thinking when introducing protected attributes: following a recent proposal, we define individuals as being normatively equal if they are equal in a fictitious, normatively desired (FiND) world, where the protected attributes have no (direct or indirect) causal effect on the target. We propose rank-preserving interventional distributions to define a specific FiND world in which this holds, and a warping method for estimation. Evaluation criteria for both the method and the resulting ML model are presented and validated through simulations. Experiments on empirical data showcase the practical application of our method and compare results with "fairadapt" (Plečko and Meinshausen, 2020), a different approach for mitigating unfairness that causally preprocesses data using quantile regression forests. With this, we show that our warping approach effectively identifies the most discriminated individuals and mitigates unfairness.

Details

Database:
arXiv
Journal:
Proceedings of the 1st Workshop on Fairness and Bias in AI co-located with 26th European Conference on Artificial Intelligence (ECAI 2023), CEUR Workshop Proceedings, https://ceur-ws.org/Vol-3523/
Publication Type:
Report
Accession number:
edsarx.2307.12797
Document Type:
Working Paper