Fairness-Aware Unsupervised Feature Selection
- Authors
- Jundong Li, Chen Chen, Xiaoying Xing, and Hongfu Liu
- Subjects
- FOS: Computer and information sciences, Computer Science - Machine Learning (cs.LG), Computer Science - Artificial Intelligence (cs.AI), Machine learning, Artificial intelligence, Unsupervised learning, Feature selection, Feature vector, Feature (computer vision), Debiasing, Kernel alignment, Data preprocessing
- Abstract
Feature selection is a prevalent data preprocessing paradigm for various learning tasks. Due to the expensive cost of acquiring supervision information, unsupervised feature selection has attracted great interest recently. However, existing unsupervised feature selection algorithms do not have fairness considerations and suffer from a high risk of amplifying discrimination by selecting features that are over-associated with protected attributes such as gender, race, and ethnicity. In this paper, we make an initial investigation of the fairness-aware unsupervised feature selection problem and develop a principled framework that leverages kernel alignment to find a subset of high-quality features that best preserve the information in the original feature space while being minimally correlated with protected attributes. Specifically, unlike mainstream in-processing debiasing methods, our proposed framework can be regarded as a model-agnostic debiasing strategy that eliminates biases and discrimination before downstream learning algorithms are involved. Experimental results on multiple real-world datasets demonstrate that our framework achieves a good trade-off between utility maximization and fairness promotion.
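
As a rough illustration of the kernel-alignment idea described in the abstract, the sketch below greedily scores candidate feature subsets by their centered kernel alignment with the full feature matrix (utility) minus a weighted alignment with the protected attributes (fairness). It uses linear kernels, a plain greedy loop, and a hypothetical trade-off parameter `lam`; all function names and the scoring rule are illustrative assumptions, not the paper's actual optimization procedure.

```python
import numpy as np

def centered_kernel_alignment(K1, K2):
    """Centered kernel alignment (CKA) between two kernel matrices."""
    n = K1.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    K1c, K2c = H @ K1 @ H, H @ K2 @ H
    return np.sum(K1c * K2c) / (np.linalg.norm(K1c) * np.linalg.norm(K2c))

def linear_kernel(X):
    return X @ X.T

def greedy_fair_feature_selection(X, S, k, lam=1.0):
    """Greedily pick k feature indices that keep alignment with the full
    feature kernel high while keeping alignment with the protected-attribute
    kernel low (hypothetical scoring rule, not the paper's exact objective)."""
    n, d = X.shape
    K_full = linear_kernel(X)    # utility target: all original features
    K_prot = linear_kernel(S)    # fairness target: protected attributes
    selected = []
    for _ in range(k):
        best_j, best_score = None, -np.inf
        for j in range(d):
            if j in selected:
                continue
            K_sub = linear_kernel(X[:, selected + [j]])
            score = (centered_kernel_alignment(K_sub, K_full)
                     - lam * centered_kernel_alignment(K_sub, K_prot))
            if score > best_score:
                best_j, best_score = j, score
        selected.append(best_j)
    return selected

# Toy usage: 100 samples, 8 features, 1 binary protected attribute.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8))
S = rng.integers(0, 2, size=(100, 1)).astype(float)
print(greedy_fair_feature_selection(X, S, k=3, lam=0.5))
```

Larger values of `lam` push the selection toward features less correlated with the protected attributes at some cost in preserved information, mirroring the utility-fairness trade-off the abstract describes.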
- Published
- 2021