
Confound-leakage: Confound Removal in Machine Learning Leads to Leakage

Authors:
Hamdan, Sami
Love, Bradley C.
von Polier, Georg G.
Weis, Susanne
Schwender, Holger
Eickhoff, Simon B.
Patil, Kaustubh R.
Publication Year:
2022
Publisher:
arXiv, 2022.

Abstract

Machine learning (ML) approaches to data analysis are now widely adopted in many fields, including epidemiology and medicine. To apply these approaches, confounds must first be removed, as is commonly done by featurewise removal of their variance via linear regression before applying ML. Here, we show that this common approach to confound removal biases ML models, leading to misleading results. Specifically, this common deconfounding approach can leak information such that what are null or moderate effects become amplified to near-perfect prediction when nonlinear ML approaches are subsequently applied. We identify and evaluate possible mechanisms for such confound-leakage and provide practical guidance to mitigate its negative impact. We demonstrate the real-world importance of confound-leakage by analyzing a clinical dataset where accuracy is overestimated for predicting attention deficit hyperactivity disorder (ADHD) with depression as a confound. Our results have wide-reaching implications for implementation and deployment of ML workflows and beg caution against naïve use of standard confound removal approaches.

Comment: Revised Introduction, added CoI, results unchanged
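For orientation, the sketch below illustrates the standard featurewise confound-removal procedure the abstract critiques: each feature is residualized against the confound with linear regression, and a nonlinear model is then trained on the residuals. This is a minimal, hypothetical Python/scikit-learn example with synthetic data, not the authors' code or dataset; all variable names and data are illustrative assumptions.

```python
# Minimal sketch (not the authors' implementation) of featurewise
# confound removal by linear regression followed by a nonlinear model.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n, p = 500, 10
X = rng.normal(size=(n, p))              # features (synthetic stand-in)
c = rng.normal(size=n)                   # confound, e.g. a depression score (assumed)
y = (rng.random(n) > 0.5).astype(int)    # target, e.g. an ADHD label (assumed)

# Featurewise confound removal: replace each feature by its residual
# after regressing it on the confound.
X_deconf = np.empty_like(X)
for j in range(p):
    lr = LinearRegression().fit(c.reshape(-1, 1), X[:, j])
    X_deconf[:, j] = X[:, j] - lr.predict(c.reshape(-1, 1))

# Nonlinear model applied after deconfounding; per the paper, this step
# can exploit confound information leaked into the residuals and
# inflate estimated accuracy for certain data distributions.
clf = RandomForestClassifier(n_estimators=200, random_state=0)
print(cross_val_score(clf, X_deconf, y, cv=5).mean())
```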

Details

Database:
OpenAIRE
Accession number:
edsair.doi.dedup.....9c7457c285923a9242a22ce2da9fd3b8
Full Text:
https://doi.org/10.48550/arxiv.2210.09232