
CROCODILE: Causality aids RObustness via COntrastive DIsentangled LEarning

Authors :
Carloni, Gianluca
Tsaftaris, Sotirios A.
Colantonio, Sara
Publication Year :
2024

Abstract

Due to domain shift, deep learning image classifiers perform poorly when applied to a domain different from the training one. For instance, a classifier trained on chest X-ray (CXR) images from one hospital may not generalize to images from another hospital due to variations in scanner settings or patient characteristics. In this paper, we introduce our CROCODILE framework, showing how tools from causality can foster a model's robustness to domain shift via feature disentanglement, contrastive learning losses, and the injection of prior knowledge. This way, the model relies less on spurious correlations, better learns the mechanism mapping images to predictions, and outperforms baselines on out-of-distribution (OOD) data. We apply our method to multi-label lung disease classification from CXRs, utilizing over 750,000 images from four datasets. Our bias-mitigation method improves domain generalization and fairness, broadening the applicability and reliability of deep learning models for safer medical image analysis. Find our code at: https://github.com/gianlucarloni/crocodile.

Comment: MICCAI 2024 UNSURE Workshop, Accepted for presentation, Submitted Manuscript Version, 10 pages

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2408.04949
Document Type :
Working Paper