
Self-supervised Vision Transformers are Scalable Generative Models for Domain Generalization

Authors:
Doerrich, Sebastian
Di Salvo, Francesco
Ledig, Christian
Publication Year:
2024

Abstract

Despite notable advancements, the integration of deep learning (DL) techniques into impactful clinical applications, particularly in digital histopathology, has been hindered by the challenge of achieving robust generalization across diverse imaging domains and characteristics. Traditional mitigation strategies in this field, such as data augmentation and stain color normalization, have proven insufficient in addressing this limitation, necessitating the exploration of alternative methodologies. To this end, we propose a novel generative method for domain generalization in histopathology images. Our method employs a generative, self-supervised Vision Transformer to dynamically extract characteristics of image patches and seamlessly infuse them into the original images, thereby creating novel, synthetic images with diverse attributes. By enriching the dataset with such synthesized images, we aim to enhance its holistic nature, facilitating improved generalization of DL models to unseen domains. Extensive experiments on two distinct histopathology datasets demonstrate the effectiveness of our approach, which substantially outperforms the state of the art on the Camelyon17-wilds challenge dataset (+2%) and on a second epithelium-stroma dataset (+26%). Furthermore, we emphasize our method's ability to readily scale with increasingly available unlabeled data samples and more complex, higher-parameter architectures. Source code is available at https://github.com/sdoerrich97/vits-are-generative-models.

Comment: Accepted at MICCAI 2024. This is the submitted manuscript with an added link to the GitHub repo and funding acknowledgements. No further post-submission improvements or corrections were integrated. The final version has not been published yet.
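The abstract's core mechanism, extracting patch-level characteristics with a self-supervised Vision Transformer and infusing them into source images to synthesize domain-diverse training samples, can be illustrated with a minimal PyTorch sketch. Everything below is hypothetical: the module names (PatchEncoder, PatchDecoder), the MAE-style encoder-decoder split, and the token mean-shift mixing rule are illustrative assumptions, not the authors' actual implementation (see the linked repository for that).

```python
# Hypothetical sketch of patch-characteristic infusion for domain generalization.
# Module names and the mixing rule are placeholders, not the paper's method.
import torch
import torch.nn as nn

class PatchEncoder(nn.Module):
    """Stand-in for a self-supervised ViT encoder (e.g. MAE-style):
    maps an image to a sequence of patch tokens."""
    def __init__(self, patch_size=16, dim=768):
        super().__init__()
        self.proj = nn.Conv2d(3, dim, kernel_size=patch_size, stride=patch_size)

    def forward(self, x):
        tokens = self.proj(x)                     # (B, dim, H/p, W/p)
        return tokens.flatten(2).transpose(1, 2)  # (B, N, dim)

class PatchDecoder(nn.Module):
    """Stand-in for a lightweight decoder reconstructing pixels from tokens."""
    def __init__(self, img_size=224, patch_size=16, dim=768):
        super().__init__()
        self.img_size, self.p = img_size, patch_size
        self.head = nn.Linear(dim, patch_size * patch_size * 3)

    def forward(self, tokens):
        B, N, _ = tokens.shape
        side = self.img_size // self.p
        patches = self.head(tokens).view(B, side, side, self.p, self.p, 3)
        # Reassemble the patch grid into a full image.
        return patches.permute(0, 5, 1, 3, 2, 4).reshape(B, 3, self.img_size, self.img_size)

def synthesize(content_img, reference_img, encoder, decoder, alpha=0.5):
    """Infuse patch-level characteristics of `reference_img` (e.g. a different
    stain/scanner domain) into `content_img` to create a synthetic sample.
    The mean-shift mixing here is an assumption for illustration only."""
    with torch.no_grad():
        c = encoder(content_img)
        r = encoder(reference_img)
        shifted = c - c.mean(dim=1, keepdim=True) + r.mean(dim=1, keepdim=True)
        mixed = (1 - alpha) * c + alpha * shifted
        return decoder(mixed)

# Usage: pair each image with a reference from another domain to augment a batch.
encoder, decoder = PatchEncoder(), PatchDecoder()
x = torch.randn(2, 3, 224, 224)    # content images
ref = torch.randn(2, 3, 224, 224)  # references with different domain characteristics
synthetic = synthesize(x, ref, encoder, decoder)
print(synthetic.shape)  # torch.Size([2, 3, 224, 224])
```

Augmenting the labeled set with such synthetic images (keeping the content image's label) is what the abstract refers to as enriching the dataset's "holistic nature"; the sketch only conveys the shape of that pipeline.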

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2407.02900
Document Type:
Working Paper