
Contrastive Learning for Equivariant Multimodal Image Representations

Authors :
Wetzer, Elisabeth
Pielawski, Nicolas
Breznik, Eva
Öfverstedt, Johan
Lu, Jiahao
Wählby, Carolina
Lindblad, Joakim
Sladoje, Natasa
Publication Year :
2021

Abstract

Combining different imaging modalities provides complementary information about the properties of the imaged specimen. Often these modalities must be captured by different machines, which requires the resulting images to be matched and registered in order to map the corresponding signals to each other. This can be a very challenging task due to the varying appearance of the specimen across sensors. We have recently developed a method which uses contrastive learning to find representations of both modalities, such that images of different modalities are mapped into the same representational space. The learnt representations, referred to as CoMIRs, are abstract and highly similar with respect to a selected similarity measure. These representations must fulfil certain requirements for downstream tasks such as registration, e.g., rotational equivariance or intensity similarity. We present a hyperparameter-free modification of the contrastive loss, based on InfoNCE, to produce equivariant, dense, image-like representations. These representations are similar enough to be considered as lying in a common space, in which monomodal methods for registration can be exploited.
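For context, the loss described in the abstract builds on the standard InfoNCE contrastive objective. Below is a minimal sketch of such an InfoNCE loss over paired multimodal embeddings, assuming PyTorch; the tensor shapes, function name, and temperature value are illustrative assumptions and do not reproduce the authors' modified, equivariance-enforcing formulation.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(z_a: torch.Tensor, z_b: torch.Tensor,
                  temperature: float = 0.1) -> torch.Tensor:
    """InfoNCE over paired embeddings of two modalities.

    z_a, z_b: (batch, dim) embeddings, where row i of z_a and row i of z_b
    come from the same specimen (positive pair); all other cross-modal
    pairings in the batch act as negatives.
    """
    z_a = F.normalize(z_a, dim=1)
    z_b = F.normalize(z_b, dim=1)
    # Cosine similarities between every cross-modal pair, scaled by temperature.
    logits = z_a @ z_b.t() / temperature            # (batch, batch)
    targets = torch.arange(z_a.size(0), device=z_a.device)
    # Symmetric loss: classify A->B and B->A matches against in-batch negatives.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

# Usage sketch with random stand-ins for the two modalities' embeddings.
z1 = torch.randn(8, 128)   # e.g., patches from modality A
z2 = torch.randn(8, 128)   # corresponding patches from modality B
loss = info_nce_loss(z1, z2)
```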

Details

Database :
OAIster
Notes :
English
Publication Type :
Electronic Resource
Accession number :
edsoai.on1312839959
Document Type :
Electronic Resource