
ContraReg: Contrastive Learning of Multi-modality Unsupervised Deformable Image Registration

Authors:
Dey, Neel
Schlemper, Jo
Salehi, Seyed Sadegh Mohseni
Zhou, Bo
Gerig, Guido
Sofka, Michal
Publication Year:
2022

Abstract

Establishing voxelwise semantic correspondence across distinct imaging modalities is a foundational yet formidable computer vision task. Current multi-modality registration techniques maximize hand-crafted inter-domain similarity functions, are limited in modeling nonlinear intensity relationships and deformations, and may require significant re-engineering or underperform on new tasks, datasets, and domain pairs. This work presents ContraReg, an unsupervised contrastive representation learning approach to multi-modality deformable registration. By projecting learned multi-scale local patch features onto a jointly learned inter-domain embedding space, ContraReg obtains representations useful for non-rigid multi-modality alignment. Experimentally, ContraReg achieves accurate and robust results with smooth and invertible deformations across a series of baselines and ablations on a neonatal T1-T2 brain MRI registration task, with all methods validated over a wide range of deformation-regularization strengths.

Comment: Accepted by MICCAI 2022. 13 pages, 6 figures, and 1 table.
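The abstract describes aligning local patch features from two modalities in a shared embedding space via contrastive learning. A minimal sketch of the general idea, assuming a patchwise InfoNCE-style loss (the function name, NumPy implementation, and temperature value are illustrative, not the paper's actual code): patch embeddings from corresponding spatial locations in the two modalities form positive pairs, while all other cross-modality patches serve as negatives.

```python
import numpy as np

def info_nce_loss(feats_a, feats_b, temperature=0.07):
    """Patchwise InfoNCE loss between two modalities' patch embeddings.

    feats_a, feats_b: (N, D) arrays of N patch features, one per modality,
    where row i of each array comes from the same spatial location (the
    positive pair); all other rows act as negatives.
    """
    # L2-normalize features onto the unit hypersphere.
    a = feats_a / np.linalg.norm(feats_a, axis=1, keepdims=True)
    b = feats_b / np.linalg.norm(feats_b, axis=1, keepdims=True)
    # Cosine-similarity logits between every cross-modality patch pair.
    logits = a @ b.T / temperature                      # shape (N, N)
    # Softmax cross-entropy with the diagonal (same location) as target.
    logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))
```

With perfectly matched embeddings the loss approaches zero; with unrelated embeddings it approaches log(N), the chance level for N candidate patches.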

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2206.13434
Document Type:
Working Paper