
Deformable Image Registration Using Vision Transformers for Cardiac Motion Estimation from Cine Cardiac MRI Images.

Authors :
Upendra RR
Simon R
Shontz SM
Linte CA
Source :
Functional imaging and modeling of the heart : ... International Workshop, FIMH ..., proceedings. FIMH [Funct Imaging Model Heart] 2023 Jun; Vol. 13958, pp. 375-383. Date of Electronic Publication: 2023 Jun 16.
Publication Year :
2023

Abstract

Accurate cardiac motion estimation is a crucial step in assessing the kinematic and contractile properties of the cardiac chambers and thereby directly quantifying regional cardiac function, which plays an important role in understanding myocardial diseases and planning their treatment. Because cine cardiac magnetic resonance imaging (MRI) provides dynamic, high-resolution 3D images of the heart that depict cardiac motion throughout the cardiac cycle, cardiac motion can be estimated by finding the optical flow between consecutive 3D volumes of a 4D cine cardiac MRI dataset, formulating the task as an image registration problem. We therefore propose a hybrid convolutional neural network (CNN) and Vision Transformer (ViT) architecture for deformable image registration of 3D cine cardiac MRI images for consistent cardiac motion estimation. We compare the registration results of the proposed method with those of the VoxelMorph CNN model and a conventional B-spline free-form deformation (FFD) non-rigid registration algorithm. All experiments are conducted on the open-source Automated Cardiac Diagnosis Challenge (ACDC) dataset. The results show that the proposed method outperforms both the CNN model and the traditional FFD registration method.
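The core operation shared by the methods compared above (the hybrid CNN-ViT model, VoxelMorph, and B-spline FFD) is warping a moving image by a dense displacement field and scoring its similarity to a fixed image. The sketch below illustrates that spatial-transformer warping step in 2D with plain NumPy; it is a simplified illustration of the general technique, not the authors' implementation, and the toy images and variable names are invented for the example.

```python
import numpy as np

def warp_bilinear(moving, flow):
    """Warp a 2D image by a dense displacement field of shape (H, W, 2).

    flow[y, x] = (dy, dx): the output voxel at (y, x) is sampled from
    `moving` at (y + dy, x + dx) with bilinear interpolation -- the same
    resampling step used inside VoxelMorph-style registration networks.
    """
    H, W = moving.shape
    ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    sy = np.clip(ys + flow[..., 0], 0, H - 1)   # sample coordinates, clipped
    sx = np.clip(xs + flow[..., 1], 0, W - 1)   # to stay inside the image
    y0 = np.floor(sy).astype(int); x0 = np.floor(sx).astype(int)
    y1 = np.minimum(y0 + 1, H - 1); x1 = np.minimum(x0 + 1, W - 1)
    wy = sy - y0; wx = sx - x0
    top = moving[y0, x0] * (1 - wx) + moving[y0, x1] * wx
    bot = moving[y1, x0] * (1 - wx) + moving[y1, x1] * wx
    return top * (1 - wy) + bot * wy

# Toy example: a single bright voxel shifted by (+1, +1); a constant unit
# displacement field warps the moving image back onto the fixed image.
fixed = np.zeros((8, 8)); fixed[3, 3] = 1.0
moving = np.zeros((8, 8)); moving[4, 4] = 1.0
flow = np.ones((8, 8, 2))            # dy = dx = 1 everywhere
warped = warp_bilinear(moving, flow)
mse = float(np.mean((warped - fixed) ** 2))  # image-similarity term of a registration loss
```

In a learning-based setup such as VoxelMorph, a network predicts `flow` from the concatenated fixed and moving volumes, and a loss combining this similarity term with a smoothness penalty on `flow` is backpropagated through a differentiable version of the same warp.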

Details

Language :
English
Volume :
13958
Database :
MEDLINE
Journal :
Functional imaging and modeling of the heart : ... International Workshop, FIMH ..., proceedings. FIMH
Publication Type :
Academic Journal
Accession number :
39391840
Full Text :
https://doi.org/10.1007/978-3-031-35302-4_39