
Learning to map 2D ultrasound images into 3D space with minimal human annotation.

Authors :
Yeung, Pak-Hei
Aliasi, Moska
Papageorghiou, Aris T.
Haak, Monique
Xie, Weidi
Namburete, Ana I.L.
Source :
Medical Image Analysis. May 2021, Vol. 70.
Publication Year :
2021

Abstract

• We propose a CNN for predicting the 3D location of 2D fetal brain images.
• The tested images can be acquired from standard or oblique planes.
• Training the CNN requires only minimal human annotation.
• Our proposed model outperforms a baseline CNN for plane localization.
• It is applicable to 2D freehand ultrasound images and video scanning sequences.

In fetal neurosonography, aligning two-dimensional (2D) ultrasound scans to their corresponding planes in three-dimensional (3D) space remains a challenging task. In this paper, we propose a convolutional neural network (CNN) that predicts the position of 2D fetal brain ultrasound scans in 3D atlas space. Instead of purely supervised learning, which would require heavy annotation of each 2D scan, we train the model by sampling 2D slices from 3D fetal brain volumes and tasking it with predicting the inverse of the sampling process, resembling the idea of self-supervised learning. The proposed model takes a set of images as input and learns to compare them in pairs. Each pairwise comparison is weighted by an attention module according to its contribution to the prediction, which is learnt implicitly during training. The feature representation of each image thus incorporates its position relative to all the other images in the set, and is later used for the final prediction. We benchmark our model on 2D slices sampled from 3D fetal brain volumes at 18–22 weeks' gestational age. We evaluate with three metrics, namely Euclidean distance, plane angle, and normalized cross-correlation, which together account for both the geometric and appearance discrepancy between ground truth and prediction. On all three metrics, our model outperforms a baseline model, by as much as 23% as the number of input images increases.
We further demonstrate that our model generalizes to (i) real 2D standard transthalamic-plane images, achieving performance comparable to human annotation, and (ii) video sequences of 2D freehand fetal brain scans.
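The core "predict the inverse of the sampling process" idea from the abstract can be sketched in a few lines: draw a random oblique plane from a 3D volume, resample the volume on that plane, and keep the plane parameters as the regression target. This is a minimal numpy illustration, not the authors' implementation — the Euler-angle-plus-offset plane parameterization, the ±30° sampling range, and nearest-neighbour resampling are all simplifying assumptions. The `ncc` helper mirrors the normalized cross-correlation used as one of the evaluation metrics.

```python
import numpy as np

def rotation_matrix(a, b, c):
    """Compose rotations about the three volume axes (Euler angles)."""
    Rz = np.array([[np.cos(a), -np.sin(a), 0],
                   [np.sin(a),  np.cos(a), 0],
                   [0,          0,         1]])
    Ry = np.array([[ np.cos(b), 0, np.sin(b)],
                   [ 0,         1, 0        ],
                   [-np.sin(b), 0, np.cos(b)]])
    Rx = np.array([[1, 0,          0         ],
                   [0, np.cos(c), -np.sin(c)],
                   [0, np.sin(c),  np.cos(c)]])
    return Rz @ Ry @ Rx

def sample_slice(volume, rng):
    """Draw a random oblique plane from `volume` and return the resampled
    2D slice together with the plane parameters that generated it.
    The parameters are the training target: the network is asked to
    predict the inverse of this sampling process."""
    d, h, w = volume.shape
    center = np.array([d, h, w]) / 2.0
    angles = rng.uniform(-np.pi / 6, np.pi / 6, size=3)  # plane orientation
    offset = rng.uniform(-d / 4, d / 4)                  # shift along the normal
    R = rotation_matrix(*angles)
    # In-plane pixel grid, rotated and translated into volume coordinates.
    ys, xs = np.mgrid[-(h // 2):h // 2, -(w // 2):w // 2]
    grid = np.stack([np.full(ys.shape, offset), ys, xs], axis=-1)
    pts = grid @ R.T + center
    # Nearest-neighbour lookup (clipped to the volume bounds).
    idx = np.clip(np.round(pts).astype(int), 0, np.array([d, h, w]) - 1)
    sl = volume[idx[..., 0], idx[..., 1], idx[..., 2]]
    return sl, np.concatenate([angles, [offset]])

def ncc(a, b):
    """Normalized cross-correlation between two equally shaped images."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float((a * b).mean())
```

Because the labels come for free from the sampling procedure itself, arbitrarily many (slice, position) training pairs can be generated from a modest set of 3D volumes, which is what keeps the human annotation requirement minimal.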

Details

Language :
English
ISSN :
1361-8415
Volume :
70
Database :
Academic Search Index
Journal :
Medical Image Analysis
Publication Type :
Academic Journal
Accession number :
149713112
Full Text :
https://doi.org/10.1016/j.media.2021.101998