
Synthesizing CT from Ultrashort Echo-Time MR Images via Convolutional Neural Networks

Authors:
Dzung L. Pham
John A. Butman
Snehashis Roy
Source:
Simulation and Synthesis in Medical Imaging (ISBN: 9783319681269)
Publication Year:
2017
Publisher:
Springer International Publishing, 2017.

Abstract

With the increasing popularity of PET-MR scanners in clinical applications, synthesis of CT images from MR has become an important research topic. Accurate PET image reconstruction requires attenuation correction, which is based on the electron density of tissues and can be obtained from CT images. While CT measures electron density information for x-ray photons, MR images convey information about the magnetic properties of tissues. Therefore, with the advent of PET-MR systems, attenuation coefficients must be estimated indirectly from MR images. In this paper, we propose a fully convolutional neural network (CNN) based method to synthesize head CT from ultrashort echo-time (UTE) dual-echo MR images. Unlike traditional $T_1$-w images, which carry no bone signal, UTE images show some signal from bone, making them good candidates for MR-to-CT synthesis. A notable advantage of our approach is that accurate results are achieved with a small training data set. Using an atlas consisting of a single CT and dual-echo UTE pair, we train a deep neural network model on patches to learn the transform from MR intensities to CT. We compared our CNN-based model with a state-of-the-art registration-based method as well as a Bayesian model-based CT synthesis method, and showed that the proposed CNN model outperforms both. We also evaluated the proposed model when only $T_1$-w images are available instead of UTE, and showed that UTE images produce better synthesis than $T_1$-w images alone.
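
The abstract does not specify the network architecture, so the following is only a minimal sketch, assuming PyTorch: a small fully convolutional network that maps 2-channel dual-echo UTE patches to 1-channel CT intensity patches and is trained with an L2 loss on patches drawn from the single atlas pair. The layer count, channel widths, kernel sizes, and patch dimensions here are illustrative assumptions, not the authors' published configuration.

```python
# Minimal sketch of patch-based MR-to-CT synthesis, assuming PyTorch.
# Architecture details (depth, widths, kernels) are illustrative only.
import torch
import torch.nn as nn

class UTEtoCTNet(nn.Module):
    """Maps 2-channel dual-echo UTE patches to 1-channel CT patches."""
    def __init__(self, width: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, width, kernel_size=3, padding=1),   # echo 1 + echo 2 in
            nn.ReLU(inplace=True),
            nn.Conv2d(width, width, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(width, width, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(width, 1, kernel_size=3, padding=1),   # synthetic CT out
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

model = UTEtoCTNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()  # L2 regression of CT intensities

# Hypothetical batch: 32 patches of 32x32 voxels, 2 UTE echoes per patch.
# Random tensors stand in for real patches sampled from the atlas pair.
ute_patches = torch.randn(32, 2, 32, 32)
ct_patches = torch.randn(32, 1, 32, 32)

for step in range(100):
    optimizer.zero_grad()
    pred = model(ute_patches)
    loss = loss_fn(pred, ct_patches)
    loss.backward()
    optimizer.step()
```

Because the network is fully convolutional, a model trained on small patches can be applied to whole slices or volumes at inference time, which is consistent with the patch-based training described in the abstract.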

Details

ISBN:
978-3-319-68126-9
Database:
OpenAIRE
Journal:
Simulation and Synthesis in Medical Imaging
Accession number:
edsair.doi...........fae31fd0d46d50bb3343eefe9653e5e0
Full Text:
https://doi.org/10.1007/978-3-319-68127-6_3