
Estimate and compensate head motion in non‐contrast head CT scans using partial angle reconstruction and deep learning.

Authors :
Chen, Zhennong
Li, Quanzheng
Wu, Dufan
Source :
Medical Physics. May 2024, Vol. 51 Issue 5, p3309-3321. 13p.
Publication Year :
2024

Abstract

Background: Patient head motion is a common source of image artifacts in computed tomography (CT) of the head, leading to degraded image quality and potentially incorrect diagnoses. Partial angle reconstruction (PAR) divides the CT projection data into several consecutive angular segments and reconstructs each segment individually. Although motion estimation and compensation using PAR have been developed and investigated for cardiac CT scans, their potential for reducing motion artifacts in head CT scans remains unexplored.

Purpose: To develop a deep learning (DL) model capable of directly estimating head motion from PAR images of head CT scans, and to integrate the estimated motion into an iterative reconstruction process to compensate for it.

Methods: Head motion is modeled as a rigid transformation described by six time-variant variables: three for translation and three for rotation. Each motion variable is modeled as a B-spline defined by five control points (CPs) along time. We split the full 360° projection data into 25 consecutive PARs and input them into a convolutional neural network (CNN) that outputs the estimated CPs for each motion variable. The estimated CPs are used to calculate the object motion at each projection view, which is incorporated into the forward and backprojection of an iterative reconstruction algorithm to reconstruct the motion-compensated image. The performance of our DL model is evaluated through both simulation and phantom studies.

Results: The DL model achieved high accuracy in estimating head motion, as demonstrated in both the simulation study (mean absolute error (MAE) ranging from 0.28 to 0.45 mm or degrees across the motion variables) and the phantom study (MAE ranging from 0.40 to 0.48 mm or degrees).
The resulting motion-corrected image, $I_{DL,PAR}$, exhibited a significant reduction in motion artifacts compared to traditional filtered back-projection reconstructions, as evidenced in both the simulation study (image MAE drops from 178 ± 33 HU to 37 ± 9 HU; structural similarity index (SSIM) increases from 0.60 ± 0.06 to 0.98 ± 0.01) and the phantom study (image MAE drops from 117 ± 17 HU to 42 ± 19 HU; SSIM increases from 0.83 ± 0.04 to 0.98 ± 0.02).

Conclusions: We demonstrate that using PAR with our proposed deep learning model enables accurate estimation of patient head motion and effectively reduces motion artifacts in the resulting head CT images. [ABSTRACT FROM AUTHOR]
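The abstract's motion model can be illustrated with a short sketch. This is not the authors' code: it assumes a hypothetical view count (1000 projections over 360°) and uses SciPy's `make_interp_spline` as a stand-in B-spline evaluator. It shows only two ingredients named in the abstract: each of the six rigid-motion variables as a B-spline with five control points over scan time, and the split of the projection views into 25 consecutive partial-angle segments. The CNN estimator and the iterative reconstruction are omitted.

```python
import numpy as np
from scipy.interpolate import make_interp_spline

N_CP = 5        # control points per motion variable (per the abstract)
N_PAR = 25      # number of partial-angle segments (per the abstract)
N_VIEWS = 1000  # assumed number of projection views over 360 degrees

def motion_curves(cps, n_views=N_VIEWS):
    """Evaluate six B-spline motion curves at every projection view.

    cps: (6, N_CP) array of control points -- rows are tx, ty, tz (mm)
    and rx, ry, rz (degrees). Returns a (6, n_views) array of the
    per-view rigid-motion parameters.
    """
    t_cp = np.linspace(0.0, 1.0, N_CP)       # control-point times
    t_view = np.linspace(0.0, 1.0, n_views)  # normalized view times
    return np.stack(
        [make_interp_spline(t_cp, row, k=3)(t_view) for row in cps]
    )

def split_into_pars(views):
    """Split the ordered projection views into N_PAR consecutive segments."""
    return np.array_split(views, N_PAR)

# Example: a 2-mm x-translation peak mid-scan; all other variables static.
cps = np.zeros((6, N_CP))
cps[0] = [0.0, 1.0, 2.0, 1.0, 0.0]
curves = motion_curves(cps)
pars = split_into_pars(np.arange(N_VIEWS))
print(curves.shape, len(pars))  # (6, 1000) 25
```

In a full pipeline, each per-view parameter vector from `motion_curves` would define the rigid transform applied inside the forward and backprojection of the iterative reconstruction, as the abstract describes.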

Details

Language :
English
ISSN :
0094-2405
Volume :
51
Issue :
5
Database :
Academic Search Index
Journal :
Medical Physics
Publication Type :
Academic Journal
Accession number :
177083163
Full Text :
https://doi.org/10.1002/mp.17047