
CBCT-to-CT Translation Using Registration-Based Generative Adversarial Networks in Patients with Head and Neck Cancer.

Authors :
Suwanraksa, Chitchaya
Bridhikitti, Jidapa
Liamsuwan, Thiansin
Chaichulee, Sitthichok
Source :
Cancers. Apr 2023, Vol. 15, Issue 7, p. 2017. 23 pages.
Publication Year :
2023

Abstract

Simple Summary: Cone-beam computed tomography (CBCT) not only plays an important role in image-guided radiation therapy (IGRT) but also has potential for dose calculation. Because CBCT suffers from poor image quality and uncertainties in Hounsfield unit (HU) values, the accuracy of dose calculation with CBCT is insufficient for clinical use. This study investigated deep learning approaches that combine a generative adversarial network (GAN) with an additional registration network (RegNet) to generate synthetic CT (sCT) from CBCT. Our study addressed the limitation of requiring paired CT and CBCT images with perfectly aligned anatomy for supervised training. RegNet can dynamically estimate the correct labels, enabling supervised learning with noisy labels, while the GAN learns the bidirectional mapping between CBCT and CT. The HU values of the sCT were sufficiently accurate for dose calculation, while the anatomy of the CBCT was preserved with clear structural boundaries.

Abstract: Recently, deep learning with generative adversarial networks (GANs) has been applied to multi-domain image-to-image translation. This study aims to improve the image quality of cone-beam computed tomography (CBCT) by generating synthetic CT (sCT) that maintains the patient's anatomy as in CBCT while having the image quality of CT. Because CBCT and CT are acquired at different time points, it is challenging to obtain paired images with aligned anatomy for supervised training. To address this limitation, the study incorporated a registration network (RegNet) into the GAN during training. RegNet can dynamically estimate the correct labels, allowing supervised learning with noisy labels. The approach was developed and evaluated using imaging data from 146 patients with head and neck cancer. The results showed that GANs trained with RegNet performed better than those trained without RegNet.
Specifically, in the UNIT model trained with RegNet, the mean absolute error (MAE) was reduced from 40.46 to 37.21, the root mean-square error (RMSE) was reduced from 119.45 to 108.86, the peak signal-to-noise ratio (PSNR) was increased from 28.67 to 29.55, and the structural similarity index (SSIM) was increased from 0.8630 to 0.8791. The sCT generated from the model had fewer artifacts and retained the anatomical information as in CBCT. [ABSTRACT FROM AUTHOR]
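The image-quality metrics reported above can be computed from a paired sCT and reference CT. A minimal sketch with NumPy follows; the array names and the PSNR data range are illustrative assumptions, not details from the study:

```python
import numpy as np

def mae(pred, ref):
    """Mean absolute error between two images/volumes."""
    return float(np.mean(np.abs(pred - ref)))

def rmse(pred, ref):
    """Root mean-square error between two images/volumes."""
    return float(np.sqrt(np.mean((pred - ref) ** 2)))

def psnr(pred, ref, data_range):
    """Peak signal-to-noise ratio in dB; data_range is the assumed
    dynamic range of the images (an illustrative choice here)."""
    return float(20.0 * np.log10(data_range / rmse(pred, ref)))

# Illustrative example on synthetic patches (not the study's data):
sct = np.zeros((4, 4))       # hypothetical synthetic CT patch
ct = np.full((4, 4), 3.0)    # hypothetical reference CT patch
print(mae(sct, ct))          # 3.0
print(rmse(sct, ct))         # 3.0
```

SSIM additionally involves local means, variances, and covariances, so it is typically computed with a library routine such as `skimage.metrics.structural_similarity` rather than reimplemented by hand.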

Details

Language :
English
ISSN :
2072-6694
Volume :
15
Issue :
7
Database :
Academic Search Index
Journal :
Cancers
Publication Type :
Academic Journal
Accession number :
163044605
Full Text :
https://doi.org/10.3390/cancers15072017