Artifact suppression for sparse view CT via transformer-based generative adversarial network.
- Source :
- Biomedical Signal Processing & Control; Sep 2024, Part B, Vol. 95
- Publication Year :
- 2024
Abstract
- Highlights:
  • A novel encoder-decoder transformer-based generative adversarial network is designed to suppress sparse view CT image artifacts.
  • In the transformer, the multi-Dconv head transposed attention (MDTA) module is adopted to strengthen feature extraction.
  • To improve structure and detail recovery, the gated-Dconv feed-forward network (GDFN) is adopted in the transformer.
  • Within the GAN learning framework, a discriminator is adopted to enhance the generator.
- Sparse view CT images are often severely degraded by streak artifacts. Numerous studies have confirmed the remarkable progress made by deep learning (DL) in sparse view CT imaging, but mainstream CNN-based methods are inefficient at capturing feature information over large regions. This paper proposes a transformer-based generative adversarial network (SVT-GAN) designed to efficiently suppress artifacts in sparse view CT images. The advantages of transformer networks and adversarial learning are combined in a single framework to improve the quality of sparse view CT image restoration. The generator is built around an encoder-decoder structure that relies on the transformer model to learn multiscale local–global representations and exploit contextual information from distant artifacts. In contrast with the standard transformer model, the multi-Dconv head transposed attention (MDTA) module is used to extract both local and non-local information, yielding strong structure and detail restoration. The gated-Dconv feed-forward network (GDFN) is used to suppress the transformation of artifact features. Within the GAN learning framework, a simple nine-layer network serves as the discriminator, sharpening the generator's ability to suppress artifacts while retaining features. Compared with recently developed state-of-the-art methods, the proposed model significantly reduces severe noise artifacts while preserving details on the AAPM and Real CT datasets. Qualitative and quantitative assessments demonstrate the competitive performance of SVT-GAN. [ABSTRACT FROM AUTHOR]
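The MDTA and GDFN modules named in the abstract follow the design popularized by Restormer-style image-restoration transformers: attention is computed across channels rather than pixels, and the feed-forward path is gated through depthwise convolutions. The sketch below is a minimal, illustrative PyTorch implementation of one such encoder/decoder block under those assumptions; the head count, expansion ratio, and normalization choice are placeholders, not the authors' exact SVT-GAN configuration.

```python
# Minimal sketch of an MDTA + GDFN transformer block (Restormer-style).
# Hyperparameters here are illustrative assumptions, not the SVT-GAN settings.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MDTA(nn.Module):
    """Multi-Dconv head transposed attention: channel-wise self-attention."""
    def __init__(self, channels: int, num_heads: int = 4):
        super().__init__()
        self.num_heads = num_heads
        self.temperature = nn.Parameter(torch.ones(num_heads, 1, 1))
        self.qkv = nn.Conv2d(channels, channels * 3, kernel_size=1)
        self.qkv_dwconv = nn.Conv2d(channels * 3, channels * 3, kernel_size=3,
                                    padding=1, groups=channels * 3)
        self.project_out = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x):
        b, c, h, w = x.shape
        q, k, v = self.qkv_dwconv(self.qkv(x)).chunk(3, dim=1)
        # Reshape to (batch, heads, channels per head, pixels).
        q = q.reshape(b, self.num_heads, c // self.num_heads, h * w)
        k = k.reshape(b, self.num_heads, c // self.num_heads, h * w)
        v = v.reshape(b, self.num_heads, c // self.num_heads, h * w)
        # Attention map is C x C (transposed attention), not (HW) x (HW).
        attn = F.normalize(q, dim=-1) @ F.normalize(k, dim=-1).transpose(-2, -1)
        attn = (attn * self.temperature).softmax(dim=-1)
        out = (attn @ v).reshape(b, c, h, w)
        return self.project_out(out)


class GDFN(nn.Module):
    """Gated-Dconv feed-forward network: depthwise conv + GELU gating."""
    def __init__(self, channels: int, expansion: float = 2.66):
        super().__init__()
        hidden = int(channels * expansion)
        self.project_in = nn.Conv2d(channels, hidden * 2, kernel_size=1)
        self.dwconv = nn.Conv2d(hidden * 2, hidden * 2, kernel_size=3,
                                padding=1, groups=hidden * 2)
        self.project_out = nn.Conv2d(hidden, channels, kernel_size=1)

    def forward(self, x):
        x1, x2 = self.dwconv(self.project_in(x)).chunk(2, dim=1)
        # Gating: one branch is activated and modulates the other elementwise.
        return self.project_out(F.gelu(x1) * x2)


class TransformerBlock(nn.Module):
    """One encoder/decoder block: norm -> MDTA -> norm -> GDFN, with residuals."""
    def __init__(self, channels: int, num_heads: int = 4):
        super().__init__()
        self.norm1 = nn.GroupNorm(1, channels)  # layer-norm surrogate over channels
        self.attn = MDTA(channels, num_heads)
        self.norm2 = nn.GroupNorm(1, channels)
        self.ffn = GDFN(channels)

    def forward(self, x):
        x = x + self.attn(self.norm1(x))
        return x + self.ffn(self.norm2(x))
```

Because the attention map is C x C rather than (HW) x (HW), its cost grows only linearly with the number of pixels, which is what allows a block of this kind to aggregate context from distant streak artifacts more efficiently than a purely local CNN.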
Details
- Language :
- English
- ISSN :
- 1746-8094
- Volume :
- 95
- Database :
- Supplemental Index
- Journal :
- Biomedical Signal Processing & Control
- Publication Type :
- Academic Journal
- Accession number :
- 177848271
- Full Text :
- https://doi.org/10.1016/j.bspc.2024.106297