A medical image classification method based on self‐regularized adversarial learning.
- Source :
- Medical Physics. Nov 2024, Vol. 51, Issue 11, p8232-8246. 15p.
- Publication Year :
- 2024
Abstract
- Background: Deep learning (DL) techniques have been extensively applied in medical image classification. The unique characteristics of medical imaging data present challenges, including small labeled datasets, severely imbalanced class distributions, and significant variations in imaging quality. Recently, generative adversarial network (GAN)-based classification methods have gained attention for their ability to enhance classification accuracy by incorporating realistic GAN-generated images as data augmentation. However, the performance of these GAN-based methods often relies on high-quality generated images, and large amounts of training data are required to train GAN models to optimal performance.
Purpose: In this study, we propose an adversarial learning-based classification framework to achieve better classification performance. Innovatively, GAN models are employed as supplementary regularization terms to support classification, aiming to address the challenges described above.
Methods: The proposed classification framework, GAN-DL, consists of a feature extraction network (F-Net), a classifier, and two adversarial networks, specifically a reconstruction network (R-Net) and a discriminator network (D-Net). The F-Net extracts features from input images, and the classifier uses these features for classification tasks. R-Net and D-Net follow the GAN architecture: R-Net employs the extracted features to reconstruct the original images, while D-Net is tasked with discriminating between the reconstructed and original images. An iterative adversarial learning strategy is designed to guide model training by incorporating multiple network-specific loss functions. These loss functions, serving as supplementary regularization, are automatically derived during the reconstruction process and require no additional data annotation.
Results: To verify the model's effectiveness, we performed experiments on two datasets: a COVID-19 dataset with 13,958 chest x-ray images and an oropharyngeal squamous cell carcinoma (OPSCC) dataset with 3255 positron emission tomography images. Thirteen classic DL-based classification methods were implemented on the same datasets for comparison. Performance metrics included precision, sensitivity, specificity, and F1-score. In addition, we conducted ablation studies to assess the effects of various factors on model performance, including the network depth of F-Net, training image size, training dataset size, and loss function design. Our method outperformed all comparative methods. On the COVID-19 dataset, it achieved 95.4% ± 0.6%, 95.3% ± 0.9%, 97.7% ± 0.4%, and 95.3% ± 0.9% in terms of precision, sensitivity, specificity, and F1-score, respectively. It achieved 96.2% ± 0.7% across all these metrics on the OPSCC dataset. The study investigating the effects of the two adversarial networks highlights the crucial role of D-Net in improving model performance. Ablation studies further provide an in-depth understanding of our methodology.
Conclusion: Our adversarial learning-based classification framework leverages GAN-based adversarial networks and an iterative adversarial learning strategy to harness supplementary regularization during training. This design significantly enhances classification accuracy and mitigates overfitting in medical image datasets. Moreover, its modular design not only demonstrates flexibility but also indicates its potential applicability to various clinical contexts and medical imaging applications. [ABSTRACT FROM AUTHOR]
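The Methods paragraph above describes the GAN-DL components (F-Net, classifier, R-Net, D-Net) and an iterative adversarial training loop in which reconstruction and discrimination losses act as self-derived regularizers. The sketch below is a minimal, hypothetical PyTorch rendering of that structure: the actual network architectures, layer sizes, loss formulations, and weighting factors are not given in the abstract, so every module definition and the lambda_rec / lambda_adv weights here are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a GAN-regularized classifier in the spirit of GAN-DL.
# All architectures and loss weights below are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FNet(nn.Module):
    """Feature extraction network (F-Net): image -> feature map."""
    def __init__(self, in_ch=1, feat_ch=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, feat_ch, 3, stride=2, padding=1), nn.ReLU(),
        )
    def forward(self, x):
        return self.encoder(x)

class Classifier(nn.Module):
    """Classification head operating on F-Net features."""
    def __init__(self, feat_ch=64, n_classes=3):
        super().__init__()
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(feat_ch, n_classes)
        )
    def forward(self, f):
        return self.head(f)

class RNet(nn.Module):
    """Reconstruction network (R-Net): features -> reconstructed image."""
    def __init__(self, feat_ch=64, out_ch=1):
        super().__init__()
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(feat_ch, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, out_ch, 4, stride=2, padding=1), nn.Sigmoid(),
        )
    def forward(self, f):
        return self.decoder(f)

class DNet(nn.Module):
    """Discriminator (D-Net): original vs. reconstructed image."""
    def __init__(self, in_ch=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 1),
        )
    def forward(self, x):
        return self.net(x)

def training_step(x, y, f_net, clf, r_net, d_net, opt_main, opt_d,
                  lambda_rec=1.0, lambda_adv=0.1):
    """One alternating (adversarial) update: the reconstruction and adversarial
    terms regularize the classifier; the weights are placeholders."""
    # --- update D-Net: distinguish original images from reconstructions ---
    with torch.no_grad():
        recon = r_net(f_net(x))
    d_real, d_fake = d_net(x), d_net(recon)
    loss_d = (F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real))
              + F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake)))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # --- update F-Net, classifier, R-Net: classify correctly and fool D-Net ---
    feats = f_net(x)
    logits, recon = clf(feats), r_net(feats)
    d_fake = d_net(recon)
    loss_cls = F.cross_entropy(logits, y)          # supervised classification term
    loss_rec = F.l1_loss(recon, x)                 # reconstruction regularizer
    loss_adv = F.binary_cross_entropy_with_logits(d_fake, torch.ones_like(d_fake))
    loss = loss_cls + lambda_rec * loss_rec + lambda_adv * loss_adv
    opt_main.zero_grad(); loss.backward(); opt_main.step()
    return loss_cls.item(), loss_rec.item(), loss_adv.item(), loss_d.item()

# Example wiring (illustrative): one optimizer for F-Net/classifier/R-Net, one for D-Net.
# f_net, clf, r_net, d_net = FNet(), Classifier(), RNet(), DNet()
# opt_main = torch.optim.Adam(list(f_net.parameters()) + list(clf.parameters())
#                             + list(r_net.parameters()), lr=1e-4)
# opt_d = torch.optim.Adam(d_net.parameters(), lr=1e-4)
```

The design point mirrored from the abstract is that the reconstruction and adversarial losses are derived automatically from the images themselves and require no extra annotation, while the cross-entropy term drives the classification task.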
Details
- Language :
- English
- ISSN :
- 0094-2405
- Volume :
- 51
- Issue :
- 11
- Database :
- Academic Search Index
- Journal :
- Medical Physics
- Publication Type :
- Academic Journal
- Accession number :
- 180622916
- Full Text :
- https://doi.org/10.1002/mp.17320