Cross‐modality (CT‐MRI) prior augmented deep learning for robust lung tumor segmentation from small MR datasets.
- Source :
- Medical Physics; Oct 2019, Vol. 46, Issue 10, p4392‐4404, 13p
- Publication Year :
- 2019
Abstract
- Purpose: Accurate tumor segmentation is a requirement for magnetic resonance (MR)‐based radiotherapy. The lack of large expert‐annotated MR datasets makes training deep learning models difficult. Therefore, a cross‐modality (MR‐CT) deep learning segmentation approach was developed that augments training data with pseudo MR images produced by transforming expert‐segmented CT images.
- Methods: Eighty‐one T2‐weighted MRI scans from 28 patients with non‐small cell lung cancers (nine with pretreatment and weekly MRI scans, and the remainder with pretreatment MRI scans only) were analyzed. A cross‐modality model encoding the transformation of CT into pseudo MR images resembling T2‐weighted MRI was learned as a generative adversarial deep learning network. This model was used to translate 377 expert‐segmented non‐small cell lung cancer CT scans from the Cancer Imaging Archive into pseudo MR images that served as an additional training set. The method was benchmarked against shallow learning using random forest, standard data augmentation, and three state‐of‐the‐art adversarial‐learning‐based cross‐modality data (pseudo MR) augmentation methods. Segmentation accuracy was computed using the Dice similarity coefficient (DSC), Hausdorff distance metrics, and volume ratio.
- Results: The proposed approach produced the lowest statistical variability in the intensity distribution between pseudo and T2‐weighted MR images, measured as a Kullback–Leibler divergence of 0.069. It produced the highest segmentation accuracy, with a DSC of 0.75 ± 0.12 and the lowest Hausdorff distance of 9.36 ± 6.00 mm on the test dataset using a U‐Net structure. It also produced tumor growth estimates highly similar to those of an expert (P = 0.37).
- Conclusions: A novel deep learning MR segmentation method was developed that overcomes the limitation of learning robust models from small datasets by leveraging learned cross‐modality information, using a model that explicitly incorporates knowledge of tumors in modality translation to augment segmentation training. The results show the feasibility of the approach and the corresponding improvement over state‐of‐the‐art methods. [ABSTRACT FROM AUTHOR]
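- Note: For readers reproducing a comparable evaluation, the sketch below illustrates the metrics reported in the abstract (DSC, symmetric Hausdorff distance, volume ratio, and Kullback–Leibler divergence between intensity histograms). It is a minimal sketch only: the function names, the histogram binning, and the use of NumPy/SciPy are illustrative assumptions, not the authors' implementation.

```python
# Illustrative metric computations for binary tumor masks and image intensities.
# Assumes masks and images are NumPy arrays; not the authors' code.
import numpy as np
from scipy.spatial.distance import directed_hausdorff
from scipy.stats import entropy


def dice_similarity(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient (DSC) between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    overlap = np.logical_and(pred, truth).sum()
    return 2.0 * overlap / (pred.sum() + truth.sum())


def hausdorff_distance(pred_pts: np.ndarray, truth_pts: np.ndarray) -> float:
    """Symmetric Hausdorff distance between two surface point sets (N x 3)."""
    return max(directed_hausdorff(pred_pts, truth_pts)[0],
               directed_hausdorff(truth_pts, pred_pts)[0])


def volume_ratio(pred: np.ndarray, truth: np.ndarray) -> float:
    """Ratio of segmented to expert-delineated tumor volume (voxel counts)."""
    return pred.astype(bool).sum() / truth.astype(bool).sum()


def intensity_kl_divergence(pseudo_mr: np.ndarray, real_mr: np.ndarray,
                            bins: int = 256) -> float:
    """KL divergence between normalized intensity histograms of two image sets."""
    lo = min(pseudo_mr.min(), real_mr.min())
    hi = max(pseudo_mr.max(), real_mr.max())
    p, _ = np.histogram(pseudo_mr, bins=bins, range=(lo, hi), density=True)
    q, _ = np.histogram(real_mr, bins=bins, range=(lo, hi), density=True)
    eps = 1e-12  # avoid log/division issues in empty bins
    return float(entropy(p + eps, q + eps))
```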
Details
- Language :
- English
- ISSN :
- 0094‐2405
- Volume :
- 46
- Issue :
- 10
- Database :
- Complementary Index
- Journal :
- Medical Physics
- Publication Type :
- Academic Journal
- Accession number :
- 139190152
- Full Text :
- https://doi.org/10.1002/mp.13695