Lesion segmentation using 3D scan and deep learning for the evaluation of facial portwine stain birthmarks.
- Source :
- Photodiagnosis & Photodynamic Therapy; Apr 2024, Vol. 46
- Publication Year :
- 2024
-
Abstract
- • An improved DeepLabV3+ network (IDeepLabV3+) for PWS lesion segmentation was developed based on the classic DeepLabV3+ architecture.
- • The deep-learning-based semantic segmentation method can automatically segment PWS lesions of different colors and shapes in texture maps of 3D images.
- • Semantic segmentation can be used together with 3D scanning to evaluate the area of facial PWS lesions.
- Portwine stain (PWS) birthmarks are congenital vascular malformations. The quantification of PWS area is an important step in lesion classification and treatment evaluation. The aim of this study was to evaluate the combination of 3D scanning with deep learning for automated quantification of PWS area. PWS color was measured using a portable spectrophotometer. PWS patches (29.26–45.82 cm²) of different colors and shapes were generated for 2D and 3D PWS models. 3D images were acquired with a handheld 3D scanner to create texture maps. For semantic segmentation, an improved DeepLabV3+ network was developed to extract PWS lesions from the texture maps of the 3D images. To achieve accurate extraction of lesion regions, the convolutional block attention module (CBAM) and DENSE were introduced, and the network was trained with the Ranger optimizer. The performance of different backbone networks for PWS lesion extraction was also compared. IDeepLabV3+ with an Xception backbone showed the best results in PWS lesion extraction and area quantification: mean Intersection over Union (MIoU) 0.9797, Mean Pixel Accuracy (MPA) 0.9908, Accuracy 0.9989, Recall 0.9886, and F1-score 0.9897. In PWS area quantification, the mean area error rate of this scheme was 2.61 ± 2.33. The new 3D method developed in this study achieved accurate quantification of PWS lesion area and has potential for clinical applications. PWS patches of different colors and shapes were used to create 2D and 3D models. 3D images were acquired by a 3D scanner to create the corresponding texture maps. Deep-learning automatic segmentation using different backbone networks was used to extract PWS lesions from the texture maps of the 3D images. [ABSTRACT FROM AUTHOR]
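- Note: the abstract reports MIoU, MPA, Accuracy, Recall, F1-score, and an area error rate, but not the formulas behind them. The sketch below is a minimal illustration using the standard two-class (lesion/background) definitions of these metrics, plus a hypothetical `cm2_per_pixel` calibration factor for converting segmented texture-map pixels into lesion area; it is not the authors' implementation.

```python
import numpy as np

def segmentation_metrics(pred, target, eps=1e-8):
    """Standard binary segmentation metrics from a confusion matrix.

    pred, target: arrays of the same shape, nonzero = lesion pixel.
    Returns MIoU, MPA, Accuracy, Recall, and F1-score.
    """
    pred = pred.astype(bool)
    target = target.astype(bool)

    tp = np.logical_and(pred, target).sum()
    tn = np.logical_and(~pred, ~target).sum()
    fp = np.logical_and(pred, ~target).sum()
    fn = np.logical_and(~pred, target).sum()

    # Per-class IoU and pixel accuracy, averaged over lesion and background.
    iou_lesion = tp / (tp + fp + fn + eps)
    iou_background = tn / (tn + fp + fn + eps)
    miou = (iou_lesion + iou_background) / 2

    pa_lesion = tp / (tp + fn + eps)
    pa_background = tn / (tn + fp + eps)
    mpa = (pa_lesion + pa_background) / 2

    accuracy = (tp + tn) / (tp + tn + fp + fn + eps)
    recall = tp / (tp + fn + eps)
    precision = tp / (tp + fp + eps)
    f1 = 2 * precision * recall / (precision + recall + eps)

    return {"MIoU": miou, "MPA": mpa, "Accuracy": accuracy,
            "Recall": recall, "F1": f1}

def area_error_rate(pred, target, cm2_per_pixel):
    """Relative error (%) between segmented and reference lesion areas,
    given a per-pixel area calibration for the 3D texture map (assumed)."""
    pred_area = pred.astype(bool).sum() * cm2_per_pixel
    true_area = target.astype(bool).sum() * cm2_per_pixel
    return abs(pred_area - true_area) / true_area * 100
```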
Details
- Language :
- English
- ISSN :
- 1572-1000
- Volume :
- 46
- Database :
- Supplemental Index
- Journal :
- Photodiagnosis & Photodynamic Therapy
- Publication Type :
- Academic Journal
- Accession number :
- 177515308
- Full Text :
- https://doi.org/10.1016/j.pdpdt.2024.104030