The development of an attention mechanism enhanced deep learning model and its application for body composition assessment with L3 CT images
- Authors
- Liang Zhang, Jiao Li, Zhi Yang, Jun Yan, Lin Zhang, and Long-bo Gong
- Subjects
- Segmentation, Body composition, UNet, Adipose tissue, Skeletal muscle, Medicine, Science
- Abstract
- Body composition assessment is very useful for evaluating a patient's status in the clinic, but recognizing, labeling, and calculating body composition manually is burdensome. This study aimed to develop a web-based service that automates the calculation of the areas of skeletal muscle (SM), visceral adipose tissue (VAT), and subcutaneous adipose tissue (SAT) from L3 computed tomography (CT) images. A total of 1500 L3 CT images were gathered from Xuzhou Central Hospital; 70% were used as the training dataset and the remaining 30% as the validation dataset. The UNet framework was combined with attention gate (AG), Squeeze-and-Excitation block (SE block), and Atrous Spatial Pyramid Pooling (ASPP) modules to construct the segmentation deep learning model. The model's efficacy was externally validated on two additional test datasets using multiple metrics, a consistency test, and manual result checking. A graphical user interface was also created and deployed with the Streamlit Python package. The resulting model, named the L3 Body Composition Segmentation Model (L3BCSM), achieved median Dice scores of 0.954 (0.930, 0.963) for the SAT area, 0.849 (0.774, 0.901) for the VAT area, and 0.920 (0.901, 0.936) for the SM area, which is equal to or better than classic models, including UNETR and AHNet. L3BCSM also achieved satisfactory metrics on the two external test datasets, consistent with the qualified labels. An internet-based application was developed using L3BCSM with four functional modules: population analysis, time series analysis, consistency analysis, and manual result checking. The body composition assessment application was successfully developed and is expected to benefit clinical practice and related research.
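The abstract names three modules grafted onto UNet (AG, SE block, ASPP) but does not reproduce the authors' code. The sketch below is a minimal, illustrative PyTorch version of how such modules are commonly implemented and where they typically attach (SE on encoder feature maps, AG on skip connections, ASPP at the bottleneck). All class names, channel sizes, and placement choices here are assumptions for illustration, not the L3BCSM implementation.

```python
# Hypothetical sketch (not the authors' code): minimal PyTorch versions of the three
# modules named in the abstract -- Squeeze-and-Excitation (SE) block, attention gate
# (AG), and Atrous Spatial Pyramid Pooling (ASPP) -- as they are commonly attached to
# a 2D UNet: SE on encoder features, AG on skip connections, ASPP at the bottleneck.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SEBlock(nn.Module):
    """Channel re-weighting: global average pool -> two FC layers -> sigmoid gate."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):
        w = self.fc(x.mean(dim=(2, 3)))            # (N, C) per-channel weights
        return x * w.view(*w.shape, 1, 1)

class AttentionGate(nn.Module):
    """Suppress irrelevant skip-connection activations using a decoder gating signal."""
    def __init__(self, skip_ch, gate_ch, inter_ch):
        super().__init__()
        self.w_skip = nn.Conv2d(skip_ch, inter_ch, 1)
        self.w_gate = nn.Conv2d(gate_ch, inter_ch, 1)
        self.psi = nn.Conv2d(inter_ch, 1, 1)

    def forward(self, skip, gate):
        gate = F.interpolate(gate, size=skip.shape[2:], mode="bilinear",
                             align_corners=False)
        attn = torch.sigmoid(self.psi(F.relu(self.w_skip(skip) + self.w_gate(gate))))
        return skip * attn                          # spatially re-weighted skip features

class ASPP(nn.Module):
    """Parallel dilated convolutions capture multi-scale context at the bottleneck."""
    def __init__(self, in_ch, out_ch, rates=(1, 6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r) for r in rates
        )
        self.project = nn.Conv2d(out_ch * len(rates), out_ch, 1)

    def forward(self, x):
        return self.project(torch.cat([b(x) for b in self.branches], dim=1))

if __name__ == "__main__":
    # Quick shape check with stand-in feature maps (not real L3 CT data).
    x = torch.randn(1, 64, 128, 128)               # encoder feature map
    gate = torch.randn(1, 128, 64, 64)             # coarser decoder gating signal
    skip = AttentionGate(64, 128, 32)(x, gate)
    feats = ASPP(64, 64)(SEBlock(64)(skip))
    print(feats.shape)                             # torch.Size([1, 64, 128, 128])
```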
- Published
- 2024