A Deep Cascade Architecture for Stroke Lesion Segmentation and Synthetic Parametric Map Generation over CT Studies
- Authors
Sebastian Florez, Santiago Gomez, Julian Garcia, and Fabio Martínez
- Subjects
Computed tomography, Stroke Segmentation, Parametric map, Attention, Psychology (BF1-990)
- Abstract
Stroke is the second leading cause of mortality worldwide, and immediate attention and diagnosis play a crucial role in patient prognosis. Computed tomography (CT) is currently the most widely used imaging modality for early analysis and stroke lesion detection. Nonetheless, acute lesions are not visible on CT, so this modality is mainly used for screening and to rule out other neurological conditions. Expert radiologists can observe stroke lesions at advanced stages (subacute and chronic) as hypodense regions, but with limited sensitivity and marked inter-subject variability. Computational strategies based on deep autoencoders and multimodal inputs have been proposed to support lesion segmentation; however, they remain limited by the high variability in the appearance and geometry of stroke lesions. This work introduces a novel deep representation that uses multimodal inputs from CT studies and parametric maps, computed from CT perfusion (CTP), to retrieve stroke lesions. The architecture follows a deep autoencoder representation that forces attention on the geometry of the stroke through additive cross-attention modules. In addition, a cascaded training scheme is proposed to generate synthetic perfusion maps that complement the multimodal inputs and help refine the stroke lesion at each processing stage. The proposed approach produces saliency maps that support expert observational analysis of lesion localization, and it also provides automatic estimation of the stroke shape. The approach was validated on the ISLES 2018 public dataset, comprising 92 studies annotated by an expert radiologist, achieving a Dice score of 0.66 and a precision of 0.67 and outperforming classical autoencoder baselines.
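To make the additive cross-attention idea mentioned in the abstract concrete, the following is a minimal PyTorch sketch of an additive attention gate in the spirit of Attention U-Net, where decoder (gating) features re-weight encoder skip features so the network focuses on lesion geometry. The class and parameter names (AdditiveCrossAttentionGate, skip_channels, gate_channels, inter_channels) are illustrative assumptions and do not come from the paper; the authors' exact module design may differ.

```python
import torch
import torch.nn as nn


class AdditiveCrossAttentionGate(nn.Module):
    """Hypothetical additive attention gate (Attention U-Net style).

    Projects skip and gating features to a common space, adds them, and
    derives per-pixel attention coefficients used to re-weight the skip
    connection before it enters the decoder.
    """

    def __init__(self, skip_channels: int, gate_channels: int, inter_channels: int):
        super().__init__()
        self.theta = nn.Conv2d(skip_channels, inter_channels, kernel_size=1)
        self.phi = nn.Conv2d(gate_channels, inter_channels, kernel_size=1)
        self.psi = nn.Conv2d(inter_channels, 1, kernel_size=1)
        self.relu = nn.ReLU(inplace=True)
        self.sigmoid = nn.Sigmoid()

    def forward(self, skip: torch.Tensor, gate: torch.Tensor) -> torch.Tensor:
        # Additive combination of the projected skip and gating features.
        attn = self.relu(self.theta(skip) + self.phi(gate))
        # Per-pixel attention coefficients in [0, 1] (a coarse saliency map).
        alpha = self.sigmoid(self.psi(attn))
        # Re-weight the skip connection with the attention coefficients.
        return skip * alpha


if __name__ == "__main__":
    gate = AdditiveCrossAttentionGate(skip_channels=64, gate_channels=64, inter_channels=32)
    skip = torch.randn(1, 64, 32, 32)   # e.g., CT-branch encoder features
    g = torch.randn(1, 64, 32, 32)      # e.g., decoder / parametric-map features
    print(gate(skip, g).shape)          # torch.Size([1, 64, 32, 32])
```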
- Published
- 2024