Segmentation-Free Outcome Prediction from Head and Neck Cancer PET/CT Images: Deep Learning-Based Feature Extraction from Multi-Angle Maximum Intensity Projections (MA-MIPs).
- Author
- Toosi, Amirhosein; Shiri, Isaac; Zaidi, Habib; and Rahmim, Arman
- Subjects
- Lymph nodes; Radiopharmaceuticals; Computer-assisted image analysis (medicine); Task performance; Research funding; Computed tomography; Head & neck cancer; Deoxy sugars; Positron emission tomography; Descriptive statistics; Deep learning; Artificial neural networks; Progression-free survival; Machine learning; Automation; Evaluation
- Abstract
Simple Summary: Head and neck cancer is a serious health concern that affects millions of people across the globe. Predicting how patients will respond to therapy is critical for providing optimal care. To make these predictions, one may first manually identify tumour boundaries on medical imaging in order to obtain the necessary information. Manually establishing tumour borders, however, is costly, time-consuming, and prone to error and disagreement. Our AI-based research provides a unique method for predicting patient outcome that eliminates the need for manual delineation steps. Instead, we collect information from the patient's whole head and neck area on PET scans as "looked upon" from a variety of angles. Our technique is faster, more consistent, and more precise than existing methods, with the potential to help doctors deliver better care to patients.

We introduce an innovative, simple, and effective segmentation-free approach for survival analysis of head and neck cancer (HNC) patients from PET/CT images. By harnessing deep learning-based feature extraction techniques and multi-angle maximum intensity projections (MA-MIPs) applied to fluorodeoxyglucose positron emission tomography (FDG-PET) images, our proposed method eliminates the need for manual segmentation of regions of interest (ROIs) such as primary tumors and involved lymph nodes. Instead, a state-of-the-art object detection model is trained on the CT images to automatically crop the entire head and neck anatomical region, rather than only the lesions or involved lymph nodes, from the PET volumes. A pre-trained deep convolutional neural network backbone is then used to extract deep features from MA-MIPs obtained from 72 multi-angle axial rotations of the cropped PET volumes. The deep features extracted from these multiple projection views of the PET volumes are aggregated and fused, and employed to perform recurrence-free survival analysis on a cohort of 489 HNC patients.
The proposed approach outperforms the best-performing method reported on the target dataset for the task of recurrence-free survival analysis. By circumventing manual delineation of the malignancies on the FDG PET/CT images, our approach eliminates the dependency on subjective interpretations and greatly enhances the reproducibility of the proposed survival analysis method. The code for this work is publicly released. [ABSTRACT FROM AUTHOR]
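The core MA-MIP step described in the abstract (rotating the cropped PET volume about its axial axis in 72 equal steps and taking a maximum intensity projection at each angle) can be sketched as follows. This is a minimal illustration, not the authors' released code: the function name, parameters, and interpolation settings are assumptions for demonstration.

```python
import numpy as np
from scipy.ndimage import rotate

def ma_mips(pet_volume, n_angles=72):
    """Compute multi-angle maximum intensity projections (MA-MIPs).

    Rotates a 3-D PET volume (z, y, x) about its axial (z) axis in
    equal steps (72 angles -> 5 degrees apart) and takes a maximum
    intensity projection along one in-plane axis at each angle.

    Note: name and defaults are illustrative; the paper's actual
    implementation details may differ.
    """
    step = 360.0 / n_angles
    projections = []
    for k in range(n_angles):
        # Rotate within the axial plane (axes 1 and 2 = y, x),
        # keeping the array shape fixed so projections align.
        rotated = rotate(pet_volume, angle=k * step, axes=(1, 2),
                         reshape=False, order=1, mode="constant", cval=0.0)
        # Maximum intensity projection along one in-plane axis.
        projections.append(rotated.max(axis=2))
    return np.stack(projections)  # shape: (n_angles, z, y)

# Tiny synthetic example standing in for a cropped PET volume.
vol = np.zeros((8, 16, 16), dtype=np.float32)
vol[4, 8, 12] = 5.0  # a single bright "lesion" voxel
mips = ma_mips(vol, n_angles=72)
print(mips.shape)  # (72, 8, 16)
```

Each of the 72 resulting 2-D projections could then be passed to a pre-trained 2-D CNN backbone for feature extraction, with the per-view features aggregated before the survival model, as the abstract describes.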
- Published
- 2024