18 results for "Bontempi, Dennis"
Search Results
2. Enrichment of lung cancer computed tomography collections with AI-derived annotations
- Author
- Krishnaswamy, Deepa, Bontempi, Dennis, Thiriveedhi, Vamsi Krishna, Punzo, Davide, Clunie, David, Bridge, Christopher P., Aerts, Hugo J. W. L., Kikinis, Ron, and Fedorov, Andrey
- Published
- 2024
- Full Text
- View/download PDF
3. Foundation model for cancer imaging biomarkers
- Author
- Pai, Suraj, Bontempi, Dennis, Hadzic, Ibrahim, Prudente, Vasco, Sokač, Mateo, Chaunzwa, Tafadzwa L., Bernatz, Simon, Hosny, Ahmed, Mak, Raymond H., Birkbak, Nicolai J., and Aerts, Hugo J. W. L.
- Published
- 2024
- Full Text
- View/download PDF
4. Deep learning to estimate lung disease mortality from chest radiographs
- Author
- Weiss, Jakob, Raghu, Vineet K., Bontempi, Dennis, Christiani, David C., Mak, Raymond H., Lu, Michael T., and Aerts, Hugo J.W.L.
- Published
- 2023
- Full Text
- View/download PDF
5. Screening for extranodal extension in HPV-associated oropharyngeal carcinoma: evaluation of a CT-based deep learning algorithm in patient data from a multicentre, randomised de-escalation trial
- Author
- Kann, Benjamin H, Likitlersuang, Jirapat, Bontempi, Dennis, Ye, Zezhong, Aneja, Sanjay, Bakst, Richard, Kelly, Hillary R, Juliano, Amy F, Payabvash, Sam, Guenette, Jeffrey P, Uppaluri, Ravindra, Margalit, Danielle N, Schoenfeld, Jonathan D, Tishler, Roy B, Haddad, Robert, Aerts, Hugo J W L, Garcia, Joaquin J, Flamand, Yael, Subramaniam, Rathan M, Burtness, Barbara A, and Ferris, Robert L
- Published
- 2023
- Full Text
- View/download PDF
6. Computed tomography-based radiomics for the differential diagnosis of pneumonitis in stage IV non-small cell lung cancer patients treated with immune checkpoint inhibitors
- Author
- Tohidinezhad, Fariba, Bontempi, Dennis, Zhang, Zhen, Dingemans, Anne-Marie, Aerts, Joachim, Bootsma, Gerben, Vansteenkiste, Johan, Hashemi, Sayed, Smit, Egbert, Gietema, Hester, Aerts, Hugo JWL., Dekker, Andre, Hendriks, Lizza E.L., Traverso, Alberto, and De Ruysscher, Dirk
- Published
- 2023
- Full Text
- View/download PDF
7. 1733: Factors associated with cardiac toxicity after radical radiotherapy in patients with lung cancer
- Author
- Tohidinezhad, Fariba, Nürnberg, Leonard, Vaassen, Femke, Bontempi, Dennis, Bekke, Rachel Ter, Hendriks, Lizza, Traverso, Alberto, Dekker, Andre, and De Ruysscher, Dirk
- Published
- 2024
- Full Text
- View/download PDF
8. Prospective deployment of an automated implementation solution for artificial intelligence translation to clinical radiation oncology.
- Author
- Kehayias, Christopher E., Yan, Yujie, Bontempi, Dennis, Quirk, Sarah, Bitterman, Danielle S., Bredfeldt, Jeremy S., Aerts, Hugo J. W. L., Mak, Raymond H., and Guthier, Christian V.
- Subjects
- ARTIFICIAL intelligence, MEDICAL dosimetry, ACADEMIC departments, SOFTWARE development tools, DEEP learning
- Abstract
Introduction: Artificial intelligence (AI)-based technologies embody countless solutions in radiation oncology, yet translation of AI-assisted software tools to actual clinical environments remains unrealized. We present the Deep Learning On-Demand Assistant (DL-ODA), a fully automated, end-to-end clinical platform that enables AI interventions for any disease site, featuring an automated model-training pipeline, autosegmentations, and QA reporting. Materials and methods: We developed, tested, and prospectively deployed the DL-ODA system at a large university-affiliated hospital center. Medical professionals activate the DL-ODA via two pathways: (1) On-Demand, used for immediate AI decision support for a patient-specific treatment plan, and (2) Ambient, in which QA is provided for all daily radiotherapy (RT) plans by comparing DL segmentations with manual delineations and calculating the dosimetric impact. To demonstrate the implementation of a new anatomy segmentation, we used the model-training pipeline to generate a breast segmentation model based on a large clinical dataset. Additionally, the contour QA functionality of existing models was assessed using a retrospective cohort of 3,399 lung and 885 spine RT cases. Ambient QA was performed for various disease sites, including spine RT and heart for dosimetric sparing. Results: Successful training of the breast model was completed in less than a day and resulted in clinically viable whole-breast contours. For the retrospective analysis, we evaluated manual-versus-AI similarity for the ten most common structures. The DL-ODA detected high similarity in heart, lung, liver, and kidney delineations, but lower similarity for esophagus, trachea, stomach, and small bowel, due largely to incomplete manual contouring. The deployed Ambient QAs for heart and spine sites have prospectively processed over 2,500 and 230 cases over 9 and 5 months, respectively, automatically alerting the RT personnel.
Discussion: The DL-ODA capabilities in providing universal AI interventions were demonstrated for On-Demand contour QA, DL segmentations, and automated model training, and confirmed successful integration of the system into a large academic radiotherapy department. The novelty of deploying the DL-ODA as a multi-modal, fully automated end-to-end AI clinical implementation solution marks a significant step towards a generalizable framework that leverages AI to improve the efficiency and reliability of RT systems. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
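The Ambient QA pathway described in the abstract above compares DL segmentations against manual delineations. A common basis for such contour comparisons is the Dice similarity coefficient; the sketch below is illustrative only (it is not the DL-ODA implementation, and the flat binary masks are invented for the example):

```python
def dice_coefficient(mask_a, mask_b):
    """Dice similarity between two binary masks given as flat 0/1 lists."""
    intersection = sum(a & b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / total

manual = [0, 1, 1, 1, 0, 0]  # manual delineation
auto = [0, 1, 1, 0, 0, 0]    # DL segmentation
print(dice_coefficient(manual, auto))  # 2*2/(3+2) = 0.8
```

A QA system would flag a case when this score falls below a site-specific threshold.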
9. Oropharyngeal Tumour Segmentation Using Ensemble 3D PET-CT Fusion Networks for the HECKTOR Challenge
- Author
- Rao, Chinmay, Pai, Suraj, Hadzic, Ibrahim, Zhovannik, Ivan, Bontempi, Dennis, Dekker, Andre, Teuwen, Jonas, Traverso, Alberto, Andrearczyk, V., Oreiller, V., Depeursinge, A., Beeldvorming, RS: GROW - R3 - Innovative Cancer Diagnostics & Therapy, and Radiotherapie
- Subjects
- PET-CT, business.industry, Computer science, Test set, Automatic segmentation, Context (language use), Pattern recognition, Dice, Artificial intelligence, Tomography, Radiotherapy treatment planning, business, Tumour segmentation
- Abstract
Automatic segmentation of tumours and organs at risk can function as a useful support tool in radiotherapy treatment planning as well as for validating radiomics studies on larger cohorts. In this paper, we developed robust automatic segmentation methods for the delineation of gross tumour volumes (GTVs) from planning Computed Tomography (CT) and FDG-Positron Emission Tomography (PET) images of head and neck cancer patients. The data was supplied as part of the MICCAI 2020 HECKTOR challenge. We developed two main volumetric approaches: A) an end-to-end volumetric approach and B) a slice-by-slice prediction approach that integrates 3D context around the slice of interest. We exploited differences in the representations provided by these two approaches by ensembling them, obtaining a Dice score of 66.9% on the held out validation set. On an external and independent test set, a final Dice score of 58.7% was achieved.
- Published
- 2021
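The abstract above reports that the end-to-end volumetric and slice-by-slice approaches were ensembled. The paper summary does not state the exact ensembling rule, so the sketch below assumes one common choice, voxel-wise probability averaging followed by thresholding (the probability lists are invented for the example):

```python
def ensemble_predictions(probs_a, probs_b, threshold=0.5):
    """Average per-voxel foreground probabilities from two models,
    then binarise the averaged map at the given threshold."""
    return [1 if (pa + pb) / 2.0 >= threshold else 0
            for pa, pb in zip(probs_a, probs_b)]

model_a = [0.9, 0.6, 0.4, 0.1]  # end-to-end volumetric model (hypothetical)
model_b = [0.8, 0.3, 0.7, 0.2]  # slice-by-slice model (hypothetical)
print(ensemble_predictions(model_a, model_b))  # [1, 0, 1, 0]
```

Averaging exploits the fact that the two approaches make partly uncorrelated errors, which is why the ensemble can outperform either model alone.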
10. CEREBRUM‐7T: Fast and Fully Volumetric Brain Segmentation of 7 Tesla MR Volumes.
- Author
- Svanera, Michele, Benini, Sergio, Bontempi, Dennis, and Muckli, Lars
- Subjects
MAGNETIC resonance imaging ,BRAIN imaging ,FUNCTIONAL magnetic resonance imaging ,CONVOLUTIONAL neural networks ,IMAGE analysis ,IMAGE segmentation - Abstract
Ultra‐high‐field magnetic resonance imaging (MRI) enables sub‐millimetre resolution imaging of the human brain, allowing the study of functional circuits of cortical layers at the meso‐scale. An essential step in many functional and structural neuroimaging studies is segmentation, the operation of partitioning the MR images in anatomical structures. Despite recent efforts in brain imaging analysis, the literature lacks in accurate and fast methods for segmenting 7‐tesla (7T) brain MRI. We here present CEREBRUM‐7T, an optimised end‐to‐end convolutional neural network, which allows fully automatic segmentation of a whole 7T T1w MRI brain volume at once, without partitioning the volume, pre‐processing, nor aligning it to an atlas. The trained model is able to produce accurate multi‐structure segmentation masks on six different classes plus background in only a few seconds. The experimental part, a combination of objective numerical evaluations and subjective analysis, confirms that the proposed solution outperforms the training labels it was trained on and is suitable for neuroimaging studies, such as layer functional MRI studies. Taking advantage of a fine‐tuning operation on a reduced set of volumes, we also show how it is possible to effectively apply CEREBRUM‐7T to different sites data. Furthermore, we release the code, 7T data, and other materials, including the training labels and the Turing test. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
11. 147 - DEEP LEARNING SEGMENTATION OF HEART SUBSTRUCTURES IN RADIOTHERAPY TREATMENT PLANNING
- Author
- Nürnberg, Leonard, Bontempi, Dennis, De Ruysscher, Dirk, Dekker, Andre, Quesada, Enrique Hortal, Canters, Richard, and Traverso, Alberto
- Published
- 2022
- Full Text
- View/download PDF
12. Using virtual clinical trials to determine the accuracy of AI-based quantitative imaging biomarkers in oncology trials using standard-of-care CT.
- Author
- Byrd, Darrin, Bontempi, Dennis, Yang, Hao, Aerts, Hugo, Zhao, Binzhang, Fedorov, Andrey, Schwartz, Lawrence, Allison, Tavis, Moscowitz, Chaya, and Kinahan, Paul
- Published
- 2022
- Full Text
- View/download PDF
13. Segmentation Uncertainty Estimation as a Sanity Check for Image Biomarker Studies.
- Author
- Zhovannik, Ivan, Bontempi, Dennis, Romita, Alessio, Pfaehler, Elisabeth, Primakov, Sergey, Dekker, Andre, Bussink, Johan, Traverso, Alberto, and Monshouwer, René
- Subjects
- *LUNG cancer prognosis, *LUNG cancer, *DIGITAL image processing, *BIOMARKERS, *LOG-rank test, *MAGNETIC resonance imaging, *DIAGNOSTIC imaging, *PREDICTION models, *ALGORITHMS, *PROPORTIONAL hazards models, *EVALUATION, RESEARCH evaluation
- Abstract
Simple Summary: Radiomics is referred to as quantitative image biomarker analysis. Due to the uncertainty in image acquisition, processing, and segmentation (delineation) protocols, the radiomic biomarkers lack reproducibility. In this manuscript, we show how this protocol-induced uncertainty can drastically reduce prognostic model performance and propose some insights on how to use it for developing better prognostic models. Problem. Image biomarker analysis, also known as radiomics, is a tool for tissue characterization and treatment prognosis that relies on routinely acquired clinical images and delineations. Due to the uncertainty in image acquisition, processing, and segmentation (delineation) protocols, radiomics often lack reproducibility. Radiomics harmonization techniques have been proposed as a solution to reduce these sources of uncertainty and/or their influence on the prognostic model performance. A relevant question is how to estimate the protocol-induced uncertainty of a specific image biomarker, what the effect is on the model performance, and how to optimize the model given the uncertainty. Methods. Two non-small cell lung cancer (NSCLC) cohorts, composed of 421 and 240 patients, respectively, were used for training and testing. Per patient, a Monte Carlo algorithm was used to generate three hundred synthetic contours with a surface dice tolerance measure of less than 1.18 mm with respect to the original GTV. These contours were subsequently used to derive 104 radiomic features, which were ranked on their relative sensitivity to contour perturbation, expressed in the parameter η. The top four (low η) and the bottom four (high η) features were selected for two models based on the Cox proportional hazards model. 
To investigate the influence of segmentation uncertainty on the prognostic model, we trained and tested the setup in 5000 augmented realizations (using a Monte Carlo sampling method); the log-rank test was used to assess the stratification performance and its stability under segmentation uncertainty. Results. Although both the low and high η setups showed significant testing set log-rank p-values (p = 0.01) on the original GTV delineations (without segmentation uncertainty introduced), in the model with a high uncertainty-to-effect ratio only around 30% of the augmented realizations resulted in model performance with p < 0.05 in the test set. In contrast, the low η setup achieved a log-rank p < 0.05 in 90% of the augmented realizations. Moreover, the high η setup was uncertain in its predictions for 50% of the subjects in the testing set (at an 80% agreement rate), whereas the low η setup was uncertain in only 10% of the cases. Discussion. Estimating image biomarker model performance based only on the original GTV segmentation, without considering segmentation uncertainty, may be deceiving. The model might show a significant stratification performance yet be unstable under delineation variations, which are inherent to manual segmentation. Simulating segmentation uncertainty using the method described allows for more stable image biomarker estimation, selection, and model development. The segmentation uncertainty estimation method described here is universal and can be extended to estimate other protocol uncertainties (such as image acquisition and pre-processing). [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
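The abstract above ranks radiomic features by their relative sensitivity to contour perturbation, expressed as η. The paper's exact definition of η is not given in this summary, so the sketch below uses a coefficient-of-variation proxy to illustrate the same idea: a feature whose value varies strongly across Monte Carlo contour realizations is a poor candidate for a prognostic model (the feature values are invented for the example):

```python
import statistics

def feature_sensitivity(feature_values):
    """Proxy for a radiomic feature's sensitivity to contour perturbation:
    the coefficient of variation across Monte Carlo contour realizations.
    (Illustrative only; the paper's definition of eta may differ.)"""
    mean = statistics.mean(feature_values)
    return statistics.stdev(feature_values) / abs(mean)

# Hypothetical feature values re-extracted from perturbed synthetic contours
stable_feature = [10.1, 9.9, 10.0, 10.2, 9.8]
unstable_feature = [10.0, 6.0, 14.0, 8.0, 12.0]
print(feature_sensitivity(stable_feature) < feature_sensitivity(unstable_feature))  # True
```

Ranking features by such a sensitivity measure and keeping the low-η ones is what lets the model remain significant across delineation variations.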
14. CEREBRUM: a fast and fully-volumetric Convolutional Encoder-decodeR for weakly-supervised sEgmentation of BRain strUctures from out-of-the-scanner MRI.
- Author
- Bontempi, Dennis, Benini, Sergio, Signoroni, Alberto, Svanera, Michele, and Muckli, Lars
- Subjects
- *CONVOLUTIONAL neural networks, *BRAIN, *SUPERVISED learning, *IMAGE segmentation, *MORPHOMETRICS, *ARCHITECTURAL design, *MAGNETIC resonance imaging, *LEARNING strategies
- Abstract
• The first fully-volumetric CNN-based approach for multi-structure brain MRI segmentation.
• Architecture designed to enable the processing of a whole MRI volume without introducing any partitioning.
• Enables leveraging of both local (voxel-level) and global (spatial) features.
• Weakly-supervised training (exploiting automatic atlas-based segmentation) on a large pool of out-of-the-scanner volumes (900 brain scans).
• Experimental results, both quantitative (comparison with state-of-the-art approaches) and qualitative (survey of experts), support the adoption of our approach.
Many functional and structural neuroimaging studies call for accurate morphometric segmentation of different brain structures starting from image intensity values of MRI scans. Current automatic (multi-)atlas-based segmentation strategies often lack accuracy on difficult-to-segment brain structures and, since these methods rely on atlas-to-scan alignment, they may take long processing times. Alternatively, recent methods deploying solutions based on Convolutional Neural Networks (CNNs) are enabling the direct analysis of out-of-the-scanner data. However, current CNN-based solutions partition the test volume into 2D or 3D patches, which are processed independently. This process entails a loss of global contextual information, thereby negatively impacting the segmentation accuracy. In this work, we design and test an optimised end-to-end CNN architecture that makes the exploitation of global spatial information computationally tractable, allowing a whole MRI volume to be processed at once. We adopt a weakly supervised learning strategy by exploiting a large dataset composed of 947 out-of-the-scanner (3 Tesla T1-weighted 1 mm isotropic MP-RAGE 3D sequences) MR images. The resulting model is able to produce accurate multi-structure segmentation results in only a few seconds. Different quantitative measures demonstrate an improved accuracy of our solution when compared to state-of-the-art techniques.
Moreover, through a randomised survey involving expert neuroscientists, we show that subjective judgements favour our solution with respect to widely adopted atlas-based software. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
15. National Cancer Institute Imaging Data Commons: Toward Transparency, Reproducibility, and Scalability in Imaging Artificial Intelligence.
- Author
- Fedorov A, Longabaugh WJR, Pot D, Clunie DA, Pieper SD, Gibbs DL, Bridge C, Herrmann MD, Homeyer A, Lewis R, Aerts HJWL, Krishnaswamy D, Thiriveedhi VK, Ciausu C, Schacherer DP, Bontempi D, Pihl T, Wagner U, Farahani K, Kim E, and Kikinis R
- Subjects
- United States, Humans, National Cancer Institute (U.S.), Reproducibility of Results, Diagnostic Imaging, Multiomics, Artificial Intelligence, Neoplasms diagnostic imaging
- Abstract
The remarkable advances of artificial intelligence (AI) technology are revolutionizing established approaches to the acquisition, interpretation, and analysis of biomedical imaging data. Development, validation, and continuous refinement of AI tools requires easy access to large high-quality annotated datasets, which are both representative and diverse. The National Cancer Institute (NCI) Imaging Data Commons (IDC) hosts large and diverse publicly available cancer image data collections. By harmonizing all data based on industry standards and colocalizing it with analysis and exploration resources, the IDC aims to facilitate the development, validation, and clinical translation of AI tools and address the well-documented challenges of establishing reproducible and transparent AI processing pipelines. Balanced use of established commercial products with open-source solutions, interconnected by standard interfaces, provides value and performance, while preserving sufficient agility to address the evolving needs of the research community. Emphasis on the development of tools, use cases to demonstrate the utility of uniform data representation, and cloud-based analysis aim to ease adoption and help define best practices. Integration with other data in the broader NCI Cancer Research Data Commons infrastructure opens opportunities for multiomics studies incorporating imaging data to further empower the research community to accelerate breakthroughs in cancer detection, diagnosis, and treatment. Published under a CC BY 4.0 license.
- Published
- 2023
- Full Text
- View/download PDF
16. Decoding biological age from face photographs using deep learning.
- Author
- Zalay O, Bontempi D, Bitterman DS, Birkbak N, Shyr D, Haugg F, Qian JM, Roberts H, Perni S, Prudente V, Pai S, Dekker A, Haibe-Kains B, Guthier C, Balboni T, Warren L, Krishan M, Kann BH, Swanton C, Ruysscher D, Mak RH, and Aerts HJ
- Abstract
Because humans age at different rates, a person's physical appearance may yield insights into their biological age and physiological health more reliably than their chronological age. In medicine, however, appearance is incorporated into medical judgments in a subjective and non-standardized fashion. In this study, we developed and validated FaceAge, a deep learning system to estimate biological age from easily obtainable and low-cost face photographs. FaceAge was trained on data from 58,851 healthy individuals, and clinical utility was evaluated on data from 6,196 patients with cancer diagnoses from two institutions in the United States and The Netherlands. To assess the prognostic relevance of FaceAge estimation, we performed Kaplan-Meier survival analysis. To test a relevant clinical application of FaceAge, we assessed the performance of FaceAge in end-of-life patients with metastatic cancer who received palliative treatment by incorporating FaceAge into clinical prediction models. We found that, on average, cancer patients look older than their chronological age, and looking older is correlated with worse overall survival. FaceAge demonstrated significant independent prognostic performance in a range of cancer types and stages. We found that FaceAge can improve physicians' survival predictions in incurable patients receiving palliative treatments, highlighting the clinical utility of the algorithm to support end-of-life decision-making. FaceAge was also significantly associated with molecular mechanisms of senescence through gene analysis, while chronological age was not. These findings may extend to diseases beyond cancer, motivating the use of deep learning algorithms to translate a patient's visual appearance into objective, quantitative, and clinically useful measures.
- Published
- 2023
- Full Text
- View/download PDF
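The FaceAge abstract above rests its prognostic claims on Kaplan-Meier survival analysis. As a reminder of what that estimator computes, here is a minimal sketch for distinct event times (illustrative only; the study itself would have used standard survival software, and the times below are invented):

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival curve, assuming distinct event times.
    times: follow-up times; events: 1 = event observed, 0 = censored.
    Returns (time, survival probability) pairs at each event time."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    at_risk = len(times)
    survival = 1.0
    curve = []
    for i in order:
        if events[i] == 1:
            survival *= (at_risk - 1) / at_risk  # step down at each event
            curve.append((times[i], survival))
        at_risk -= 1  # censored subjects leave the risk set without a step
    return curve

times, events = [2, 3, 5], [1, 0, 1]  # one event, one censoring, one event
print(round(kaplan_meier(times, events)[0][1], 2))  # 0.67
```

Comparing such curves between "looks older than chronological age" and "looks younger" groups, typically with a log-rank test, is the standard way to show the survival separation the abstract describes.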
17. Foundation Models for Quantitative Biomarker Discovery in Cancer Imaging.
- Author
- Pai S, Bontempi D, Prudente V, Hadzic I, Sokač M, Chaunzwa TL, Bernatz S, Hosny A, Mak RH, Birkbak NJ, and Aerts HJ
- Abstract
Foundation models represent a recent paradigm shift in deep learning, where a single large-scale model trained on vast amounts of data can serve as the foundation for various downstream tasks. Foundation models are generally trained using self-supervised learning and excel in reducing the demand for training samples in downstream applications. This is especially important in medicine, where large labeled datasets are often scarce. Here, we developed a foundation model for imaging biomarker discovery by training a convolutional encoder through self-supervised learning using a comprehensive dataset of 11,467 radiographic lesions. The foundation model was evaluated in distinct and clinically relevant applications of imaging-based biomarkers. We found that it facilitated better and more efficient learning of imaging biomarkers and yielded task-specific models that significantly outperformed their conventional supervised counterparts on downstream tasks. The performance gain was most prominent when training dataset sizes were very limited. Furthermore, foundation models were more stable to input and inter-reader variations and showed stronger associations with underlying biology. Our results demonstrate the tremendous potential of foundation models in discovering novel imaging biomarkers that may extend to other clinical use cases and can accelerate the widespread translation of imaging biomarkers into clinical settings. Competing Interests: The authors declare no competing interests.
- Published
- 2023
- Full Text
- View/download PDF
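The sample-efficiency claim in the abstract above comes from reusing a frozen pretrained encoder and fitting only a lightweight downstream model on a small labeled set. A toy sketch of that workflow, where the nearest-centroid head stands in for the downstream model and the two-dimensional embeddings stand in for frozen encoder features (both are invented for the example; the actual work trains task-specific deep models):

```python
def nearest_centroid_fit(embeddings, labels):
    """Fit a minimal downstream 'head': one centroid per class
    in the frozen encoder's embedding space."""
    centroids = {}
    for label in set(labels):
        members = [e for e, l in zip(embeddings, labels) if l == label]
        dim = len(members[0])
        centroids[label] = [sum(m[d] for m in members) / len(members)
                            for d in range(dim)]
    return centroids

def nearest_centroid_predict(centroids, embedding):
    """Assign the class whose centroid is closest in embedding space."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist2(centroids[label], embedding))

# Hypothetical frozen-encoder embeddings for a handful of labeled lesions
train_embeddings = [(0.9, 0.1), (0.8, 0.2), (0.1, 0.9)]
train_labels = ["benign", "benign", "malignant"]
centroids = nearest_centroid_fit(train_embeddings, train_labels)
print(nearest_centroid_predict(centroids, (0.2, 0.8)))  # malignant
```

Because the encoder already separates the classes, even a head this simple can work with few labels, which is the intuition behind the reported gains on small training sets.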
18. NCI Imaging Data Commons.
- Author
- Fedorov A, Longabaugh WJR, Pot D, Clunie DA, Pieper S, Aerts HJWL, Homeyer A, Lewis R, Akbarzadeh A, Bontempi D, Clifford W, Herrmann MD, Höfener H, Octaviano I, Osborne C, Paquette S, Petts J, Punzo D, Reyes M, Schacherer DP, Tian M, White G, Ziegler E, Shmulevich I, Pihl T, Wagner U, Farahani K, and Kikinis R
- Subjects
- Biomedical Research trends, Cloud Computing, Computational Biology methods, Computer Graphics, Computer Security, Data Interpretation, Statistical, Databases, Factual, Diagnostic Imaging standards, Humans, Image Processing, Computer-Assisted, Pilot Projects, Programming Languages, Radiology methods, Radiology standards, Reproducibility of Results, Software, United States, User-Computer Interface, Diagnostic Imaging methods, National Cancer Institute (U.S.), Neoplasms diagnostic imaging, Neoplasms genetics
- Abstract
The National Cancer Institute (NCI) Cancer Research Data Commons (CRDC) aims to establish a national cloud-based data science infrastructure. Imaging Data Commons (IDC) is a new component of CRDC supported by the Cancer Moonshot. The goal of IDC is to enable a broad spectrum of cancer researchers, with and without imaging expertise, to easily access and explore the value of deidentified imaging data and to support integrated analyses with nonimaging data. We achieve this goal by colocating versatile imaging collections with cloud-based computing resources and data exploration, visualization, and analysis tools. The IDC pilot was released in October 2020 and is being continuously populated with radiology and histopathology collections. IDC provides access to curated imaging collections, accompanied by documentation, a user forum, and a growing number of analysis use cases that aim to demonstrate the value of a data commons framework applied to cancer imaging research. SIGNIFICANCE: This study introduces NCI Imaging Data Commons, a new repository of the NCI Cancer Research Data Commons, which will support cancer imaging research on the cloud. (©2021 The Authors; Published by the American Association for Cancer Research.)
- Published
- 2021
- Full Text
- View/download PDF
Discovery Service for Jio Institute Digital Library