96 results for "van Ooijen PMA"
Search Results
2. Relationship between the number of new nodules and lung cancer probability in incidence screening rounds of CT lung cancer screening: The NELSON study
- Author
-
Walter, JE, Heuvelmans, MA, de Bock, GH, Yousaf, Uraujh, Groen, HJM, van der Aalst, Carlijn, Nackaerts, K, van Ooijen, PMA, de Koning, Harry, Vliegenthart, R, and Oudkerk, M
- Published
- 2018
3. Is aortoiliac calcification linked to colorectal anastomotic leakage? A case-control study (vol 25, pg 123, 2016)
- Author
-
Boersema, Simone, Vakalopoulos, Konstantinos, Kock, MCJM, van Ooijen, PMA, Havenga, K, Kleinrensink, J, Jeekel, J (Hans), and Lange, Johan
- Published
- 2016
4. PACS storage requirements—influence of changes in imaging modalities
- Author
-
van Ooijen, PMA, ten Bhomer, PJM, Oudkerk, M, Lemke, HU, Inamura, K, Doi, K, Vannier, MW, and Farman, AG
- Subjects
imaging modalities, magnetic resonance imaging, DICOM, Picture archiving and communication system, capacity planning, storage requirements, data production, PACS - Abstract
In current radiology departments, imaging modalities are changing rapidly. One reason for these changes is continuous technical development, which provides new imaging modalities with higher spatial and temporal resolution and increased capabilities. Another important reason is the replacement of non-digital modalities by digital DICOM modalities to ensure easy storage into the Picture Archiving and Communication System (PACS). Besides the impact on working procedures for both radiologists and technologists, these changes in imaging modalities also influence the storage requirements of the PACS. In the University Hospital Groningen, digital archiving of imaging data started in late 1999. From that time on, monthly production figures were collected for the different modality types, and the dates of changes in imaging modalities were recorded. Our results show that the increase in data production is not limited to the recent advancement of multi-detector computed tomography (MDCT); new ultrasound (US) devices, magnetic resonance imaging (MRI), and X-ray angiography (XA) can also have a major impact on storage capacity requirements. Our measurements show that PACS storage capacity planning should allow a large margin for growth because of imaging modality replacement. (c) 2005 CARS & Elsevier B.V. All rights reserved.
- Published
- 2005
5. Multi-detector CT and 3D imaging in a multi-vendor PACS environment
- Author
-
van Ooijen, PMA, Witkamp, R, Oudkerk, M, Lemke, HU, Inamura, K, Doi, K, Vannier, MW, Farman, AG, and Reiber, JHC
- Subjects
multi-detector CT, 3D image processing, systems integration, Picture archiving and communication system, PACS - Abstract
The introduction of new hardware and software techniques such as Multi-Detector Computed Tomography (MDCT) and 3D imaging has put new demands on the Picture Archiving and Communications System (PACS) environment within the radiology department. Daily use of these new techniques requires their good integration within the PACS environment. Requirements should be set for the accessibility (ease and speed) of the large amounts of data and for the availability of 3D imaging. We feel that with good system integration of a multi-vendor environment these requirements can be met. This resulted in the environment proposed in this paper, which is installed at our institution. (C) 2003 Published by Elsevier Science B.V.
- Published
- 2003
6. DICOM storage into PACS of out-hospital CD-ROMs - a half year experience report
- Author
-
van Ooijen, PMA, ten Bhomer, PJM, Blecourt, MJ, Roosjen, R, van Dam, R, Oudkerk, M, Lemke, HU, Inamura, K, Doi, K, Vannier, MW, and Farman, AG
- Subjects
DICOM, workflow, CD-ROM, portable data interchange (PDI), IHE, Picture archiving and communication system, PACS - Abstract
With the advancing digitalization of radiology departments, the interchange of data between institutions is shifting towards shipment of CD-ROMs instead of physical film. Although this is a positive change in terms of costs, it also has its downsides. One of the main problems is how to integrate these CD-ROMs into the normal workflow. Although most CD-ROMs are equipped with a dedicated viewer, these viewers differ per vendor, so users have to learn to operate many different software packages. Some of the software packages also require installation on the local workstation, which is not always allowed. To solve these problems, we have to include the data from the CD-ROMs in the normal hospital workflow. To achieve this, a beta version of a software package was evaluated (DICOM Litebox; ETIAM, France). This software package reads all DICOM files from the CD-ROM, reconciles the DICOM headers, and stores the DICOM files into the PACS. A pilot with 77 CD-ROMs clearly showed the advantage of the procedure: all CD-ROMs were successfully stored into the PACS and thus available using the normal viewers within the hospital. During the first half year after implementing the procedure, changes were made to store only second-opinion cases into the PACS; all other data was stored only on the web server and will be removed after 2 years. (c) 2005 CARS & Elsevier B.V. All rights reserved.
- Published
- 2005
7. Agreement of diameter- and volume-based pulmonary nodule management in CT lung cancer screening
- Author
-
Heuvelmans, MA, Vliegenthart, R, Horeweg, N, de Jonge, GJ, van Ooijen, PMA, de Jong, PA, Scholten, ETh, de Bock, GH, Mali, WPTM, de Koning, HJ, and Oudkerk, M
- Published
- 2014
- Full Text
- View/download PDF
8. The influence of beam-pitch, reconstruction slice-thickness, and kernel on 3D visualisation of 16-MSCT data
- Author
-
Irwan, R, van Ooijen, PMA, Tukker, WGJ, Oudkerk, M, Lemke, HU, Inamura, K, Doi, K, Vannier, MW, Farman, AG, and Reiber, JHC
- Subjects
beam-pitch, reconstruction slice-thickness, kernels, multi-slice CT, 3D reconstruction, imaging phantom - Abstract
The latest 16-MSCT technology enables rotation times of less than 500 ms and therefore much faster scan times than previous generations. Although many studies on this topic have been done, evaluation and, more importantly, quantification of MSCT from a signal-processing perspective are still lacking. This paper presents a quantification of the influence of the most important MSCT parameters, namely the beam-pitch, reconstruction slice-thickness, and kernels, on 3D reconstruction and visualisation. The results of our phantom study recommend the use of a beam-pitch of approximately 1 and a reconstruction slice-thickness of 1.65 mm to obtain optimal 3D reconstructions. Moreover, vessels smaller than 3 mm will be highly affected by any kernel and therefore become unreliable. (C) 2003 Published by Elsevier Science B.V.
- Published
- 2003
9. Comparison of contrast-enhanced magnetic resonance angiography and conventional pulmonary angiography for the diagnosis of pulmonary embolism: a prospective study
- Author
-
Oudkerk, M, van Beek, EJR, Wielopolski, P, van Ooijen, PMA, Brouwers-Kuyper, EMJ, Bongaerts, AHH, and Berghout, A
- Subjects
UTILITY, COMPLICATIONS, VEIN THROMBOSIS, HELICAL CT, LUNG-SCAN, VENTILATION-PERFUSION SCINTIGRAPHY, SPIRAL CT, THERAPY, cardiovascular system, cardiovascular diseases, circulatory and respiratory physiology - Abstract
Background Diagnostic strategies for pulmonary embolism are complex and consist of non-invasive diagnostic tests done to avoid conventional pulmonary angiography as much as possible. We aimed to assess the diagnostic accuracy of magnetic resonance angiography (MRA) for the diagnosis of pulmonary embolism, using conventional pulmonary angiography as the reference method. Methods In a prospective study, we enrolled 141 patients with suspected pulmonary embolism and an abnormal perfusion scan. Patients underwent MRA before conventional pulmonary angiography. Two reviewers, masked to the results of conventional pulmonary angiography, assessed MRA images independently. Statistical analyses used chi-squared tests and 95% CIs. Findings MRA was contraindicated in 13 patients (9%), and images were not interpretable in eight (6%). MRA was done in two patients in whom conventional pulmonary angiography was contraindicated. Thus, MRA and conventional pulmonary angiography results were available in 118 patients (84%). Prevalence of pulmonary embolism was 30%. Images were read independently in 115 patients, and agreement was obtained in 105 (91%), kappa = 0.75. MRA identified 27 of 35 patients with proven pulmonary embolism (sensitivity 77%, 95% CI 61-90). Sensitivity of MRA for isolated subsegmental, segmental, and central or lobar pulmonary embolism was 40%, 84%, and 100%, respectively. Interpretation MRA is sensitive and specific for segmental or larger pulmonary embolism. Results are similar to those obtained with helical computed tomography, but MRA uses safer contrast agents and does not involve ionising radiation. MRA could become part of the diagnostic strategy for pulmonary embolism.
- Published
- 2002
10. Demonstration of three-dimensional visualization techniques for coronary artery evaluation using EBCT - a case study
- Author
-
van Ooijen, PMA, de Feyter, PJ, Oudkerk, M, Lemke, HU, Vannier, MW, Inamura, K, Farman, AG, Doi, K, and Reiber, JH
- Subjects
3D visualization, electron beam computed tomography, coronary arteries - Published
- 2002
11. Use of a thin-section archive and Enterprise 3-Dimensional Software for long-term storage of thin-slice CT data sets -- a reviewers' response.
- Author
-
van Ooijen PMA, Broekema A, and Oudkerk M
- Abstract
Current developments in storage solutions, PACS, and client-server systems allow for 3D imaging at the desktop. This can be achieved together with full storage into PACS of all slices, including the very large thin-section CT datasets. This paper describes a possible setup, which has been in operation for several years now, in response to an article by Meenan et al. previously published in this journal (1). [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
12. Comparison of analog and digital preoperative planning in total hip and knee arthroplasties. A prospective study of 173 hips and 65 total knees.
- Author
-
The B, Diercks RL, van Ooijen PMA, and van Horn JR
- Abstract
INTRODUCTION: Digital correction of the magnification factor is expected to yield more accurate and reliable preoperative plans. We hypothesized that digital templating would be more accurate than manual templating for total hip and knee arthroplasties. PATIENTS AND METHODS: Firstly, we established the interobserver and intraobserver reliability of the templating procedure. The accuracy and reliability of digital and analog plans were measured in a series of 238 interventions, which were all planned using both techniques. RESULTS: Interobserver reliability was good for the planning of knee arthroplasties (κ values 0.63-0.75), but not more than moderate for the planning of hip arthroplasties (κ values 0.22-0.54). Analog plans of knee arthroplasties systematically underestimated the component sizes (1.1 size on average), while the digital procedure proved to be accurate (0.1-0.4 size too small on average). The following figures show the percentage of cases receiving a correct implant, allowing an error of one size. Digital templating of the hip arthroplasty was less frequently correct (cemented cup and stem: 72% and 79%; uncemented cup and stem: 52% and 66%) than analog planning (cemented cup and stem: 73% and 89%; uncemented cup and stem: 64% and 52%). INTERPRETATION: Planning of component sizes for total knee arthroplasties is an accurate procedure when performed digitally. Our digital preoperative plans, which were prepared by someone other than the surgeon, were less accurate than the analog plans prepared by the surgeon. [ABSTRACT FROM AUTHOR]
- Published
- 2005
- Full Text
- View/download PDF
13. Magnetic resonance imaging of the coronary arteries: anatomy of the coronary arteries and veins in three-dimensional imaging.
- Author
-
van Geuns RJM, Wielopolski PA, Rensing BJW, van Ooijen PMA, Oudkerk M, and de Feyter PJ
- Published
- 1999
14. Magnetic resonance imaging of the coronary arteries: clinical results from three dimensional evaluation of a respiratory gated technique.
- Author
-
van Geuns RJM, de Bruin HG, Rensing BJW, Wielopolski PA, Hulshoff MD, van Ooijen PMA, Oudkerk M, and de Feyter PJ
- Abstract
Background: Magnetic resonance coronary angiography is challenging because of the motion of the vessels during cardiac contraction and respiration. Additional challenges are the small calibre of the arteries and their complex three dimensional course. Respiratory gating, turboflash acquisition, and volume rendering techniques may meet the necessary requirements for appropriate visualisation. Objective: To determine the diagnostic accuracy of respiratory gated magnetic resonance imaging (MRI) for the detection of significant coronary artery stenoses evaluated with three dimensional postprocessing software. Methods: 32 patients referred for elective coronary angiography were studied with a retrospective respiratory gated three dimensional gradient echo MRI technique. Resolution was 1.9 x 1.25 x 2 mm. After manual segmentation, three dimensional evaluation was performed with a volume rendering technique. Results: Overall, 74% (range 50% to 90%) of the proximal and mid coronary artery segments were visualised with an image quality suitable for further analysis. Sensitivity and specificity for the detection of significant stenoses were 50% and 91%, respectively. Conclusions: Volume rendering of respiratory gated MRI techniques allows adequate visualisation of the coronary arteries in patients with a regular breathing pattern. Significant lesions in the major coronary artery branches can be identified with a moderate sensitivity and a high specificity. [ABSTRACT FROM AUTHOR]
- Published
- 1999
15. In vivo assessment of three dimensional coronary anatomy using electron beam computed tomography after intravenous contrast administration.
- Author
-
Rensing BJ, Bongaerts AHH, van Geuns RJ, van Ooijen PMA, Oudkerk M, and de Feyter PJ
- Abstract
Intravenous coronary angiography with electron beam computed tomography (EBCT) allows for the non-invasive visualisation of coronary arteries. With dedicated computer hardware and software, three dimensional renderings of the coronary arteries can be constructed, starting from the individual transaxial tomograms. This article describes image acquisition, postprocessing techniques, and the results of clinical studies. EBCT coronary angiography is a promising coronary artery imaging technique. Currently it is a reasonably robust technique for the visualisation and assessment of the left main and left anterior descending coronary artery. The right and circumflex coronary arteries can be visualised less consistently. Improvements in image acquisition and postprocessing techniques are expected to improve visualisation and diagnostic accuracy of the technique. [ABSTRACT FROM AUTHOR]
- Published
- 1999
16. Automatic two-dimensional & three-dimensional video analysis with deep learning for movement disorders: A systematic review.
- Author
-
Tang W, van Ooijen PMA, Sival DA, and Maurits NM
- Subjects
- Humans, Imaging, Three-Dimensional methods, Deep Learning, Movement Disorders diagnosis, Movement Disorders physiopathology, Video Recording
- Abstract
The advent of computer vision technology and increased usage of video cameras in clinical settings have facilitated advancements in movement disorder analysis. This review investigated these advancements in terms of providing practical, low-cost solutions for the diagnosis and analysis of movement disorders, such as Parkinson's disease, ataxia, dyskinesia, and Tourette syndrome. Traditional diagnostic methods for movement disorders are typically reliant on the subjective assessment of motor symptoms, which poses inherent challenges. Furthermore, early symptoms are often overlooked, and overlapping symptoms across diseases can complicate early diagnosis. Consequently, deep learning has been used for the objective video-based analysis of movement disorders. This study systematically reviewed the latest advancements in automatic two-dimensional & three-dimensional video analysis using deep learning for movement disorders. We comprehensively analyzed the literature published until September 2023 by searching the Web of Science, PubMed, Scopus, and Embase databases. We identified 68 relevant studies and extracted information on their objectives, datasets, modalities, and methodologies. The study aimed to identify, catalogue, and present the most significant advancements, offering a consolidated knowledge base on the role of video analysis and deep learning in movement disorder analysis. First, the objectives, including specific PD symptom quantification, ataxia assessment, cerebral palsy assessment, gait disorder analysis, tremor assessment, tic detection (in the context of Tourette syndrome), dystonia assessment, and abnormal movement recognition were discussed. Thereafter, the datasets used in the study were examined. Subsequently, video modalities and deep learning methodologies related to the topic were investigated. 
Finally, the challenges and opportunities in terms of datasets, interpretability, evaluation methods, and home/remote monitoring were discussed., Competing Interests: Declaration of competing interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper., (Copyright © 2024 The Author(s). Published by Elsevier B.V. All rights reserved.)
- Published
- 2024
- Full Text
- View/download PDF
17. Tackling heterogeneity in medical federated learning via aligning vision transformers.
- Author
-
Darzi E, Shen Y, Ou Y, Sijtsema NM, and van Ooijen PMA
- Subjects
- Humans, Machine Learning, Diagnostic Imaging, Lung Neoplasms diagnostic imaging
- Abstract
Federated learning enables training models on distributed, privacy-sensitive medical imaging data. However, data heterogeneity across participating institutions leads to reduced model performance and fairness issues, especially for underrepresented datasets. To address these challenges, we propose leveraging the multi-head attention mechanism in Vision Transformers to align the representations of heterogeneous data across clients. By focusing on the attention mechanism as the alignment objective, our approach aims to improve both the accuracy and fairness of federated learning models in medical imaging applications. We evaluate our method on the IQ-OTH/NCCD Lung Cancer dataset, simulating various levels of data heterogeneity using Latent Dirichlet Allocation (LDA). Our results demonstrate that our approach achieves competitive performance compared to state-of-the-art federated learning methods across different heterogeneity levels and improves the performance of models for underrepresented clients, promoting fairness in the federated learning setting. These findings highlight the potential of leveraging the multi-head attention mechanism to address the challenges of data heterogeneity in medical federated learning., Competing Interests: Declaration of competing interest The authors declare the following financial interests/personal relationships which may be considered as potential competing interests: Erfan reports financial support was provided by Dutch Cancer Society. If there are other authors, they declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper., (Copyright © 2024 Elsevier B.V. All rights reserved.)
- Published
- 2024
- Full Text
- View/download PDF
18. Correction: Effect of emphysema on AI software and human reader performance in lung nodule detection from low-dose chest CT.
- Author
-
Sourlos N, Pelgrim G, Wisselink HJ, Yang X, de Jonge G, Rook M, Prokop M, Sidorenkov G, van Tuinen M, Vliegenthart R, and van Ooijen PMA
- Published
- 2024
- Full Text
- View/download PDF
19. Three-Dimensional Deep Learning Normal Tissue Complication Probability Model to Predict Late Xerostomia in Patients With Head and Neck Cancer.
- Author
-
Chu H, de Vette SPM, Neh H, Sijtsema NM, Steenbakkers RJHM, Moreno A, Langendijk JA, van Ooijen PMA, Fuller CD, and van Dijk LV
- Abstract
Purpose: Conventional normal tissue complication probability (NTCP) models for patients with head and neck cancer are typically based on single-value variables, which, for radiation-induced xerostomia, are baseline xerostomia and mean salivary gland doses. This study aimed to improve the prediction of late xerostomia by using 3-dimensional information from radiation dose distributions, computed tomography imaging, organ-at-risk segmentations, and clinical variables with deep learning (DL)., Methods and Materials: An international cohort of 1208 patients with head and neck cancer from 2 institutes was used to train and twice validate DL models (deep convolutional neural network, EfficientNet-v2, and ResNet) with 3-dimensional dose distribution, computed tomography scan, organ-at-risk segmentations, baseline xerostomia score, sex, and age as input. The NTCP endpoint was moderate-to-severe xerostomia 12 months postradiation therapy. The DL models' prediction performance was compared with a reference model: a recently published xerostomia NTCP model that used baseline xerostomia score and mean salivary gland doses as input. Attention maps were created to visualize the focus regions of the DL predictions. Transfer learning was conducted to improve the DL model performance on the external validation set., Results: All DL-based NTCP models showed better performance (area under the receiver operating characteristic curve [AUCtest], 0.78-0.79) than the reference NTCP model (AUCtest, 0.74) in the independent test. Attention maps showed that the DL model focused on the major salivary glands, particularly the stem cell-rich region of the parotid glands. DL models obtained lower external validation performance (AUCexternal, 0.63) than the reference model (AUCexternal, 0.66). After transfer learning on a small external subset, the DL model (AUCtl,external, 0.66) performed better than the reference model (AUCtl,external, 0.64)., Conclusion: DL-based NTCP models performed better than the reference model when validated in data from the same institute. Improved performance in the external data set was achieved with transfer learning, demonstrating the need for multicenter training data to realize generalizable DL-based NTCP models., (Copyright © 2024 The Author(s). Published by Elsevier Inc. All rights reserved.)
- Published
- 2024
- Full Text
- View/download PDF
20. PET/CT based transformer model for multi-outcome prediction in oropharyngeal cancer.
- Author
-
Ma B, Guo J, De Biase A, van Dijk LV, van Ooijen PMA, Langendijk JA, Both S, and Sijtsema NM
- Subjects
- Humans, Male, Female, Middle Aged, Aged, Neural Networks, Computer, Adult, Oropharyngeal Neoplasms mortality, Oropharyngeal Neoplasms diagnostic imaging, Oropharyngeal Neoplasms pathology, Oropharyngeal Neoplasms radiotherapy, Oropharyngeal Neoplasms therapy, Positron Emission Tomography Computed Tomography methods
- Abstract
Background and Purpose: To optimize our previously proposed TransRP, a model integrating CNN (convolutional neural network) and ViT (Vision Transformer) designed for recurrence-free survival prediction in oropharyngeal cancer, and to extend its application to the prediction of multiple clinical outcomes, including locoregional control (LRC), distant metastasis-free survival (DMFS) and overall survival (OS)., Materials and Methods: Data was collected from 400 patients (300 for training and 100 for testing) diagnosed with oropharyngeal squamous cell carcinoma (OPSCC) who underwent (chemo)radiotherapy at University Medical Center Groningen. Each patient's data comprised pre-treatment PET/CT scans, clinical parameters, and clinical outcome endpoints, namely LRC, DMFS and OS. The prediction performance of TransRP was compared with CNNs when inputting image data only. Additionally, three distinct methods (m1-3) of incorporating clinical predictors into TransRP training and one method (m4) that uses TransRP prediction as one parameter in a clinical Cox model were compared., Results: TransRP achieved higher test C-index values of 0.61, 0.84 and 0.70 than CNNs for LRC, DMFS and OS, respectively. Furthermore, when incorporating TransRP's prediction into a clinical Cox model (m4), a higher C-index of 0.77 for OS was obtained. Compared with a clinical routine risk stratification model of OS, our model, using clinical variables, radiomics and TransRP prediction as predictors, achieved larger separations of survival curves between low, intermediate and high risk groups., Conclusion: TransRP outperformed CNN models for all endpoints. Combining clinical data and TransRP prediction in a Cox model achieved better OS prediction., Competing Interests: Declaration of competing interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper., (Copyright © 2024. Published by Elsevier B.V.)
- Published
- 2024
- Full Text
- View/download PDF
21. Automated Breast Density Assessment in MRI Using Deep Learning and Radiomics: Strategies for Reducing Inter-Observer Variability.
- Author
-
Jing X, Wielema M, Monroy-Gonzalez AG, Stams TRG, Mahesh SVK, Oudkerk M, Sijens PE, Dorrius MD, and van Ooijen PMA
- Subjects
- Humans, Female, Retrospective Studies, Middle Aged, Adult, Breast Neoplasms diagnostic imaging, Reproducibility of Results, Aged, Image Interpretation, Computer-Assisted methods, Feasibility Studies, Image Processing, Computer-Assisted methods, ROC Curve, Radiomics, Deep Learning, Observer Variation, Magnetic Resonance Imaging methods, Breast Density, Breast diagnostic imaging
- Abstract
Background: Accurate breast density evaluation allows for more precise risk estimation but suffers from high inter-observer variability., Purpose: To evaluate the feasibility of reducing inter-observer variability of breast density assessment through artificial intelligence (AI) assisted interpretation., Study Type: Retrospective., Population: Six hundred and twenty-one patients without breast prosthesis or reconstructions were randomly divided into training (N = 377), validation (N = 98), and independent test (N = 146) datasets., Field Strength/sequence: 1.5 T and 3.0 T; T1-weighted spectral attenuated inversion recovery., Assessment: Five radiologists independently assessed each scan in the independent test set to establish the inter-observer variability baseline and to reach a reference standard. Deep learning and three radiomics models were developed for three classification tasks: (i) four Breast Imaging-Reporting and Data System (BI-RADS) breast composition categories (A-D), (ii) dense (categories C, D) vs. non-dense (categories A, B), and (iii) extremely dense (category D) vs. moderately dense (categories A-C). The models were tested against the reference standard on the independent test set. AI-assisted interpretation was performed by majority voting between the models and each radiologist's assessment., Statistical Tests: Inter-observer variability was assessed using linear-weighted kappa (κ) statistics. Kappa statistics, accuracy, and area under the receiver operating characteristic curve (AUC) were used to assess models against reference standard., Results: In the independent test set, five readers showed an overall substantial agreement on tasks (i) and (ii), but moderate agreement for task (iii). The best-performing model showed substantial agreement with reference standard for tasks (i) and (ii), but moderate agreement for task (iii). 
With the assistance of the AI models, almost perfect inter-observer variability was obtained for tasks (i) (mean κ = 0.86), (ii) (mean κ = 0.94), and (iii) (mean κ = 0.94)., Data Conclusion: Deep learning and radiomics models have the potential to help reduce inter-observer variability of breast density assessment., Level of Evidence: 3 TECHNICAL EFFICACY: Stage 1., (© 2023 The Authors. Journal of Magnetic Resonance Imaging published by Wiley Periodicals LLC on behalf of International Society for Magnetic Resonance in Medicine.)
- Published
- 2024
- Full Text
- View/download PDF
22. Effect of emphysema on AI software and human reader performance in lung nodule detection from low-dose chest CT.
- Author
-
Sourlos N, Pelgrim G, Wisselink HJ, Yang X, de Jonge G, Rook M, Prokop M, Sidorenkov G, van Tuinen M, Vliegenthart R, and van Ooijen PMA
- Subjects
- Humans, Male, Middle Aged, Female, Software, Sensitivity and Specificity, Lung Neoplasms diagnostic imaging, Aged, Radiation Dosage, Solitary Pulmonary Nodule diagnostic imaging, Radiographic Image Interpretation, Computer-Assisted methods, Tomography, X-Ray Computed methods, Artificial Intelligence, Pulmonary Emphysema diagnostic imaging
- Abstract
Background: Emphysema influences the appearance of lung tissue in computed tomography (CT). We evaluated whether this affects lung nodule detection by artificial intelligence (AI) and human readers (HR)., Methods: Individuals were selected from the "Lifelines" cohort who had undergone low-dose chest CT. Nodules in individuals without emphysema were matched to similar-sized nodules in individuals with at least moderate emphysema. AI results for nodular findings of 30-100 mm³ and 101-300 mm³ were compared to those of HR; two expert radiologists blindly reviewed discrepancies. Sensitivity and false positives (FPs)/scan were compared for emphysema and non-emphysema groups., Results: Thirty-nine participants with and 82 without emphysema were included (n = 121, aged 61 ± 8 years (mean ± standard deviation), 58/121 males (47.9%)). AI and HR detected 196 and 206 nodular findings, respectively, yielding 109 concordant nodules and 184 discrepancies, including 118 true nodules. For AI, sensitivity was 0.68 (95% confidence interval 0.57-0.77) in emphysema versus 0.71 (0.62-0.78) in non-emphysema, with FPs/scan 0.51 and 0.22, respectively (p = 0.028). For HR, sensitivity was 0.76 (0.65-0.84) and 0.80 (0.72-0.86), with FPs/scan of 0.15 and 0.27 (p = 0.230). Overall sensitivity was slightly higher for HR than for AI, but this difference disappeared after the exclusion of benign lymph nodes. FPs/scan were higher for AI in emphysema than in non-emphysema (p = 0.028), while FPs/scan for HR were higher than AI for 30-100 mm³ nodules in non-emphysema (p = 0.009)., Conclusions: AI resulted in more FPs/scan in emphysema compared to non-emphysema, a difference not observed for HR., Relevance Statement: In the creation of a benchmark dataset to validate AI software for lung nodule detection, the inclusion of emphysema cases is important due to the additional number of FPs., Key Points: • The sensitivity of nodule detection by AI was similar in emphysema and non-emphysema. • AI had more FPs/scan in emphysema compared to non-emphysema. • Sensitivity and FPs/scan by the human reader were comparable for emphysema and non-emphysema. • Emphysema and non-emphysema representation in benchmark dataset is important for validating AI., (© 2024. The Author(s).)
- Published
- 2024
- Full Text
- View/download PDF
23. Localization of contrast-enhanced breast lesions in ultrafast screening MRI using deep convolutional neural networks.
- Author
-
Jing X, Dorrius MD, Zheng S, Wielema M, Oudkerk M, Sijens PE, and van Ooijen PMA
- Subjects
- Female, Humans, Breast pathology, Magnetic Resonance Imaging methods, Neural Networks, Computer, Time, Contrast Media pharmacology, Breast Neoplasms diagnostic imaging, Breast Neoplasms pathology
- Abstract
Objectives: To develop a deep learning-based method for contrast-enhanced breast lesion detection in ultrafast screening MRI., Materials and Methods: A total of 837 breast MRI exams of 488 consecutive patients were included. Lesion locations were independently annotated in the maximum intensity projection (MIP) image of the last time-resolved angiography with stochastic trajectories (TWIST) sequence for each individual breast, resulting in 265 lesions (190 benign, 75 malignant) in 163 breasts (133 women). YOLOv5 models were fine-tuned using training sets containing the same number of MIP images with and without lesions. A long short-term memory (LSTM) network was employed to help reduce false positive predictions. The integrated system was then evaluated on test sets enriched with uninvolved breasts during cross-validation to mimic the performance in a screening scenario., Results: In five-fold cross-validation, the YOLOv5x model showed a sensitivity of 0.95, 0.97, 0.98, and 0.99, with 0.125, 0.25, 0.5, and 1 false positive per breast, respectively. The LSTM network reduced 15.5% of the false positive predictions from the YOLO model, and the positive predictive value was increased from 0.22 to 0.25., Conclusions: A fine-tuned YOLOv5x model can detect breast lesions on ultrafast MRI with high sensitivity in a screening population, and the output of the model could be further refined by an LSTM network to reduce the number of false positive predictions., Clinical Relevance Statement: The proposed integrated system would make the ultrafast MRI screening process more effective by assisting radiologists in prioritizing suspicious examinations and supporting the diagnostic workup., Key Points: • Deep convolutional neural networks could be utilized to automatically pinpoint breast lesions in screening MRI with high sensitivity. 
• False positive predictions significantly increased when the detection models were tested on highly unbalanced test sets with more normal scans. • Dynamic enhancement patterns of breast lesions during contrast inflow learned by the long short-term memory networks helped to reduce false positive predictions., (© 2023. The Author(s).)
- Published
- 2024
- Full Text
- View/download PDF
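The detection pipeline in entry 23 starts from maximum intensity projection (MIP) images of the dynamic MRI volumes. As a generic illustration (not the authors' code), a MIP simply keeps the brightest voxel along one axis of a 3D array:

```python
import numpy as np

def mip(volume: np.ndarray, axis: int = 0) -> np.ndarray:
    """Maximum intensity projection of a 3D volume along one axis.

    Collapses the chosen axis by keeping the brightest voxel per ray,
    a common way to turn dynamic contrast-enhanced volumes into 2D
    images for lesion detection.
    """
    return volume.max(axis=axis)

# toy 2x2x2 volume: the projection keeps the larger of each voxel pair
vol = np.array([[[1, 5], [2, 0]],
                [[3, 1], [0, 4]]], dtype=float)
projected = mip(vol, axis=0)
```
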
24. A comparative study of federated learning methods for COVID-19 detection.
- Author
-
Darzi E, Sijtsema NM, and van Ooijen PMA
- Subjects
- Humans, Algorithms, Hospitals, Neural Networks, Computer, Privacy, COVID-19 diagnosis
- Abstract
Deep learning has proven to be highly effective in diagnosing COVID-19; however, its efficacy is contingent upon the availability of extensive data for model training. Data sharing among hospitals, which is crucial for training robust models, is often restricted by privacy regulations. Federated learning (FL) emerges as a solution by enabling model training across multiple hospitals while preserving data privacy. However, the deployment of FL can be resource-intensive, necessitating efficient utilization of computational and network resources. In this study, we evaluate the performance and resource efficiency of five FL algorithms in the context of COVID-19 detection using Convolutional Neural Networks (CNNs) in a decentralized setting. The evaluation involves varying the number of participating entities, the number of federated rounds, and the selection algorithms. Our findings indicate that the Cyclic Weight Transfer algorithm exhibits superior performance, particularly when the number of participating hospitals is limited. These insights hold practical implications for the deployment of FL algorithms in COVID-19 detection and broader medical image analysis., (© 2024. The Author(s).)
- Published
- 2024
- Full Text
- View/download PDF
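The Cyclic Weight Transfer algorithm highlighted in entry 24 passes a single shared model from site to site, so only model weights ever leave a hospital. A minimal sketch of that idea, with a hypothetical `local_update` standing in for one round of local training:

```python
import numpy as np

def local_update(weights, site_data, lr=0.1):
    # Hypothetical stand-in for local training at one hospital:
    # nudge the weights toward this site's mean feature vector.
    return weights + lr * (site_data.mean(axis=0) - weights)

def cyclic_weight_transfer(weights, sites, rounds=3):
    """Visit each site in a fixed cycle, training the one shared model
    in place; raw (private) patient data never leaves its site."""
    for _ in range(rounds):
        for data in sites:
            weights = local_update(weights, data)
    return weights

rng = np.random.default_rng(0)
# three simulated hospitals with slightly different data distributions
sites = [rng.normal(loc=m, size=(20, 4)) for m in (0.0, 1.0, 2.0)]
w = cyclic_weight_transfer(np.zeros(4), sites)
```
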
25. Deep learning-based outcome prediction using PET/CT and automatically predicted probability maps of primary tumor in patients with oropharyngeal cancer.
- Author
-
De Biase A, Ma B, Guo J, van Dijk LV, Langendijk JA, Both S, van Ooijen PMA, and Sijtsema NM
- Subjects
- Humans, Positron Emission Tomography Computed Tomography methods, Prognosis, Deep Learning, Oropharyngeal Neoplasms diagnostic imaging, Head and Neck Neoplasms
- Abstract
Background and Objective: Recently, deep learning (DL) algorithms have shown promise in predicting outcomes such as distant metastasis-free survival (DMFS) and overall survival (OS) using pre-treatment imaging in head and neck cancer. Gross Tumor Volume of the primary tumor (GTVp) segmentation is used as an additional channel in the input to DL algorithms to improve model performance. However, the binary segmentation mask of the GTVp directs the focus of the network to the defined tumor region only and uniformly. DL models trained for tumor segmentation have also been used to generate predicted tumor probability maps (TPM) where each pixel value corresponds to the degree of certainty of that pixel to be classified as tumor. The aim of this study was to explore the effect of using TPM as an extra input channel of CT- and PET-based DL prediction models for oropharyngeal cancer (OPC) patients in terms of local control (LC), regional control (RC), DMFS and OS., Methods: We included 399 OPC patients from our institute that were treated with definitive (chemo)radiation. For each patient, CT and PET scans and GTVp contours, used for radiotherapy treatment planning, were collected. We first trained a previously developed 2.5D DL framework for tumor probability prediction by 5-fold cross validation using 131 patients. Then, a 3D ResNet18 was trained for outcome prediction using the 3D TPM as one of the possible inputs. The endpoints were LC, RC, DMFS, and OS. We performed 3-fold cross validation on 168 patients for each endpoint using different combinations of image modalities as input. The final prediction in the test set (100 patients) was obtained by averaging the predictions of the 3-fold models. The C-index was used to evaluate the discriminative performance of the models., Results: The models trained replacing the GTVp contours with the TPM achieved the highest C-indexes for LC (0.74) and RC (0.60) prediction. 
For OS, using the TPM or the GTVp as additional image modality resulted in comparable C-indexes (0.72 and 0.74)., Conclusions: Adding predicted TPMs instead of GTVp contours as an additional input channel for DL-based outcome prediction models improved model performance for LC and RC., Competing Interests: Declaration of Competing Interest The authors declare no conflict of interest., (Copyright © 2023. Published by Elsevier B.V.)
- Published
- 2024
- Full Text
- View/download PDF
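Entry 25 evaluates its models with the C-index. For reference, Harrell's concordance index counts, over all comparable patient pairs, how often the higher predicted risk belongs to the patient with the shorter survival. A minimal sketch (ties in risk counted as 0.5; survival toolkits such as lifelines implement the same idea with more care):

```python
def c_index(times, events, risks):
    """Harrell's concordance index for right-censored survival data.

    A pair (i, j) is comparable when patient i had the event before
    patient j's observed time; it is concordant when i also received
    the higher predicted risk.
    """
    concordant = 0.0
    comparable = 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if events[i] and times[i] < times[j]:
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1
                elif risks[i] == risks[j]:
                    concordant += 0.5
    return concordant / comparable

# toy cohort: predicted risk ordering matches outcome ordering exactly
times  = [2, 4, 6, 8]
events = [1, 1, 0, 1]   # 0 = censored
risks  = [0.9, 0.7, 0.5, 0.2]
c = c_index(times, events, risks)
```
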
26. A framework to integrate artificial intelligence training into radiology residency programs: preparing the future radiologist.
- Author
-
van Kooten MJ, Tan CO, Hofmeijer EIS, van Ooijen PMA, Noordzij W, Lamers MJ, Kwee TC, Vliegenthart R, and Yakar D
- Abstract
Objectives: To present a framework to develop and implement a fast-track artificial intelligence (AI) curriculum into an existing radiology residency program, with the potential to prepare a new generation of AI conscious radiologists., Methods: The AI-curriculum framework comprises five sequential steps: (1) forming a team of AI experts, (2) assessing the residents' knowledge level and needs, (3) defining learning objectives, (4) matching these objectives with effective teaching strategies, and finally (5) implementing and evaluating the pilot. Following these steps, a multidisciplinary team of AI engineers, radiologists, and radiology residents designed a 3-day program, including didactic lectures, hands-on laboratory sessions, and group discussions with experts to enhance AI understanding. Pre- and post-curriculum surveys were conducted to assess participants' expectations and progress and were analyzed using a Wilcoxon rank-sum test., Results: There was a 100% response rate to the pre- and post-curriculum surveys (17 and 12 respondents, respectively). Participants' confidence in their knowledge and understanding of AI in radiology significantly increased after completing the program (pre-curriculum mean 3.25 ± 1.48 (SD), post-curriculum mean 6.5 ± 0.90 (SD), p-value = 0.002). A total of 75% confirmed that the course addressed topics that were applicable to their work in radiology. Lectures on the fundamentals of AI and group discussions with experts were deemed most useful., Conclusion: Designing an AI curriculum for radiology residents and implementing it into a radiology residency program is feasible using the framework presented. 
The 3-day AI curriculum effectively increased participants' perception of knowledge and skills about AI in radiology and can serve as a starting point for further customization., Critical Relevance Statement: The framework provides guidance for developing and implementing an AI curriculum in radiology residency programs, educating residents on the application of AI in radiology and ultimately contributing to future high-quality, safe, and effective patient care., Key Points: • AI education is necessary to prepare a new generation of AI-conscious radiologists. • The AI curriculum increased participants' perception of AI knowledge and skills in radiology. • This five-step framework can assist in integrating AI education into radiology residency programs., (© 2024. The Author(s).)
- Published
- 2024
- Full Text
- View/download PDF
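Entry 26 compares pre- and post-curriculum survey scores with a Wilcoxon rank-sum test. A self-contained sketch of that test using the normal approximation (no tie correction), with illustrative Likert-style scores rather than the study's actual survey data:

```python
import math

def rank_sum_test(x, y):
    """Two-sided Wilcoxon rank-sum test, normal approximation.

    Ranks the pooled samples (average ranks for ties), sums the ranks
    of the first group, and converts the standardized statistic to a
    two-sided p-value. Teaching sketch, not the study's software.
    """
    combined = sorted((v, g) for g, vals in enumerate((x, y)) for v in vals)
    n = len(combined)
    rank_list = [0.0] * n
    i = 0
    while i < n:                       # assign average ranks to tied values
        j = i
        while j + 1 < n and combined[j + 1][0] == combined[i][0]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            rank_list[k] = avg
        i = j + 1
    w = sum(r for r, (_, g) in zip(rank_list, combined) if g == 0)
    n1, n2 = len(x), len(y)
    mean = n1 * (n1 + n2 + 1) / 2
    sd = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (w - mean) / sd
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p

# illustrative confidence scores (17 pre, 12 post), not the study data
pre  = [3, 2, 4, 3, 5, 2, 3, 4, 1, 3, 6, 3, 4, 2, 3, 5, 3]
post = [6, 7, 5, 6, 8, 7, 6, 5, 7, 6, 8, 7]
z, p = rank_sum_test(pre, post)
```
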
27. Masked Conditional Variational Autoencoders for Chromosome Straightening.
- Author
-
Li J, Zheng S, Shui Z, Zhang S, Yang L, Sun Y, Zhang Y, Li H, Ye Y, van Ooijen PMA, Li K, and Yang L
- Subjects
- Humans, Karyotyping, Chromosome Banding, Chromosomes, Algorithms
- Abstract
Karyotyping is of importance for detecting chromosomal aberrations in human disease. However, chromosomes easily appear curved in microscopic images, which prevents cytogeneticists from analyzing chromosome types. To address this issue, we propose a framework for chromosome straightening, which comprises a preliminary processing algorithm and a generative model called masked conditional variational autoencoders (MC-VAE). The processing method utilizes patch rearrangement to address the difficulty in erasing low degrees of curvature, providing reasonable preliminary results for the MC-VAE. The MC-VAE further straightens the results by leveraging chromosome patches conditioned on their curvatures to learn the mapping between banding patterns and conditions. During model training, we apply a masking strategy with a high masking ratio to train the MC-VAE with eliminated redundancy. This yields a non-trivial reconstruction task, allowing the model to effectively preserve chromosome banding patterns and structure details in the reconstructed results. Extensive experiments on three public datasets with two stain styles show that our framework surpasses the performance of state-of-the-art methods in retaining banding patterns and structure details. Compared to using real-world bent chromosomes, the use of high-quality straightened chromosomes generated by our proposed method can improve the performance of various deep learning models for chromosome classification by a large margin. Such a straightening approach has the potential to be combined with other karyotyping systems to assist cytogeneticists in chromosome analysis.
- Published
- 2024
- Full Text
- View/download PDF
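Entry 27 trains its MC-VAE with a high-ratio masking strategy over chromosome patches. The core operation, zeroing out a random subset of non-overlapping patches, can be sketched as follows (patch size, ratio, and zero-filling are assumptions for illustration, not the paper's exact settings):

```python
import numpy as np

def mask_patches(image, patch=4, mask_ratio=0.75, seed=0):
    """Zero out a random subset of non-overlapping square patches.

    A high masking ratio removes redundancy and forces a reconstruction
    model to learn banding patterns and structure, the idea behind the
    MC-VAE training objective.
    """
    h, w = image.shape
    ph, pw = h // patch, w // patch
    n_patches = ph * pw
    rng = np.random.default_rng(seed)
    n_masked = int(round(mask_ratio * n_patches))
    idx = rng.choice(n_patches, size=n_masked, replace=False)
    out = image.copy()
    for i in idx:
        r, c = divmod(int(i), pw)
        out[r * patch:(r + 1) * patch, c * patch:(c + 1) * patch] = 0
    return out

img = np.ones((16, 16))
masked = mask_patches(img)          # 12 of 16 patches removed
```
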
28. Comparison of computed tomography image features extracted by radiomics, self-supervised learning and end-to-end deep learning for outcome prediction of oropharyngeal cancer.
- Author
-
Ma B, Guo J, Chu H, van Dijk LV, van Ooijen PMA, Langendijk JA, Both S, and Sijtsema NM
- Abstract
Background and Purpose: To compare the prediction performance of image features of computed tomography (CT) images extracted by radiomics, self-supervised learning and end-to-end deep learning for local control (LC), regional control (RC), locoregional control (LRC), distant metastasis-free survival (DMFS), tumor-specific survival (TSS), overall survival (OS) and disease-free survival (DFS) of oropharyngeal squamous cell carcinoma (OPSCC) patients after (chemo)radiotherapy., Methods and Materials: The OPC-Radiomics dataset was used for model development and independent internal testing and the UMCG-OPC set for external testing. Image features were extracted from the Gross Tumor Volume contours of the primary tumor (GTVt) regions in CT scans when using radiomics or a self-supervised learning-based method (autoencoder). Clinical and combined (radiomics, autoencoder or end-to-end) models were built using multivariable Cox proportional-hazard analysis with clinical features only and both clinical and image features for LC, RC, LRC, DMFS, TSS, OS and DFS prediction, respectively., Results: In the internal test set, combined autoencoder models performed better than clinical models and combined radiomics models for LC, RC, LRC, DMFS, TSS and DFS prediction (largest improvements in C-index: 0.91 vs. 0.76 in RC and 0.74 vs. 0.60 in DMFS). In the external test set, combined radiomics models performed better than clinical and combined autoencoder models for all endpoints (largest improvements in LC, 0.82 vs. 0.71). 
Furthermore, combined models performed better in risk stratification than clinical models and showed good calibration for most endpoints., Conclusions: Image features extracted using self-supervised learning showed best internal prediction performance while radiomics features have better external generalizability., Competing Interests: The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper., (© 2023 The Author(s).)
- Published
- 2023
- Full Text
- View/download PDF
29. CT-based deep multi-label learning prediction model for outcome in patients with oropharyngeal squamous cell carcinoma.
- Author
-
Ma B, Guo J, Zhai TT, van der Schaaf A, Steenbakkers RJHM, van Dijk LV, Both S, Langendijk JA, Zhang W, Qiu B, van Ooijen PMA, and Sijtsema NM
- Subjects
- Humans, Squamous Cell Carcinoma of Head and Neck, Tomography, X-Ray Computed, Disease-Free Survival, Retrospective Studies, Carcinoma, Squamous Cell diagnostic imaging, Carcinoma, Squamous Cell therapy, Head and Neck Neoplasms, Oropharyngeal Neoplasms diagnostic imaging, Oropharyngeal Neoplasms therapy
- Abstract
Background: Personalized treatment is increasingly required for oropharyngeal squamous cell carcinoma (OPSCC) patients due to emerging new cancer subtypes and treatment options. Outcome prediction models can help identify low- or high-risk patients who may be suitable to receive de-escalation or intensified treatment approaches., Purpose: To develop a deep learning (DL)-based model for predicting multiple and associated efficacy endpoints in OPSCC patients based on computed tomography (CT)., Methods: Two patient cohorts were used in this study: a development cohort consisting of 524 OPSCC patients (70% for training and 30% for independent testing) and an external test cohort of 396 patients. Pre-treatment CT-scans with the gross primary tumor volume contours (GTVt) and clinical parameters were available to predict endpoints, including 2-year local control (LC), regional control (RC), locoregional control (LRC), distant metastasis-free survival (DMFS), disease-specific survival (DSS), overall survival (OS), and disease-free survival (DFS). We proposed DL outcome prediction models with the multi-label learning (MLL) strategy that integrates the associations of different endpoints based on clinical factors and CT-scans., Results: The multi-label learning models outperformed the models that were developed based on a single endpoint for all endpoints especially with high AUCs ≥ 0.80 for 2-year RC, DMFS, DSS, OS, and DFS in the internal independent test set and for all endpoints except 2-year LRC in the external test set. 
Furthermore, with the models developed, patients could be stratified into high and low-risk groups that were significantly different for all endpoints in the internal test set and for all endpoints except DMFS in the external test set., Conclusion: MLL models demonstrated better discriminative ability for all 2-year efficacy endpoints than single outcome models in the internal test set and for all endpoints except LRC in the external test set., (© 2023 The Authors. Medical Physics published by Wiley Periodicals LLC on behalf of American Association of Physicists in Medicine.)
- Published
- 2023
- Full Text
- View/download PDF
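Entry 29's multi-label strategy has one network predict all efficacy endpoints (LC, RC, DMFS, ...) at once, so correlated outcomes share representations. A generic illustration of the corresponding loss, a sum of per-endpoint binary cross-entropies (a toy formulation, not necessarily the paper's exact loss):

```python
import numpy as np

def multi_label_bce(logits, labels):
    """Sum of per-endpoint binary cross-entropies, averaged over patients.

    Each output unit is a sigmoid over one endpoint; summing the BCE
    terms trains all endpoints jointly from shared features.
    """
    p = 1.0 / (1.0 + np.exp(-logits))   # sigmoid per endpoint
    eps = 1e-12                         # guard against log(0)
    bce = -(labels * np.log(p + eps) + (1 - labels) * np.log(1 - p + eps))
    return bce.sum(axis=1).mean()       # sum endpoints, mean over batch

# 2 patients x 3 endpoints (e.g. LC, RC, DMFS), made-up values
logits = np.array([[2.0, -1.0, 0.5], [-0.5, 1.5, -2.0]])
labels = np.array([[1.0, 0.0, 1.0], [0.0, 1.0, 0.0]])
loss = multi_label_bce(logits, labels)
```
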
30. Survival prediction for stage I-IIIA non-small cell lung cancer using deep learning.
- Author
-
Zheng S, Guo J, Langendijk JA, Both S, Veldhuis RNJ, Oudkerk M, van Ooijen PMA, Wijsman R, and Sijtsema NM
- Subjects
- Humans, Neoplasm Staging, Tomography, X-Ray Computed methods, Retrospective Studies, Carcinoma, Non-Small-Cell Lung therapy, Carcinoma, Non-Small-Cell Lung drug therapy, Lung Neoplasms pathology, Deep Learning
- Abstract
Background and Purpose: The aim of this study was to develop and evaluate a prediction model for 2-year overall survival (OS) in stage I-IIIA non-small cell lung cancer (NSCLC) patients who received definitive radiotherapy by considering clinical variables and image features from pre-treatment CT-scans., Materials and Methods: NSCLC patients who received stereotactic radiotherapy were prospectively collected at the UMCG and split into a training and a hold out test set including 189 and 81 patients, respectively. External validation was performed on 228 NSCLC patients who were treated with radiation or concurrent chemoradiation at the Maastro clinic (Lung1 dataset). A hybrid model that integrated both image and clinical features was implemented using deep learning. Image features were learned from cubic patches containing lung tumours extracted from pre-treatment CT scans. Relevant clinical variables were selected by univariable and multivariable analyses., Results: Multivariable analysis showed that age and clinical stage were significant prognostic clinical factors for 2-year OS. Using these two clinical variables in combination with image features from pre-treatment CT scans, the hybrid model achieved a median AUC of 0.76 [95 % CI: 0.65-0.86] and 0.64 [95 % CI: 0.58-0.70] on the complete UMCG and Maastro test sets, respectively. The Kaplan-Meier survival curves showed significant separation between low and high mortality risk groups on these two test sets (log-rank test: p-value < 0.001, p-value = 0.012, respectively)., Conclusion: We demonstrated that a hybrid model could achieve reasonable performance by utilizing both clinical and image features for 2-year OS prediction. 
Such a model has the potential to identify patients with high mortality risk and guide clinical decision making., Competing Interests: Declaration of Competing Interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper., (Copyright © 2023 The Author(s). Published by Elsevier B.V. All rights reserved.)
- Published
- 2023
- Full Text
- View/download PDF
31. Deep learning aided oropharyngeal cancer segmentation with adaptive thresholding for predicted tumor probability in FDG PET and CT images.
- Author
-
De Biase A, Sijtsema NM, van Dijk LV, Langendijk JA, and van Ooijen PMA
- Subjects
- Humans, Fluorodeoxyglucose F18, Positron Emission Tomography Computed Tomography, Tomography, X-Ray Computed methods, Probability, Image Processing, Computer-Assisted methods, Head and Neck Neoplasms, Deep Learning, Oropharyngeal Neoplasms
- Abstract
Objective. Tumor segmentation is a fundamental step for radiotherapy treatment planning. To define an accurate segmentation of the primary tumor (GTVp) of oropharyngeal cancer patients (OPC) each image volume is explored slice-by-slice from different orientations on different image modalities. However, the manual fixed boundary of segmentation neglects the spatial uncertainty known to occur in tumor delineation. This study proposes a novel deep learning-based method that generates probability maps which capture the model uncertainty in the segmentation task. Approach. We included 138 OPC patients treated with (chemo)radiation in our institute. Sequences of 3 consecutive 2D slices of concatenated FDG-PET/CT images and GTVp contours were used as input. Our framework exploits inter and intra-slice context using attention mechanisms and bi-directional long short term memory (Bi-LSTM). Each slice resulted in three predictions that were averaged. A 3-fold cross validation was performed on sequences extracted from the axial, sagittal, and coronal plane. 3D volumes were reconstructed and single- and multi-view ensembling were performed to obtain final results. The output is a tumor probability map determined by averaging multiple predictions. Main Results. Model performance was assessed on 25 patients at different probability thresholds. Predictions were the closest to the GTVp at a threshold of 0.9 (mean surface DSC of 0.81, median HD95 of 3.906 mm). Significance. The promising results of the proposed method show that it is possible to offer the probability maps to radiation oncologists to guide them in a slice-by-slice adaptive GTVp segmentation., (Creative Commons Attribution license.)
- Published
- 2023
- Full Text
- View/download PDF
32. Pre-screening to guide coronary artery calcium scoring for early identification of high-risk individuals in the general population.
- Author
-
Ties D, van der Ende YM, Pundziute G, van der Schouw YT, Bots ML, Xia C, van Ooijen PMA, Pelgrim GJ, Vliegenthart R, and van der Harst P
- Subjects
- Humans, Calcium, Coronary Angiography methods, Risk Assessment, Risk Factors, Predictive Value of Tests, Coronary Artery Disease epidemiology
- Abstract
Aims: To evaluate the ability of Systematic COronary Risk Estimation 2 (SCORE2) and other pre-screening methods to identify individuals with high coronary artery calcium score (CACS) in the general population., Methods and Results: Computed tomography-based CACS quantification was performed in 6530 individuals aged 45 years or older from the general population. Various pre-screening methods to guide referral for CACS were evaluated. Miss rates for high CACS (CACS ≥300 and ≥100) were evaluated for various pre-screening methods: moderate (≥5%) and high (≥10%) SCORE2 risk, any traditional coronary artery disease (CAD) risk factor, any Risk Or Benefit IN Screening for CArdiovascular Disease (ROBINSCA) risk factor, and moderately (>3 mg/24 h) increased urine albumin excretion (UAE). Out of 6530 participants, 643 (9.8%) had CACS ≥300 and 1236 (18.9%) had CACS ≥100. For CACS ≥300 and CACS ≥100, miss rate was 32 and 41% for pre-screening by moderate (≥5%) SCORE2 risk and 81 and 87% for high (≥10%) SCORE2 risk, respectively. For CACS ≥300 and CACS ≥100, miss rate was 8 and 11% for pre-screening by at least one CAD risk factor, 24 and 25% for at least one ROBINSCA risk factor, and 67 and 67% for moderately increased UAE, respectively., Conclusion: Many individuals with high CACS in the general population are left unidentified when CACS is only performed in case of at least moderate (≥5%) SCORE2 risk, which closely resembles current clinical practice. Less stringent pre-screening by presence of at least one CAD risk factor to guide CACS identifies more individuals with high CACS and could improve CAD prevention., Competing Interests: Conflict of interest: R.V. reports receiving grant support by Siemens Healthineers and Ministry of Economic Affairs and Climate Policy and lecture fees by Siemens Healthineers and Bayer, and P.V.D.H. receiving grant support by Siemens Healthineers and Guerbet. 
The other authors did not report any potential conflict of interest relevant to this article., (© The Author(s) 2022. Published by Oxford University Press on behalf of the European Society of Cardiology.)
- Published
- 2022
- Full Text
- View/download PDF
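The central metric in entry 32 is the miss rate: the fraction of individuals with high CACS whom a pre-screening rule fails to refer. A small helper illustrating the computation, with made-up numbers rather than the study's data:

```python
def miss_rate(cacs, referred, threshold=300):
    """Fraction of individuals with CACS >= threshold who were NOT
    referred by the pre-screening rule under evaluation.

    Illustrative helper: `cacs` are calcium scores, `referred` are
    booleans from a hypothetical pre-screening rule.
    """
    high = [r for c, r in zip(cacs, referred) if c >= threshold]
    return high.count(False) / len(high)

# toy example: 4 of the 10 high-CACS individuals are not flagged
cacs     = [350, 500, 310, 400, 320, 305, 600, 450, 330, 380, 10, 50]
referred = [True, True, False, True, False, False,
            True, True, False, True, False, False]
mr = miss_rate(cacs, referred)
```
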
33. Using deep learning to safely exclude lesions with only ultrafast breast MRI to shorten acquisition and reading time.
- Author
-
Jing X, Wielema M, Cornelissen LJ, van Gent M, Iwema WM, Zheng S, Sijens PE, Oudkerk M, Dorrius MD, and van Ooijen PMA
- Subjects
- Humans, Female, Adult, Artificial Intelligence, Retrospective Studies, Breast diagnostic imaging, Breast pathology, Magnetic Resonance Imaging methods, Deep Learning, Breast Neoplasms diagnostic imaging, Breast Neoplasms pathology
- Abstract
Objectives: To investigate the feasibility of automatically identifying normal scans in ultrafast breast MRI with artificial intelligence (AI) to increase efficiency and reduce workload., Methods: In this retrospective analysis, 837 breast MRI examinations performed on 438 women from April 2016 to October 2019 were included. The left and right breasts in each examination were labelled normal (without suspicious lesions) or abnormal (with suspicious lesions) based on final interpretation. Maximum intensity projection (MIP) images of each breast were then used to train a deep learning model. A high sensitivity threshold was calculated based on the detection error trade-off (DET) curve on the validation set. The performance of the model was evaluated by receiver operating characteristic analysis of the independent test set. The sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) with the high sensitivity threshold were calculated., Results: The independent test set consisted of 178 examinations of 149 patients (mean age, 44 years ± 14 [standard deviation]). The trained model achieved an AUC of 0.81 (95% CI: 0.75-0.88) on the independent test set. Applying a threshold of 0.25 yielded a sensitivity of 98% (95% CI: 90%; 100%), an NPV of 98% (95% CI: 89%; 100%), a workload reduction of 15.7%, and a scan time reduction of 16.6%., Conclusion: This deep learning model has a high potential to help identify normal scans in ultrafast breast MRI and thereby reduce radiologists' workload and scan time., Key Points: • Deep learning in TWIST may eliminate the necessity of additional sequences for identifying normal breasts during MRI screening. • Workload and scanning time reductions of 15.7% and 16.6%, respectively, could be achieved at the cost of 1 (1 of 55) false negative prediction., (© 2022. The Author(s).)
- Published
- 2022
- Full Text
- View/download PDF
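Entry 33 picks an operating point that keeps sensitivity very high so normal breasts can be safely excluded. One simple way to realize that idea is to choose the largest score threshold that still preserves the target sensitivity on validation data; this is a sketch of the concept under assumed toy scores, not the authors' DET-curve implementation:

```python
import numpy as np

def high_sensitivity_threshold(scores, labels, target_sensitivity=0.98):
    """Largest threshold keeping sensitivity >= target.

    Sorts the scores of the positive (abnormal) cases and allows at
    most a (1 - target) fraction of them to fall below the threshold;
    scores >= threshold are then classified as abnormal.
    """
    pos = np.sort(np.asarray(scores)[np.asarray(labels) == 1])
    k = int(np.floor((1 - target_sensitivity) * len(pos)))
    return float(pos[k])

# made-up validation scores: 1 = abnormal breast, 0 = normal breast
scores = [0.05, 0.10, 0.30, 0.40, 0.60, 0.70, 0.80, 0.90, 0.95, 0.99]
labels = [0,    0,    0,    1,    0,    1,    1,    1,    1,    1]
t = high_sensitivity_threshold(scores, labels, target_sensitivity=0.9)
```

With this toy data the threshold lands on the lowest positive score, so every abnormal case is still flagged while the lowest-scoring normal cases can be ruled out.
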
34. Standardization of Artificial Intelligence Development in Radiotherapy.
- Author
-
de Biase A, Sourlos N, and van Ooijen PMA
- Subjects
- Humans, Machine Learning, Reference Standards, Artificial Intelligence, Radiation Oncology
- Abstract
Application of Artificial Intelligence (AI) tools has recently gained interest in the fields of medical imaging and radiotherapy. Even though there have been many papers published in these domains in the last few years, clinical assessment of the proposed AI methods is limited due to the lack of standardized protocols that can be used to validate the performance of the developed tools. Moreover, each stakeholder uses their own methods, tools, and evaluation criteria. Communication between different stakeholders is limited or absent, which makes it hard to easily exchange models between different clinics. These issues are not limited to radiotherapy but exist in every AI application domain. To deal with these issues, methods like the Machine Learning Canvas, Datasheets for Datasets, and Model cards have been developed. They aim to provide information on the whole creation pipeline of AI solutions and on the datasets used to develop AI, along with their biases, as well as to facilitate easier collaboration/communication between different stakeholders and facilitate the clinical introduction of AI. This work introduces the concepts of these 3 open-source solutions including the authors' experiences applying them to AI applications for radiotherapy., (Copyright © 2022 The Author(s). Published by Elsevier Inc. All rights reserved.)
- Published
- 2022
- Full Text
- View/download PDF
35. Facilitating standardized COVID-19 suspicion prediction based on computed tomography radiomics in a multi-demographic setting.
- Author
-
Nagaraj Y, de Jonge G, Andreychenko A, Presti G, Fink MA, Pavlov N, Quattrocchi CC, Morozov S, Veldhuis R, Oudkerk M, and van Ooijen PMA
- Subjects
- Adolescent, Adult, Aged, Aged, 80 and over, Demography, Humans, Middle Aged, Retrospective Studies, Tomography, X-Ray Computed methods, Young Adult, COVID-19, Pneumonia
- Abstract
Objective: To develop an automatic COVID-19 Reporting and Data System (CO-RADS)-based classification in a multi-demographic setting., Methods: This multi-institutional review boards-approved retrospective study included 2720 chest CT scans (mean age, 58 years [range 18-100 years]) from Italian and Russian patients. Three board-certified radiologists from three countries assessed randomly selected subcohorts from each population and provided CO-RADS-based annotations. CT radiomic features were extracted from the selected subcohorts after preprocessing steps like lung lobe segmentation and automatic noise reduction. We compared three machine learning models, logistic regression (LR), multilayer perceptron (MLP), and random forest (RF) for the automated CO-RADS classification. Model evaluation was carried out in two scenarios, first, training on a mixed multi-demographic subcohort and testing on an independent hold-out dataset. In the second scenario, training was done on a single demography and externally validated on the other demography., Results: The overall inter-observer agreement for the CO-RADS scoring between the radiologists was substantial (k = 0.80). Irrespective of the type of validation test scenario, suspected COVID-19 CT scans were identified with an accuracy of 84%. SHapley Additive exPlanations (SHAP) interpretation showed that the "wavelet_(LH)_GLCM_Imc1" feature had a positive impact on COVID prediction both with and without noise reduction. The application of noise reduction improved the overall performance of all classifier types., Conclusion: Using an automated model based on the COVID-19 Reporting and Data System (CO-RADS), we achieved clinically acceptable performance in a multi-demographic setting. This approach can serve as a standardized tool for automated COVID-19 assessment., Key Points: • Automatic CO-RADS scoring of large-scale multi-demographic chest CTs with mean AUC of 0.93 ± 0.04. 
• Validation procedure resembles TRIPOD 2b and 3 categories, enhancing the quality of experimental design to test the cross-dataset domain shift between institutions aiding clinical integration. • Identification of COVID-19 pneumonia in the presence of community-acquired pneumonia and other comorbidities with an AUC of 0.92., (© 2022. The Author(s).)
- Published
- 2022
- Full Text
- View/download PDF
36. Federated Learning in Medical Imaging: Part II: Methods, Challenges, and Considerations.
- Author
-
Darzidehkalani E, Ghasemi-Rad M, and van Ooijen PMA
- Subjects
- Algorithms, Humans, Neural Networks, Computer, Machine Learning, Privacy
- Abstract
Federated learning is a machine learning method that allows decentralized training of deep neural networks among multiple clients while preserving the privacy of each client's data. Federated learning is instrumental in medical imaging because of the privacy considerations of medical data. Setting up federated networks in hospitals comes with unique challenges, primarily because medical imaging data and federated learning algorithms each have their own set of distinct characteristics. This article introduces federated learning algorithms in medical imaging and discusses technical challenges and considerations of their real-world implementation., (Copyright © 2022 American College of Radiology. Published by Elsevier Inc. All rights reserved.)
- Published
- 2022
- Full Text
- View/download PDF
37. Federated Learning in Medical Imaging: Part I: Toward Multicentral Health Care Ecosystems.
- Author
-
Darzidehkalani E, Ghasemi-Rad M, and van Ooijen PMA
- Subjects
- Algorithms, Delivery of Health Care, Privacy, Ecosystem, Machine Learning
- Abstract
With recent developments in medical imaging facilities, extensive medical imaging data are produced every day. This increasing amount of data provides an opportunity for researchers to develop data-driven methods and deliver better health care. However, data-driven models require a large amount of data to be adequately trained. Furthermore, there is always a limited amount of data available in each data center. Hence, deep learning models trained on local data centers might not reach their total performance capacity. One solution could be to accumulate all data from different centers into one center. However, data privacy regulations do not allow medical institutions to easily combine their data, and this becomes increasingly difficult when institutions from multiple countries are involved. Another solution is to use privacy-preserving algorithms, which can make use of all the data available in multiple centers while keeping the sensitive data private. Federated learning (FL) is such a mechanism that enables deploying large-scale machine learning models trained on different data centers without sharing sensitive data. In FL, instead of transferring data, a general model is trained on local data sets and transferred between data centers. FL has been identified as a promising field of research, with extensive possible uses in medical research and practice. This article introduces FL, with a comprehensive look into its concepts and recent research trends in medical imaging., (Copyright © 2022 American College of Radiology. Published by Elsevier Inc. All rights reserved.)
- Published
- 2022
- Full Text
- View/download PDF
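The mechanism these two articles describe, training locally and exchanging only model parameters, can be sketched with federated averaging (FedAvg), the canonical FL scheme. The linear model, toy data, and all names below are illustrative, not the articles' implementation.

```python
# Minimal FedAvg sketch: each "hospital" trains on its private data; only the
# model weights are sent to the server and averaged. No raw data is shared.
import numpy as np

rng = np.random.default_rng(0)

def local_update(w, X, y, lr=0.1, epochs=5):
    """A few epochs of gradient descent on one centre's private data."""
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # mean-squared-error gradient
        w = w - lr * grad
    return w

# Three centres, each with its own private dataset (never transferred).
true_w = np.array([1.0, -2.0, 0.5])
centres = []
for _ in range(3):
    X = rng.normal(size=(100, 3))
    y = X @ true_w + rng.normal(scale=0.1, size=100)
    centres.append((X, y))

w_global = np.zeros(3)
for _ in range(20):                            # communication rounds
    local_ws = [local_update(w_global, X, y) for X, y in centres]
    w_global = np.mean(local_ws, axis=0)       # server averages the weights

print(w_global)  # approaches true_w without any centre sharing raw data
```

In practice the averaged object is the weight tensor set of a deep network rather than a 3-vector, and rounds are interleaved with secure aggregation or differential privacy, which is where the challenges discussed in Part II arise.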
38. 2D Gait Skeleton Data Normalization for Quantitative Assessment of Movement Disorders from Freehand Single Camera Video Recordings.
- Author
-
Tang W, van Ooijen PMA, Sival DA, and Maurits NM
- Subjects
- Ataxia, Child, Humans, Movement, Skeleton, Video Recording, Gait, Movement Disorders diagnosis
- Abstract
Overlapping phenotypic features between Early Onset Ataxia (EOA) and Developmental Coordination Disorder (DCD) can complicate the clinical distinction of these disorders. Clinical rating scales are a common way to quantify movement disorders but in children these scales also rely on the observer's assessment and interpretation. Despite the introduction of inertial measurement units for objective and more precise evaluation, special hardware is still required, restricting their widespread application. Gait video recordings of movement disorder patients are frequently captured in routine clinical settings, but there is presently no suitable quantitative analysis method for these recordings. Owing to advancements in computer vision technology, deep learning pose estimation techniques may soon be ready for convenient and low-cost clinical usage. This study presents a framework based on 2D video recording in the coronal plane and pose estimation for the quantitative assessment of gait in movement disorders. To allow the calculation of distance-based features, seven different methods to normalize 2D skeleton keypoint data derived from pose estimation using deep neural networks applied to freehand video recording of gait were evaluated. In our experiments, 15 children (five EOA, five DCD and five healthy controls) were asked to walk naturally while being videotaped by a single camera in 1280 × 720 resolution at 25 frames per second. The high likelihood of the prediction of keypoint locations (mean = 0.889, standard deviation = 0.02) demonstrates the potential for distance-based features derived from routine video recordings to assist in the clinical evaluation of movement in EOA and DCD. 
By comparison of mean absolute angle error and mean variance of distance, the normalization methods using the Euclidean (2D) distance of left shoulder and right hip, or the average distance from left shoulder to right hip and from right shoulder to left hip, were found to perform better for deriving distance-based features and further quantitative assessment of movement disorders.
- Published
- 2022
- Full Text
- View/download PDF
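The best-performing normalization reported above, scaling keypoints by the left-shoulder-to-right-hip distance, can be sketched as below. The COCO-style keypoint indices and the synthetic pose are assumptions for illustration.

```python
# Sketch: normalize 2D skeleton keypoints by the Euclidean distance between
# left shoulder and right hip, so distance-based gait features become
# comparable across subjects and camera positions. Indices follow the common
# COCO layout (illustrative convention, not necessarily the study's).
import numpy as np

L_SHOULDER, R_HIP = 5, 12  # COCO keypoint indices (assumed)

def normalize_skeleton(keypoints):
    """keypoints: (n_keypoints, 2) array of pixel coordinates."""
    ref = np.linalg.norm(keypoints[L_SHOULDER] - keypoints[R_HIP])
    if ref == 0:
        raise ValueError("degenerate pose: reference keypoints coincide")
    centred = keypoints - keypoints.mean(axis=0)  # remove position in frame
    return centred / ref                          # remove subject/camera scale

pose = np.random.default_rng(1).uniform(0, 720, size=(17, 2))
pose_far = pose * 0.5 + 100   # same pose, smaller and shifted in the frame
a, b = normalize_skeleton(pose), normalize_skeleton(pose_far)
print(np.allclose(a, b))  # True: scale and translation are factored out
```

The alternative method mentioned in the abstract simply replaces `ref` with the average of the two diagonal shoulder-hip distances.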
39. Computer 3D modeling of radiofrequency ablation of atypical cartilaginous tumours in long bones using finite element methods and real patient anatomy.
- Author
-
Rivas Loya R, Jutte PC, Kwee TC, and van Ooijen PMA
- Subjects
- Computer Simulation, Computers, Finite Element Analysis, Humans, Prospective Studies, Retrospective Studies, Bone Neoplasms diagnostic imaging, Bone Neoplasms surgery, Radiofrequency Ablation
- Abstract
Background: Radiofrequency ablation (RFA) is a minimally invasive technique used for the treatment of neoplasms, with a growing interest in the treatment of bone tumours. However, the lack of data concerning the size of the resulting ablation zones in RFA of bone tumours makes prospective planning challenging, needed for safe and effective treatment., Methods: Using retrospective computed tomography and magnetic resonance imaging data from patients treated with RFA of atypical cartilaginous tumours (ACTs), the bone, tumours, and final position of the RFA electrode were segmented from the medical images and used in finite element models to simulate RFA. Tissue parameters were optimised, and boundary conditions were defined to mimic the clinical scenario. The resulting ablation diameters from postoperative images were then measured and compared to the ones from the simulations, and the error between them was calculated., Results: Seven cases had all the information required to create the finite element models. The resulting median error (in all three directions) was -1 mm, with interquartile ranges from -3 to 3 mm. The three-dimensional models showed that the thermal damage concentrates close to the cortical wall in the first minutes and then becomes more evenly distributed., Conclusions: Computer simulations can predict the ablation diameters with acceptable accuracy and may thus be utilised for patient planning. This could allow interventional radiologists to accurately define the time, electrode length, and position required to treat ACTs with RFA and make adjustments as needed to guarantee total tumour destruction while sparing as much healthy tissue as possible., (© 2022. The Author(s) under exclusive licence to European Society of Radiology.)
- Published
- 2022
- Full Text
- View/download PDF
40. Coronary calcium scoring as first-line test to detect and exclude coronary artery disease in patients presenting to the general practitioner with stable chest pain: protocol of the cluster-randomised CONCRETE trial.
- Author
-
Koopman MY, Reijnders JJW, Willemsen RTA, van Bruggen R, Doggen CJM, Kietselaer B, Oude Wolcherink MJ, van Ooijen PMA, Gratama JWC, Braam R, Oudkerk M, van der Harst P, Dinant GJ, and Vliegenthart R
- Subjects
- Angina Pectoris complications, Angina Pectoris diagnosis, Calcium, Chest Pain diagnosis, Chest Pain etiology, Coronary Angiography methods, Humans, Multicenter Studies as Topic, Pragmatic Clinical Trials as Topic, Predictive Value of Tests, Quality of Life, Randomized Controlled Trials as Topic, Coronary Artery Disease complications, Coronary Artery Disease diagnosis, General Practitioners
- Abstract
Introduction: Identifying and excluding coronary artery disease (CAD) in patients with atypical angina pectoris (AP) and non-specific thoracic complaints is a challenge for general practitioners (GPs). A diagnostic and prognostic tool could help GPs in determining the likelihood of CAD and guide patient management. Studies in outpatient settings have shown that the CT-based coronary calcium score (CCS) has high accuracy for diagnosis and exclusion of CAD. However, the CT CCS test has not been tested in a primary care setting. In the COroNary Calcium scoring as fiRst-linE Test to dEtect and exclude coronary artery disease in GPs patients with stable chest pain (CONCRETE) study, the impact of direct access of GPs to CT CCS will be investigated. We hypothesise that this will allow for early diagnosis of CAD and treatment, more efficient referral to the cardiologist and a reduction of healthcare-related costs., Methods and Analysis: CONCRETE is a pragmatic multicentre trial with a cluster randomised design, in which direct GP access to the CT CCS test is compared with standard of care. In both arms, at least 40 GP offices and circa 800 patients with atypical AP and non-specific thoracic complaints will be included. To determine the increase in detection and treatment rate of CAD in GP offices, the CVRM registration rate is derived from the GPs' electronic registration system. 
Individual patients' data regarding cardiovascular risk factors, expressed chest pain complaints, quality of life, downstream testing and CAD diagnosis will be collected through questionnaires and the electronic GP dossier., Ethics and Dissemination: CONCRETE has been approved by the Medical Ethical Committee of the University Medical Center of Groningen., Trial Registration Number: NTR 7475; Pre-results., Competing Interests: Competing interests: General practitioners (GPs) in the control condition receive a €50 compensation for the inclusion of five patients, to compensate for the time investment to include patients into the study. We expect that this financial compensation will not lead GPs to include patients due to a financial incentive. RV is supported by an institutional research grant from Siemens Healthineers. The performance of the trial and trial results do not result in a conflict of interest of the authors as there are no other competing interests., (© Author(s) (or their employer(s)) 2022. Re-use permitted under CC BY-NC. No commercial re-use. See rights and permissions. Published by BMJ.)
- Published
- 2022
- Full Text
- View/download PDF
41. Qualitative Evaluation of Common Quantitative Metrics for Clinical Acceptance of Automatic Segmentation: a Case Study on Heart Contouring from CT Images by Deep Learning Algorithms.
- Author
-
van den Oever LB, van Veldhuizen WA, Cornelissen LJ, Spoor DS, Willems TP, Kramer G, Stigter T, Rook M, Crijns APG, Oudkerk M, Veldhuis RNJ, de Bock GH, and van Ooijen PMA
- Subjects
- Algorithms, Benchmarking, Humans, Organs at Risk, Tomography, X-Ray Computed methods, Deep Learning
- Abstract
Organs-at-risk contouring is time-consuming and labour-intensive. Automation by deep learning algorithms would decrease the workload of radiotherapists and technicians considerably. However, the variety of metrics used for the evaluation of deep learning algorithms makes the results of many papers difficult to interpret and compare. In this paper, a qualitative evaluation is done on five established metrics to assess whether their values correlate with clinical usability. A total of 377 CT volumes with heart delineations were randomly selected for training and evaluation. A deep learning algorithm was used to predict the contours of the heart. A total of 101 CT slices from the validation set with the predicted contours were shown to three experienced radiologists. They independently examined each slice to determine whether they would accept or adjust the prediction and whether there were (small) mistakes. For each slice, the scores of this qualitative evaluation were then compared with the Sørensen-Dice coefficient (DC), the Hausdorff distance (HD), pixel-wise accuracy, sensitivity and precision. The statistical analysis of the qualitative evaluation and metrics showed a significant correlation. Of the slices with a DC over 0.96 (N = 20) or a 95% HD under 5 voxels (N = 25), no slices were rejected by the readers. Contours with lower DC or higher HD were seen in both rejected and accepted contours. The qualitative evaluation shows that it is difficult to use common quantitative metrics as indicators of clinical usability. We might need to change the reporting of quantitative metrics to better reflect clinical acceptance., (© 2022. The Author(s).)
- Published
- 2022
- Full Text
- View/download PDF
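The two headline metrics in this study, the Sørensen-Dice coefficient and a percentile (95%) Hausdorff distance, can be sketched for binary 2D masks as below. Pure NumPy; the toy masks are illustrative.

```python
# Dice coefficient and 95th-percentile Hausdorff distance for binary masks.
# HD95 is computed on the sets of foreground pixel coordinates.
import numpy as np

def dice(a, b):
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def hd95(a, b):
    """95th-percentile symmetric Hausdorff distance between foreground sets."""
    pa = np.argwhere(a)[:, None, :].astype(float)  # (Na, 1, 2)
    pb = np.argwhere(b)[None, :, :].astype(float)  # (1, Nb, 2)
    d = np.sqrt(((pa - pb) ** 2).sum(-1))          # pairwise distances
    return max(np.percentile(d.min(1), 95), np.percentile(d.min(0), 95))

pred = np.zeros((64, 64), dtype=bool); pred[10:40, 10:40] = True
ref  = np.zeros((64, 64), dtype=bool); ref[12:40, 10:40] = True
print(f"DC = {dice(pred, ref):.3f}, HD95 = {hd95(pred, ref):.1f} px")
```

The study's point is precisely that high values of these metrics (DC > 0.96, HD95 < 5 voxels) guaranteed reader acceptance, while intermediate values did not discriminate accepted from rejected contours.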
42. Automatic Cardiac Structure Contouring for Small Datasets with Cascaded Deep Learning Models.
- Author
-
van den Oever LB, Spoor DS, Crijns APG, Vliegenthart R, Oudkerk M, Veldhuis RNJ, de Bock GH, and van Ooijen PMA
- Subjects
- Algorithms, Heart diagnostic imaging, Heart Ventricles diagnostic imaging, Humans, Retrospective Studies, Deep Learning
- Abstract
Cardiac structure contouring is a time-consuming and tedious manual activity used for radiotherapeutic dose toxicity planning. We developed an automatic cardiac structure segmentation pipeline for use in low-dose non-contrast planning CT based on deep learning algorithms for small datasets. Fifty CT scans were retrospectively selected and the whole heart, ventricles and atria were contoured. A two-stage deep learning pipeline was trained on 41 non-contrast planning CTs, tuned with 3 CT scans and validated on 6 CT scans. In the first stage, an InceptionResNetV2 network was used to identify the slices that contained cardiac structures. The second stage consisted of three deep learning models trained on the images containing cardiac structures to segment the structures. The three deep learning models predicted the segmentations/contours on axial, coronal and sagittal images, and these predictions were combined to create the final segmentation. The final accuracy of the pipeline was quantified on 6 volumes by calculating the Dice similarity coefficient (DC), 95% Hausdorff distance (95% HD) and volume ratios between predicted and ground truth volumes. Median DC and 95% HD of 0.96, 0.88, 0.92, 0.80 and 0.82, and 1.86, 2.98, 2.02, 6.16 and 6.46 were achieved for the whole heart, right and left ventricle, and right and left atria respectively. The median differences in volume were -4, -1, +5, -16 and -20% for the whole heart, right and left ventricle, and right and left atria respectively. The automatic contouring pipeline achieves good results for the whole heart and ventricles. Robust automatic contouring with deep learning methods seems viable for local centers with small datasets., (© 2022. The Author(s).)
- Published
- 2022
- Full Text
- View/download PDF
43. The Diagnostic and Prognostic Value of Coronary Calcium Scoring in Stable Chest Pain Patients: A Narrative Review.
- Author
-
Koopman MY, Willemsen RTA, van der Harst P, van Bruggen R, Gratama JWC, Braam R, van Ooijen PMA, Doggen CJM, Dinant GJ, Kietselaer B, and Vliegenthart R
- Subjects
- Chest Pain diagnostic imaging, Chest Pain epidemiology, Computed Tomography Angiography methods, Coronary Angiography methods, Female, Humans, Predictive Value of Tests, Prognosis, Risk Assessment, Risk Factors, Calcium, Coronary Artery Disease diagnostic imaging
- Abstract
Background: Non-contrast computed tomography (CT) scanning allows for reliable coronary calcium score (CCS) calculation at a low radiation dose and has been well established as a marker to assess the future risk of coronary artery disease (CAD) events in asymptomatic individuals. However, the diagnostic and prognostic value in symptomatic patients remains a matter of debate. This narrative review focuses on the available evidence for CCS in patients with stable chest pain complaints., Method: PubMed, Embase, and Web of Science were searched for literature using search terms related to three overarching categories: CT, symptomatic chest pain patients, and coronary calcium. The search resulted in 42 articles fulfilling the inclusion and exclusion criteria: 27 articles (n = 38 137 patients) focused on diagnostic value and 23 articles (n = 44 683 patients) on prognostic value of CCS. Of these, 10 articles (n = 21 208 patients) focused on both the diagnostic and prognostic value of CCS., Results: Between 22 and 10 037 patients were included in the studies on the diagnostic and prognostic value of CCS, including 43 % and 51 % of patients with CCS 0. Most evidence is available for patients with a low and intermediate pre-test probability (PTP) of CAD. Overall, the prevalence of obstructive CAD (OCAD, defined as a luminal stenosis of ≥ 50 % in any of the coronary arteries) as determined with CT coronary angiography in CCS 0 patients, was 4.4 % (n = 703/16 074) with a range of 0-26 % in individual studies. The event rate for major adverse cardiac events (MACE) ranged from 0 % to 2.1 % during a follow-up of 1.6 to 6.8 years, resulting in a high negative predictive value for MACE between 98 % and 100 % in CCS 0 patients. At increasing CCS, the OCAD probability and MACE risk increased. 
OCAD was present in 58.3 % (n = 617/1058) of CCS > 400 patients with percentages ranging from 20 % to 94 % and MACE occurred in 16.7 % (n = 175/1048) of these patients with percentages ranging from 6.9 % to 50 %., Conclusion: Accumulating evidence shows that OCAD is unlikely and the MACE risk is very low in symptomatic patients with CCS 0, especially in those with low and intermediate PTPs. This suggests a role of CCS as a gatekeeper for additional diagnostic testing. Increasing CCS is related to an increasing probability of OCAD and risk of cardiac events. Additional research is needed to assess the value of CCS in women and patient management in a primary healthcare setting., Key Points: · A CCS of zero makes OCAD in patients at low-intermediate PTP unlikely. · A CCS of zero is related to a very low risk of MACE. · Categories of increasing CCS are related to increasing rates of OCAD and MACE. · Future studies should focus on the diagnostic and prognostic value of CCS in symptomatic women and the role in primary care., Citation Format: · Koopman MY, Willemsen RT, van der Harst P et al. The Diagnostic and Prognostic Value of Coronary Calcium Scoring in Stable Chest Pain Patients: A Narrative Review. Fortschr Röntgenstr 2022; 194: 257 - 265., Competing Interests: The authors declare that they have no conflict of interest., (The Author(s). This is an open access article published by Thieme under the terms of the Creative Commons Attribution-NonDerivative-NonCommercial License, permitting copying and reproduction so long as the original work is given appropriate credit. Contents may not be used for commecial purposes, or adapted, remixed, transformed or built upon. (https://creativecommons.org/licenses/by-nc-nd/4.0/).)
- Published
- 2022
- Full Text
- View/download PDF
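The negative predictive values the review reports (98-100% for MACE in CCS 0 patients) come from simple 2x2-table arithmetic, sketched here with a hypothetical cohort; the counts are illustrative and not taken from the review.

```python
# NPV from a 2x2 table: of the test-negative (CCS 0) patients, what fraction
# truly stayed event-free? Counts below are hypothetical.
def npv(true_neg, false_neg):
    return true_neg / (true_neg + false_neg)

# e.g. 1000 CCS 0 patients, 10 of whom nevertheless had an event at follow-up
print(f"NPV = {npv(true_neg=990, false_neg=10):.1%}")  # 99.0%
```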
44. Structural similarity analysis of midfacial fractures-a feasibility study.
- Author
-
Rozema R, Kruitbosch HT, van Minnen B, Dorgelo B, Kraeima J, and van Ooijen PMA
- Abstract
The structural similarity index metric is used to measure the similarity between two images. The aim here was to study the feasibility of this metric to measure the structural similarity and fracture characteristics of midfacial fractures in computed tomography (CT) datasets following radiation dose reduction, iterative reconstruction (IR) and deep learning reconstruction. Zygomaticomaxillary fractures were inflicted on four human cadaver specimens and scanned with standard and low dose CT protocols. Datasets were reconstructed using varying strengths of IR, subsequently applying the PixelShine™ deep learning algorithm as post-processing. Individual small and non-dislocated fractures were selected for the data analysis. After attenuating the osseous anatomy of interest, registration was performed to superimpose the datasets, after which the structural image quality was measured. Changes to the fracture characteristics were measured by comparing each fracture to the mirrored contralateral anatomy. Twelve fracture locations were included in the data analysis. The largest structural image quality changes occurred with radiation dose reduction (0.980036±0.011904), whilst the effects of IR strength (0.995399±0.001059) and the deep learning algorithm (0.999996±0.000002) were small. Radiation dose reduction and IR strength tended to affect the fracture characteristics. Neither the structural image quality nor the fracture characteristics were affected by the use of the deep learning algorithm. In conclusion, evidence is provided for the feasibility of using the structural similarity index metric for the analysis of structural image quality and fracture characteristics., Competing Interests: Conflicts of Interest: All authors have completed the ICMJE uniform disclosure form (available at https://dx.doi.org/10.21037/qims-21-564). The authors have no conflicts of interest to declare., (2022 Quantitative Imaging in Medicine and Surgery. All rights reserved.)
- Published
- 2022
- Full Text
- View/download PDF
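The structural similarity index (SSIM) underlying this study can be sketched in its global, single-window form; production implementations (e.g. `skimage.metrics.structural_similarity`) apply the same formula over sliding windows and average. The toy images are illustrative.

```python
# Global SSIM: luminance, contrast and structure terms combined, with the
# standard stabilizing constants c1 = (0.01*L)^2 and c2 = (0.03*L)^2.
import numpy as np

def ssim_global(x, y, data_range=255.0):
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx**2 + my**2 + c1) * (vx + vy + c2))

rng = np.random.default_rng(0)
img = rng.uniform(0, 255, size=(64, 64))
noisy = np.clip(img + rng.normal(scale=10, size=img.shape), 0, 255)
print(f"identical: {ssim_global(img, img):.4f}")    # exactly 1
print(f"noisy:     {ssim_global(img, noisy):.4f}")  # < 1, drops with noise
```

This matches the pattern in the study's numbers: dose reduction perturbs structure most (SSIM ≈ 0.980), while the deep learning post-processing leaves it nearly untouched (SSIM ≈ 0.999996).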
45. Performance of a deep learning-based lung nodule detection system as an alternative reader in a Chinese lung cancer screening program.
- Author
-
Cui X, Zheng S, Heuvelmans MA, Du Y, Sidorenkov G, Fan S, Li Y, Xie Y, Zhu Z, Dorrius MD, Zhao Y, Veldhuis RNJ, de Bock GH, Oudkerk M, van Ooijen PMA, Vliegenthart R, and Ye Z
- Subjects
- China epidemiology, Early Detection of Cancer, Humans, Lung diagnostic imaging, Radiographic Image Interpretation, Computer-Assisted, Reproducibility of Results, Sensitivity and Specificity, Tomography, X-Ray Computed, Deep Learning, Lung Neoplasms diagnostic imaging, Solitary Pulmonary Nodule diagnostic imaging
- Abstract
Objective: To evaluate the performance of a deep learning-based computer-aided detection (DL-CAD) system in a Chinese low-dose CT (LDCT) lung cancer screening program., Materials and Methods: One-hundred-and-eighty individuals with a lung nodule on their baseline LDCT lung cancer screening scan were randomly mixed with screenees without nodules in a 1:1 ratio (total: 360 individuals). All scans were assessed by double reading and subsequently processed by an academic DL-CAD system. The findings of double reading and the DL-CAD system were then evaluated by two senior radiologists to derive the reference standard. The detection performance was evaluated by the Free Response Operating Characteristic curve, sensitivity and false-positive (FP) rate. The senior radiologists categorized nodules according to nodule diameter, type (solid, part-solid, non-solid) and Lung-RADS., Results: The reference standard consisted of 262 nodules ≥ 4 mm in 196 individuals; 359 findings were considered false positives. The DL-CAD system achieved a sensitivity of 90.1% with 1.0 FP/scan for detection of lung nodules regardless of size or type, whereas double reading had a sensitivity of 76.0% with 0.04 FP/scan (P = 0.001). The sensitivity for detection of nodules ≥ 4 - ≤ 6 mm was significantly higher with DL-CAD than with double reading (86.3% vs. 58.9% respectively; P = 0.001). Sixty-three nodules were only identified by the DL-CAD system, and 27 nodules only found by double reading. The DL-CAD system reached similar performance compared to double reading in Lung-RADS 3 (94.3% vs. 90.0%, P = 0.549) and Lung-RADS 4 nodules (100.0% vs. 97.0%, P = 1.000), but showed a higher sensitivity in Lung-RADS 2 (86.2% vs. 65.4%, P < 0.001)., Conclusions: The DL-CAD system can accurately detect pulmonary nodules on LDCT, with an acceptable false-positive rate of 1 nodule per scan and has higher detection performance than double reading. 
This DL-CAD system may assist radiologists in nodule detection in LDCT lung cancer screening., (Copyright © 2021. Published by Elsevier B.V.)
- Published
- 2022
- Full Text
- View/download PDF
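The detection metrics reported above, sensitivity and false positives per scan, can be sketched from per-scan sets of reference nodules and CAD findings. Matching is simplified here to shared IDs; real CAD evaluation matches findings to nodules by location or overlap.

```python
# Sensitivity = matched reference nodules / all reference nodules;
# FP rate = unmatched findings averaged over scans.
def detection_metrics(reference, detections):
    """reference/detections: dict scan_id -> set of nodule/finding ids."""
    tp = sum(len(reference[s] & detections.get(s, set())) for s in reference)
    n_ref = sum(len(v) for v in reference.values())
    fp = sum(len(d - reference.get(s, set())) for s, d in detections.items())
    return tp / n_ref, fp / len(detections)  # sensitivity, FP per scan

ref = {"scan1": {"n1", "n2"}, "scan2": {"n3"}, "scan3": set()}
det = {"scan1": {"n1", "n2", "x9"}, "scan2": {"n3"}, "scan3": set()}
sens, fp_rate = detection_metrics(ref, det)
print(f"sensitivity = {sens:.0%}, FP/scan = {fp_rate:.2f}")
```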
46. Focal pericoronary adipose tissue attenuation is related to plaque presence, plaque type, and stenosis severity in coronary CTA.
- Author
-
Ma R, van Assen M, Ties D, Pelgrim GJ, van Dijk R, Sidorenkov G, van Ooijen PMA, van der Harst P, and Vliegenthart R
- Subjects
- Adipose Tissue diagnostic imaging, Computed Tomography Angiography, Constriction, Pathologic, Coronary Angiography, Coronary Vessels diagnostic imaging, Humans, Predictive Value of Tests, Retrospective Studies, Coronary Artery Disease diagnostic imaging, Coronary Stenosis diagnostic imaging, Plaque, Atherosclerotic diagnostic imaging
- Abstract
Objectives: To investigate the association of pericoronary adipose tissue mean attenuation (PCATMA) with coronary artery disease (CAD) characteristics on coronary computed tomography angiography (CCTA)., Methods: We retrospectively investigated 165 symptomatic patients who underwent third-generation dual-source CCTA at 70 kVp: 93 with and 72 without CAD (204 arteries with plaque, 291 without plaque). CCTA was evaluated for presence and characteristics of CAD per artery. PCATMA was measured proximally and across the most severe stenosis. Patient-level, proximal PCATMA was defined as the mean of the proximal PCATMA of the three main coronary arteries. Analyses were performed on patient and vessel level., Results: Mean proximal PCATMA was -96.2 ± 7.1 HU and -95.6 ± 7.8 HU for patients with and without CAD (p = 0.644). In arteries with plaque, proximal and lesion-specific PCATMA was similar (-96.1 ± 9.6 HU, -95.9 ± 11.2 HU, p = 0.608). Lesion-specific PCATMA of arteries with plaque (-94.7 HU) differed from proximal PCATMA of arteries without plaque (-97.2 HU, p = 0.015). Minimal stenosis showed higher lesion-specific PCATMA (-94.0 HU) than severe stenosis (-98.5 HU, p = 0.030). Lesion-specific PCATMA of non-calcified, mixed, and calcified plaque was -96.5 HU, -94.6 HU, and -89.9 HU (p = 0.004). Vessel-based total plaque, lipid-rich necrotic core, and calcified plaque burden showed a very weak to moderate correlation with proximal PCATMA., Conclusions: Lesion-specific PCATMA was higher in arteries with plaque than proximal PCATMA in arteries without plaque. Lesion-specific PCATMA was higher in non-calcified and mixed plaques compared to calcified plaques, and in minimal stenosis compared to severe; proximal PCATMA did not show these relationships. This suggests that lesion-specific PCATMA is related to plaque development and vulnerability., Key Points: • In symptomatic patients undergoing CCTA at 70 kVp, PCATMA was higher in coronary arteries with plaque than those without plaque. 
• PCATMA was higher for non-calcified and mixed plaques compared to calcified plaques, and for minimal stenosis compared to severe stenosis. • In contrast to PCATMA measurement of the proximal vessels, lesion-specific PCATMA showed clear relationships with plaque presence and stenosis degree., (© 2021. The Author(s).)
- Published
- 2021
- Full Text
- View/download PDF
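The PCATMA measurement itself reduces to a masked mean: average the HU of voxels falling in an adipose-tissue window (commonly about -190 to -30 HU) within a region around the vessel. The window bounds and the toy data below are assumptions for illustration; the study's exact proximal and lesion-specific sampling regions are defined on the vessel anatomy.

```python
# Mean attenuation over adipose-range voxels in an already-extracted region.
import numpy as np

def pcat_mean_attenuation(hu_region, lo=-190, hi=-30):
    """Mean HU of voxels within the (assumed) fat window [lo, hi]."""
    fat = hu_region[(hu_region >= lo) & (hu_region <= hi)]
    if fat.size == 0:
        raise ValueError("no adipose-range voxels in region")
    return float(fat.mean())

rng = np.random.default_rng(0)
# Toy region: mostly fat around -95 HU, with some non-fat voxels mixed in
region = np.concatenate([rng.normal(-95, 8, 500), rng.normal(40, 20, 100)])
print(f"PCAT mean attenuation = {pcat_mean_attenuation(region):.1f} HU")
```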
47. Deep Learning-Based Natural Language Processing in Radiology: The Impact of Report Complexity, Disease Prevalence, Dataset Size, and Algorithm Type on Model Performance.
- Author
-
Olthof AW, van Ooijen PMA, and Cornelissen LJ
- Subjects
- Algorithms, Humans, Natural Language Processing, Prevalence, Deep Learning, Radiology
- Abstract
In radiology, natural language processing (NLP) allows the extraction of valuable information from radiology reports. It can be used for various downstream tasks such as quality improvement, epidemiological research, and monitoring guideline adherence. Class imbalance, variation in dataset size, variation in report complexity, and algorithm type all influence NLP performance but have not yet been systematically and interrelatedly evaluated. In this study, we investigate the effect of these factors on the performance of four types [a fully connected neural network (Dense), a long short-term memory recurrent neural network (LSTM), a convolutional neural network (CNN), and a Bidirectional Encoder Representations from Transformers (BERT)] of deep learning-based NLP. Two datasets consisting of radiologist-annotated reports of both trauma radiographs (n = 2469) and chest radiographs and computed tomography (CT) studies (n = 2255) were split into training sets (80%) and testing sets (20%). The training data was used as a source to train all four model types in 84 experiments (Fracture-data) and 45 experiments (Chest-data) with variation in size and prevalence. The performance was evaluated on sensitivity, specificity, positive predictive value, negative predictive value, area under the curve, and F score. After the NLP of radiology reports, all four model architectures demonstrated high performance with metrics up to > 0.90. CNN, LSTM, and Dense were outperformed by the BERT algorithm because of its stable results despite variation in training size and prevalence. Awareness of variation in prevalence is warranted because it impacts sensitivity and specificity in opposite directions., (© 2021. The Author(s).)
- Published
- 2021
- Full Text
- View/download PDF
48. Effects of control temperature, ablation time, and background tissue in radiofrequency ablation of osteoid osteoma: A computer modeling study.
- Author
-
Rivas R, Hijlkema RB, Cornelissen LJ, Kwee TC, Jutte PC, and van Ooijen PMA
- Subjects
- Computers, Humans, Temperature, Treatment Outcome, Bone Neoplasms surgery, Catheter Ablation, Osteoma, Osteoid surgery, Radiofrequency Ablation
- Abstract
To study the effects of the control temperature, ablation time, and the background tissue surrounding the tumor on the size of the ablation zone in radiofrequency ablation (RFA) of osteoid osteoma (OO). Finite element models of non-cooled temperature-controlled RFA of typical OOs were developed to determine the resulting ablation radius at control temperatures of 70, 80, and 90°C. Three different geometries were used, mimicking common cases of OO. The ablation radius was obtained by using the Arrhenius equation to determine cell viability. Ablation radii were larger for higher temperatures and also increased with time. All geometries and control temperatures tested had ablation radii larger than the tumor. The ablation radius developed rapidly in the first few minutes for all geometries and control temperatures tested, and developed more slowly towards the end of the ablation. Resistive heating and the temperature distribution showed differences depending on background tissue properties, resulting in differences in the ablation radius for each geometry. The ablation radius has a clear dependency not only on the properties of the tumor but also on the background tissue. Lower electrical conductivity and blood perfusion rates in the background tissue seem to result in larger ablation zones. The differences observed between the different geometries suggest the need for patient-specific planning, as anatomical variations could cause significantly different outcomes; models like the one presented here could help to guarantee safe and successful tumor ablations., (© 2021 The Authors. International Journal for Numerical Methods in Biomedical Engineering published by John Wiley & Sons Ltd.)
- Published
- 2021
- Full Text
- View/download PDF
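The Arrhenius cell-viability model used in this study (and in record 39 above) accumulates thermal damage Ω over a temperature history; tissue is commonly considered ablated once Ω ≥ 1 (about 63% cell death). The frequency factor and activation energy below are typical literature-style values, not the paper's.

```python
# Arrhenius thermal damage integral: dOmega/dt = A * exp(-Ea / (R*T(t))).
import numpy as np

A = 7.39e39    # frequency factor [1/s]       (illustrative value)
EA = 2.577e5   # activation energy [J/mol]    (illustrative value)
R = 8.314      # universal gas constant [J/(mol*K)]

def arrhenius_damage(temps_c, dt):
    """Accumulated damage for a temperature-vs-time history (deg C, step dt s)."""
    T = np.asarray(temps_c) + 273.15
    return np.sum(A * np.exp(-EA / (R * T)) * dt)

# 6 minutes at a steady 70 degC control temperature: well past Omega = 1,
# whereas the same duration at body temperature causes negligible damage.
omega = arrhenius_damage(np.full(360, 70.0), dt=1.0)
print(f"Omega = {omega:.2f}, ablated: {omega >= 1.0}")
```

In the finite element models, this integral is evaluated at every node of the simulated temperature field, and the ablation radius is the contour where Ω crosses the threshold.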
49. Machine learning based natural language processing of radiology reports in orthopaedic trauma.
- Author
-
Olthof AW, Shouche P, Fennema EM, IJpma FFA, Koolstra RHC, Stirler VMA, van Ooijen PMA, and Cornelissen LJ
- Subjects
- Humans, Machine Learning, Natural Language Processing, Radiography, Orthopedics, Radiology
- Abstract
Objectives: To compare different Machine Learning (ML) Natural Language Processing (NLP) methods to classify radiology reports in orthopaedic trauma for the presence of injuries. Assessing NLP performance is a prerequisite for downstream tasks and therefore of importance from a clinical perspective (avoiding missed injuries, quality check, insight into diagnostic yield) as well as from a research perspective (identification of patient cohorts, annotation of radiographs)., Methods: Datasets of Dutch radiology reports of injured extremities (n = 2469, 33% fractures) and chest radiographs (n = 799, 20% pneumothorax) were collected in two different hospitals and labeled by radiologists and trauma surgeons for the presence or absence of injuries. NLP classification was applied and optimized by testing different preprocessing steps and different classifiers (rule-based, ML, and Bidirectional Encoder Representations from Transformers (BERT)). Performance was assessed by F1-score, AUC, sensitivity, specificity and accuracy., Results: The deep learning based BERT model outperforms all other classification methods which were assessed. The model achieved an F1-score of (95 ± 2)% and accuracy of (96 ± 1)% on a dataset of simple reports (n= 2469), and an F1 of (83 ± 7)% with accuracy (93 ± 2)% on a dataset of complex reports (n= 799)., Conclusion: BERT NLP outperforms traditional ML and rule-based classifiers when applied to Dutch radiology reports in orthopaedic trauma., Competing Interests: Declaration of Competing Interest The authors of this manuscript declare no relationships with any companies, whose products or services may be related to the subject matter of the article, (Copyright © 2021. Published by Elsevier B.V.)
- Published
- 2021
- Full Text
- View/download PDF
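The abstract above compares a rule-based baseline against ML and BERT classifiers for injury detection in reports. As a minimal sketch of what such a rule-based baseline looks like, the following assumes hypothetical Dutch/English keyword and negation lists (the study's actual rules and vocabulary are not given) and includes the F1-score used as the evaluation metric:

```python
import re

# Hypothetical term lists: illustrative only, not the study's actual rules.
FRACTURE_TERMS = [r"\bfractuur\b", r"\bfracture\b", r"\bluxatie\b"]
NEGATION_TERMS = [r"\bgeen\b", r"\bno\b", r"\bnot\b"]


def classify_report(text: str) -> bool:
    """Return True if the report likely describes an injury.

    Naive simplification: any negation term anywhere in the report
    cancels all keyword hits, which real rule-based systems refine
    with scoped negation detection.
    """
    lowered = text.lower()
    hit = any(re.search(p, lowered) for p in FRACTURE_TERMS)
    negated = any(re.search(p, lowered) for p in NEGATION_TERMS)
    return hit and not negated


def f1_score(y_true, y_pred):
    """F1 = harmonic mean of precision and recall."""
    tp = sum(t and p for t, p in zip(y_true, y_pred))
    fp = sum((not t) and p for t, p in zip(y_true, y_pred))
    fn = sum(t and (not p) for t, p in zip(y_true, y_pred))
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```

Such hand-written rules are transparent but brittle, which is why the study finds that a contextual model like BERT, trained on labeled reports, outperforms them.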
50. Augmented Reality Visualization for Image-Guided Surgery: A Validation Study Using a Three-Dimensional Printed Phantom.
- Author
-
Glas HH, Kraeima J, van Ooijen PMA, Spijkervet FKL, Yu L, and Witjes MJH
- Subjects
- Humans, Imaging, Three-Dimensional, Operating Rooms, Workflow, Augmented Reality, Surgery, Computer-Assisted, Surgery, Oral
- Abstract
Background: Oral and maxillofacial surgery currently relies on virtual surgical planning based on image data (CT, MRI). Three-dimensional (3D) visualizations are typically used to plan and predict the outcome of complex surgical procedures. To translate the virtual surgical plan to the operating room, it is either converted into physical 3D-printed guides or translated directly using real-time navigation systems., Purpose: This study aims to improve the translation of the virtual surgical plan to a surgical procedure, such as oncologic or trauma surgery, in terms of accuracy and speed. Here we report an augmented reality visualization technique for image-guided surgery. It describes how surgeons can visualize and interact with the virtual surgical plan and navigation data while in the operating room. User friendliness and usability were assessed in a formal user study that compared our augmented reality assisted technique to the gold-standard setup of a perioperative navigation system (Brainlab). Moreover, the accuracy of typical navigation tasks, such as reaching landmarks and following trajectories, was compared., Results: Overall completion time of navigation tasks was 1.71 times faster using augmented reality (P = .034). Accuracy improved significantly using augmented reality (P < .001); for reaching physical landmarks, a weaker effect was found (P = .087). Although the participants were relatively unfamiliar with VR/AR (rated 2.25/5) and gesture-based interaction (rated 2/5), they reported that navigation tasks became easier to perform using augmented reality (difficulty rated 3.25/5 for Brainlab, 2.4/5 for HoloLens)., Conclusion: The proposed workflow can be used in a wide range of image-guided surgery procedures as an addition to existing verified image guidance systems. The results of this user study imply that our technique enables typical navigation tasks to be performed faster and more accurately than the current gold standard. In addition, qualitative feedback on our augmented reality assisted technique was more positive than for the standard setup., (Copyright © 2021 The Authors. Published by Elsevier Inc. All rights reserved.)
- Published
- 2021
- Full Text
- View/download PDF